Interpret your InSight test results

When your test has finished running, you can go to the InSight results page to see what impact your marketing action had on your target metric. The InSight results page displays the following information about your test:

  • An overview of the results.
  • Actionable insights that help you interpret your results and provide suggestions for next steps.
  • A granular breakdown of key metrics.

Glossary of terms

The following terms are used in the InSight results page and in this document to describe test results.

Incremental lift
Your campaign had a positive effect on your target metric that wouldn’t have occurred if the campaign hadn’t run.
Loss
Your campaign had a negative effect on your target metric that wouldn’t have occurred if the campaign hadn’t run.
Organic cannibalization
Events/installs that would have occurred without the marketing action taking place. Organic cannibalization data is available only when measuring Campaign start or Budget increase.
Incremental effect absolute value
The effect your action had on the metric value over the entire testing period, measured as the average daily difference between the actual metric value with the action and the predicted metric value without the action. The absolute value is calculated using the following formula: average actual value - average predicted value.
Incremental effect percentage
The effect your action had on your target metric, expressed as a percentage: the average daily difference between the actual value with the action and the predicted metric value without the action. For a worked example of these calculations, see the sketch after this glossary.
MAPE
The mean absolute percentage error (MAPE) value measures the average difference between the actual values of your target metric during the pre-period and the predicted values produced by the model. A lower percentage difference between these values indicates a more accurate model: a MAPE below 10% indicates high model performance, 10% to 20% indicates medium performance, and above 20% indicates low performance.
P-value
The P-value (probability value) represents the probability that your results are due to chance. A lower P-value indicates stronger evidence that your marketing action directly impacted your target metric, while a higher P-value indicates a higher probability that your results were due to other changes. Values of less than 0.05 indicate that your measured action had a significant impact.
  • P-value 0.01: A P-value of 0.01 indicates a 1% probability that the observed effect could occur due to chance. This is considered strong evidence that your marketing action had a significant impact.
  • P-value 0.2: A P-value of 0.2 indicates a 20% probability that the observed effect could be due to random fluctuations rather than your marketing action. It suggests your marketing action might have had an impact, but the evidence isn’t strong enough to be sure without further data or analysis.
  • P-value 0.5: A P-value of 0.5 indicates a 50% probability that the observed effect could be due to chance, suggesting no evidence that your marketing action had any significant impact on your target metric.
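
To make these definitions concrete, here is a minimal worked sketch in Python. It is illustrative only: the numbers are invented, InSight's internal calculations are not published here, and the percentage is assumed to be expressed relative to the predicted value.

```python
# Illustrative example of the glossary formulas. All numbers are invented;
# this is not InSight's implementation.

# Average daily values of the target metric over the measurement period.
avg_actual = 1200.0     # actual average daily value with the marketing action
avg_predicted = 1000.0  # predicted average daily value without the action

# Incremental effect absolute value: average actual value - average predicted value.
incremental_abs = avg_actual - avg_predicted  # 200.0

# Incremental effect percentage (assumed to be relative to the predicted value).
incremental_pct = incremental_abs / avg_predicted * 100  # 20.0

# MAPE over the pre-period: mean of |actual - predicted| / actual, as a percentage.
pre_actual = [980.0, 1020.0, 1010.0, 990.0]
pre_predicted = [1000.0, 1000.0, 1005.0, 995.0]
mape = 100 * sum(abs(a - p) / a for a, p in zip(pre_actual, pre_predicted)) / len(pre_actual)

# Model performance bands from the glossary: below 10% high, 10-20% medium, above 20% low.
performance = "high" if mape < 10 else "medium" if mape <= 20 else "low"

# Significance threshold from the glossary: P-values below 0.05 are significant.
p_value = 0.01
is_significant = p_value < 0.05

print(f"Incremental effect: {incremental_abs:+.0f} ({incremental_pct:+.1f}%)")
print(f"MAPE: {mape:.2f}% -> model performance: {performance}")
print(f"Significant at p < 0.05: {is_significant}")
```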

Actual and predicted outcomes

The results screen allows you to analyze results for All traffic and for Organic traffic. Toggle between these views by selecting the All and Organic tabs.

The results screen shows the following information:

  • The Result of the test: Whether the marketing action had a Positive, Mixed, or Negative impact on your campaign.
  • Action: The marketing action you selected for the test.
  • Metric: The metric you chose to measure.
  • Incremental effect: The difference between what InSight predicted would happen and what actually happened in your selected measurement period.
  • MAPE: The Mean Absolute Percentage Error value.
  • Model performance: How reliable the model is.
  • P-value: The P-value of your action.
  • Action significance: Whether the change was within the range of InSight’s predictions (represented by the purple area in the chart), as illustrated in the sketch after this list.
    • If the actual results fall within the prediction range, the result is insignificant.
    • If the result falls outside the prediction range, the result is significant.
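
The significance rule amounts to a simple range check. The sketch below assumes you have the edges of the prediction range for the measurement period; the bounds and variable names are hypothetical.

```python
# Minimal sketch of the significance rule above. The bounds stand in for
# InSight's prediction range (the purple area in the chart) and are invented.
prediction_lower = 950.0   # lower edge of the prediction range
prediction_upper = 1050.0  # upper edge of the prediction range
actual = 1200.0            # observed metric value

# Within the prediction range -> insignificant; outside it -> significant.
if prediction_lower <= actual <= prediction_upper:
    print("Insignificant: the actual value falls within the prediction range.")
else:
    print("Significant: the actual value falls outside the prediction range.")
```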

Adjust calculates the Result of a test by measuring the impact your marketing action had on your target metric for All traffic and for Organic traffic. The table below shows how each result is calculated.

| Marketing action | Impact on metric for All traffic | Impact on metric for Organic traffic | InSight result |
| --- | --- | --- | --- |
| Campaign Start | Lift | Lift | Positive |
| Campaign Start | Lift | Loss | Mixed |
| Campaign Start | Loss | Lift | Mixed |
| Campaign Start | Loss | Loss | Negative |
| Budget Increase | Lift | Lift | Positive |
| Budget Increase | Lift | Loss | Mixed |
| Budget Increase | Loss | Lift | Mixed |
| Budget Increase | Loss | Loss | Mixed |
| Campaign Stop | Lift | Lift | Mixed |
| Campaign Stop | Lift | Loss | Mixed |
| Campaign Stop | Loss | Lift | Mixed |
| Campaign Stop | Loss | Loss | Positive |
| Budget Decrease | Lift | Lift | Negative |
| Budget Decrease | Lift | Loss | Mixed |
| Budget Decrease | Loss | Lift | Mixed |
| Budget Decrease | Loss | Loss | Positive |
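
If you want to reproduce this mapping outside the dashboard, for example in a reporting script, the table translates directly into a lookup. The sketch below simply encodes the rows above; it is not an Adjust API.

```python
# Result matrix from the table above, keyed by
# (marketing action, impact on All traffic, impact on Organic traffic).
INSIGHT_RESULTS = {
    ("Campaign Start",  "Lift", "Lift"): "Positive",
    ("Campaign Start",  "Lift", "Loss"): "Mixed",
    ("Campaign Start",  "Loss", "Lift"): "Mixed",
    ("Campaign Start",  "Loss", "Loss"): "Negative",
    ("Budget Increase", "Lift", "Lift"): "Positive",
    ("Budget Increase", "Lift", "Loss"): "Mixed",
    ("Budget Increase", "Loss", "Lift"): "Mixed",
    ("Budget Increase", "Loss", "Loss"): "Mixed",
    ("Campaign Stop",   "Lift", "Lift"): "Mixed",
    ("Campaign Stop",   "Lift", "Loss"): "Mixed",
    ("Campaign Stop",   "Loss", "Lift"): "Mixed",
    ("Campaign Stop",   "Loss", "Loss"): "Positive",
    ("Budget Decrease", "Lift", "Lift"): "Negative",
    ("Budget Decrease", "Lift", "Loss"): "Mixed",
    ("Budget Decrease", "Loss", "Lift"): "Mixed",
    ("Budget Decrease", "Loss", "Loss"): "Positive",
}

print(INSIGHT_RESULTS[("Budget Increase", "Lift", "Loss")])  # Mixed
```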

Actionable insights

To help you understand your test results, Adjust provides Actionable insights. For each test, Adjust displays Details about what happened when you performed your marketing action, to help you interpret the results, as well as Recommendations that offer practical advice on what to do next.

For example, suppose you’ve just completed an InSight test to evaluate the impact of a newly launched campaign and to determine whether it achieved the desired incremental lift in both All and Organic traffic.

From your InSight test, you can see detailed information about what occurred when you executed your marketing actions. For example, you can assess the incremental effect of the campaign and determine whether All and Organic installs increased or decreased, or if organic cannibalization occurred. These details help you evaluate the effectiveness of your campaign.

Additionally, Adjust provides Recommendations based on the test results. If your campaign led to a significant increase in All installs but also resulted in organic cannibalization, Adjust might suggest increasing your budget for this campaign while simultaneously shortening your attribution windows. By combining detailed results with actionable recommendations, Adjust empowers you to make informed decisions and refine your marketing strategies effectively.

Advanced data

The Advanced data section shows you a breakdown of key metrics related to your InSight test. For both All and Organic traffic, you can select the plus icon to show the values of each metric Before and After you took your selected marketing action, so you can see exactly what impact the action had on your campaign. These differences are summarized in the Differences row beneath each traffic type.

For each test, InSight displays a summary of the Incremental effect your marketing action had on your target metric over the test period. This is summarized in the following metrics:

  • Incremental effect absolute value: The average daily difference between what InSight predicted would happen and what actually happened in your selected measurement period.
  • Incremental effect percentage: The average daily difference between what InSight predicted would happen and what actually happened in your selected measurement period, expressed as a percentage.

The following InSight-specific metrics are also available in Datascape and in the Report Service API. See Datascape metrics for a complete list of Datascape metrics.

| Metric | Definition | Formula | Metric API ID |
| --- | --- | --- | --- |
| Average revenue per event | Average revenue generated per your selected event from users who installed your app within the time period you selected | Total revenue of event / number of times the event was triggered | average_revenue_per_event |
| Incremental revenue | Extra revenue generated when compared to a control group | (Actual incremental value - mean incremental value) * Average revenue per event | incremental_revenue |
| Incremental ROAS | Return on advertising spend (ROAS), calculated using only in-app revenue, for a selected cohort period | - | incremental_roas |
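
As a quick illustration, the two formulas with defined inputs can be computed as follows. The numbers are invented for the example, and Incremental ROAS is omitted because the table lists no formula for it.

```python
# Worked sketch of the formulas in the table above. All inputs are invented
# for illustration; they are not values returned by Adjust.

# average_revenue_per_event: total revenue of the event / number of times triggered.
total_event_revenue = 5000.0  # hypothetical total revenue of your selected event
event_count = 2000            # hypothetical number of times the event was triggered
average_revenue_per_event = total_event_revenue / event_count  # 2.50

# incremental_revenue:
# (actual incremental value - mean incremental value) * average revenue per event.
actual_incremental_value = 1200.0  # hypothetical
mean_incremental_value = 1000.0    # hypothetical
incremental_revenue = (
    actual_incremental_value - mean_incremental_value
) * average_revenue_per_event  # 500.0

print(f"average_revenue_per_event: {average_revenue_per_event:.2f}")
print(f"incremental_revenue: {incremental_revenue:.2f}")
```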

Interpret your results

Once you’ve run an InSight incrementality test and its status is updated to Completed, you can check the results of the test.

| Marketing action | Result | Interpretation | Recommendation |
| --- | --- | --- | --- |
| Starting a new campaign or adding a new network | Positive (All: Lift, Organic: Lift) | This campaign performed well, and further investment should be considered. Not only did it increase total installs beyond what would have occurred without the campaign, but it also boosted organic installs. | Continue this campaign and increase its budget. |
| Increasing the budget of a campaign or network | Mixed (All: Lift, Organic: Loss) | This campaign continued to perform well, but some organic cannibalization occurred due to last-click attribution. | Increase the budget for this campaign, but decrease your attribution windows. |
| Stopping an existing campaign or removing an existing network | Positive (All: Lift, Organic: Lift) | After stopping this campaign, both All Installs and Organic Installs increased, confirming that the campaign was ineffective. | Stopping this underperforming campaign was the right decision; reallocate your budget to another campaign. |
| Reducing the budget of a campaign or network | Negative (All: Loss, Organic: Loss) | When decreasing the budget for this campaign, both All Installs and Organic Installs dropped, suggesting that this may have been a successful campaign driving overall installs. | Run another test with a small budget for this campaign to determine if there is Lift. |