Predictive LTV

Overview


What is Predictive Lifetime Value?

Predictive Lifetime Value (pLTV) allows you to predict how much money your customers will spend in your app over their entire lifetime, based on their past behavior. pLTV can segment users by acquisition source and forecast revenue for each segment, making it an ideal tool for determining which of your marketing channels will produce your highest-spending users, now and in the future.

How do I access pLTV?

pLTV is located in the Reports tab in the Upsight Analytics dashboard. Follow the steps below to access it:

  1. Click on "Reports" in the menu bar at the top of the dashboard.
  2. Select "Acquisition" from the navigation table on the left, then click on "Revenue Analysis."
  3. Select the cohort of users you want to track using the Revenue Analysis tool.
  4. After you've run your selection, turn on pLTV by clicking the "Predictive LTV" on/off slider located in the top right corner of the "Revenue Curve (All Users)" chart.

Predictive LTV Button

Note: You can also get pLTV predictions by clicking on the "Predictive LTV: 90% Confidence" button located at the top of the Ad Links table (which is accessible through the Acquisition menu).


Interface


Using pLTV From the Revenue Analysis Tool

  1. Define a cohort in Revenue Analysis and select a date range along with any standard filters.

    Revenue Analysis standard filter screenshot

  2. After you've run your cohort in Revenue Analysis, click the "Predictive LTV" button in the upper right corner of the "Revenue Curve" chart to turn on pLTV.

    Predictive LTV button screenshot

  3. Set the number of days you want to predict out using the pLTV control panel on the right. The line represents the cohort's projected average revenue per user (ARPU) on a given day. The shaded cone surrounding it represents the possible variance from the estimate.

    Revenue Analysis Variance

    Note: For cohorts that are older than the number of days you pick, no prediction will be shown.

Using Predictive LTV From the Ad Links Table

  1. The Ad Links table uses pLTV with a 90% confidence bound. Click the link at the top of the table to view LTV predictions at 30-, 60-, and 90-day intervals.

    Ad Links Confidence Bounds

  2. You can adjust the number of days for your prediction by clicking on the header of the date column you would like to modify. From here, you can select from preset day options or enter a custom number of days in the day prediction box.

    Ad Links Number of Days Box

  3. You can adjust how far into the future you want to forecast revenue for your cohort using the "Predict days since install" box.

    Predict Days since install box

Best Practices


Choosing an Appropriate Cohort Group

Choose an appropriate cohort date range for your marketing needs. For tighter confidence intervals, or if you are not receiving sufficient predictions, consider increasing the size of your cohort, either by applying fewer segmentation filters or by widening the range of install days considered.

Choosing an Appropriate Date Range

pLTV can forecast predictions up to 365 days into the future from the date of install for most apps. However, the tool cannot make effective projections for periods longer than your app has been sending monetization data to Upsight Analytics.

We recommend that you choose the prediction date range that best suits your business needs. Apps with a steady and consistent history of in-app monetization will yield the most accurate results.

Choosing an Appropriate Confidence Level

Choose a confidence level based on your business needs. For risk-tolerant decisions, lower confidence bounds can be used; for risk-averse decisions, higher confidence bounds are recommended.

FAQ


How does Upsight Analytics calculate predictive lifetime value?

Upsight Analytics' predictions consist of two parts.

First, there is the prediction itself. To make, for example, a 90-day revenue prediction, we consider some recent installation cohorts that have at least 90 days' worth of revenue. We call these the “training cohorts”. Analyzing their behavior, we may observe that, on average, 40% of those cohorts' day-90 revenue has been earned by day 10. In other words, the ratio between day-10 LTV and day-90 LTV is 0.4. Now, given a new cohort with only 10 days' worth of revenue, e.g. $10, we can expect its day-90 LTV to be 2.5 times that, or $25 in our example.
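
The extrapolation in this example can be sketched in a few lines of Python. This is an illustration of the ratio method only; the predict_ltv helper and the cohort figures are made up, not Upsight's implementation:

    # Illustrative only: extrapolate a new cohort's LTV from the average
    # early/late revenue ratio observed in older "training" cohorts.
    def predict_ltv(training_cohorts, observed_ltv, observed_day, target_day):
        # Ratio of day-`observed_day` LTV to day-`target_day` LTV for each
        # training cohort (0.4 on average in the example above).
        ratios = [c[observed_day] / c[target_day] for c in training_cohorts]
        avg_ratio = sum(ratios) / len(ratios)
        # Scale the observed revenue up by the inverse ratio (2.5x for 0.4).
        return observed_ltv / avg_ratio

    # Hypothetical training cohorts that earned ~40% of their day-90
    # revenue by day 10, and a new cohort with $10 LTV at day 10.
    training = [{10: 4.0, 90: 10.0}, {10: 4.2, 90: 10.2}, {10: 3.9, 90: 10.1}]
    print(predict_ltv(training, observed_ltv=10.0, observed_day=10, target_day=90))
    # -> roughly 25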

The second part of making a prediction is providing confidence bounds. It is unrealistic, in general, to expect the prediction to be 100% accurate. Small, unknown differences in how cohorts monetize, their rate of monetization, their willingness to spend large sums, and their churn all lead to varying lifetime values. At Upsight, we model these unknown rates using probability models fit to the observed data. Once fit to the data, a probability model inherently provides statistics such as the standard deviation and various percentile points around the prediction. Through extensive studies and data validation, we have established that our models predict confidence intervals with a high degree of accuracy.
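
The probability models themselves are proprietary, but the general idea of deriving percentile bounds from the variation across training cohorts can be illustrated with a simple bootstrap. This sketch is a stand-in for, not a reproduction of, Upsight's models:

    import random

    def bootstrap_bounds(training_ratios, observed_ltv, confidence=0.90,
                         n_resamples=10000, seed=None):
        # Resample the training cohorts' early/late revenue ratios with
        # replacement, recompute the prediction each time, and read the
        # bounds off the resulting distribution's percentiles.
        rng = random.Random(seed)
        predictions = []
        for _ in range(n_resamples):
            sample = [rng.choice(training_ratios) for _ in training_ratios]
            predictions.append(observed_ltv / (sum(sample) / len(sample)))
        predictions.sort()
        lower = predictions[int((1 - confidence) / 2 * n_resamples)]
        upper = predictions[int((1 + confidence) / 2 * n_resamples)]
        return lower, upper

    # 90% bounds around the $25 prediction from the example above.
    print(bootstrap_bounds([0.40, 0.41, 0.39, 0.42, 0.38], 10.0, seed=1))

Because the resampling is random, repeated runs of a sketch like this give slightly different bounds, which is also why the live tool's bounds can shift slightly between refreshes (see the FAQ entry on confidence bounds below).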

How much data do I need to use pLTV effectively?

Cohorts require accumulated data before predictive LTV can be used accurately. The more data the tool has to work with, the more accurate the results will be; conversely, using the tool too early will yield less accurate results. Be careful when making early decisions: 2 or 3 days of data will not yield predictions as accurate as 5 or more days.

What are Confidence Bounds?

Confidence Bounds are the upper and lower boundaries of the range in which we expect the actual value to fall a certain percentage of the time.

For example, if the confidence bounds for a 60-day LTV prediction at a 90% confidence level range from $1.00 per user to $1.50 per user, then we expect the actual 60-day LTV for that cohort of users to fall between $1.00 and $1.50 per user 90% of the time.

What does the backtest algorithm option do?

The backtest algorithm feature allows you to compare a previous prediction against the actual data that was collected. You can select any date before the current day, going back as far as one day after the install date range.

Why do I get insufficient training cohort data during backtesting, even though I see predictions?

During backtesting, Upsight aims to reproduce the predictions as they would have appeared on the selected day. As a result, the training cohorts used during backtesting differ from those used for current predictions, and in some cases this may leave insufficient historical data to make the predictions.

This can be alleviated by reducing the range of predictions to allow the algorithm to use more recent training cohorts. For example, instead of backtesting a cohort for predictions of 90 days since install, use predictions for 60 days since install.
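
The sketch below illustrates why this helps. It assumes daily install cohorts; the usable_training_cohorts helper and the dates are hypothetical, not part of the product:

    from datetime import date, timedelta

    def usable_training_cohorts(install_dates, as_of, horizon_days):
        # A cohort can serve as a training cohort for a `horizon_days`
        # prediction only if it already had that many days of revenue
        # on the (possibly backdated) prediction date `as_of`.
        return [d for d in install_dates
                if d + timedelta(days=horizon_days) <= as_of]

    installs = [date(2014, 1, 1) + timedelta(days=i) for i in range(180)]

    # Backtesting as of 2014-04-01: a 90-day horizon leaves a single
    # qualifying training cohort, while a 60-day horizon leaves 31.
    print(len(usable_training_cohorts(installs, date(2014, 4, 1), 90)))  # 1
    print(len(usable_training_cohorts(installs, date(2014, 4, 1), 60)))  # 31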

What is Partial Data?

Each part of the curve is denoted as actual data (all real data), partial data (some real data), or predictive data (no real data).

You will see partial data included in the LTV curve if your selected cohort includes multiple install dates over a date range. Depending on the days since install you select as your x-axis, some users in the cohort may not yet have had a chance to monetize within your application at the time you view the curve.
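
As an illustration, here is one way a point on the curve could be classified for a cohort whose installs span a date range. The classify_day helper and the dates are hypothetical, not Upsight's implementation:

    from datetime import date, timedelta

    def classify_day(days_since_install, install_start, install_end, today):
        if install_end + timedelta(days=days_since_install) <= today:
            return "actual"      # every user in the cohort has reached this day
        if install_start + timedelta(days=days_since_install) <= today:
            return "partial"     # only the earlier installers have reached it
        return "predictive"      # no user has reached this day yet

    start, end, today = date(2014, 3, 1), date(2014, 3, 15), date(2014, 3, 31)
    print([classify_day(d, start, end, today) for d in (10, 20, 40)])
    # -> ['actual', 'partial', 'predictive']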

Why am I not seeing a prediction?

There are two possibilities that can prevent a prediction from appearing.

  1. None of the cohorts you have selected may have enough data to make a prediction. To establish a trend, multiple transactions are required across the cohorts being predicted.

    Example: A cohort of users in Canada who installed in the month of July may not have enough data to establish a trend. Removing one of the filters, however, makes it more likely that there will be enough transactions across the cohorts.

  2. There may not be enough historical data to accurately create a prediction, possibly because your app has not been running for a sufficient amount of time to collect data. It is important to note that Predictive LTV can only make a prediction that is less than or equal to the length of time that we have been receiving significant monetization data through MTU messages.

    Example: If the application only began sending MTU messages in February, a prediction for a January cohort will not be possible because there is no monetization data for that cohort. Likewise, making a 60- or 90-day prediction for March will not be possible, since there will only have been monetization data for under 30 days.

Why is my prediction taking so long to process?

The Predictive LTV tool uses a proprietary algorithm that processes a large amount of data, so please be patient during the calculation. If you experience a timeout error, try refreshing the page. If you are not receiving any predictions, please contact Upsight Support; we can process your predictions offline for you or make adjustments. We appreciate all feedback and want to hear from you!

Why are the 90-day predictions lower than the 60-day predictions on the Ad Links table?

When making predictions for the Ad Links table, we always use the most recent available training cohorts. That means the 90-day and 60-day predictions (or, in fact, any two predictions) use different training cohorts. In some apps, there is a measurable difference over the lifetime of the app in how users interact and monetize. For example, after a major update, you may begin to see your users monetizing over a longer period of time, churning less, etc. These changes take some time to propagate through Predictive LTV’s algorithm, so they can cause predictions for longer time frames (which use older training cohorts) to be lower than predictions for shorter time frames (which use more recent training cohorts that reflect the change).

Why do the confidence bounds change after I refresh?

The prediction algorithm involves stochastic modeling of the data, so you may see slightly different results on repeated runs. This is normal, and may show up as differences of up to 1-2%. If the confidence bounds change by a significant amount (i.e. more than 2%), please let us know.