4 ways you’re misinterpreting your product data

Published on February 24, 2022


There’s no doubt that in today’s data-driven digital landscape, the amount of raw data a product manager encounters on a daily basis is enormous. Analytical product managers and their team members alike spend hours staring at dashboards and analyzing data in a never-ending quest to discover valuable insights about how users interact with their product.

Data is gathered, presented, and converted into growth opportunities. While this sounds seamless in theory, the sheer amount of data a product manager must digest and decipher is overwhelming, and it creates an environment where growth opportunities are often missed or, worse, data is misinterpreted.

Here are 4 examples.

Specific segments are underperforming

Oftentimes, companies look to aggregated metrics and averages to gain insights into the under- and over-performing segments within their product. However, because there are so many segments and KPIs, the number of possible combinations is enormous, making it incredibly difficult to identify the specific segments that deserve special product treatment.

For instance, one of the companies we worked with had a freemium-to-paid conversion rate that wouldn’t move despite their massive efforts. That is, until they turned to Loops to help identify the specific segments (as well as combinations of segments) that were underperforming. In the end, we helped them identify a declining trend for a specific ‘hidden’ vertical: the company had added a questionnaire about users’ needs to its onboarding flow, yet the questionnaire was not built for this vertical.

While the team was asking questions about users’ needs, this particular flow simply wasn’t the right ‘answer’ for that vertical. After adjusting the onboarding flow, we saw a 500% uplift for this segment and a major uplift in the overall conversion rate.
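
To make the combinatorial problem above concrete, here is a minimal sketch of the kind of brute-force segment scan involved. It assumes a hypothetical user-level table with a few segment columns and a 0/1 conversion flag; it illustrates the technique in general, not the Loops implementation.

```python
# Minimal sketch of a brute-force segment scan (illustrative only, not Loops' method).
# Assumes a hypothetical DataFrame with one row per user, a few segment columns,
# and a 0/1 `converted` column.
from itertools import combinations
import pandas as pd

def underperforming_segments(df, segment_cols, converted_col="converted",
                             min_users=200, threshold=0.8):
    """Flag segment combinations converting below `threshold` * the overall rate."""
    overall = df[converted_col].mean()
    flagged = []
    for r in (1, 2):  # single attributes and pairs; the space explodes as r grows
        for cols in combinations(segment_cols, r):
            stats = df.groupby(list(cols))[converted_col].agg(["mean", "size"])
            lagging = stats[(stats["size"] >= min_users) &
                            (stats["mean"] < threshold * overall)]
            for idx, row in lagging.iterrows():
                key = idx if isinstance(idx, tuple) else (idx,)
                flagged.append((dict(zip(cols, key)), row["mean"], int(row["size"])))
    return overall, flagged

# Usage (hypothetical columns):
# overall, flagged = underperforming_segments(df, ["vertical", "platform", "country"])
```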

High-value, low-adoption features

As a product becomes more and more complex, it can become difficult for users to discover its features, and thus they overlook its value. In fact, studies have shown that ~80% of features are rarely or never used.

To compensate, companies usually look at their most popular features in their data and bombard new users with exhaustive onboarding flows, unnecessary pop-ups, or ill-placed walk-throughs and educational tutorials.

But correlation is not causation. The fact that some features are more popular does not necessarily mean that they drive long-term retention. A much better approach is to focus on a small number of use cases that, from a causal perspective, actually lead to higher retention.

Our platform allows companies to find the specific journeys (or features) that set users up for success by leveraging causal inference models. Companies like Uber, Netflix, and LinkedIn use these models to make product decisions without relying solely on correlation (more on that in a separate blog post).

For example, a company we worked with was able to change its first-session experience and focus only on the use cases identified as retention drivers. They leveraged a new push-notification flow, onboarding screens, and a motivational boost, and improved their second-week retention by 13% with a single experiment.
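
For readers who want to see what a causal estimate of “does this feature drive retention?” can look like in practice, here is a minimal sketch using inverse propensity weighting. The column names and covariates are hypothetical, and this is a generic illustration of the technique, not the models Loops runs.

```python
# Minimal sketch of inverse propensity weighting (illustrative, not Loops' models).
# Goal: estimate whether using a feature causes higher retention, rather than
# merely correlating with it. All column names below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def feature_retention_effect(df, feature_col, retained_col, covariates):
    """Average treatment effect of feature usage on retention, via IPW."""
    X = df[covariates].to_numpy()
    used = df[feature_col].to_numpy()
    retained = df[retained_col].to_numpy()

    # 1. Model the propensity to use the feature from pre-exposure covariates.
    propensity = LogisticRegression(max_iter=1000).fit(X, used).predict_proba(X)[:, 1]
    propensity = np.clip(propensity, 0.05, 0.95)  # avoid extreme weights

    # 2. Re-weight users so feature users and non-users become comparable.
    w_treated = used / propensity
    w_control = (1 - used) / (1 - propensity)
    return (np.sum(w_treated * retained) / np.sum(w_treated)
            - np.sum(w_control * retained) / np.sum(w_control))

# Usage (hypothetical columns):
# ate = feature_retention_effect(df, "used_search", "retained_week2",
#                                ["paid_channel", "sessions_day1", "is_ios"])
```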

Measuring the effect of your launch the wrong way

When companies look to measure the impact of a recent launch, be it a new app version or new features, they’ll sometimes do so by simply looking at how their KPIs changed post-launch. Sound familiar?

The famous pre/post conundrum: look at the KPIs in the period before the launch and compare them to the period after the launch to see the difference.

Companies typically take this approach when they can’t perform an A/B test before launching (due to a lack of resources, insufficient traffic, UX constraints, etc.). Regardless of the reason, this is a gamble and a tricky way to measure the impact of a product launch. Why?

  1. External factors such as seasonality, other launches, etc. influence the results.
  2. Selection bias: users who adopted the new feature are by definition more likely to retain, engage, or convert; they come with higher intent, and that’s why they use the new feature.

The Loops platform gives companies the ability to run simulated A/B tests without actually running them. By leveraging causal inference models, we minimize the effect of selection bias and other external factors, so companies can finally understand the actual impact of their efforts.
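
As an illustration of what correcting a naive pre/post comparison can look like, here is a minimal difference-in-differences sketch. It assumes you have a comparison group that didn’t get the launch (say, a market or platform where it shipped later), which nets out seasonality and other shared external factors. This is a generic textbook technique, not Loops’ exact method, and the column names are hypothetical.

```python
# Minimal difference-in-differences sketch (a textbook technique, not Loops' method).
# Assumes a hypothetical user-level table with a 0/1 `exposed` flag (got the launch),
# a 0/1 `post` flag (observed after the launch date), and a KPI column.
import pandas as pd

def diff_in_diff(df, group_col="exposed", period_col="post", kpi_col="converted"):
    means = df.groupby([group_col, period_col])[kpi_col].mean()
    exposed_change = means[(1, 1)] - means[(1, 0)]
    control_change = means[(0, 1)] - means[(0, 0)]
    # The launch effect is the exposed group's change beyond the background trend.
    return exposed_change - control_change

# Usage (hypothetical data):
# effect = diff_in_diff(users)  # e.g. +0.012 means +1.2pp attributable to the launch
```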

Optimizing at the wrong stage of your funnel

We’ve all heard of this scenario: a company decides to optimize its funnel, takes a look, and prioritizes the stage with the biggest drop-off. They focus on this stage and immediately start optimizing, and in many cases, even if they see some uplift in that stage, the overall conversion rate doesn’t improve despite their efforts.

Why is this?

Because they just “delay” the churn of low-intent users: rather than churning in the first stage of the funnel, those users drop off a bit later. Instead, companies should focus on high-intent users, either by concentrating on later stages in the funnel where there are fewer low-intent users (e.g., the pricing page or paywall) or by identifying high-intent users who are failing.

For example, our platform runs machine learning models that segment users into low, medium, and high intent based on their behavior in the first few minutes, hours, or days, so you can focus on the right users and customize the experience for them.
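
As a rough picture of how such intent scoring can work, here is a minimal sketch that trains a classifier on early behavioral signals and buckets users into low, medium, and high intent. The feature names, label, and score cutoffs are hypothetical, and this is a generic illustration rather than the production Loops models.

```python
# Minimal intent-scoring sketch (illustrative only, not the production Loops models).
# The early behavioral signals and the conversion label below are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

EARLY_SIGNALS = ["sessions_first_day", "features_tried_first_hour", "invited_teammate"]

def score_intent(history_df, new_users_df, label_col="converted_within_30d"):
    """Fit on historical users, return low/medium/high intent buckets for new users."""
    model = GradientBoostingClassifier().fit(history_df[EARLY_SIGNALS],
                                             history_df[label_col])
    scores = model.predict_proba(new_users_df[EARLY_SIGNALS])[:, 1]
    # Bucket by predicted conversion probability; cutoffs are arbitrary placeholders.
    return pd.cut(scores, bins=[0, 0.2, 0.6, 1.0],
                  labels=["low", "medium", "high"], include_lowest=True)

# Usage (hypothetical tables):
# new_users["intent"] = score_intent(historical_users, new_users)
```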

Never misinterpret your product’s data (again)

The Loops platform uncovers hidden growth opportunities lying within your data. We take a proactive approach, surfacing opportunities and insights that are aligned with your KPIs. The platform makes sure you never misinterpret your product data and that you focus on the most impactful growth opportunities at any given moment.
