Working with a Product from an Analytic Perspective

This article was originally published on Dataconomy more than a year ago and served as the basis for my talk at Terricon Valley this month, so I'm posting it here as well.

I have been a programmer for over 14 years, with the last six years spent leading multiple diverse product teams at VK. During this time, I’ve taken on roles as both a team lead and technical lead, while also making all key product decisions. These responsibilities ranged from hiring and training developers to conceptualizing and launching features, forecasting, conducting research, and analyzing data.

In this article, I’d like to share my experiences on how product analytics intersects with development, drawing on real-world challenges I’ve encountered while working on various web projects. My hope is that these insights will help you avoid some common pitfalls and navigate the complexities of product analytics.

Let’s explore this process by walking through the major stages of product analytics.

The Product Exists, But There Are No Metrics

The first and most obvious point: product analytics is essential. Without it, you have no visibility into what is happening with your project. It’s like flying a plane without any instruments — extremely risky and prone to errors that could have been avoided with proper visibility.

I approach product work through the Jobs to Be Done (JTBD) framework, believing that a successful product solves a specific user problem, which defines its value. In other words, a product’s success depends on how well it addresses a user’s need. Metrics, then, serve as the tool to measure how well the product solves this problem and how effectively it meets user expectations.

Types of Metrics

From a developer’s perspective, metrics can be divided into two key categories:

  1. Quantitative Metrics: These provide numerical insight into user actions over a specific period. Examples include Monthly Active Users (MAU), clicks on particular screens along the customer journey, the amount of money users spend, and how often the app crashes afterwards. These metrics typically originate from the product’s code and give a real-time view of user behavior.

  2. Qualitative Metrics: These assess the quality of the product and its audience, allowing for comparison with other products. Examples include Retention Rate (RR), Lifetime Value (LTV), Customer Acquisition Cost (CAC), Average Revenue Per User (ARPU), etc. Qualitative metrics are derived from quantitative data and are essential for evaluating a product’s long-term value and growth potential.

One common mistake at this stage is failing to gather enough quantitative data to build meaningful qualitative metrics. If you miss tracking user actions at certain points in the customer journey, it can lead to inaccurate conclusions about how well your product is solving user problems. Worse, if you delay fixing this problem, you’ll lose valuable historical data that could have helped fine-tune your product’s strategy.
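To make the distinction concrete, here is a minimal sketch of deriving qualitative metrics (day-7 retention and ARPU) from a raw log of quantitative events. The column names (user_id, event, ts, revenue) and the tiny dataset are hypothetical; in practice the events would come from your own tracking.

```python
# A minimal sketch (pandas assumed) of deriving qualitative metrics from raw
# quantitative events. Column names and values are hypothetical.
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3],
    "event":   ["signup", "purchase", "signup", "open", "signup"],
    "ts":      pd.to_datetime(["2024-05-01", "2024-05-08", "2024-05-02",
                               "2024-05-03", "2024-05-04"]),
    "revenue": [0.0, 9.99, 0.0, 0.0, 0.0],
})

signups = events[events["event"] == "signup"].set_index("user_id")["ts"]

# Day-7 retention: share of signed-up users seen again 7 or more days later.
later = events.join(signups.rename("signup_ts"), on="user_id")
retained = later[(later["ts"] - later["signup_ts"]).dt.days >= 7]["user_id"].nunique()
retention_rate = retained / signups.index.nunique()

# ARPU: total revenue divided by the number of active users in the period.
arpu = events["revenue"].sum() / events["user_id"].nunique()

print(f"D7 retention: {retention_rate:.0%}, ARPU: {arpu:.2f}")
```

The point of the sketch is that none of these qualitative numbers can be computed later if the underlying user actions were never tracked in the first place.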

There Are Metrics, But No Data

Once you have identified the right metrics, the next challenge is data collection. Gathering and storing the correct data is critical to ensure that your metrics are reliable and actionable. At this stage, product managers and developers must work closely to implement the necessary changes in the project’s code to track the required data.

Common Pitfalls in Data Collection

Several potential issues can arise during the data collection phase:

  • Misunderstanding Data Requirements: Even the most skilled developers might not fully grasp what data needs to be collected. This is where you must invest time in creating detailed technical specifications (TS) and personally reviewing the resulting analytics. It’s vital to verify that the data being collected aligns with the business goals and the hypotheses you aim to test.

  • Broken Metrics: As the product evolves, metrics can break. For instance, adding new features or redesigning parts of the product can inadvertently disrupt data collection. To mitigate this, set up anomaly monitoring that detects when something goes wrong, whether the fault is in data collection or in the product itself (a minimal monitoring sketch follows this list).

  • Lack of Diagnostic Analytics: Sometimes, metrics such as time spent on specific screens, the number of exits from a screen, or the number of times users return to a previous screen are crucial for diagnosing problems in the customer journey. These diagnostic metrics don’t need to be stored long-term but can help uncover issues in key metrics or highlight areas of the product that need improvement.
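As a starting point for anomaly monitoring, even a very simple check helps: compare today's value of a metric with its recent baseline and alert on large deviations. This is a minimal sketch; the 14-day window and 3-sigma threshold are arbitrary assumptions, and real monitoring would also account for seasonality.

```python
# A minimal anomaly-monitoring sketch: flag a metric when today's value deviates
# too far from its recent baseline. Window and threshold are assumptions.
import pandas as pd

def is_anomaly(daily_counts: pd.Series, window: int = 14, sigmas: float = 3.0) -> bool:
    """daily_counts: event counts indexed by date, most recent day last."""
    baseline = daily_counts.iloc[-(window + 1):-1]   # the previous `window` days
    mean, std = baseline.mean(), baseline.std()
    today = daily_counts.iloc[-1]
    return std > 0 and abs(today - mean) > sigmas * std

# Usage: a sudden drop (e.g. a broken tracking call) should trigger the alert.
counts = pd.Series([1000, 1030, 980, 1010, 990, 1020, 1005, 995, 1015,
                    1000, 1010, 985, 1025, 1005, 120],
                   index=pd.date_range("2024-05-01", periods=15))
print(is_anomaly(counts))  # True: the last day collapsed to 120 events
```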

For maximum flexibility and accuracy, always aim to collect raw data instead of pre-processed data. Processing data within the code increases the likelihood of errors and limits your ability to adjust calculations in the future. Raw data allows you to recalculate metrics if you discover an error or if data quality changes, such as when additional data becomes available or filters are applied retroactively.

To streamline analysis without sacrificing flexibility, it can be useful to implement materialized views — precomputed tables that aggregate raw data. These views allow faster access to key metrics while maintaining the ability to recalculate metrics over time. Many analytical systems, including columnar databases like ClickHouse, which we used at VK, support materialized views, making them well-suited for handling large datasets.

Additionally, you can reduce storage requirements by isolating frequently accessed data, such as user information, into daily aggregates and joining them back into calculations when needed.
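As an illustration of both points, here is a sketch of keeping raw events intact while a ClickHouse materialized view maintains a compact daily per-user aggregate. The table and column names are hypothetical, this is not our exact production schema, and the clickhouse_driver package is assumed to be available.

```python
# A sketch of pre-aggregating raw events into a daily per-user materialized view
# in ClickHouse. Table/column names are hypothetical; clickhouse_driver assumed.
from clickhouse_driver import Client

client = Client(host="localhost")

client.execute("""
    CREATE TABLE IF NOT EXISTS events_raw (
        ts       DateTime,
        user_id  UInt64,
        event    LowCardinality(String),
        revenue  Float64
    ) ENGINE = MergeTree ORDER BY (ts, user_id)
""")

# The raw table stays untouched, so metrics can always be recalculated; the view
# is what dashboards and routine metric queries read instead.
client.execute("""
    CREATE MATERIALIZED VIEW IF NOT EXISTS events_daily
    ENGINE = SummingMergeTree ORDER BY (day, user_id)
    AS SELECT
        toDate(ts)    AS day,
        user_id,
        count()       AS events,
        sum(revenue)  AS revenue
    FROM events_raw
    GROUP BY day, user_id
""")
```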

There Is Data, But No Hypotheses

Once you have collected sufficient data, the next challenge is forming hypotheses based on the information at hand. This is often harder than it seems: with large datasets, identifying patterns and actionable insights is difficult if you are staring at overwhelming amounts of information without a clear focus.

Some Strategies for Generating Hypotheses

To overcome this challenge, here are some strategies I’ve found useful for generating hypotheses:

  • Look at the Big Picture: Historical data provides essential context. Data collected over a longer period — preferably at least two years — gives a clearer understanding of long-term trends and eliminates the noise caused by short-term fluctuations. This broader view helps in forming more accurate conclusions about your product’s health and trajectory.

  • User Segmentation: Users behave differently based on various factors such as demographics, usage frequency, and preferences. Segmenting users based on behavioral data can significantly improve your ability to forecast trends and understand different user groups. For example, using clustering algorithms like k-means to segment users into behavioral groups allows you to track how each segment interacts with the product, leading to more targeted product improvements.

  • Identify Key Actions: Not all user actions carry the same weight. Some actions are more critical to your product’s success than others. For instance, determining which actions lead to higher retention or user satisfaction can be key to unlocking growth. Tools like decision trees with retention as the target metric can help pinpoint which actions matter most within the customer journey, allowing you to optimize the most impactful areas (a sketch of both strategies follows this list).
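Below is a sketch of both strategies with scikit-learn: k-means to split users into behavioral segments, and a shallow decision tree with retention as the target to surface which actions matter most. The per-user features and the retained/churned labels are synthetic assumptions; in practice they would be aggregated from your event data.

```python
# A sketch of user segmentation (k-means) and key-action analysis (decision tree).
# Features and labels are synthetic; real ones would come from event data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
# Hypothetical per-user features: sessions per week, messages sent, purchases.
X = rng.poisson(lam=[5, 20, 1], size=(1000, 3)).astype(float)
retained = (X[:, 1] + 3 * X[:, 2] + rng.normal(0, 5, 1000) > 25).astype(int)

# 1. Segment users into behavioral groups.
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X)
)

# 2. Estimate which actions matter most for retention.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, retained)
for name, importance in zip(["sessions", "messages", "purchases"],
                            tree.feature_importances_):
    print(f"{name}: {importance:.2f}")
```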

Working with an Analyst

Data analysis is not very susceptible to errors in the code itself, unless, of course, the analysis lives in a Jupyter Notebook. When errors do creep in, a review helps, just as it does with developers' code. The trouble is that, unlike developers, analysts are usually few, often just one per team, so they rarely cross-check each other's calculations, and mistakes slip through. Either set up a review process for the analyst's work, or verify the most important calculations yourself, and try to automate the simple ones.

It is also worth asking your analyst to deliver calculations as a step-by-step log rather than a bare summary; errors are much easier to find that way. To me, a good report is one whose calculations I, or another analyst, can reproduce from start to finish.

I also encouraged analysts to question their own calculations and to state any unresolved doubts explicitly in the report, so that when necessary we could think them through together.

There Are Hypotheses, Now Test Them

Once you have formed hypotheses, the next step is to test them. Among the various methods available for hypothesis testing, A/B testing is one of the most effective in web projects. It allows you to test different variations of your product to see which performs better, helping you make informed decisions about product changes.

Benefits of A/B Testing

  • Isolation from External Factors: A/B tests allow you to conduct experiments in a controlled environment, minimizing the influence of external variables. This means that you can focus on the direct impact of your changes without worrying about audience variability or other uncontrollable factors.

  • Small Incremental Improvements: A/B testing makes it possible to test even minor product improvements that might not show up in broader user surveys or focus groups. These small changes can accumulate over time, resulting in significant overall product enhancements.

  • Long-term Impact: A/B tests are particularly useful for tracking the influence of features on complex metrics like retention. By using long-term control groups, you can see how a feature affects user behavior over time, not just immediately after launch.

Challenges of A/B Testing

Despite its advantages, A/B testing comes with its own set of challenges. Conducting these tests is not always straightforward, and issues such as uneven user distribution, user fatigue in test groups, and misinterpreted results often lead to the need for repeated tests.

In my experience conducting hundreds of A/B tests, I’ve encountered more errors due to test execution than from faulty analytics. Mistakes in how tests are set up or analyzed often lead to costly re-runs and delayed decision-making. Here are the most common issues that lead to recalculations or test restarts:

  • Uneven user distribution in test groups. Even with a well-established testing infrastructure, problems can arise when introducing a new feature.

    • The timing of when users are added to a test can be incorrect. For a product manager, a feature starts where the user sees it; for a developer, it starts where the code starts. Because of this, developers may enroll users in the A/B test too early (before they’ve interacted with the feature) or too late (missing part of the feature’s audience). This adds noise to the test results. In the worst case, you’ll have to redo the test; in the best case, an analyst can attempt to correct the bias, but you still won’t have an accurate forecast of the feature’s overall impact. (A minimal sketch of enrolling users at the moment of exposure follows below.)

    • Audience attrition in one of the groups can skew results. A likely cause is that the feature in the test group is limited in how often or for how long it can be used. For instance, the control group receives no push notifications while the test group does, but no more than once a week; if users are enrolled in the test only after the check of whether a push can be sent, the test-group audience will gradually shrink.
      Another cause with the same effect is caching: the user is enrolled in the test on the first interaction but not on subsequent ones.

In most cases, fixing these issues requires code changes and restarting the test.
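One way to make enrollment coincide with exposure is a deterministic, hash-based assignment performed exactly where the user first sees the feature. This is a minimal sketch, not our production framework: the experiment name and salt are hypothetical, and logging and rendering are stubbed with prints.

```python
# A sketch of deterministic group assignment done at the moment of exposure.
# Names are hypothetical; logging/rendering are stubbed for illustration.
import hashlib

def ab_group(user_id: int, experiment: str, salt: str = "2024-06") -> str:
    """Deterministic 50/50 split; the same user always lands in the same group."""
    digest = hashlib.sha256(f"{experiment}:{salt}:{user_id}".encode()).hexdigest()
    return "test" if int(digest, 16) % 2 == 0 else "control"

def log_exposure(user_id: int, experiment: str, group: str) -> None:
    print(f"exposure: user={user_id} experiment={experiment} group={group}")

def show_feature_entry_point(user_id: int) -> None:
    # Enroll and log only when the feature's entry point is actually rendered,
    # not earlier in the code path and not after extra eligibility checks.
    group = ab_group(user_id, "new_onboarding")
    log_exposure(user_id, "new_onboarding", group)
    if group == "test":
        print("render new onboarding")
    else:
        print("render old onboarding")

show_feature_entry_point(42)
```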

  • Errors during result analysis.

    • Insufficient sample size can prevent reaching statistical significance, wasting both time and developer resources.

    • Ending a test too early after seeing a noticeable effect can result in false positives or false negatives, leading to incorrect conclusions and poor product decisions.

In addition, conflicts with parallel tests can make it impossible to properly assess a feature’s impact. If your testing system doesn’t handle mixing user groups across tests, you’ll need to restart. Other complications, like viral effects (e.g., content sharing) influencing both test and control groups, can also distort results. Finally, if the analytics are broken or incorrect, it can disrupt everything — something I’ve covered in detail above.

Best Practices for A/B Testing

To address these issues in my teams, I’ve taken several steps:

  • During test implementation:

    • I helped developers better understand the product and testing process, providing training and writing articles to clarify common issues. We also worked together to resolve each problem we found.

    • I worked with tech leads to ensure careful integration of A/B tests during code reviews and personally reviewed critical tests.

    • I included detailed analytics descriptions in technical specifications and checklists, ensuring analysts defined required metrics beforehand.

    • My team developed standard code wrappers for common A/B tests to reduce human error.

  • During result analysis:

    • I collaborated with analysts to calculate the required sample size and test duration, taking into account the desired power and the expected effect size (see the sketch after this list).

    • I monitored group sizes and results, catching issues early and ensuring that tests weren’t concluded before P-values and audience sizes had stabilized.

    • I pre-calculated the feature’s potential impact on the entire audience, helping to identify discrepancies when comparing test results with post-launch performance.
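Here is a sketch of that pre-calculation for a conversion A/B test using statsmodels. The baseline conversion rate, minimum detectable effect, and daily traffic figures are made-up assumptions; plug in your own numbers.

```python
# A sketch of estimating sample size and duration for a conversion A/B test.
# Baseline, minimum detectable effect and traffic are made-up assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.10          # current conversion rate
mde = 0.11               # smallest uplift worth detecting (10% -> 11%)
effect = proportion_effectsize(mde, baseline)

per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)

daily_users_per_group = 2000
print(f"~{per_group:.0f} users per group, "
      f"~{per_group / daily_users_per_group:.1f} days at current traffic")
```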

By refining how tests are implemented and analyzed in this way, you can make your work with product analytics far more reliable, and I hope this guidance helps the process lead to profitable decisions.

Conclusion

In today’s competitive market, leveraging product analytics is no longer optional but essential for product teams. Adopting a data-driven mindset enables companies to gain valuable insights, enhance product performance, and make more informed decisions.

A focus on data throughout the development cycle helps companies not only address immediate challenges but also achieve long-term success. In other words, a data-driven approach unlocks the true potential for product innovation and sustainable growth. I genuinely hope that the information provided here will help you make your work with product analytics more effective and profitable!


Written by Alexander Kolobov
FullStack Developer, Team Lead, Product Manager, posting notes about my development experience