How much impact has your product team had? 🤔

That's a surprisingly tricky question to answer.

Sure, A/B tests are great …
… assuming you've got the traffic …
… and you're measuring a single feature.

But A/B tests aren't great for measuring performance across multiple initiatives or several teams. A/B test results do NOT stack up.

• Five separate 2% uplifts in A/B tests do not add up to a 10% increase overall.
• Double counting is difficult to exclude.

So how do we attribute product impact? Some thoughts below, but first:

• Why do you need to know?
• What decisions will this inform?

It's easy to get sucked into a laborious process of attribution seeking the Truth. In most cases an approximation will do. Seeking a perfect answer when a "good enough" one will do is busywork.

𝟱 𝗟𝗘𝗩𝗘𝗟𝗦 𝗢𝗙 𝗔𝗧𝗧𝗥𝗜𝗕𝗨𝗧𝗜𝗢𝗡

Ideally you build up an analysis linking the leading product metrics under your control to lagging, business-critical metrics like revenue. This is done in stages:

𝟭 - 𝗩𝗜𝗕𝗘𝗦
You don't attribute. "It's a team sport, who cares?" If you're very early stage and things are going well, do you even need to attribute?

𝟮 - 𝗡𝗔𝗥𝗥𝗔𝗧𝗜𝗩𝗘
You can describe the mechanism, but not put numbers to it. "Commenting drives retention, retention drives revenue."

𝟯 - 𝗠𝗘𝗧𝗥𝗜𝗖𝗦
You have a driver tree with specific metrics, but you don't know how much moving one moves another. "Comments per day drive D30 retention. D30 retention drives revenue per active user."

𝟰 - 𝗖𝗢𝗥𝗥𝗘𝗟𝗔𝗧𝗜𝗢𝗡
Most companies aim here. You know the link between your metrics. "A 10% increase in comments per day equals a 3% increase in D30 retention and a 2% increase in revenue per active user." (A rough sketch of this kind of model follows below.)

𝟱 - 𝗛𝗢𝗟𝗗 𝗢𝗨𝗧 𝗚𝗥𝗢𝗨𝗣
You hold back all features from a subset of users for several months (like a prolonged A/B test). This is very expensive, but very rigorous.

𝗚𝗘𝗡𝗘𝗥𝗔𝗟 𝗧𝗜𝗣𝗦

• Work with an analyst / the CFO to come up with a simple model that everyone can buy into.
• It's more important to have political buy-in than an absolutely scientifically correct answer.
• Standardise the baseline so you can compare across teams and initiatives (e.g. last year's actual revenue / traffic).
• Be conscious of when you'll actually release features, and how many months' contribution they'll deliver this year. Shipping something in Q1 often means you'll see impact from Q2.
• Agree how you'll count future years. Product changes are often small but permanent, affecting all future revenue.

More on different types of quantitative testing here on Hustle Badger: https://lnkd.in/eSaJ9jt8
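To make level 4 concrete, here is a minimal Python sketch of the kind of driver-tree model described above. Every name and number in it (the elasticities, the baseline revenue, the months live) is a hypothetical assumption for illustration, not a figure from the post; in practice you would calibrate them with your analyst against your own standardised baseline.

```python
# A minimal, illustrative "level 4" driver-tree model. All numbers and names
# here (elasticities, baseline, months_live) are hypothetical assumptions,
# not figures from the post -- calibrate them from your own data.

def attributed_revenue_impact(
    metric_lift_pct: float,          # e.g. +10% comments per day from one initiative
    elasticity_to_retention: float,  # assumed: 10% metric lift -> 3% D30 retention lift => 0.3
    elasticity_to_revenue: float,    # assumed: 10% metric lift -> 2% revenue/user lift => 0.2
    baseline_annual_revenue: float,  # standardised baseline, e.g. last year's actual revenue
    months_live: int,                # months of contribution delivered this year
) -> dict:
    """Translate a lift in a leading product metric into an approximate
    revenue contribution, pro-rated for the months it is actually live."""
    retention_lift_pct = metric_lift_pct * elasticity_to_retention
    revenue_lift_pct = metric_lift_pct * elasticity_to_revenue
    full_year = baseline_annual_revenue * (revenue_lift_pct / 100.0)
    return {
        "d30_retention_lift_pct": retention_lift_pct,
        "revenue_per_user_lift_pct": revenue_lift_pct,
        "full_year_revenue_impact": full_year,
        "this_year_revenue_impact": full_year * (months_live / 12.0),
    }

if __name__ == "__main__":
    # A feature shipping at the end of Q1 contributes roughly 9 months this year.
    print(attributed_revenue_impact(10.0, 0.3, 0.2, 12_000_000, months_live=9))

    # Why A/B results don't simply stack: five independent 2% lifts compound to
    # ~10.4% at best, and double counting usually pulls the realised total lower.
    print(f"naive sum: {5 * 0.02:.1%}, compounded: {1.02 ** 5 - 1:.1%}")
```

Run as a script, it prints the implied retention and revenue lifts plus the pro-rated contribution for this year; the short demo at the end also shows why separate A/B uplifts don't simply add up.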
More Relevant Posts
-
As with any new product or idea, one of the best ways to verify whether the idea you are working on has any merit is by talking to customers. But before you can do that, you as a product manager need some data or a hypothesis about the problem you want to solve for the customer.

In my last post I spoke about how you could use existing product usage or drop-off data for this. But what happens when you don't have an existing product or service to look to for data, especially when you are a new startup?

While it is understandable that you won't have any usage or drop-off data, I still would not recommend going into a conversation without any data at all. If you do, the conversation tends to drift in directions that won't help you identify a problem to solve and build a product.

To make sure customer conversations stay on track, I advise having at least a hypothesis, if not actual data, in hand. But figuring out a hypothesis can be tricky. While this "hypothesis" can be a figment of your imagination, I've rarely seen that work in a product manager's favor. What you can do is look at data to arrive at a hypothesis about the customer problem and then take it to the customer.

This is easier for consumer businesses, where you have a fairly large base of customers or potential customers who are individually the users and decision makers for the product or service. For B2B businesses, the best way to get this data is to look at market trends, followed by the current processes your target customer follows to deliver their products to their own customers. This market trend data can usually be found in publicly available industry reports and in products that already serve the customer base.

#productmanagement #customertalk #problemidentification #productdevelopment #hypothesis
-
Baselines are essential for measuring improvements. Without clear impact, products or features fail, senior stakeholders lose interest, and focus shifts elsewhere for traction and revenue.

In B2C products, most metrics are measurable. User behavior is directly linked to business outcomes, with a consensus on what, why, and how to measure. Baselines can be established.

But what happens when baselines are unclear or don't exist? Measuring KPIs to determine whether a product is adding value or trending towards failure isn't always straightforward or agreed upon. Stakeholders often lack consistency across different types of products.

In my past experience working on niche 0-to-1 solutions, we always set qualitative and quantitative metrics aligned with stakeholder outcomes. These metrics showed clients and users how our solution brought value, saved money, and improved efficiency. Yet measuring KPIs was far harder than in the B2C space. Here's why:

1 - In certain environments, especially built ones, baselining KPIs is tough. For instance, reducing a backlog of maintenance tickets was a goal in one project, but we discovered 30% of maintenance issues weren't recorded in the system, making it impossible to set a baseline. Organizations needed time to adapt, and without immediate change, showing improvement became difficult.

2 - Some important metrics aren't measured simply because no consistent method exists. One organization wanted to track user satisfaction, but the factors influencing satisfaction were too complex to measure systematically, unlike in B2C apps where the environment is more controlled.

3 - The value of some products accumulates over time, making it hard to attribute short-term outcomes.

4 - Client priorities can change mid-project, altering which KPIs matter most and leaving previously established baselines less relevant.

In complex environments like the built world, setting accurate KPI baselines is a challenge. Without a clear starting point, demonstrating progress is difficult. Product teams need to innovate in data collection and baseline setting to ensure meaningful measurement and decision-making.

We, as product managers, want to show clients the impact of our products, but we need their help to establish baselines using their data, and that process takes time. Start baselining today!

#digitaltransformation #productmanagement #kpis
-
No matter how many times I've done it, I am often at a loss when creating good (actionable, measurable, feasible) KPIs. What I liked about the framework in this link is that it is product-oriented, placing you in the right mindset for designing indicators.

Selecting the right product metrics (KPIs): https://buff.ly/43vwF4h
-
𝗟𝗲𝗮𝗱𝗶𝗻𝗴 𝘃𝘀. 𝗟𝗮𝗴𝗴𝗶𝗻𝗴 𝗠𝗲𝘁𝗿𝗶𝗰𝘀: 𝗛𝗼𝘄 𝘁𝗼 𝗕𝗮𝗹𝗮𝗻𝗰𝗲 𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀 𝗳𝗼𝗿 𝗣𝗿𝗼𝗱𝘂𝗰𝘁 𝗦𝘂𝗰𝗰𝗲𝘀𝘀

In product analytics, metrics drive decisions, but not all metrics are created equal. To make your product thrive, it's crucial to leverage both leading metrics that predict future success and lagging metrics that confirm outcomes. Here's why mastering these metrics matters for your business:

• 𝗟𝗲𝗮𝗱𝗶𝗻𝗴 𝗠𝗲𝘁𝗿𝗶𝗰𝘀: your product's early warning system. Leading metrics are predictive. They highlight trends and behaviors that signal future performance, allowing teams to take proactive steps.

𝗘𝘅𝗮𝗺𝗽𝗹𝗲𝘀 𝗼𝗳 𝗟𝗲𝗮𝗱𝗶𝗻𝗴 𝗠𝗲𝘁𝗿𝗶𝗰𝘀:
✅ Activation Rate: percentage of users completing onboarding. (A smooth onboarding flow predicts higher retention.)
✅ Feature Adoption Rate: how many users engage with new features. (Higher engagement predicts sustained usage.)
✅ Time to First Value (TTFV): how quickly users gain value from your product.

• 𝗟𝗮𝗴𝗴𝗶𝗻𝗴 𝗠𝗲𝘁𝗿𝗶𝗰𝘀: the scorecard of success. Lagging metrics are reflective. They measure the outcomes of past efforts and provide a clear picture of product performance.

Examples of Lagging Metrics:
✅ Revenue: total income generated from your product.
✅ Retention Rate: percentage of users returning over time.
✅ Customer Lifetime Value (CLV): average revenue generated per user over their lifecycle.

𝘏𝘰𝘸 𝘵𝘰 𝘜𝘴𝘦 𝘉𝘰𝘵𝘩 𝘔𝘦𝘵𝘳𝘪𝘤𝘴 𝘧𝘰𝘳 𝘔𝘢𝘹𝘪𝘮𝘶𝘮 𝘐𝘮𝘱𝘢𝘤𝘵

1️⃣ Predict and act with leading metrics: track onboarding completion rates to optimize the customer journey. Monitor feature usage to decide which features to invest in further.
2️⃣ Reflect and learn with lagging metrics: use retention rate to validate if onboarding changes improve long-term engagement. Analyze revenue growth to assess the effectiveness of pricing changes.
3️⃣ Balance both for a complete view: leading metrics drive short-term actions, while lagging metrics evaluate long-term outcomes. Together, they form a feedback loop that ensures continuous improvement.

(See the sketch below for a minimal example of computing one metric of each kind.)

𝘉𝘺 𝘧𝘰𝘤𝘶𝘴𝘪𝘯𝘨 𝘰𝘯 𝘭𝘦𝘢𝘥𝘪𝘯𝘨 𝘢𝘯𝘥 𝘭𝘢𝘨𝘨𝘪𝘯𝘨 𝘮𝘦𝘵𝘳𝘪𝘤𝘴, 𝘺𝘰𝘶𝘳 𝘱𝘳𝘰𝘥𝘶𝘤𝘵 𝘵𝘦𝘢𝘮 𝘤𝘢𝘯:
📊 Drive proactive decisions: identify potential risks before they escalate.
📈 Deliver results: validate strategies with tangible outcomes.
🚀 Align with business goals: ensure product performance directly impacts revenue, retention, and customer satisfaction.

Metrics are more than numbers: they're the map to your product's future.

#ProductManagement #ProductAnalytics #MetricsMatter #DataDrivenDecisions #BusinessGrowth
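As a concrete illustration of the two kinds of metric, here is a small Python sketch that computes one leading metric (activation rate) and one lagging metric (D30 retention) from a toy event log. The event schema, event names and the 30-day definition are assumptions made up for this example; real definitions vary by product and analytics stack.

```python
# A minimal sketch of computing one leading and one lagging metric from raw
# events. The (user_id, event, timestamp) schema and thresholds are
# hypothetical assumptions for illustration, not a specific analytics API.
from datetime import datetime, timedelta

events = [
    # (user_id, event_name, timestamp) -- toy data
    ("u1", "signup",          datetime(2024, 1, 1)),
    ("u1", "onboarding_done", datetime(2024, 1, 1)),
    ("u1", "active",          datetime(2024, 2, 5)),
    ("u2", "signup",          datetime(2024, 1, 2)),
]

signups = {u for u, e, _ in events if e == "signup"}

# Leading: activation rate = share of signups that completed onboarding.
activated = {u for u, e, _ in events if e == "onboarding_done"}
activation_rate = len(activated & signups) / len(signups)

# Lagging: D30 retention = share of signups seen active 30+ days after signup.
signup_time = {u: t for u, e, t in events if e == "signup"}
retained = {
    u for u, e, t in events
    if e == "active" and t >= signup_time[u] + timedelta(days=30)
}
d30_retention = len(retained) / len(signups)

print(f"activation rate: {activation_rate:.0%}, D30 retention: {d30_retention:.0%}")
```

In a real product these numbers would come from your analytics warehouse rather than an in-memory list, but the shape of the calculation is the same.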
-
Last week, I shared why unshipping features can be a win for your product. Someone asked me, "It's important but tough. Any advice on making the decision?"

They're absolutely right: it's a tough call. But in my experience, it's also one of the most strategic decisions you can make. When my team faced this challenge, I gathered them to collaborate on these 3 key areas:

1️⃣ Our perspectives: we shared why we thought certain features weren't being used (based on our assumptions).
2️⃣ Feature usage analytics: we looked at feature usage over specific time frames (last quarter, last semester, and last year) to get hard data. (A rough sketch of this kind of usage summary follows below.)
3️⃣ Opportunity cost: we evaluated the effort required to maintain those features vs. the potential upside.

But here's what moved the needle: keeping everyone aligned with our North Star. I focused on ensuring the team was aligned with the strategy we wanted to follow. The combination of the data-driven report and a clear direction helped build momentum with stakeholders.

There were tough conversations, especially with those experiencing HiPPO FOMO, but the evidence gave us confidence. In the end, we unshipped 95% of that feature list, and the product was better for it.

My takeaway: gather evidence, be bold, but don't underestimate the value of stakeholder alignment.

What about you? How do you decide when to eliminate features? 🚀
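For the usage-analytics step, below is a rough Python sketch of the kind of per-feature summary that can surface unship candidates. The data model, the 2% usage threshold and all the figures are hypothetical assumptions for illustration, not the actual analysis from this story; the strategic and stakeholder judgement described above still decides the final call.

```python
# A rough sketch: per-feature usage over recent windows plus a maintenance-cost
# estimate, used to flag unship candidates. Thresholds, field names and numbers
# are hypothetical assumptions, not real data.
from dataclasses import dataclass

@dataclass
class FeatureStats:
    name: str
    users_last_quarter: int
    users_last_year: int
    maintenance_days_per_quarter: float  # engineering effort to keep it alive

def unship_candidates(features, total_active_users, usage_threshold=0.02):
    """Flag features used by under `usage_threshold` of active users last
    quarter, ranked by how much maintenance effort unshipping would free up."""
    flagged = [
        f for f in features
        if f.users_last_quarter / total_active_users < usage_threshold
    ]
    return sorted(flagged, key=lambda f: f.maintenance_days_per_quarter, reverse=True)

if __name__ == "__main__":
    catalog = [
        FeatureStats("legacy_export", 120, 900, 6.0),
        FeatureStats("comments", 41_000, 140_000, 3.0),
        FeatureStats("old_dashboard", 300, 2_500, 9.0),
    ]
    for f in unship_candidates(catalog, total_active_users=50_000):
        print(f.name, f.maintenance_days_per_quarter)
```

The output is only an evidence table; as the post argues, alignment on strategy is what turns it into a decision.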
-
What's the difference between a leading indicator and an input metric?

TL;DR: It's jargon. And I'm guilty of using jargon.

Longer answer: there are two wordings out there.

1️⃣ Input/Output
An input metric leads to an output metric. That output metric is itself the input metric of the next output metric, and so on. An output metric is an end goal, like revenue. Or, one step smaller: retention. Or smaller: engagement. Or smaller: activation. Or smaller: signup. Or smaller: traffic. You see where I'm going? I'm working backwards and listing out input metrics. But wait! Aren't these rather "leading indicators"?

2️⃣ Leading/Lagging
A leading indicator leads to its own lagging indicator. That lagging indicator is the leading indicator of the next lagging indicator, which is the leading indicator of the one after that, and so on. We could create the same list starting from revenue all the way down to traffic. (A toy sketch of this chain follows below.)

❓ So what is the difference then?

If you ask me, it's just the bell each term rings in our heads.

When we say "input metric", we immediately think of the connections between input, output, outcome, and impact. This thinking can help us organize our work as much as it can help us find the right metrics to measure and monitor.

When we say "leading indicator", we think more in terms of customer journeys and behavior, and how we can translate business outcomes into user outcomes.

But in essence they are the same concept: breaking a big, fat, difficult-to-move metric down into metrics that are easier to measure and influence, and that correlate with that big fat metric, so that improving them improves the big fat number.

Inspired by John Cutler's recent post.

#productmanagement #BCnoBSprodmgmt #productMetrics #productAnalytics
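Here is a toy Python sketch of that chain, where each metric is the output of the previous step and the input to the next. The conversion rates and the revenue-per-user figure are made-up assumptions purely to show the structure.

```python
# A toy illustration of the chain described above: each metric is the "output"
# of the step before it and the "input" to the next. All rates are
# hypothetical assumptions for illustration.
funnel = [
    ("traffic",    100_000),   # starting input: visitors
    ("signup",     0.08),      # assumed: 8% of traffic signs up
    ("activation", 0.55),      # assumed: 55% of signups activate
    ("engagement", 0.60),      # assumed: 60% of activated users engage weekly
    ("retention",  0.40),      # assumed: 40% of engaged users retained at D30
]
revenue_per_retained_user = 12.0  # assumed

count = funnel[0][1]
print(f"{funnel[0][0]}: {count:,.0f}")
for name, rate in funnel[1:]:
    count *= rate              # output of one step becomes input to the next
    print(f"{name}: {count:,.0f}")
print(f"revenue: ${count * revenue_per_retained_user:,.0f}")
```

Whichever vocabulary you prefer, the structure is the same: improving any upstream step propagates down to the big number at the end.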
-
Understanding and using the right metrics can be a game-changer for any company.

Many organizations look to growing financial metrics as a mark of success; increasing ARR and retention rate are common. While these trailing indicators offer insights into past performance, sometimes we need to orient ourselves to shorter-term metrics that can predict this future success.

Successful companies focus on driving leading indicators through product work. These metrics predict future outcomes and guide proactive decisions. For example, tracking user behaviors can signal when clients are considering competitors, giving you a chance to intervene before losing them.

Good metrics aren't just numbers: they're tools that drive action and guide strategy. We don't want to always be looking at things that go up and to the right. We want to diagnose where issues are in our metrics and learn how to react.

To be effective, good metrics should be:
✅ Clear: easy to understand for stakeholders across the company.
✅ Comparable: useful across periods or user segments to track progress.
✅ Actionable: directly tied to decisions that drive improvement and growth.

By refining your approach to metrics and aligning them with your goals, you're not just collecting data. You're building stronger client relationships and setting your company up for lasting success.

What metrics have been most valuable in your product decisions? Let me know in the comments!

#productinstitute #businessmetrics #goodmetrics #productmanagement #metrics #productleaders
Attributing impact in product teams can indeed be tricky. Your breakdown of attribution levels is a fantastic guide for navigating this complexity. Great insights on balancing rigorous analysis with practical approximations!