Test out this simple A/B test on your product/service page!

The Experiment:
- Sample Size: 1,655 visitors – a representative slice of our site's traffic.
- Control Version: "Select a Plan"
- Test Variation: "Select My Plan"

Hypothesis: We believed that personalizing the call-to-action button text would increase user engagement by making it feel more tailored to individual users.

Results: The personalized "Select My Plan" button led to a 10.25% increase in clicks compared to the control. The increase might seem modest, but small changes like this accumulate over time and can have a significant impact on overall performance. Conversely, the bigger the change you test, the bigger the impact you're likely to see, so keep that in mind when running A/B tests.

Note: We didn't use statistical tests like a p-value or confidence intervals for this experiment. Sometimes, leveraging your understanding of your audience and trusting your instincts can guide your optimizations just as effectively. Give it a shot!
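For anyone who does want to sanity-check a lift like this one, here is a minimal sketch of a two-proportion z-test in Python. The click counts below are hypothetical placeholders; the post only reports ~1,655 total visitors and a 10.25% relative lift, so the exact split and numbers are assumptions.

```python
# Hypothetical sanity check for a two-variant CTA test.
# The visitor split and click counts are made-up placeholders; the post
# only reports ~1,655 total visitors and a ~10% relative lift in clicks.
from statistics import NormalDist

def two_proportion_ztest(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for the difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)            # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided p-value
    return p_a, p_b, z, p_value

# Placeholder numbers: roughly a 50/50 split of 1,655 visitors, ~10% lift.
p_a, p_b, z, p = two_proportion_ztest(clicks_a=160, n_a=828, clicks_b=177, n_b=827)
print(f"control CTR={p_a:.3f}, variant CTR={p_b:.3f}, z={z:.2f}, p={p:.3f}")
```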
-
Let's talk about a tool that's often underestimated: A/B testing! It's amazing how a simple A/B test can revolutionize your website's performance. Whether it's changing the color of a CTA button or tweaking the headline text, each small change can drive significant improvements in user engagement and conversions. But here's where it gets interesting—data sometimes reveals surprising things about user preferences that defy conventional wisdom. I recommend frequently revisiting your assumptions with fresh tests. Have you ever been surprised by test results? Share your insights or questions below! Let's learn from each other's experiences in optimizing for success.
-
Your marketing or product team wants to personalize the user experience. Should you reach for a recommendation system or a contextual bandit algorithm? I wasn't sure of the distinction either! So I bothered Sven Schmit, repeatedly, to help me learn the ideal use cases for each and the key considerations that can help you pick the right tool for the job. Then I wrote it all up in a handy blog post for everyone else :)

The tl;dr - here's what you'll need to consider:
- How big is the problem at hand? Does it warrant the heft of a recommendation system, or are there fewer arms and decisions to make?
- Do you have enough historical data to overcome a cold-start problem? If not, you'd be best served by a contextual bandit.
- How many times do we need to make this decision for each user?

You can read through all these distinctions (and a bunch more color) on the Eppo blog: https://lnkd.in/dUvKpdAy
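To make the contextual-bandit option concrete, here is a minimal sketch of a contextual epsilon-greedy policy, assuming a handful of hypothetical arms and coarse user segments. It is not drawn from the Eppo post linked above or any particular product; it just illustrates the explore/exploit loop being discussed.

```python
# Minimal contextual epsilon-greedy bandit sketch (illustrative only).
# Arm names, contexts, and the reward model are hypothetical.
import random
from collections import defaultdict

ARMS = ["variant_a", "variant_b", "variant_c"]
EPSILON = 0.1  # fraction of traffic spent exploring

# Per-(context, arm) running click statistics.
clicks = defaultdict(int)
views = defaultdict(int)

def choose_arm(context):
    """Pick an arm for this context: mostly exploit, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(ARMS)
    # Exploit: highest observed click rate for this context so far.
    return max(ARMS, key=lambda a: clicks[(context, a)] / max(views[(context, a)], 1))

def record_outcome(context, arm, clicked):
    """Update statistics after observing whether the user clicked."""
    views[(context, arm)] += 1
    clicks[(context, arm)] += int(clicked)

# Simulated traffic: the context is a coarse user segment (hypothetical).
for _ in range(10_000):
    context = random.choice(["new_visitor", "returning_visitor"])
    arm = choose_arm(context)
    base = 0.10 if context == "new_visitor" else 0.15   # placeholder reward model
    clicked = random.random() < base + 0.02 * ARMS.index(arm)
    record_outcome(context, arm, clicked)
```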
-
The conclusions are correct, but comparing experimentation vs. MAB vs. recommendation systems on these dimensions without calling out the fact that they serve completely different purposes is misleading. cc: Sven Schmit Giorgio Martini

It's like comparing the material of the handle or the weight when you compare a hammer vs. a screwdriver. Sure, such comparisons will be seemingly "correct", but they miss the point. I'd suggest at least adding this row at the top:

Main purpose:
- Experiment: Hypothesis testing
- MAB: Iterative allocation of traffic
- Recommendation: Better matching of supply and demand through personalized features
-
Butterfly Effect: How Core Updates Make Subtle User Intent Changes https://lnkd.in/dYtNCxeW
-
When to use bandits has a simple answer (“When you want to optimize copy for a short-lived promotion, like Black Friday.”) and a right answer. If you are interested in the latter, Ryan Lucht and Sven Schmit have you covered.
-
The difference between Multi-Armed Bandits and Recommendation Systems

This is inspired by my discussion with Sven Schmit in this post: https://lnkd.in/ggcan9yJ

Sven's comment -- "both recommendation systems and contextual bandits can personalize experiences, so it seems reasonable to compare the two and highlight in which scenarios each of them is most appropriate." -- signals a common misconception about MAB versus recommendation systems.

At first glance, MAB and recommendation systems appear to share the same goal of dynamically finding the best-performing variant to present to customers. However, the crucial difference lies in the details.

MAB dynamically allocates "overall" traffic. That is, MAB treats users roughly the same, simply identifying which variant performs best "on average" and directing the optimal share of users to that arm. Recommendation systems, on the other hand, emphasize "personalization": each user, with their features (the right-hand-side variables in an ML model, not product features), goes through the ranking algorithms and is served the best choice individually.

Indeed, you can incorporate these features into a MAB, which then falls under the reinforcement learning paradigm, for more "individualized" allocation. But at that point, you shouldn't limit yourself to MAB and should instead build an optimized recommendation system from first principles. Optimizing an ML system in production is hard; you shouldn't impose such a structural constraint on your Machine Learning Engineers / Applied Scientists without a clear reason.

As for an analogy, consider the difference between a chef's tasting menu and a bespoke meal crafted just for you. A Multi-Armed Bandit (MAB) is like a chef who prepares a tasting menu based on which dishes have been most popular with diners on average. The chef dynamically adjusts the menu to ensure the overall satisfaction of the restaurant's clientele, but each diner receives the same selection. A recommendation system, on the other hand, is akin to a chef who consults with each diner individually, notes their preferences, dietary restrictions, and past dining experiences, and then prepares a meal tailored just for them. Each person's meal is optimized for their personal enjoyment, much like a recommendation system personalizes experiences by considering the unique characteristics of each user.

In practice, you would seldom face the choice of MAB vs. recommendation systems. Hence my point: understand the different purposes these systems serve before you chase differences in irrelevant details.
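To illustrate the "tasting menu" case, here is a minimal sketch of a plain (non-contextual) Beta-Bernoulli Thompson sampling bandit: every user is served from the same posterior over variants, with no per-user features involved. The variant names and conversion rates are hypothetical.

```python
# Plain (non-contextual) Thompson sampling sketch: learns which variant wins
# "on average" and serves every user identically, per the tasting-menu analogy.
# Variant names and true conversion rates below are hypothetical.
import random

variants = ["dish_a", "dish_b", "dish_c"]
successes = {v: 1 for v in variants}  # Beta(1, 1) priors
failures = {v: 1 for v in variants}

def pick_variant():
    """Sample a plausible conversion rate per variant; serve the best sample."""
    samples = {v: random.betavariate(successes[v], failures[v]) for v in variants}
    return max(samples, key=samples.get)

def update(variant, converted):
    if converted:
        successes[variant] += 1
    else:
        failures[variant] += 1

# Simulated diners, all treated identically (no per-user features).
true_rates = {"dish_a": 0.10, "dish_b": 0.14, "dish_c": 0.11}  # hypothetical
for _ in range(5_000):
    v = pick_variant()
    update(v, random.random() < true_rates[v])

print({v: round(successes[v] / (successes[v] + failures[v]), 3) for v in variants})
```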
-
💥 Want to get deeper insights into your website’s performance? Here are three A/B testing methods to consider:
🔹 Single Element Testing
🔹 Multivariate Testing
🔹 Redirect or Split URL Testing
Each method offers a unique view of user preferences. Which one are you curious to try? #ABTesting #WebDevelopment #UXResearch
-
5 mistakes that prevent you from CRO success (and a 6-step solution to follow)

→ Not running tests long enough.
→ Ignoring statistical significance.
→ Testing without a clear hypothesis.
→ Failing to prioritise high-impact tests.
→ Testing without understanding your users.

Here is a top-down approach in 6 steps:

Step 1: Research
→ Understand user pain points and motivations.
→ Review competitor websites for UX gaps and opportunities.
→ Analyse user behaviour to identify drop-offs and friction points.

Step 2: Hypothesis
→ Identify specific areas for improvement from research findings.
→ Create testable hypotheses linked to clear conversion goals.
→ Define expected outcomes and key metrics for success.

Step 3: Planning
→ Prioritise tests using the ICE (Impact, Confidence, Ease) framework.
→ Define goals, target audience, and test duration.
→ Prepare test variations for execution.

Step 4: Testing
→ Monitor performance regularly and address any anomalies.
→ Avoid launching tests that might overlap or affect one another.
→ Ensure no changes are made to the testing environment during the test.

Step 5: QA
→ Check audience targeting, variation display, and functionality.
→ Verify accurate data collection and consistent tracking.
→ Test across all relevant devices and browsers.

Step 6: Analysis
→ Measure results with statistical significance.
→ Analyse data across different user segments.
→ Validate or reject the hypothesis based on the results.

Is your CRO strategy missing any of these steps?
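As a concrete illustration of the Step 3 prioritisation, here is a minimal ICE scoring sketch. The candidate tests and their 1-10 scores are made up, and averaging the three scores is just one common convention (some teams multiply instead).

```python
# Minimal ICE (Impact, Confidence, Ease) prioritisation sketch.
# The candidate tests and their 1-10 scores are hypothetical examples.
candidate_tests = [
    {"name": "Rewrite hero headline",    "impact": 8, "confidence": 6, "ease": 9},
    {"name": "Simplify checkout form",   "impact": 9, "confidence": 7, "ease": 4},
    {"name": "Change CTA button colour", "impact": 3, "confidence": 5, "ease": 10},
]

for test in candidate_tests:
    # One common convention: average of the three 1-10 scores.
    test["ice"] = (test["impact"] + test["confidence"] + test["ease"]) / 3

# Run the highest-scoring tests first.
for test in sorted(candidate_tests, key=lambda t: t["ice"], reverse=True):
    print(f'{test["ice"]:.1f}  {test["name"]}')
```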
-
Butterfly Effect: How Core Updates Make Subtle User Intent Changes https://lnkd.in/gmakAr_i
-
This is how I got a ~30% click rate on an email campaign to validate a new idea, plus a 17% conversion rate to sign-up.

1. Created 3 ICPs that differ by industry.
2. Set up 12 domains & 12 mailboxes.
3. Wrote 4 subject lines & 4 body emails (16 A/B test combinations).
4. Identified 1 industry with significant results & 6 golden emails (high open rate & click rate).
5. Doubled down on the data as the mailboxes warmed up.

Now? Calling everyone who clicked to get their feedback on the entire user journey. After that? Rinse and repeat.

This is my journey of building a product in public. This week I'm focusing on the following user behaviors:
1. What type of platform do users trust the most? (Extension, provider plugin, provider integration)
2. Would they refer us to their colleagues to be the first to try it?
3. How are sellers going to react to automatic replies?

Building in public is fun.
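For readers curious how one might tally a 4 x 4 subject/body grid like this to surface the "golden emails", here is a minimal sketch. All counts are randomly generated placeholders, since the post doesn't share raw numbers; only the 4-subjects-by-4-bodies structure comes from the post.

```python
# Sketch: rank the 16 subject/body combinations by click rate to find the
# best performers. All counts are placeholder data, not the author's results.
import random
from itertools import product

subjects = ["S1", "S2", "S3", "S4"]
bodies = ["B1", "B2", "B3", "B4"]

random.seed(0)
results = {}  # (subject, body) -> (sent, opened, clicked)
for s, b in product(subjects, bodies):
    sent = 100
    opened = random.randint(20, 60)
    clicked = random.randint(0, opened)
    results[(s, b)] = (sent, opened, clicked)

ranked = sorted(
    results.items(),
    key=lambda kv: kv[1][2] / kv[1][0],  # click rate = clicked / sent
    reverse=True,
)

print("Top combinations by click rate:")
for (s, b), (sent, opened, clicked) in ranked[:6]:  # e.g. six 'golden emails'
    print(f"{s}+{b}: open {opened/sent:.0%}, click {clicked/sent:.0%}")
```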