Вячеслав Потапов’s Post


Head of Product Analytics at Leroy Merlin with expertise in Data Analysis and Machine Learning

Should we test everything? Or, about A/B testing again.

Hello, friends! Recently a colleague and I asked ourselves: do we really need such a large number of A/B tests?

I argue that data-driven companies favor the approach of more hypotheses, more tests. Their typical statistics are 1 or 2 winners out of every 10 tests run.

My colleague says that this way teams test all sorts of junk and clutter the site (hello, Booking), and that instead we should ship only truly high-quality changes. That reduces the number of hypotheses but improves their quality: not 1 winner out of 10, but 2 out of 5.

And what about you? Which approach do you follow?
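The trade-off in the post can be put into numbers. A minimal back-of-the-envelope sketch, where the test counts and win rates are illustrative assumptions taken from the "1-2 out of 10" and "2 out of 5" figures, not real data:

```python
# Compare the expected number of winning experiments per quarter
# under the two strategies described in the post.
# All numbers are illustrative assumptions.

def expected_wins(tests_per_quarter: int, win_rate: float) -> float:
    """Expected number of winning experiments per quarter."""
    return tests_per_quarter * win_rate

# "More hypotheses, more tests": ~10 tests, 1-2 winners out of 10.
many_tests = expected_wins(tests_per_quarter=10, win_rate=0.15)

# "Only high-quality changes": ~5 tests, 2 winners out of 5.
few_tests = expected_wins(tests_per_quarter=5, win_rate=0.40)

print(f"High-volume strategy: {many_tests:.1f} expected wins/quarter")
print(f"Selective strategy:   {few_tests:.1f} expected wins/quarter")
```

With these assumed numbers the selective strategy yields slightly more expected winners (2.0 vs 1.5), but note the sketch ignores the size of each win and the learning value of failed tests, which is where the real disagreement lies.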

Andrei Nokhrin

CPO | Head of Product | Product Lead with 8+ years of experience in Fintech, E-commerce, PropTech & FoodTech domains | Launched 12+ products and expanded the customer base by 24% during the pandemic

4mo

The need to run A/B tests depends largely on the specific feature you are developing and the potential consequences of shipping it. Consider:

1. Significance for the business: impact on key metrics
2. Impact on the user experience
3. Lack of confidence in how users will receive the feature
4. Scope of the changes (an A/B test reduces the risk of an error)
5. Availability of resources: do you have the time, budget, and audience to run a quality A/B test?

In general, the more important the feature, the more reasons there are to A/B test it. But this always needs to be evaluated in the context of the specific situation.
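The five criteria above can be sketched as a simple decision heuristic. The criterion names, the threshold, and the function itself are hypothetical, an illustration of the checklist rather than any established methodology:

```python
# Hypothetical decision heuristic based on the five criteria above.
# Criterion wording and the threshold of 2 are assumptions for illustration.

CRITERIA = (
    "high impact on key business metrics",
    "noticeable change to the user experience",
    "low confidence in how users will receive it",
    "large scope of changes",
    "time, budget, and audience for a quality test",
)

def should_ab_test(answers: dict, threshold: int = 2) -> bool:
    """Recommend an A/B test when enough risk criteria apply
    and the resources to run a quality test are available."""
    # Criterion 5 is a hard gate: without resources, no quality test.
    if not answers.get(CRITERIA[4], False):
        return False
    # Criteria 1-4 measure how risky shipping without a test would be.
    risk_score = sum(bool(answers.get(c, False)) for c in CRITERIA[:4])
    return risk_score >= threshold
```

For example, a feature with high business impact and uncertain user acceptance, given available resources, would clear the threshold; a cosmetic tweak touching only one criterion would not.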

Aleksandr Karteshkov

Kameleoon. A technology-driven A/B testing platform for growing website and app conversions

4mo

Just don't generate crap hypotheses, and test them all 😁
