You Need To Iterate Properly.
95% of people who do outbound have decided to put their results in the hands of luck.
If you aren’t properly iterating, you're part of that 95%.
Most people will send a few messages, track lazily, switch something at random, and repeat.
Here’s how you should be doing outbound:
REAL Metric Tracking
First of all, if you aren't tracking, start.
If you are tracking, make sure you're tracking the right things. Your ABR alone isn't enough: track the main metric for each subsystem of your outbound process. Doing this will help you find your bottlenecks.
If you're doing both, make sure your metrics are real. Spend time every weekend going over the week's metrics and recounting them. False metrics will ruin you more than no metrics will.
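To make the subsystem idea concrete, here is a minimal sketch of per-stage tracking. The stage names and counts below are hypothetical placeholders; swap in the actual stages of your own outbound funnel.

```python
# Hypothetical weekly counts for each subsystem of an outbound funnel.
weekly_counts = {
    "sent": 500,     # messages sent
    "opened": 210,   # opens
    "replied": 32,   # replies
    "booked": 6,     # appointments booked
}

def stage_rates(counts):
    """Conversion rate of each stage relative to the previous one."""
    stages = list(counts)
    rates = {}
    for prev, cur in zip(stages, stages[1:]):
        rates[f"{prev}->{cur}"] = counts[cur] / counts[prev]
    return rates

rates = stage_rates(weekly_counts)
# The weakest conversion step is your likely bottleneck.
bottleneck = min(rates, key=rates.get)
```

With these example numbers the open-to-reply step converts worst, so that is where you would iterate first, rather than guessing.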
Use regression to the mean.
Averages don't show themselves in small datasets. If you flip a coin twice and it lands on heads both times, does the coin have a 100% chance of landing on heads? No; you're just a bad scientist.
Make sure your tests (see "Be scientific" below) have enough data volume for outlier results to regress to the mean.
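A rough sketch of why small tests mislead: the uncertainty in a measured rate shrinks with sample size. The 5% reply rate below is an assumed figure for illustration.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% confidence half-width for a measured proportion."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.05                            # assumed true reply rate of 5%
small = margin_of_error(p, 20)      # ~±9.6 points: noise dwarfs the rate itself
large = margin_of_error(p, 500)     # ~±1.9 points: the average can show itself
```

At 20 outreaches the error bar is bigger than the rate you are trying to measure, so any "improvement" you see is mostly luck; at 500 the signal starts to dominate.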
Be scientific
When testing an outreach system, you should be using the scientific method.
1. Create a testing pool (a fixed number of outreaches)
2. Do the outreaches, changing nothing mid-run
3. Track your results
4. Iterate where you believe the bottleneck to be
5. Don't change anything else
6. Run the test again
7. Track your results
8. Repeat steps 4-7
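The loop above can be sketched in a few lines. `send_batch` and `measure` are hypothetical stand-ins for your actual sending tool and tracking; the point is the structure: same pool size every round, one variable changed per round.

```python
def run_test_cycle(pool_size, baseline, changes, send_batch, measure):
    """Scientific-method loop: a fixed pool, one change per cycle.

    send_batch(config, pool_size) does the outreaches without mid-run changes;
    measure(config) returns the tracked metric for that batch.
    """
    config = dict(baseline)
    history = []
    send_batch(config, pool_size)                    # run the unchanged baseline
    history.append((dict(config), measure(config)))  # track your results
    for key, value in changes:       # iterate on one suspected bottleneck at a time
        config[key] = value          # change that variable and nothing else
        send_batch(config, pool_size)                    # run the test again
        history.append((dict(config), measure(config)))  # track your results
    return history
```

Keeping `pool_size` constant and touching one key per iteration is what makes the comparison between rounds meaningful.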