If you run ads only on one channel - Meta, Amazon, Google Search, TikTok - is it OK to rely on last-click attribution? How wrong can the results be? The answer: potentially 100%.

The rules of causal inference and math do not change for smaller companies or if you only use 1 or 2 media channels. Selection effects can completely mislead you, even for a single channel, as shown in the IAG case study example I shared recently: https://lnkd.in/geQ9ysFi

Now, the risk of being 100% wrong is much smaller if you are a lesser-known brand and only operate online. However, incrementality still matters. Customers may convert anyway because they already know your website after their first purchase, heard about your great product from a friend (offline word of mouth), or remember you, Google you later, and convert. As a result, last-click or last-touch attribution inflates the credit assigned to the channel and suggests a very inaccurate CPA. Maybe you are only wrong by 40-70% in absolute CPA terms, but that still matters when every dollar counts for SMEs or startups.

This becomes even more critical as soon as you compare two campaigns on the same platform - say, one prospecting and one retargeting campaign, both running on Facebook. Last-click/last-touch will typically tell you to invest more and more in retargeting, because its CPA will always look better, even when it primarily shows ads to anyway-converters.

So, are there situations where last-click attribution can be correct? Yes, but these are rare and usually last only a short time: when you sell to your very first customers.

What can you do instead? Well, experimentation is free (and fun). The main costs are opportunity costs, which are really learning costs and an important investment in the future. For example, you can always vary your single channel or tactic and simply compare sales:

🔁 Switchback experiments: turn a tactic on and off every second day (a rough sketch of the readout follows below).
🌍 Geo-tests: select some regions where you turn off one tactic.
📊 A/B tests: most platforms also have built-in A/B test functions (some randomized, others optimized) that you can use. See:
Meta - Conversion Lift and A/B Testing: https://lnkd.in/gr6nyg7m and https://lnkd.in/gTip8uJ9
TikTok - Split Test: https://lnkd.in/gaCiZ-G7
Amazon - A/B Test and Brand Lift: https://lnkd.in/gF2YBmsN and https://lnkd.in/g5KDzgek
Google - Conversion Lift: https://lnkd.in/gWCYnwsu
Trade Desk - Conversion Lift: https://lnkd.in/gWBaaGdS

While it's tempting to rely on last-touch and last-click attribution as a default starting point for a company's measurement journey, let's not encourage people to trust a broken clock. With all the free alternatives available, there's no reason not to move on to more reliable marketing effectiveness measurement as soon as possible.
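For the switchback option, here is a minimal sketch of what the readout could look like, assuming you export daily sales with an on/off flag (the file and column names are illustrative, not from any specific platform):

```python
# Rough switchback readout: the tactic runs on alternating days.
# Assumes a daily export with columns "date", "tactic_on" (0/1) and "sales".
import pandas as pd
from scipy import stats

df = pd.read_csv("daily_sales.csv", parse_dates=["date"])

on_days = df.loc[df["tactic_on"] == 1, "sales"]
off_days = df.loc[df["tactic_on"] == 0, "sales"]

lift = on_days.mean() - off_days.mean()                      # incremental sales per "on" day
t, p = stats.ttest_ind(on_days, off_days, equal_var=False)   # Welch's t-test

print(f"Avg sales (tactic on):  {on_days.mean():.1f}")
print(f"Avg sales (tactic off): {off_days.mean():.1f}")
print(f"Estimated incremental sales per day: {lift:.1f} (p = {p:.3f})")
```

The point is not the statistics - it is that the comparison is against days where the tactic was genuinely off, not against what the platform claims it drove.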
Time for Nico Neumann and Eric Seufert to debate on a podcast?! I actually think you both agree here. Having followed Eric for a while, my sense is that his post is not so much an endorsement of last-click measurement as a reaction to the fact that, ever since incrementality became the buzzword, a growing segment of SMBs think they need advanced measurement and expensive tooling before they have a real measurement problem. Yes, these brands can and should run experiments, lift studies and simpler on/off frameworks to understand the best way to buy within a channel (e.g. prospecting vs. retargeting), but last click gives marketers a day-over-day look at performance, and anyone who has worked brand side can confirm the need for a more real-time view.
Does anyone really use LPC these days? I'd be shocked if they do. However, I found, empirically, that LPC can be a fair attribution method for affiliate marketing, specifically for traffic from good old "top 10 ranked [insert product name]" LPs. I never researched the reasons for it; my hypothesis is, of course, social proof bias. Whilst one should never use LPC when comparing different acquisition channels, it can be useful for comparing one affiliate channel to another.
Eric Seufert has recently been speaking about this, though it seems he sees more value in last click than in more complex attribution models. I suspect this is because Eric has focused heavily on the app install/DR space, where last click may indeed be suitable more often than it is for non-app campaigns. Curious if both of you chalk up your different views to that reason, or if there's truly a difference of opinion here on like-for-like campaigns?
This doesn't even account for ad fraud, where publishers trigger fake clicks to "claim" the last-click attribution. The result is that organic (or other sources') traffic gets hijacked by the malicious publisher.
Not so fast, cowboy. 😉 Due to personalization, in-platform A/B tests produce divergent delivery, which distorts the ATE (average treatment effect). Here's a great article from Stefano Putoni (and friends). 👍
Thanks, Nico Neumann! So basically, if you run an A/B test you can just observe incrementality directly.
Nico Neumann WRONG AGAIN! You gotta get your vocabulary right. Attribution = how results happened. Causality = results if you spent $0 on ads. THEY ARE NOT THE SAME. So quit talking smack about attribution when you're following it with a point about causality. They do different jobs! You said "the risk of being 100% wrong is less if you're smaller". Technically true, but only a handful of businesses (like the top 3k companies) have enough brand equity to even worry about this. 95% of companies don't have enough brand awareness to worry about causality. I like the alternatives you suggested, but they are detached from reality. As Eric Seufert said, there's high overhead and lots of complexity. I highly recommend you get some practicing professionals to give input on your ideas and posts. Attribution, MMM, and other causal tests are so often designed in an ivory tower of academia, removed from the realities of how the data is used by media pros. I'll volunteer my services if you'd like.
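To make that vocabulary point concrete, a toy calculation (every number here is invented):

```python
# Toy numbers, purely illustrative. Attribution credits outcomes to a touchpoint;
# causality asks what would have happened at $0 spend (the counterfactual).
spend = 10_000.0
attributed_conversions = 500      # what last-click credits to the channel
would_convert_anyway = 300        # counterfactual conversions at $0 spend

attributed_cpa = spend / attributed_conversions                             # $20
incremental_cpa = spend / (attributed_conversions - would_convert_anyway)   # $50

print(f"Attributed CPA:  ${attributed_cpa:.2f}")
print(f"Incremental CPA: ${incremental_cpa:.2f}")  # same spend, different question
```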
Attribution alone only shows efficiency, not effectiveness. To truly connect marketing to revenue, pair it with experiments and a unified data strategy. This way, you can start making smarter, revenue-driven decisions :)
Thanks for sharing, Nico Neumann - really valuable insight into a question/challenge that comes up a lot. As you point out, those single channels have multiple tactics for execution, so you can still be led down the wrong path. In addition, it creates a skewed mindset for when you start adding channels as you grow as a brand, which, again, leads you into the performance marketing cul-de-sac.
Even perfect attribution doesn't tell you anything about marketing effectiveness... only efficiency (how fast you're spending budget). Causality always requires an answer to the question, "Compared to what?" Did the traffic from the retargeting layer create incremental revenue? You can only know if you isolate a randomized portion of the retargeting audience and do NOT show them the retargeting ads. Then you can compare revenue from those shown the ads vs. the randomized sample from the same retargeted group who were NOT shown them, and you have the answer to the "compared to what?" question. It literally takes 10 minutes to set this up in most retargeting platforms using a PSA hold-out, but I've never seen a SINGLE retargeting campaign actually do that.
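Once the hold-out has run, the readout itself is simple. A minimal sketch, assuming you can export user-level results for the exposed and PSA hold-out groups (the file and column names here are made up):

```python
# "Compared to what?" readout for a retargeting PSA hold-out test.
# Assumes a user-level export with columns "group", "converted" (0/1) and "revenue".
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

df = pd.read_csv("retargeting_holdout.csv")

exposed = df[df["group"] == "exposed"]
holdout = df[df["group"] == "psa_holdout"]

# Conversion-rate lift: exposed vs. the randomized hold-out from the same audience
counts = [exposed["converted"].sum(), holdout["converted"].sum()]
nobs = [len(exposed), len(holdout)]
z, p = proportions_ztest(counts, nobs)

# Incremental revenue: exposed revenue minus the hold-out's per-user revenue
# scaled to the exposed group's size (valid because the split was randomized)
incremental_revenue = exposed["revenue"].sum() - holdout["revenue"].mean() * len(exposed)

print(f"CVR exposed:  {counts[0] / nobs[0]:.2%}")
print(f"CVR hold-out: {counts[1] / nobs[1]:.2%}  (z = {z:.2f}, p = {p:.3f})")
print(f"Estimated incremental revenue: {incremental_revenue:,.0f}")
```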