Jeremy Lee’s Post

Genuine question - why is System1 now considered the gold standard for advertising effectiveness when its "star rating" seems to be based on very little at all (i.e. "potential to drive long-term growth")? What does that even mean? I'm happy to be proved wrong if there's consensus that it's got rigour - but I can't see it from where I'm sat.

Jonathan Trimble

Co-Founder, CEO @ And Rising // Building the future's favourite brands

1w

It is a collective hallucination. System1’s approach is a shiny reinvention of the Millward Brown Link Test, which fell out of favor after Millward Brown was acquired by Kantar. Using facial recognition, it claims to offer a more accurate read of audience response, backed by comparative data from testing every TV ad. Despite criticism (the Link Test was famously shunned by agencies like Wieden+Kennedy, who refused to work with clients that used it), System1 has gained traction, aided by figures like Jon Evans (Uncensored CMO podcast) and Orlando Wood, whose work champions strong creative and long-term branding. The trace videos are addictive and make for instant hyped viral content right here on LinkedIn, and are now referred to even by the likes of Marina Hyde. But it’s a research result, and the use of the word “effective” is misleading. All nonsense? Quite possibly. But do not underestimate System1. In the absence of any other way to prove anything, it’s what keeps CMOs in jobs and gets large budgets signed off. It’s what fuels Coca-Cola’s AI PR stunt. And at one level, it’s the only read we have on what the real world thinks (if paid to watch it front to back). Please enjoy this PSA.

John Woodward

CSO, McCann Enterprise. Global brand strategy lead on 6 Interbrand Top 100 brands: Samsung, Orange, AXA, UBS, Nescafe, Zurich Insurance (and one to come). Comfortable in English, French, Italian.

1w

Perhaps the answer is that they sell something ad folk want to buy?

Carl Ratcliff

Founder @ THIS IS THE DAY | 'All models are wrong, but some are useful'

1w

Viewers track their emotions as they watch (how are you feeling now, and now... and now...). It's no different really to Link testing, the blunt instrument of old. Pre-testing remains something that zillions of folk believe in, because you need something to show due diligence when you throw money against the wall. I mean, media plan. If enough people say it's marketing science, then it must be. Two words that, when thrown together, I still feel edgy about xxx

Nicola Kemp

Editorial Director at Creativebrief

1w

Can I be really basic here? Surely we should be looking primarily at Christmas sales data.

James Cross

CCO & Founder at Meanwhile

1w

💯- ‘effectiveness’ on Xmas ads for example surely isn’t clear until midnight on Xmas Eve?!?

Rachel Haslam

Brand Expert: Creative Strategist, Change Maker and Storyteller

4d

Can't vouch for the rigour of their science, Jeremy Lee, but as a brand in the research-testing space they have nailed it on every front, don't you think?

Mark R.

Brand Strategist at Meta

1w

I think the better question is: what do we mean by rigour? We all instinctively understand that creativity/marketing/complex systems have profound measurement limitations. Link (which I started my career on; it was the Bible for some clients, and if you looked at how the info was collected you would laugh or cry) was the gold standard, or essentially the tallest midget. System1 is solid thinking, and Orlando Wood is one of the smartest people in the advertising industry. What most people don't take into account is the limitation of what is knowable and our industry's talent level. The knowledge limitation is maths-based: all linear differential equations have been solved; most non-linear ones have not. Creativity/taste is a non-linear system. As for talent, we simply do not have the level of quant talent (we're not competing for quants with investment banking/hedge funds) to get all that much better than we are. We have many more people doing quant work who have never heard of the Fields Medal (the Nobel Prize of math) than people who either might win one or have a friend who did. Our industry has never had a Fields Medal winner, and it never will.

Dave Roberts

Consultant | Branded Content & Marketing | Supporting Brands, Studios & Publishers | 25+ Yrs Global Agency Experience

5d

From my perspective, the most worrying thing about System 1 is that I've started to hear rumblings about it being used to test branded entertainment - work that's far longer in duration (who's clicking emojis for an hour?), work that's designed to engage not interrupt (watching in isolation isn't representative), work that has a far more complex narrative (maybe feeling sad, or scared isn't a bad thing), work that may not even feature a product or brand mark (brand recall anyone?). Luckily there's an alternative in development Jeremy Lee - I'll give you a shout when it's ready to roll 😜

Nick Owen

Business Lead at Mother

1w

We spent a lot of time digging into the data behind the correlation between the star system and actual business effects in the early days of BrainJuicer, before they became System1. There were very few. I’d hope that they have a more robust database after all these years, but I can’t imagine you’d ever get to see it. There are enough examples of high-scoring star ads that haven’t performed well in market to warrant the question being asked, and, of course, I’m sure even they know the potential flaws of a methodology that asks viewers for their emotional response... But as far as pre-testing goes, there are clearly more painful options.

Simon Gregory

Joint Chief Strategy Officer at BBH London and BBH Dublin / Crosby Housing Association Board Member

5d

As long as clients keep buying it, it'll be relevant. In the meantime, I implore every client and agency bod to try the testing system you use first hand. Only then can you understand the pros and cons directly.
