You can hear it in the distance; you saw the first ripples of it in 2024. Get ready for its arrival in 2025: the Beige Wave.

You'll have seen signs of it already, but trust me, the dull, insipid content you've seen so far is just a drop in the ocean against the wave of beige nonsense coming next year. The problem is not the tool used to create it, generative AI. The problem is people, as both producers and consumers of content. And this will have a profound effect on L&D.

I used genAI to help me with my recent post on tips for enlivening compliance training, to sift through 70 comments. I then used genAI's suggestions as my starting point for writing. The result was something useful.

The problem: too many people don't get that far. They treat genAI's output as the finished product. Publishing what genAI produces unedited lets you massively increase output, but most of the time the content is vague, non-committal, bland. In a word, it's beige.

Most of the time, we recognise it as such, particularly if it is headed up by yet another bland image created by a genAI tool. (Apologies for using such an image on this post.)

But who cares if most people recognise it as bland nonsense? The cost of producing it is next to zero. It's like email spam. It doesn't matter if most people know it's spam; some won't. The cost of email is almost nothing, so send out thousands of mails. Some will land. The same approach works for beige content: churn it out. Some people will consume it. That will generate clicks, interaction, follows.

If this were just a matter of a little more bad content, maybe it wouldn't matter. But the Beige Wave will be an exponential increase in what's also been called 'slop': bland, untrustworthy nonsense. The more of this we see, the less we will trust and value *all* content. And this will have a profound effect on anyone creating content, including L&D. If you tie your professional identity to something that people no longer entirely trust or value, they will cease to value your identity.

For me, the answer is simple: L&D needs to move away from associating itself with content, and towards associating itself with strategic business value. Content might be created along the way, but it's a means to an end, nothing more.

I'm indebted to Egle Vinauskaite for this insight. It's something I've been thinking about a great deal since our last report together a couple of months ago.

The movie poster is light-hearted, but this message isn't. This is a profound issue of how much L&D will, or will not, be valued in the future. If you work in L&D, it's time to put content to one side and start building your strategic value.

#learninganddevelopment #slop #genai #BeigeWave
I think content production is a red herring – AI is so much more than that. It can change the learner experience from "learn this now – whether you need it or not" to "here's the answer to your question and some supporting assets that you might find useful". AI helps us hit the "70" in the 70/20/10 – and this is where the performance uplift comes in. AI can help us become the strategic partner by unlocking performance support.
I think there is a deeper cause related to this. Most L&D professionals (and vendors) work at or for 'check & trek' organisations – organisations that seem to prefer 'quick & dirty' solutions, the ones that make it easy to check the box and move on to the next action. Most organisations are very transactional by nature when it comes to learning initiatives. It's not that I like it – it's what I see happening all the time.
Couldn't agree more. Great post. It also presents opportunities, and I think we're at a real inflexion point. Coming at it from a subject-expertise angle, the 'beige wave' is going to force people to work out where they add actual value, simply because far too many operate in 'beige' already, and are going to get swept up. I'm not advocating being contrarian for the sake of it, but people aren't going to be able to sit on the fence anymore; they'll simply be overlooked. Developing unique perspectives, having spiky points of view, sharing the stories and scars they've picked up through years of experience, applying the theory with actual context, niching down – these will all be incredibly important to tease out and communicate in order to 'surf it' (to continue the analogy). Personally I'm excited by it. I just need to hope the algorithms do a good job of flushing out the beige (though I haven't yet thought through whether that is in their interests or not...)
This struck a nerve, Donald H Taylor: "For me, the answer is simple: L&D needs to move away from associating itself with content, to associating itself instead with strategic business value." Hasn't L&D been trying to do that for some time? Or at least saying it is? I'm not at all for beige content, but is it possible that allowing AI to offload some (SOME) of the content development effort - I agree it can't produce the end product - could free up more L&D headspace for business impact efforts? And gad! I wish we could move away from emphasizing engagement and focus on performance. Who wakes up and says, "Well, I hope I'm more engaged in training today"?
Great post Donald H Taylor. It's ironic that this is an industry that is built on content and for the last two decades has been happy to push it. So why hasn't L&D been building strategic value for the last 20 years? And how is this profession going to reinvent itself to do so? The AI conversation tends to be tactical and operational, not strategic. I'd like to see institutes and events start to focus on this because there seems to be little discussion of strategic intent and impact. Or have I missed that?
Is 2025 the year L&D moves away from content? 🤞
I am on the learner side of this and an amateur creator with decades of leading global change management projects. I agree that pure genAI content is bland and can be spotted a mile away, but in some cases bland delivered at speed is better than perfect delivered slowly.
(1) Better prompt engineering yields better output – push your AI to do better.
(2) It might be bland, but if it hits the mark, it can provide value.
(3) Human expertise should be added where it makes the material stronger, but that is not always required.
(4) Listen to the learners – use genAI to get a minimum viable product launched and adjust based on feedback. Learner reaction can sometimes be unpredictable.
(5) Use it as a material multiplier – once you have a good-enough product, use genAI to put it into different formats to appeal to larger audiences.
genAI is good for helping to outline and create version 1.0. If the content ideas are strong enough, you will be propelled by your audience into 2.0+.
‘Gen AI has made the value of content plummet, and so will the value of L&D if it ties itself to content’ I’ve heard it from vendors as well: they create AI-assisted tools to empower non-LDs, yet without the knowledge of what good content looks like, they end up using AI drafts as the final product, not a starting point.
Younger folk than you and I, Donald H Taylor, who've already experienced 'the wave', call it AI Slop, Dead Internet, Machine Mush, Fake Meat, Echo Spam, NPC content. The idea that Gen Z (who are now in the workforce) universally love AI is for the birds.
I would like to offer a different perspective. I think it's unfair to say, "The problem is people." Gen AI is deceptively good: it produces drafts that do not look like drafts. When you iterate with a person, you might handwrite notes or type rough sentences in a Word doc. Iterating with Gen AI produces what looks like a perfectly finished document. It is much easier (or more natural, given that the system hadn't changed for a few centuries until today) to polish a human draft than a document written by ChatGPT.

Learning people love citing cognitive overload studies: Gen AI produces overwhelming amounts of text that would overload anyone's brain. Add this to the increasing pressure for rapid output "because there's AI" and you get yourself into a dangerous situation. The individual can show only so much responsibility and self-discipline unless there's external support.

If Gen AI is a means or a vehicle, we ought to treat it like driving cars on the street. Imagine hundreds of millions of vehicles in a system without driver's licenses, road signs, and traffic lights. If we relied only on the self-discipline and goodwill of drivers, we would live in unbearable chaos. The question is: what does the equivalent look like for AI?