Embrace AI or Fall Behind: Yes, Resistance IS Futile
[Header image generated by DALL-E 3]


"AI often makes up things and presents them as facts." - true

"AI sometimes generates garbage, such as 'poor' code." - true

"Because of the above, AI will never do what I can do, AI is not worth using today, my customers/audience will always prefer products/services created by real humans, ..." - false

"There's not much risk to me (or my team or my organization) in waiting a while longer to see where 'this AI thing' goes" - FALSE

Resistance Isn't Exactly New

When the printing press was invented, there was significant resistance, driven by fear: scribes feared mass unemployment. The automobile faced societal concerns about safety, reliability, and practicality. Personal computers (PCs) were initially met with great skepticism by both consumers and businesses, who could not see their practicality and dismissed them as expensive toys for hobbyists.

Today, I doubt you know any scribes. And I'm pretty sure you're not commuting to work or getting groceries in a horse and carriage. You're reading this on a PC or some successor to the PC, such as a phone or tablet.

Resistance to innovations that threaten to significantly alter our reality is a natural part of being human. But history proves that economics and convenience nearly always win in the long run. Consider this: how many people do you know who say they're vehemently opposed to China because of its policies, and yet continue to buy Chinese-made products en masse?

AI is no different in terms of human reaction. Many are dismissing it - some actively, others passively (pretending it doesn't exist or matter). How do I know this? Well, I talk to people about it almost daily. I'm paying very close attention to what I hear people say about AI and what I read in various forums I participate in, both at the office and in my personal life.

Flawed Assumptions or Wishful Thinking?

What I hear and read sounds a lot like a mix of misconception and wishful thinking. Misconception: "AI just isn't good enough." Wishful thinking: "Our product is special. Consumers will continue to choose us because we only use real humans and reject AI."

It seems to me these perspectives are based on flawed premises:

  • In order for something to be useful, it first must be perfect.

  • People will continue to choose human-generated output on principle alone.

The first premise assumes that total quality (versus "good enough") is always preferable to efficiency. That is simply not so. Neither individuals nor businesses can survive on the basis of "pure perfection".

The second assumes that the majority of people will continue to care enough about the backstory of how the products or services they consume were created. A nice, warm, fuzzy thought, to be sure, and I respect the sentiment. But history proves again and again that this is a complete fantasy.

A blogger who can generate quality, useful content on the same topic twice as fast and automate their newsletter and marketing campaigns is going to take audience from other bloggers.

Media creators - digital art, video, etc. - who can generate content equal to or better than their competition using 10% of the budget in 20% of the time and then achieve similar savings in promoting and distributing that media will also outperform, if not put out of business, competition that dogmatically sticks to their anti-AI principles.

A single programmer who can create functioning applications with an acceptable level of bugs 50% faster than another developer will come out on top. Today's idea of code "maintainability" will shift as AI becomes better and better at generating code AND then updating that code for new requirements. I do not believe this future is far off.

If you find an auto mechanic who's known to fix cars twice as fast for half the labor costs and does a great job, you won't hesitate to use that mechanic to repair your Honda. If they have a robot that does most of the work, that won't stop you from getting your car fixed faster and cheaper. You know it. I know it.

Economics and convenience - whatever we may say, we value these in a big way!

We're Just Getting Started, Folks!

For this article, I'm going to skip the standard "let's check out ChatGPT!" walkthrough, only because I assume that, by now, anyone reading this is more than aware of that product and, to some degree at least, its capabilities.

Meet OpenAI's Sora. It was just announced and can already generate very sophisticated 60-second videos from a simple text prompt. It's early days for AI video generation, but this shows significant progress. And AI advancement is on an exponential, NOT linear, curve.

I use AI coding assistants a lot and, in my experience, even today I get very useful results around 70% of the time, because I've learned the strengths of current AI and how to work around many of its weaknesses. I'd estimate that today I can probably generate a brand-new app about 20-30% faster than I could have a year ago.

This is with current models. OpenAI, Meta, and others are working on much more powerful ones. GPT-4 was far better at generating code than GPT-3.5. And even though the most recent GPT-4 Turbo is pretty darn good at generating fairly small sections of quality code - functions, for example - it still cannot reliably generate complete applications of even medium complexity. But again, the AI curve is exponential, NOT linear.
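To make "fairly small sections" concrete: I mean tasks on the order of a single, well-specified function. The example below is my own illustration (not actual model output) of the kind of one-prompt-sized function today's assistants handle reliably:

```python
import re


def slugify(title: str) -> str:
    """Convert an article title into a URL-friendly slug.

    A small, self-contained task with a clear spec - exactly the
    scale of code current AI assistants generate dependably.
    """
    # Lowercase, collapse every run of non-alphanumeric characters
    # into a single hyphen, then trim leading/trailing hyphens.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


print(slugify("Embrace AI or Fall Behind!"))  # embrace-ai-or-fall-behind
```

Ask for a whole multi-module application in one shot, though, and today's models still tend to lose the thread - which is why the "small sections, human-assembled" workflow is where the productivity gains currently live.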

[Video: Me, using GitHub Copilot to write code and generate documentation for the app]

My bet is that the next generation of AI models (e.g., GPT-5, Llama 3, Gemini 1.5) will represent another giant leap in all sorts of tasks, including generating code. Who knows, they might even be able to generate entire working applications, complete with all the tests you need! Yeah, as a software developer, that thought makes me a little sad too - until I remind myself that it might just mean I get to shift my attention to bigger, even cooler work!

Ah, But We Need a Plan, Now Don't We?

Of course, we need to approach AI with care and consideration - I hope we already do that with most high-impact technologies. We need to carefully weigh risks such as cybersecurity, AI bias, and AI hallucinations, and devising a strategy to mitigate those risks takes time and directed attention.

All the more reason to start now: understand AI and have open dialogs about its place in our personal lives and in our organizations. Ignoring it, or taking a wait-and-see approach, won't do anything to change the trajectory of AI. The only impact of that thinking will be to place you or your team at risk - risk of being blindsided, becoming obsolete, or falling so far behind the competition that you cannot recover.

Harvard Business Review (HBR) wrote in 2018 that the "fast follower" approach to AI is very risky. Assuming the author was onto something back in 2018, then given the incredible advancements in AI since, how much more on point is that message today?

Then, in August 2023 HBR again published an article that echoes what many AI experts have also said:

"AI Won't Replace Humans - But Humans With AI Will Replace Humans Without AI" ~ Karim Lakhani, Harvard Business Review, August 4, 2023

It's natural for us to feel trepidation about AI. Yes, the uncertainty of our future can be a scary thing. I understand. And yet fear is not a useful tool here: it does nothing to alter the future, and it keeps us frozen, preventing the real progress that is possible - creating things we thought impossible, creating more of what we've always created in much more efficient and fun ways, and giving us precious time back to focus on the things that bring us joy and help our organizations thrive.

And, if we can't yet bring ourselves to look at the issue through a positive lens, then let's try it this way: look at AI as "the enemy". Then follow the sage advice of Sun Tzu: know thy enemy.

“If you know the enemy and know yourself, you need not fear the result of a hundred battles." ~ Sun Tzu, The Art of War

You cannot possibly understand AI or its capabilities by just reading an article here and there. You also cannot know AI by trying a handful of ChatGPT prompts. You need to commit to pushing yourself and your organization a bit further in your exploration of AI. You need to be able to have open, informed discussions on where we are today and where AI may be leading us.

Finally, many years ago, when I was in a full-time year-long intensive Russian language program, students would often say to our instructor "I just can't learn this part". Our instructor, who was Russian, would always smile big and then respond, in his thick Russian accent: "You will learn it. You know why? (pause) NO CHOICE!" :-)

Jimmy Benoit

Vice President | CISO | MBA | CISSP | Veteran


Great write-up, Tim Kitchens, thanks for sharing! I completely agree with the false premise you hit on, that "in order for something to be useful, it first must be perfect." I'm a huge believer in the contrary, like you noted. Also, I appreciate your call-out of the risks of being a fast follower of AI. I think every organization should assess and incorporate different AI capabilities into their business practices, but shouldn't rush haphazardly into it without ample thought and caution about the risks. Thanks again for sharing! 👍

Rich Puderbaugh

President /Owner of zAnswer LLC


Great perspective on AI Tim. Well done!

Alan Howlett

Principle Systems Engineer at some place you don't need to see


BTW, you can get 2 free months of Gemini now.

Alan Howlett

Principle Systems Engineer at some place you don't need to see


Nice article, Tim. The current AI offerings are overwhelming: ChatGPT 3.x and 4.x, Sora, Gemini (formerly Bard), DALL-E, Grok, Stable Diffusion, Retrieval-Augmented Generation... I try to watch a video per day on this and still feel like I'm in a series of recursively unmapped tunnels. I may need to focus on depth-first and avoid the context-switching overload I get now chasing depth and breadth.

