I find it extremely interesting when people dismiss those who played a key, hands-on role in developing something as "not knowing what they're talking about". I am seeing this more and more throughout society.
In this case, the insights provided by Dr. Hinton, aka the Godfather of AI, shouldn't be dismissed merely because it's comforting to do so. These are extreme scenarios, but ones that could still plausibly play out within the next 5 to 20 years.
However, even the best-case scenarios will challenge society as it operates today, because widespread workforce disruption is all but guaranteed.
It's why I posted my thoughts on 'redefining what a home is' in an earlier post.
Thank you for sharing this story with me, Fraser Trottier, CFA. It's much appreciated!
--------------------
Hinton: For emotions, we have to distinguish between the cognitive and the physiological. Chatbots have a different physiology from us.
When they're embarrassed, they don't go red. When they're lying, they don't sweat. So in that sense, they're different, because they don't have those physiological effects. But the cognitive aspects, there's no reason why they shouldn't have those.
....
Hinton: The point is, what makes most people feel safe [from intelligent machines] is the belief that we've got something they ain't got. We have subjective experience, this inner theatre that differentiates us from mere computational machines. And that's just rubbish. Subjective experience is just a way of talking about what your perceptual system's telling you when it's not working "properly."
So that barrier is gone. And that's scary, right? AIs have subjective experiences just as much as we have subjective experiences.
Brown: I suppose you could say that.
Hinton: This is a bit like telling someone in the 16th century, "Actually, there isn't a God." It's such an extraordinary idea that they'll listen to your arguments and then say, "Yeah, well, you could say that," but they're not going to behave any differently.
Brown: What about the idea that human beings die and are mortal, whereas AI doesn't? And so AIs do not have the quickened, tragic sense of existence humans have, no matter how much AI can think and accomplish?
Hinton: That's certainly all true. We are mortal and they are not. But you have to be careful about what you mean by immortality. The machines need our world to make the machines that they run on. If they start to do that for themselves, we're fucked. Because they'll be much smarter than us.
....
Brown: How far off?
Hinton: I would estimate somewhere between five and twenty years. There's a 50-50 chance AI will get smarter than us. When it gets smarter than us, I don't know what the probability is that it will take over, but it seems to me quite likely.
https://lnkd.in/gKZVrz7s