Advanced AI users develop special cognitive models

When we encounter a stranger, we make swift, often unconscious judgments about who they are and what they are capable of. A person who speaks our language with barely a hint of an accent? We assume they are fluent. Someone who drops a reference to a complex scientific theory? We peg them as well-educated, likely to be literate, and probably knowledgeable about a range of topics from current events to social norms.

These snap judgments form the backbone of our social interactions. They are mental shortcuts, honed over millennia of human evolution, that allow us to navigate the complexities of social life with remarkable efficiency. Most of the time, they serve us well. We can usually guess whether someone will understand a joke, follow a complex argument, or need help using a smartphone. These shortcuts are cognitive models. But when we step into the realm of artificial intelligence, these time-tested models crumble... https://lnkd.in/gYBEJBWD
Alexander "Sasha" Sidorkin’s Post
More Relevant Posts
-
There is a lot said about AI bias, but what about human bias? And is there a benefit to bringing AI and human bias together? Could they help address each other? I think, generally, we need a more nuanced discussion of bias.
Bias (AI and human)
http://techandlearning.wordpress.com
-
🙌 Happy Friday Everyone! I was recently made aware of 3 'interesting trends' in AI applications: https://talktoyourex.com/ https://textsfrommyex.com/
👉 While these two domains sound similar, the first trains on your ex's texts, creating a virtual clone of that person to converse with indefinitely.
👉 The 2nd site trains on the history of your texts with your ex and offers hindsight analysis of your relationship.
👉 The 3rd type of site (which I won't list here) leverages advances in multimodal AI to let you clone not only your ex(es)' texts but also their images, and to generate new imagery of them at will (uncensored, if you ask), so you can continue the relationship virtually, indefinitely.
🤖 When I asked GPT what it thinks of these, it gave a pretty darn good answer: “On the surface, it sounds intriguing—perhaps even therapeutic. But delve deeper, and questions emerge. Is it healthy to perpetuate emotional ties to an ex through AI? Could it hinder personal growth and closure? And what about consent? Is it fair to the ex, whose words/images are used without their knowledge?”
❓ What are your thoughts on these latest AI applications 🤔
-
AI (LLMs in particular) seems to operate like the inverse of the human brain. The brain inside out, almost. Here's the line of reasoning...

The often-cited criticism of ChatGPT and other such conversational agents is: "𝘓𝘓𝘔 𝘣𝘢𝘴𝘦𝘥 𝘤𝘩𝘢𝘵𝘣𝘰𝘵𝘴 𝘢𝘳𝘦 𝘫𝘶𝘴𝘵 𝘱𝘳𝘦𝘥𝘪𝘤𝘵𝘪𝘯𝘨 𝘵𝘩𝘦 𝘯𝘦𝘹𝘵 𝘸𝘰𝘳𝘥. 𝘛𝘩𝘦𝘺 𝘥𝘰𝘯'𝘵 𝘳𝘦𝘢𝘭𝘭𝘺 𝘬𝘯𝘰𝘸 𝘰𝘳 𝘵𝘩𝘪𝘯𝘬 𝘧𝘰𝘳 𝘵𝘩𝘦𝘮𝘴𝘦𝘭𝘷𝘦𝘴, 𝘢𝘯𝘥 𝘰𝘧𝘵𝘦𝘯 𝘩𝘢𝘭𝘭𝘶𝘤𝘪𝘯𝘢𝘵𝘦". It's also the argument often used by people who are skeptical of 'AI hype'.

Here are some perspectives from brain research (sources and links in comments) that might make you see the above in a different light. The human brain is a prediction machine (the Bayesian brain, with predictive processing). Our perception of the world is based on a prediction model, a 'best guess' of a stimulus. Your experience of the world is a 'controlled hallucination' (a term coined by neuroscientist Prof. Anil Seth, 𝘐 𝘵𝘩𝘪𝘯𝘬, more than 7 years ago). When we agree on the hallucination, we call it reality.

Example: 'color' (the way we know it) doesn't exist out in the world. It exists only as wavelengths, not as we experience it. Neuroscientists say that we construct reality in real time, creating the shapes, colors, objects, and motions that we see.

So: the human brain perceives by prediction and generates reality through controlled hallucination, all in the service of survival. The model of the world is constantly updated when a prediction doesn't match the evidence; the brain has a self-correcting mechanism in the neocortex. What also follows is that "learning is destruction". [This literally blows my mind.]

Now, here's how AI seems to operate like the inverse of the human brain:

AI / LLM-based agents -> 𝗼𝘂𝘁𝗽𝘂𝘁 is based on prediction, therefore we see 𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗲𝗱 hallucinations (often uncontrolled)
Human brain -> 𝗶𝗻𝗽𝘂𝘁 is based on prediction, therefore we experience 𝗽𝗲𝗿𝗰𝗲𝗶𝘃𝗲𝗱 hallucinations (often controlled)

This line of reasoning could be flawed, and I'm sure it's an oversimplification. I'd love to hear it, though. Reference links in comments.
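[Editor's note: a minimal Python sketch of the prediction-error loop the post describes. The function name, learning rate, and toy observations are illustrative assumptions, not anything taken from the cited research; it is a delta rule, a stripped-down stand-in for the Bayesian updating predictive-processing theories propose.]

    # A belief acts as the prediction; only the prediction error moves it.
    def update_belief(belief: float, observation: float, learning_rate: float = 0.2) -> float:
        """Nudge the current best guess toward the evidence by a
        fraction of the prediction error (a simple delta rule)."""
        prediction_error = observation - belief
        return belief + learning_rate * prediction_error

    belief = 0.0                                # prior guess about some stimulus
    observations = [1.0, 1.2, 0.9, 1.1, 1.0]    # noisy evidence from the world
    for obs in observations:
        belief = update_belief(belief, obs)
        print(f"saw {obs:.1f}, belief is now {belief:.3f}")

Run it and the belief converges on the evidence without ever matching any single observation exactly, which is a fair one-line summary of "perception as controlled hallucination".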
-
Dive into the intriguing link between Artificial Intelligence and the loneliness epidemic. Discover the potential of AI to not only streamline processes but also foster genuine human connections. Explore how thoughtful AI design can combat loneliness, providing insights from recent research. Let's shape a future where technology enhances, not hinders, our social fabric. https://bit.ly/3HKBazt #AIandConnection #TechForGood #LonelinessSolutions #FutureTech
Can Artificial Intelligence Help Us Become Less Lonely?
greatergood.berkeley.edu
-
In a world increasingly shaped by AI, how do we preserve what makes us human? As AI evolves, so too must our understanding and application of emotional intelligence (EI). In our latest Fast Company article, @dr.robinstern and I discuss how combining AI with EI can create more meaningful connections in workplaces, schools, and communities. By prioritizing EI, we ensure that technology enhances, rather than diminishes, our humanity. Let us know what you think about the article in the comment section. https://buff.ly/3WGOxaZ #artificialintelligence #emotions #emotionalintelligence #ai #EI #whatmakesushuman
https://www.fastcompany.com/91166778/how-to-make-ai-more-emotionally-intelligent
fastcompany.com
-
I find it extremely interesting when people dismiss those who played a key, hands-on role in the development of something as "not knowing what they're talking about". I am seeing this more and more throughout society. In this case, the insights provided by Dr. Hinton, aka the Godfather of AI, shouldn't be dismissed merely because it's comforting to do so. These are extreme scenarios, but they still have a real likelihood of playing out within the next 5 to 20 years. And even the best-case scenarios will challenge society as it operates today, because widespread workforce disruption is guaranteed to happen. It's why I posted my thoughts on 'redefining what a home is' in an earlier post. Thank you for sharing this story with me Fraser Trottier, CFA, it's much appreciated!

--------------------

Hinton: For emotions we have to distinguish between the cognitive and the physiological. Chatbots have different physiology from us. When they’re embarrassed, they don’t go red. When they’re lying, they don’t sweat. So in that sense, they’re different, because they don’t have those physiological affects. But the cognitive aspects, there’s no reason why they shouldn’t have those.

....

Hinton: The point is, what makes most people feel safe [from intelligent machines] is that we got something they ain’t got. We have subjective experience, this inner theatre that differentiates us from mere computational machines. And that’s just rubbish. Subjective experience is just a way of talking about what your perceptual system’s telling you when it’s not working “properly.” So that barrier is gone. And that’s scary right? AIs have subjective experiences just as much as we have subjective experiences.

Brown: I suppose you could say that.

Hinton: This is a bit like telling someone in the 16th century, “actually, there isn’t a God.” It’s such an extraordinary idea that they’ll listen to your arguments, and then say, “yeah, well, you could say that,” but they’re not going to behave any differently.

Brown: What about the idea that human beings die and are mortal, whereas AI doesn’t? And so AIs do not have the quickened, tragic sense of existence humans have, no matter how much AI can think and accomplish?

Hinton: That’s certainly all true. We are mortal and they are not. But you have to be careful what you mean by immortality. The machines need our world to make the machine that they run on. If they start to do that for themselves, we’re fucked. Because they’ll be much smarter than us.

....

Brown: How far off?

Hinton: I would estimate somewhere between five and twenty years. There’s a 50-50 chance AI will get smarter than us. When it gets smarter than us, I don’t know what the probability is that it will take over, but it seems to me quite likely.

https://lnkd.in/gKZVrz7s
For Geoffrey Hinton, the godfather of AI, machines are closer to humans than we think
theglobeandmail.com
-
The world of AI right now is SO EXCITING! It feels like we are edging closer and closer to a state only seen in science fiction 10 years ago. This is affecting lots of industries, including the world of VoC and Insights.

In the last week, we've heard that GenAI can find insights humans simply cannot. This follows lots of vendors offering a Question-and-Answer-like feature (ask any question, get an answer) as part of their product. As someone who's been in AI for the last 16 years, this is the most exciting time for the field I can remember!!!

But I've caught myself getting over-excited more than once. Remember, these GenAI models can do lots of things really well, like summarizing dense texts and composing elegant prose far more quickly than any human can. However, they fall short on common-sense problems whose solutions appear obvious to humans, like recognizing logical fallacies and playing tic-tac-toe (noughts and crosses for those using British English). When a model encounters these kinds of problems, it often “hallucinates” bogus information (source for this in the comments).

So, an AI that *CAN'T* grasp tic-tac-toe *CAN* find insights that a human can't, without hallucinating? A healthy dose of skepticism might well be advised... for now.
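[Editor's note: a hedged sketch, in Python, of how you could probe the tic-tac-toe claim against whatever model you use. The ground-truth checker is plain Python; `ask_model` is a placeholder I made up, not a real API, so wire it to your own chat client before uncommenting that line.]

    # Probe a chat model with a solved tic-tac-toe position and compare
    # its answer against ground truth computed deterministically.
    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

    def winner(board: str) -> str:
        """board is 9 chars, row by row; returns 'X', 'O', or '-' for nobody."""
        for a, b, c in LINES:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return "-"

    board = "XOXOXOX  "            # X holds the 2-4-6 diagonal
    truth = winner(board)          # -> 'X'
    prompt = (f"Tic-tac-toe board, cells 0-8 row by row: {board!r}. "
              "Who has won: X, O, or nobody? Answer with one character.")

    # answer = ask_model(prompt)   # placeholder: plug in your LLM client here
    # print("model said:", answer, "| ground truth:", truth)
    print("ground truth:", truth)

A handful of positions like this makes a quick, repeatable sanity check before trusting a model with open-ended "insight finding".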
-
Yesterday someone asked me what I love to do most. My answer: I always look at the problems people are facing and try to offer solutions. So I asked them the question below 👇 and asked them to tell me something interesting about it.

𝗛𝗼𝘄 𝗔𝗜 𝗶𝘀 𝗲𝘃𝗼𝗹𝘃𝗶𝗻𝗴 𝗮𝗻𝗱 𝘄𝗵𝘆 𝗽𝗲𝗼𝗽𝗹𝗲 𝗮𝗿𝗲 𝗯𝗲𝘁𝘁𝗲𝗿 𝘁𝗵𝗮𝗻 𝗔𝗹𝗶𝗲𝗻𝘀 𝗻𝗼𝘄?

Their answer: Artificial Intelligence (AI) has rapidly transformed from a futuristic concept into a powerful tool reshaping various aspects of our lives. From simple automation to complex problem-solving, AI’s evolution has impacted industries, healthcare, and even our day-to-day activities. As AI continues to evolve, it brings humans closer to achieving new frontiers in knowledge, efficiency, and creativity—closing the gap with what we might imagine as “alien intelligence.”

Today, people are using AI to solve problems faster than ever. In fact, in certain ways, we might be outpacing even the hypothetical abilities of aliens. Advanced algorithms in machine learning, deep learning, and neural networks enable us to process information with incredible accuracy and speed, driving forward scientific discoveries and innovations that might otherwise seem otherworldly.

Moreover, while alien life remains speculative, human intelligence continues to grow in partnership with AI, empowering us to push beyond traditional limits. As AI develops, humans are not only creating solutions but also learning to approach problems from perspectives that were once unimaginable. In this sense, with AI as our ally, we’re evolving to become smarter, more resourceful, and potentially better than the aliens we imagine.

POV: I realized they had taken the answer straight from ChatGPT and didn't add their own view. Here's the difference: you can't just copy and paste anything from AI. You have to understand how to use AI and write your own thoughts with its help. That's how your brand scales.
-
Will AI enable us to quantify and understand all risks, or will it bring new uncertainties? In his latest op-ed in Newsweek, “How AI Is Redefining Our Understanding of Risk,” WorldQuant Founder, Chairman and CEO Igor Tulchinsky explores this question, offering insight into how we can harness the power of predictive technologies while understanding the new uncertainties they may introduce. https://lnkd.in/eNB43mUN
AI is poised to reshape how we think about risk – and, in turn, our relationship with the future. As predictive technologies become more advanced, it’s possible that we’ll be able to forecast outcomes with greater levels of certainty. While this presents exciting opportunities, we must balance leveraging these new tools with human decision-making and critical thinking. In an op-ed for Newsweek, I share my thoughts on how we can adapt our thinking now to avoid potential pitfalls like cognitive atrophy, AI echo chambers and the illusion of certainty that predictive technologies may bring. https://lnkd.in/eY-7A2GF
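[Editor's note: a small illustration of the "illusion of certainty" point, with made-up numbers of my own rather than anything from the op-ed. Two forecasts can share the same point estimate while implying very different risk; reporting only the mean hides that.]

    import random

    random.seed(0)

    def simulate(mean: float, spread: float, n: int = 100_000) -> list[float]:
        """Draw n outcomes from a simple Gaussian model of a forecast."""
        return [random.gauss(mean, spread) for _ in range(n)]

    calm  = simulate(mean=5.0, spread=1.0)    # tight forecast
    risky = simulate(mean=5.0, spread=10.0)   # same mean, wide spread

    for name, outcomes in [("calm", calm), ("risky", risky)]:
        avg = sum(outcomes) / len(outcomes)
        p_loss = sum(o < 0 for o in outcomes) / len(outcomes)
        print(f"{name}: mean ~ {avg:.2f}, P(outcome < 0) ~ {p_loss:.1%}")

Both forecasts print a mean near 5, yet the chance of a negative outcome is roughly 0% in one case and about 31% in the other. A point forecast alone can feel certain while saying nothing about the tail.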
How AI Is Redefining Our Understanding of Risk | Opinion
newsweek.com
-
As AI continues to evolve, so must our approach to emotional intelligence. How can we ensure AI supports rather than undermines human connection? Dive into this insightful article. #AI #EmotionalIntelligence #TechEthics
How to make AI more emotionally intelligent
fastcompany.com
Crafting practical futures with GenAI
2mo
I would respectfully disagree. This situation can be explained in a different way, one that is much simpler and definitely more actionable: we encounter a vast gap between expectations and reality. Casual users of AI chatbots have no idea what exactly these models can and cannot achieve.