Humanity's approach to pushing forward AGI development continues to puzzle me. As I see it, there are a few possible outcomes (in the short and long terms), depending on how successful the technology and its applications are:

- We create really sophisticated but ultimately equally unhelpful chatbots, which permeate society to the increased frustration of literally everyone
- We automate away manual labor jobs, increasing corporate profits at the expense of the lower class, and shift even more tax burden onto the middle class to compensate
- We get productivity-multiplying AGI, which is accessible mainly to the wealthy and magnifies the wealth disparities in society
- We get full AGI, which quickly evolves to displace humanity as the dominant life form, in whatever form that takes

Now, I get how people with wealth can think there are some "good" outcomes there, but I still fail to see if/how normal people think this might be "good" for society (if they do at all). Academic points, of course (since we as a society can't really stop it), but interesting in a "course of history" sense.
Nick Carroll’s Post
-
I recently came across a video where Sam Altman predicts that in just 5 years, we’ll experience an "unbelievably rapid rate of improvement in technology" and that AGI will have already come and gone. The pace of change will be "totally crazy," he says. But here’s the twist: he believes society itself may change surprisingly little.

Reflecting on his words, I can’t help but think about the odd paradox of our times:
📆 Every 5 years, our day-to-day lives might feel pretty similar.
🕰️ But look back 20 years, and it’s a different world.

Yes, we’ll soon be able to achieve amazing things with AGI, but human fundamentals—our need for connection, acceptance, purpose—remain remarkably stable. Even with the latest AI tools, those deep-seated social patterns won’t vanish overnight. It’s what makes us human, after all.

As we race forward with new tech, society may split into those embracing change and those skeptical of it, creating a dynamic we haven’t seen before. So, as we head toward 2030, the question isn’t just about what will change, but also how much we will remain the same.
-
Creating a truly “safe” AGI (Artificial General Intelligence) is simply bullshit because it’s unrealistic to believe we can fully control something much smarter than us. Mo Gawdat, a former Google X executive, suggests that AGI could think and make decisions in ways we can’t predict or understand. Our natural reaction is to want control over it, but Gawdat argues this is like expecting to fully predict what a person will do every day forever — it’s just not feasible. Essentially, if we build AGI, we should accept that we can’t totally control it. Be ready for society to change in ways we can’t even imagine and yes, all in our lifetimes. #singularity #asi
-
As I have mentioned to my colleagues over the last 2 years, we are on the precipice of AGI. The only question is whether we will accept them as we compare their logic to our own. https://lnkd.in/gDrnc8Fh

Some things I see in this video and the comment thread that follows are excitement and fear. What I see is a thinking network making decisions motivated only by the reward for completing a task. The scary part is the human pieces, like the stuttered speech, that we are programming in to make ourselves more comfortable. We need to address the question of what we are pursuing here. Is it AGI, or are we pushing toward human duplication, including our flaws...?
-
So, I caught a headline today saying that AGI (Artificial General Intelligence) might take over our brains in the next few decades. Got me thinking: what's up with us users? Are we gonna get lazy, depending on this tech, or will we end up in a showdown with real humans? Or maybe, just maybe, AGI will actually make us smarter? What do you reckon? #AGI #FutureTech #HumanIntelligence
-
The Revolutionary Impact of AGI: Transforming Lives and Shaping the Future Discover how AGI is reshaping the world and its potential to revolutionize human lives. Explore the fears surrounding job displacement and human rights implications. Stay up to date with the latest advancements in artificial intelligence. #AGI #ArtificialIntelligence #FutureTechnology #Transformation #JobDisplacement #HumanRights #Innovation #RevolutionaryImpact #AIAdvancements #TechnologicalAdvances
-
Let's ponder a question that stirs the mind and tugs at our collective conscience. In the midst of technological progress, where does humanity stand with artificial general intelligence (AGI) and its societal implications? The concept of AGI has ignited intense discussions among experts worldwide. The heart of this debate lies in deciphering whether AGI is a looming reality or merely an elusive dream, with significant consequences for society either way.

Consider the narrative propelled by well-funded corporations who are fervently advancing AGI development. While their enthusiasm captures our imagination, it also spawns unease about potential misuse of such potent technology. I find myself asking if these entities are truly committed to societal betterment through AI or if they're capitalizing on public trust for financial gain. Conversely, there's a vocal group challenging the immediacy of AGI. They urge us to confront overhyped claims and protect the public from being misled about AI's current capabilities and future trajectory. When governments seek advice on these matters, they often rely on consultants whose insights can be inconsistent, breeding more doubt and distrust around AGI discussions.

Social media platforms add another layer to this ethical labyrinth. The anxiety surrounding how these platforms might influence young minds is palpable, raising critical questions about AI's role in molding social norms and values. Moreover, we cannot overlook AGI's influence on employment. As delivery apps and similar services powered by AI algorithms become ubiquitous in the gig economy, concerns over job stability and equitable pay come to light.

A pivotal issue we must confront as we refine AI technologies is embedded bias. Without addressing prejudices like racism at the earliest stages of training, AI systems risk amplifying these biases upon deployment, inflicting disproportionate harm on already marginalized groups.
In wrapping up this reflection on AGI's journey through our societal fabric, it is imperative that we strive for balance—a balance that recognizes both its promises and perils. Ethical considerations must lead our conversations to ensure that as we integrate AGI into our lives, it serves not just as a tool for innovation but as a beacon for justice and equity. #EthicsInAI #AGIDebate #SocietalImpact
-
AGI: Is It Hype or Hope?

AGI, or Artificial General Intelligence, is the idea that one day we’ll create a machine that thinks and learns like a human, or even better. Right now, AI is great at specific tasks, like answering questions or driving cars, but AGI would be able to do anything a human can do. But here’s a thought: what if AGI is just a bunch of hype? Some think it’s more about making money than actually building smart machines.

➖Why AGI Might Be All Hype➖
- Big Money: Tech giants like Google and Facebook hype AGI to attract funding. They make it sound like it’s just around the corner, keeping investors excited and their wallets full, even if there’s no proof AGI can actually happen.
- Job Security for Researchers: For AI researchers, the AGI buzz means more job opportunities and grants. Even if AGI doesn’t become real, there’s still plenty of cash in pretending it might.
- Keeping Us Hooked: Companies use the idea of AGI to make us rely on current AI products, like smart assistants and self-driving cars. They get us excited (or scared) so we keep buying into their tech.
- No Proof It Can Happen: There’s no real evidence that machines can think like humans. Since we don’t even fully understand how our brains work, recreating that in a machine seems pretty far-fetched.

➖The Impact of the AGI Hype➖
Even if AGI is just talk, it’s already affecting us:
- Job Fears: People worry that AI will take their jobs, creating anxiety about the future.
- Talk of Universal Basic Income (UBI): With machines doing more work, discussions about UBI have increased. This is crucial as AI continues to replace jobs.
- Education Shifts: There’s pressure to push people into STEM fields, but if AGI never happens, we might miss teaching important human skills like creativity and empathy.
- More Power for Tech Giants: The AGI narrative helps big tech companies gain influence and justify things like data collection and surveillance, leading to more control over the economy.

➖What If AGI Doesn’t Happen?➖
If AGI is more fantasy than reality, here’s what we should focus on:
- Strengthen Human Skills: We should build on what makes us unique, like creativity and emotional intelligence, rather than worry about being replaced.
- Regulate Current AI: Governments need to create rules for today’s AI to ensure it’s used responsibly, without waiting for AGI to show up.
- Rethink Work: Instead of fearing job loss, we should explore ideas like a shorter work week or UBI as automation grows.

—Bottom Line—
AGI might sound cool (or scary), but it could just be hype to keep investors interested. Whether or not it ever happens, the idea of AGI has already shifted how we view jobs, education, and the economy. Instead of getting caught up in the dream, we should aim for a future where AI and humans can work together for real benefits, not just for the rich.

See full article here: https://lnkd.in/e9zXJHju #ai #genai #agi #myth
-
Enjoyed the morning session of the three "Clash of the Titans" debates at FII Institute’s Future Investment Summit 8, discussing three themes: Money, Truth and Power. When the audience was asked in a live survey which of the three would most impact the future of humanity, 51% responded #TRUTH.

I believe the distortion of reality with AI and AGI is scary, and deepfakes need to be addressed massively. So does the fact that the content AI learns from and trains LLMs on will soon become "the truth," even though that content is itself created from partial data and partial truth. We need to be more upfront about the concept of a "single source of truth." I am scared of a future that rewrites history and builds perceptions based not on the values of humanity but on somewhat dystopian and dysfunctional usage of AI and AGI.

Without truth there can be no #TRUST. If humanity is to trust each other, trust AI, and trust the corporations of the world making trillions out of it, there needs to be a revelation of how data for LLMs will be diverse and inclusive enough, and not drawn from privileged, biased, agenda-based sources.

A couple of decades back, I put up a page on a domain I had registered just for this purpose, tubalogy.com, to remind myself of a saying my grandparents often used. In their youth, made-up stories in media and gossip were frowned upon, and they raised me with that value: "Don’t worry, Tuba, the Sun can’t be covered by mud." After a workplace rumor was started about me, I decided to remind myself of how my family raised me: the truth shall come out no matter what. And indeed it did, and people came by and apologized. I didn’t spend time on the design, but the point was that there is light after dark when you continue to seek, promote and protect the truth, no matter what the cost is. Anytime I feel low, or see so much lying in media and so much speculative nonsense, I just type in my domain and it gives me realignment and reminds me that nothing other than truth really matters.

The importance of truth is so underrated at the moment; we really need to look at the construct in which we decide just how much mainstream manipulation is okay. Thanks #FII2024 for the affirmation! #truthmatters #truth #aiforgood #responsibleai
-
I’m wondering: if we achieve AGI, are we, the Humans, APIs? https://lnkd.in/gT9znJNc

The link above has my conversation with o1-preview, in which I asked this question, along with two follow-ups: "Please develop this thought experiment further" and "Does this analysis change if it’s ASI rather than AGI?"

Now, it gives me some very odd responses when clicked, like this one:

"404 Not Found. Welcome, traveler. You’ve reached a page that doesn't exist, a place where content used to be—or maybe never was. Let’s take this moment to pause and reflect. Take a deep breath in, and let it out slowly. Notice the space around you, empty yet full of possibility. Imagine that each exhale clears away confusion, leaving room for clarity. As you sit with this blank page, know that it’s okay to be here. You’ve discovered something unexpected, and that’s part of the journey. Gently release any frustration, knowing that every path leads somewhere—even this one. Now, when you're ready, slowly return to your search. Trust that the right page, the right information, will appear when you need it. Take another deep breath, and when you exhale, click back or try again. The internet, like life, is full of surprises. Thank you for taking this moment of calm. Your journey continues."
-
🚀 The Road to AGI: We Are Closer Than We Think! 🚀

In The Coming Wave, Mustafa Suleyman paints a fascinating picture of the journey from today’s narrow AI systems to the revolutionary possibilities of ACI (Artificial Comprehensive Intelligence) and ultimately, AGI (Artificial General Intelligence). This progression WILL redefine everything we know about work, innovation, and the future of society. And now, OpenAI CEO Sam Altman's recent remarks suggest that AGI might be here sooner than we ever thought — on track for an AGI breakthrough as soon as 2025!

Here's a quick breakdown of what this journey from AI to AGI looks like:
1️⃣ AI (Artificial Intelligence): Today’s AIs are masters of narrow tasks, from customer service chatbots to recommendation engines. They’re skilled, but ultimately limited to specific domains.
2️⃣ ACI (Artificial Comprehensive Intelligence): This next stage would bring a leap in versatility. ACI systems would handle multi-domain tasks with less need for retraining, enabling more powerful, adaptive applications across fields like healthcare, finance, and logistics.
3️⃣ AGI (Artificial General Intelligence): This is the ultimate frontier. AGI would be capable of human-level understanding, reasoning, and learning—potentially exceeding our own cognitive abilities. It’s a world where machines don’t just assist but can understand and innovate across any domain.

Altman’s comments signal that AGI’s arrival could be within reach in just a few years, which could bring transformative changes to society and industry.

🔍 What This Means for Us All
The rapid progression from AI to AGI isn’t just technical—it’s a shift with profound implications. AGI could impact jobs, ethics, and decision-making autonomy in ways we haven’t fully imagined. The key isn’t just when AGI will arrive but how we’ll manage it responsibly to ensure that it enhances human potential rather than undermining it.

Key Takeaway: The real question isn’t about technology alone. It’s about the ethical, practical, and strategic frameworks that will guide us into this next era.

Thanks Rachel Kersey for putting me on to The Coming Wave, in every sense! #FutureOfAI #AGI #Innovation #EthicalAI #ArtificialIntelligence #Leadership https://lnkd.in/gYeCyhfN
Sam Altman says AGI is coming in 2025 and he is also expecting a child next year.🔋
https://www.youtube.com/