AI systems routinely outperform humans, affirms the Stanford AI Index Report 2024. Should we be worried? Hardly a day goes by without someone speculating whether AI systems like GPT-4, Gemini 1.5 Pro, Claude 3, and now LLaMA 3 are becoming sentient. The fear is that AI's intelligence, better known as artificial general intelligence (AGI) or artificial super intelligence (ASI), will exceed that of the most intelligent humans, making it a sort of Alpha Intelligence that may eventually even enslave us. #Nvidia CEO Jensen Huang has proposed that AGI would emerge within five years, while Ben Goertzel foresees it happening in just three. Elon Musk predicts that by the end of 2025 or early 2026, AI will be smarter than any single human, and probably smarter than all humans combined by 2029. Meta CEO Mark Zuckerberg, too, has announced that his company is entering the race to build AGI, though he did not provide a timeline. These are all credible voices, and we must pay attention. But one also needs to pay heed to equally accomplished voices like those of Yann LeCun, Andrew Ng, and Gary Marcus, who beg to differ and have consistently tempered expectations around AGI. Earlier this month, Stanford University's seventh edition of the AI Index report weighed in on the same topic, noting that AI systems like GPT-4, Gemini, and Claude 3 are "impressively multimodal" and "routinely exceed human performance on standard benchmarks". The report, however, qualifies that current AI technology still has limitations: it cannot reliably deal with facts, perform complex reasoning, or explain its conclusions. What do you think? LiveMint https://lnkd.in/g8VmStwv
Leslie D'Monte’s Post
-
Don't overestimate LLMs; it distracts attention from real issues: I have often alluded to the sharp divide in opinions on artificial intelligence (AI) and generative AI (GenAI). Proponents highlight the technology's benefits and its current assistance to humans, often downplaying its limitations. Critics, conversely, focus on potential risks like hallucinations, deepfakes, plagiarism, job displacement, high energy consumption, and the hypothetical threat of machines surpassing human intelligence, known as artificial general intelligence (AGI). This debate is unlikely to die down in a hurry, with emotions running high on either side. Meanwhile, new research from the University of Bath and the Technical University of Darmstadt suggests that large language models (LLMs) do not pose an existential threat to humanity: while #LLMs can follow instructions and handle language proficiently, they cannot independently acquire new skills or develop complex reasoning abilities without explicit guidance. Much of the angst is directed at GenAI, which is technically a subset of AI (a much broader field spanning mature technologies like machine learning, deep learning (GANs, VAEs), image recognition, computer vision, and NLP). GenAI remains an adolescent that has mistakenly been given the status of an adult. The young GenAI should be accompanied by adult humans -- read: humans-in-the-loop -- when working in companies. More importantly, a sensible approach is to identify the business problem and find the right technology to solve it. If #AI or #GenAI is the right answer, go ahead. If not, explore a better and cheaper option. I would love your thoughts. LiveMint https://lnkd.in/gU269pKa
Tech Talk by Leslie D'Monte
livemint.com
-
'Small is Beautiful' even in the world of AI, GenAI: In his seminal 1973 work, 'Small Is Beautiful: A Study of Economics As If People Mattered', E.F. Schumacher advocates a shift towards sustainable and human-centered economics. Among other things, he introduces the concept of "Buddhist economics", which emphasizes considering human needs and the well-being of individuals within the economic framework. Does this ring a bell in the world of AI? Indeed! 'Small' is becoming not only an acceptable but also a "beautiful" word even in the field of generative artificial intelligence (GenAI). The reason: small language models, or SLMs, promise to deliver more bang for the buck to enterprises than large language models, or LLMs. Cases in point are Meta's LLaMA-3, Microsoft's Phi-3, and Apple's OpenELM -- all released in April. Here's my take. I would love to hear from you too. LiveMint https://lnkd.in/g7kdmMRz
Tech Talk by Leslie D'Monte
livemint.com
-
Since 2023, 80% of organizations have increased their investment in generative AI (GenAI), while 20% have maintained their investment, according to the latest report from the Capgemini Research Institute, titled 'Harnessing the value of generative AI: 2nd edition'. The report shows that 24% of organizations have integrated GenAI into some or most of their locations or functions, a significant jump from just 6% a year ago. #Capgemini #GenAI
Tech Talk by Leslie D'Monte
livemint.com
-
Five points Bulls, Bears may want to consider when investing in AI, GenAI; if AI is the new electricity, how should we measure ROI? There's a clear divide in thinking when discussing #artificialintelligence (AI) and #generativeAI (GenAI). Those bullish on AI and GenAI will find many reasons to convince us that the tech will help society, and is already assisting humans, even as they conveniently gloss over the numerous limitations and legitimate reservations that skeptics raise. On the other hand, those who fear the misuse of AI and GenAI go to the other extreme of focusing only on the limitations, which include hallucinations, deepfakes, plagiarism and copyright violations, the risk to human jobs, the guzzling of power, the perceived lack of #ROI, and the fear that these technologies will make machines smarter than humans, perhaps even helping them enslave our race at some point -- a prospect associated with artificial general intelligence, or #AGI. Those tempted to believe that the truth lies somewhere in the middle also err, since the "middle" ground will keep shifting, given the incredible pace at which AI is evolving. That's why I'm listing five points to consider when approaching this complex subject, fully cognizant that even these will have to be revisited periodically. LiveMint
Tech Talk by Leslie D'Monte
livemint.com
-
GenAI is moving out of adolescence; mainstream adoption likely by 2027: For the last 7-8 years, enterprises in India have been using #artificialintelligence to acquire better insights from their data, increase product efficiency, enhance their supply chains and reduce time-to-market, among other things. About 59% of enterprise-scale organizations (companies with over 1,000 employees) surveyed in India are actively using AI in their businesses, according to an IBM study. It adds that early adopters are leading the way, with 74% of those Indian enterprises already working with AI having accelerated their investments in AI in the past 24 months in areas like R&D and workforce reskilling. Yet, the AI we are referring to here is what we term traditional AI, as opposed to the new wave of generative AI (GenAI) ushered in by OpenAI with ChatGPT. A major reason for the distinction: while traditional machine learning (ML), an AI technique, was largely limited to observing and classifying patterns in content with the help of predictive models, GenAI models rely on self-supervised learning to pre-train on humongous amounts of data, and can not only analyze data but also create new designs and propose ways to improve existing ones. The result: we now have many heavyweight AI models, including LLMs, large multimodal models (LMMs) and small language models (SLMs). Gartner predicts that about 40% of enterprise applications will have used generative AI by the end of 2024. But GenAI's pitfalls, too, have to be addressed. We bring you some updates. LiveMint #generativeai #llm #enterprise #tech #ibm #gartner https://lnkd.in/gKjsAjiH
Tech Talk by Leslie D'Monte
livemint.com
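The predictive-vs-generative distinction described in the post above can be sketched in a few lines of stdlib Python. This is a hypothetical toy, not how production ML or LLMs actually work: the keyword classifier stands in for a discriminative model (mapping existing content to a label), and the bigram sampler stands in for a generative model (learning statistics from data, then emitting new content).

```python
import random
from collections import defaultdict

# "Traditional ML" stand-in: classify existing content into a label.
def classify(text):
    positive = {"good", "great", "love"}
    negative = {"bad", "poor", "hate"}
    words = text.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return "positive" if score >= 0 else "negative"

# "Generative" stand-in: learn bigram statistics from a corpus...
def train_bigrams(corpus):
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

# ...then emit NEW text that was not in the training data verbatim.
def generate(model, start, length=5, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)
```

The point of the toy: `classify` can only sort what it is shown, while `generate` produces strings of its own, which is the qualitative leap (and the source of both the creativity and the hallucination risk) that separates GenAI from earlier predictive models.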
-
Why tech firms love our voices, faces and other biometrics, and why we should care about AI audio and video deepfakes: If you are a public speaker, a podcaster, or simply love sharing your videos and audio clips on social media sites, you need to be aware that AI tools can mimic your voice in seconds and impersonate you. The latest such tool is OpenAI's Voice Engine, which can impersonate your voice from just a short 15-second recording. It is not the first: text-to-speech (TTS) systems have been around for a while. Free TTS tools include Microsoft's VALL-E; Natural Reader; WordTalk; ReadLoud; Listen (which uses Google's TTS application programming interface to convert short snippets of text into natural-sounding synthetic speech); Free TTS (again from Google); Watson Text to Speech (a tool from IBM that supports a variety of voices in different languages and dialects); and Neosapience (which allows users to specify the emotion they want virtual actors to convey when speaking). The positives include the use of these AI tools in the edtech sector, for storytelling, and to create personalised, human-like avatars for a variety of content -- from product marketing to sales demos. Yet big tech usage-policy documents are typically boilerplate responses that do not necessarily comfort people who find it increasingly difficult to differentiate between real and fake images, videos and, now, voices. Here's my take. I would love to hear yours. LiveMint #generativeai #voicecloning #deepfakes https://lnkd.in/gHBkhB77
Tech Talk by Leslie D'Monte
livemint.com
-
Curious about #AI agents' role in decision-making? AI agents are quickly becoming indispensable in driving automation, real-time decision-making, and operational efficiency across industries. These intelligent systems can learn from data, interact with users, and optimize processes without human intervention, freeing up resources and minimizing errors. However, as businesses adopt AI agents, they must ensure these systems operate ethically and without bias to achieve optimal outcomes. Despite their benefits, AI agents come with challenges, such as data biases and the need for transparency in decision-making. Companies often face difficulties in integrating AI into legacy systems while maintaining trust and fairness. Without addressing these concerns, AI agents may generate flawed or biased outputs, undermining their potential. ExperienceFlow.ai's #EnterpriseAI solution enables organizations to deploy AI agents seamlessly while mitigating these challenges. Our solution ensures data integrity, reduces bias, and embeds ethical practices into AI-driven processes. #ExperienceFlow.ai empowers businesses to fully leverage AI agents for real-time decision-making, improved operations, and responsible, transparent outcomes. Partner with ExperienceFlow.ai to unlock the full potential of AI and elevate your decision-making capabilities. Click here to learn more! https://lnkd.in/d4W6hTBt Giri Srinivas ATG Atul Bhatnagar A Anand R. Arjun I. Gokul Solai, MD Rama Mohan Venkata Kadayinti #KPIs #AIAgents #AIforBusiness #EnterpriseDigitalNervousSystem #AutonomousEnterprise
AI agents now make their own decisions; why enterprises should care
livemint.com
-
Imagine knowing what hundreds of millions of people are thinking -- a user researcher's dream! xAI has just launched its API, giving you access to the 560 GB of data generated every day by the millions of users on X (formerly Twitter). A real-time newspaper! This API offers remarkable opportunities to integrate the world's first mind-reading feature into your applications. Never before have you had access, at this scale, to the thoughts and behaviours of your current and future customers. Let's dive in! https://lnkd.in/ddRNSF2q
Eye of Horus with xAI's API by AI Today
podcasters.spotify.com
-
Let's be careful about the tactics Big Tech companies use to attract investment, and not gloss over security issues by looking at headline numbers alone without examining real capabilities. The recent announcement of Meta's LLaMA 3, touted as one of the most powerful LLMs to date, has reignited the hype around this technology. Along with the excitement, however, there's a concerning amount of misinformation being spread by companies like Meta, Google, and Twitter, misleading users and investors about LLMs' capabilities and limitations. Learn more in my most recent blog post.
Lies and Misinformation Surrounding Large Language Models (LLMs)
link.medium.com
-
Reddit has an almost inexplicable pull on the tech world. On one hand, it's a great source of unvarnished discussion. On the other hand, well, it's a bit of a mess of controversy. Google's, and now OpenAI's, close partnership with Reddit gives LLMs and Search a direct line to the content and the quality signals that sit behind the site. Setting aside the concerns about LLMs and intellectual property, we have seen Reddit content surface as direct answers in Google AI Overviews without a real sense of fact-checking or correctness. Google, OpenAI, and other tech companies will scrape the data one way or another, so will these direct relationships eventually work out the kinks and ensure that they add more value to the web ecosystem? https://buff.ly/4dMsp70
https://openai.com/index/openai-and-reddit-partnership/
openai.com