Last week at Chatham House, I sat down with the Financial Times’ Gillian Tett (also the Provost of King’s College, Cambridge) to discuss what’s ahead for AI in 2025, and we got some really good questions, including one from Kayode Adeniyi (see below). 2024 brought remarkable progress in so many areas, including scientific discovery, societal impact in areas like health and climate, and helpful products. As we look ahead to the new year, here are a few of the possibilities I’m most excited about and areas we are working on:

- Progress on the frontiers of AI: Gemini 2.0 and our research prototypes with agentic capabilities (e.g. Project Astra and Project Mariner) will help us explore the practical application of AI agents, so we can responsibly make them more widely available in products in the future.
- AI powering growth and productivity: AI, if harnessed correctly, could help reverse long-term declines in economic productivity, raising living standards for everyone. But such gains will not be automatic, as I have discussed elsewhere, including in the FT.
- AI accelerating research breakthroughs in science: Breakthroughs in AI-enabled science (e.g. AlphaFold, Connectomics) will enable more researchers to advance their work to treat disease, solve environmental challenges, and more. Progress in Quantum AI is helping us do some cool science too: https://lnkd.in/gJj9QMtt
- AI helping advance progress on the Sustainable Development Goals: AI holds the potential to drive measurable progress on societal challenges like climate change, including continued improvements in modeling and forecasting, mitigation, and adaptation.
- Stronger international alignment on AI: International efforts to align around a coherent and inclusive global governance framework that accounts for the opportunities AI offers while addressing its risks and complexities.
At the end of our talk, Kayode Adeniyi, a student from LSE, put his hand up to ask me a question. He said: “If you were in my shoes, what one question would you be asking?” My answer was a question I ask myself often: How do we make sure everyone benefits from AI? Growing up, I saw and experienced what it meant for technology to not benefit everyone equally. Covid was another, more recent example: the world invented extraordinary vaccines, but some places got them and some people didn’t. And we’ve all seen the enduring implications of the digital divide for economies and people. So, as we look ahead to 2025, the questions we should all ask ourselves are: What can we do to make AI different from technological advances in the past? How do we make sure everyone benefits? What actions are needed now?
Following the announcement of its new AI model, Gemini 2.0, Google's senior vice president of research, technology and society, James Manyika, joined us to discuss what 2025 will bring for AI in science, economics, global governance and international cooperation. Watch the event recording: https://lnkd.in/eyHwphrk
Strategy & Operations, New Ventures, and Partnerships Executive | Driven by Tech x Impact | Board Member | Media & Emerging Tech
James Manyika, I love the intention behind your question "How do we make sure everyone benefits from AI?" I would follow it up with this: How do we reconcile the exponential growth of AI with the simultaneous pullback on DEI initiatives and the backlash against ESG? In the rush to sell AI solutions across every sector, *who* AI companies are selling to is just as important as *what* they are selling.