Google's best Gemini demo was faked

Google’s new Gemini AI model is getting a mixed reception after its big debut yesterday, but users may have less confidence in the company’s tech or integrity after finding out that the most impressive demo of Gemini was pretty much faked.

A video called “Hands-on with Gemini: Interacting with multimodal AI” hit a million views over the last day, and it’s not hard to see why. The impressive demo “highlights some of our favorite interactions with Gemini,” showing how the multimodal model (that is, it understands and mixes language and visual understanding) can be flexible and responsive to a variety of inputs.

To begin with, it narrates an evolving sketch of a duck from a squiggle to a completed drawing, which it says is an unrealistic color, then evinces surprise (“What the quack!”) when seeing a toy blue duck. It then responds to various voice queries about that toy, then the demo moves on to other show-off moves, like tracking a ball in a cup-switching game, recognizing shadow puppet gestures, reordering sketches of planets, and so on.

It’s all very responsive, too, though the video does caution that “latency has been reduced and Gemini outputs have been shortened.” So they skip a hesitation here and an overlo...
More Relevant Posts
-
ICYMI: How to create generative AI themes for Google Chrome
How to create generative AI themes for Google Chrome
https://9to5google.com
-
Is this a real image generator or just an innovative image search engine?
BlinkShot AI generates images in a fraction of a second, so fast that the image changes as you type.

No account is needed to start generating images. BlinkShot is ad-free, censorship-free, and has no hidden paywalls. Plus, it's 100% free to use.

With BlinkShot, you just type in a description of the image you want, and once it's ready you can download it or share it directly from your browser.

BlinkShot is also an open-source project, powered by Flux, which means it's constantly evolving and improving thanks to a community of developers.

♻️ Repost this if you think it's the future.

PS: This post is written by Ruben's AI. Get your $10 off EasyGen by:
1. Clicking on "Visit my website" at the top.
2. Using the code "AXELLE" at the checkout.
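The post shows no code, but since BlinkShot is described as open source and "powered by Flux", here is a minimal sketch of text-to-image generation with the publicly released FLUX.1-schnell weights via the diffusers library. This is an illustrative assumption, not BlinkShot's actual pipeline; the model name, step count, and guidance setting come from the diffusers documentation rather than from the post.

```python
# Minimal text-to-image sketch with a Flux model via diffusers.
# Assumption: this mirrors the general technique ("powered by Flux"),
# not BlinkShot's real backend or serving setup.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",  # publicly released distilled Flux model
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")  # needs a GPU with enough memory; CPU offloading is also possible

prompt = "a lighthouse on a cliff at sunset, watercolor style"
image = pipe(
    prompt,
    num_inference_steps=4,  # schnell is distilled for very few steps, hence the speed
    guidance_scale=0.0,     # schnell is typically run without classifier-free guidance
).images[0]

image.save("blinkshot_style_demo.png")
```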
-
🦑 AI-Powered Video Summarization: A Closer Look at Eightify https://lnkd.in/d_Bi4t4X
AI-Powered Video Summarization: A Closer Look at Eightify
undercodenews.com
-
Google Levels Up AI with Grounding! 🚀 Google's Gemini models just got a major upgrade! 🧠 With the introduction of Grounding with Google Search, these models can now access and process real-time information from the web, making their responses more accurate, relevant, and up-to-date. This is a game-changer for AI, paving the way for more reliable and informative AI-powered tools and applications. #AI #ArtificialIntelligence #MachineLearning #Google #Gemini #Technology #Innovation #FutureofAI
Gemini API and Google AI Studio now offer Grounding with Google Search - Google Developers Blog
developers.googleblog.com
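In practice, grounding is a per-request tool you switch on when calling the API. The snippet below is a minimal sketch against the Gemini REST API; the endpoint version, model name, and the google_search_retrieval tool field are assumptions based on the public documentation around the announcement, so verify them against the linked blog post.

```python
# Minimal sketch: one Gemini API request with Google Search grounding enabled.
# Assumptions: the v1beta REST endpoint and the "google_search_retrieval" tool
# field as described around the announcement; verify against the official docs.
import os
import requests

API_KEY = os.environ["GEMINI_API_KEY"]  # hypothetical environment variable name
URL = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/gemini-1.5-flash:generateContent?key={API_KEY}"
)

body = {
    "contents": [
        {"parts": [{"text": "Who won the most recent Formula 1 race?"}]}
    ],
    # Enable the Google Search grounding tool for this request (assumed field name).
    "tools": [{"google_search_retrieval": {}}],
}

resp = requests.post(URL, json=body, timeout=60)
resp.raise_for_status()
data = resp.json()

# The grounded answer; grounding metadata (source links) is returned alongside it.
print(data["candidates"][0]["content"]["parts"][0]["text"])
```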
-
Bing now offers the option to turn off AI copilot responses, giving users more control over their search experience. Explore the reasons behind this move and its potential impact on user satisfaction. #Bing #AI #UserExperience #SearchEngine
Bing Lets You Turn Off AI Copilot Responses In Search
seroundtable.com
-
It's #TransparencyTuesday with a win for #ExplainableAI! A new tool from Google DeepMind called Gemma Scope helps you explore how a few of Google's #LLMs work. You can even play with an interactive demo version at https://lnkd.in/guyZqkuz. Wouldn't it be cool if we had a similar tool for all models? #AITransparency #ResponsibleAI
Gemma Scope: helping the safety community shed light on the inner workings of language models
deepmind.google
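Gemma Scope is a suite of sparse autoencoders (SAEs) trained on Gemma's internal activations: each activation vector is decomposed into a much wider, mostly-zero set of features that are easier to inspect. The toy sketch below only illustrates the basic SAE forward pass with random weights; it is not Gemma Scope's released code (the real models use JumpReLU activations and learned weights).

```python
# Toy sparse autoencoder (SAE) forward pass, the technique Gemma Scope builds on.
# Illustrative sketch with random weights, not Gemma Scope's released models.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 16, 128  # LLM hidden size vs. (much wider) SAE dictionary size

W_enc = rng.normal(size=(d_model, d_sae)) * 0.1
b_enc = np.zeros(d_sae)
W_dec = rng.normal(size=(d_sae, d_model)) * 0.1
b_dec = np.zeros(d_model)

def sae_forward(activation):
    """Encode one LLM activation into sparse features, then reconstruct it."""
    features = np.maximum(activation @ W_enc + b_enc, 0.0)  # ReLU zeroes most features
    reconstruction = features @ W_dec + b_dec
    return features, reconstruction

act = rng.normal(size=d_model)  # stand-in for a residual-stream activation
features, recon = sae_forward(act)
print("active features:", int((features > 0).sum()), "of", d_sae)
print("reconstruction error:", float(np.square(recon - act).mean()))
```

In training, the reconstruction error is minimized together with a sparsity penalty, so the handful of features that fire for a given activation tend to correspond to human-interpretable concepts; that is what the interactive demo lets you browse.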
-
Google Gemini 1.5 Pro's 1,000,000+ token context length can understand a whole movie. Here's a quick tutorial on how to try it for free.

First, in case you missed it, here's my thread on Gemini 1.5 and its use cases: https://lnkd.in/gVxVSEgw

Some capabilities I covered:
- Breaking down and understanding a movie
- Translating text into a rare language
- Watching an AI video and determining whether it's AI or not

Step 1: Go to deepmind.google, click 'Technologies' at the top of the page, then scroll down and click 'Gemini'.

Step 2: From the Gemini page you can learn more about what Gemini does, watch videos, or read the paper, but for this tutorial we're skipping all that. Scroll down to 'Introducing Gemini 1.5' and click 'Try Gemini 1.5'.

Step 3: After signing in, you should see a 'You're all set!' page. It can be a bit buggy, so try refreshing or using Chrome as your browser if you don't see it. Once you do, click 'Get started' and it will take you to Google AI Studio.

Step 4: In Google AI Studio, open the Model settings on the right and make sure you've selected Gemini 1.5 Pro rather than Gemini 1.0 Pro.

That's it! Start prompting away with your 1M+ token context length. Hope you found this AI workflow as useful as I do. I post a new AI tutorial every day, so follow me Rowan Cheung for more. If you found this helpful, support my content with a like/repost ♻️
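For anyone who prefers code over the AI Studio UI, the same model is reachable through the Gemini API. The sketch below uses the google-generativeai Python SDK; the model identifier, the File API upload call, and the sample file name are assumptions based on the public docs around the 1.5 release, so double-check them in Google AI Studio before relying on them.

```python
# Minimal sketch: long-context prompting with Gemini 1.5 Pro via the
# google-generativeai SDK (pip install google-generativeai).
# Model name and File API usage are assumptions; check the current docs.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-pro-latest")

# Upload a large file (e.g. a long transcript) through the File API,
# then ask a question that needs the whole document as context.
transcript = genai.upload_file("movie_transcript.txt")  # hypothetical local file

# Sanity-check how much of the 1M+ token window the input actually uses.
print(model.count_tokens([transcript, "Summarize the plot."]))

response = model.generate_content(
    [transcript, "Summarize the plot and list the main characters."]
)
print(response.text)
```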
-
Google Gemini 1.5 Pro's 1M+ token context length can understand a whole movie. Or 11 hours of audio. Or 30k+ lines of code. Or 700k+ words. 🤯 What do you intend to do with it? #AI #Google #Gemini #Shirute #CXPAFinland #CX #asiakaskokemus