Countdown to OpenAI's Spring update on Monday at 10am PT (i.e. right now)
Live spoilers from Mira Murati:
- ChatGPT Desktop
- GPT-4o available to free users
-- smarter and faster voice, text, vision
-- stored conversational memory
-- improved quality in 50 languages (97% of world population)
-- API 2x faster, 2x cheaper
Demos:
- Voice conversation where the AI gives feedback on breathing pace
- The AI tells a bedtime story; the user interrupts it mid-speech and asks for a dramatic or robotic tone, and the AI adjusts in real time, speaking dramatically or robotically
- User writes a math equation on a piece of paper and asks the AI to help solve it. The AI watches the user work through it and gives hints.
- User writes "I <3 ChatGPT" and the AI responds as if touched "emotionally".
- User pastes complex code and asks the AI to describe what the code does in one sentence. The AI knows what the code is doing and can answer specific questions.
- User executes code which produces a weather diagram. GPT-4o reasons about weather events and describes what happened by "watching" the diagram.
- Mira and Mike role-play as if one speaks only Italian and the other only English. GPT-4o acts as the live voice translator.
- User asks GPT-4o to look at his live face and read his emotions; it says "happy with a touch of excitement". The user says, "I am excited; I'm describing how amazing you are." It replies with emotional intonation, "Oh stop it!"
Amin Ariana’s Post
More Relevant Posts
-
OpenAI just launched its new flagship model--GPT-4o--and it's free! Check it out in the video below! I am actually pleased with the direction OpenAI has been taking in emphasizing AI as a collaborative tool rather than a worker replacement. GPT-4o is also making it more apparent that the idea of "prompt engineering" as a separate field for humans is fading into the background. GPT-4o "reasons" by translating prompts more effectively and efficiently, so your experience will be more streamlined and your prompts more effective at getting you what you need.
Edit: Here I am copying a list of fun things you can do with the new version of ChatGPT, from my favorite AI website--theneuron.ai. Support these guys, they're great!
"Explore what it can do here:
--live language translation (https://lnkd.in/dqewMwpu?)
--realtime conversational speech (https://lnkd.in/dqewMwpu?)
--lullabies and whispers (https://lnkd.in/dqewMwpu?)
--sarcasm (https://lnkd.in/dqewMwpu?)
--even singing (https://lnkd.in/dmDNTVji)!"
Try it and have fun!
Spring Update
openai.com
-
Here is a summary of the key points from OpenAI's Spring Update, via perplexity.ai:
- OpenAI announced a new flagship model called GPT-4o ("o" for "omni"), which brings GPT-4-level intelligence to all their products, including the free tier. It is faster and cheaper than GPT-4 Turbo, with improved quality in 50 languages.[1]
- They introduced a desktop app for ChatGPT to make the AI more accessible and easier to integrate into workflows.[1]
- The user interface has been refreshed with new conversational capabilities. Users can now interrupt ChatGPT, share videos as prompts, and interact more naturally using speech.[1][2]
- GPT-4o has enhanced multimodal abilities to understand and generate content across text, vision, and audio modalities.[2][4]
- The updates aim to make AI interaction feel more natural and conversational while expanding the range of tasks GPT-4o can handle.[1][2][4]
Citations:
[1] https://lnkd.in/gxdPnxwF
[2] https://lnkd.in/gCHQtY3u
[3] https://lnkd.in/gXJinYXA
[4] https://lnkd.in/gTbmSg6B
[5] https://lnkd.in/gjjixjbn
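To make the multimodal point above concrete, here is a minimal sketch of how a text-plus-image request to GPT-4o is composed with the OpenAI Python SDK's chat-completions message format. The question text and the image URL are hypothetical placeholders; the actual network call is left commented out because it requires an `OPENAI_API_KEY`.

```python
# Sketch: composing a multimodal (text + image) chat request for GPT-4o.
# The helper only builds the request payload; sending it requires the
# `openai` Python SDK (v1.x) and an API key, shown commented out below.

def build_multimodal_request(question: str, image_url: str) -> dict:
    """Return kwargs for a GPT-4o chat completion mixing text and an image."""
    return {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_multimodal_request(
    "What weather event does this diagram show?",   # hypothetical prompt
    "https://example.com/weather-diagram.png",       # placeholder image URL
)

# With a configured key, the request would be sent like this:
# from openai import OpenAI
# response = OpenAI().chat.completions.create(**request)
# print(response.choices[0].message.content)
print(request["model"])  # -> gpt-4o
```

The same `content` list can carry several images or text parts, which is what lets one conversation turn mix modalities instead of routing through separate vision and text pipelines.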
Spring Update
openai.com
-
This is big news from OpenAI today. It may seem subtle, but it is big: the impact of an LLM chatbot becoming this capable will spread through every industry and every consumer application. Soon it will become obvious, but right now it is still somewhat low-key. If you are reading this, you are still early to the transformation that is happening.
The Spring Update to OpenAI's ChatGPT was announced today. Text, vision, and audio capabilities have all been improved. The more powerful GPT-4o model has been released to all users, both paid and free accounts, and it is faster and more capable than GPT-3.5, which was previously the free-tier model. The goal is to get the tools into more users' hands and show them what's possible. GPT-4o also understands more languages, and the "device" interface is receding: users can interact with GPT-4o directly by speaking out loud rather than typing on a keyboard or mobile screen. This means users can speak with ChatGPT, show it their surroundings, and carry on live "conversations" just as they would with a person who can see, speak, and use their own native language. This is the closest we have gotten to an embodied, human-like form of AI. The full 26-minute video is linked below.
Spring Update
openai.com
-
Some incredibly strong updates coming out of OpenAI today, including the launch of #GPT4o, the latest #LLM with some frightening capabilities such as:
1. Enhanced vision: you can now stream a video to #ChatGPT and it'll recognise what's happening, including what you're writing and the emotions you're showing, and it can even view your screen and analyse what you show it with its...
2. Desktop application, which runs the program natively on your machine.
3. Enhanced voice production, including the ability to portray a whole range of emotions on command. TTS taken to the next level: you'll now have immensely human-sounding voices at your disposal.
4. Better memory to stitch together all your interactions, making for a more fluid, joined-up experience even across conversations. Memory is no longer confined to your session. This, by the way, is actually a pretty huge deal, as it opens up the potential for ChatGPT to properly get to know you and personalise your future interactions.
5. More advanced data analysis, so you can upload charts and other types of data and it'll tell you what's going on.
6. Enhanced language support for up to 50 different languages, 97% of the languages spoken on the internet!
7. Barge-in as standard: no more having to wait for ChatGPT to stop talking; you can just continue speaking and interrupt. Something that both Alexa and Google Assistant didn't implement for AGESSSS!
8. Twice as fast and 50% cheaper than GPT-4, with higher rate limits. More power for less cost.
These updates demonstrate how OpenAI is constantly innovating and will, in time, step on the toes of every vendor trying to build their business around some of these siloed capabilities: language translation, ASR, TTS, dialogue management, analytics, computer vision, you name it. Over time, it's going to become far easier to just use OpenAI. Also, practically all of today's presentation was about voice. Voice is making a comeback, as we knew it would!
This time, with a much better UI: more natural and conversational. And this time, if the pitch is as good as the reality, voice will extend out of our smart speakers and into every type of software and device we use. What does this mean for CX? I'll tee that up for another post, but drop your thoughts below! The potential is huge. https://lnkd.in/eQ7xsyqN #ai #openai #gpt4 #technews
Spring Update
openai.com
-
Game-Changing Upgrades from OpenAI Spring Update! 💥
OpenAI just dropped major announcements that are pushing the boundaries of AI capabilities. Brace yourselves for GPT-4o and new upgraded tools for ChatGPT free users!
⚡️ Unveiling GPT-4o: The Multimodal Marvel
GPT-4o is OpenAI's latest flagship model, and it's a true trailblazer! This AI powerhouse can seamlessly understand and generate content across text, audio, and visuals in real time.
🌟 Key GPT-4o Capabilities:
- Lightning-fast response times averaging 320ms (near human conversation speed!)
- Matches GPT-4's top performance on text, coding & reasoning tasks
- Significant improvements in audio & visual understanding over existing models
- 50% cheaper & 2x faster than GPT-4 in the API, with 5x higher rate limits
🔭 Cutting-Edge Multimodal Interaction
GPT-4o represents a giant leap towards truly natural human-AI interaction by seamlessly combining different input/output modalities into one unified model. No more disjointed pipelines!
⚡️ Upgrades for Free ChatGPT Users
But the fun doesn't stop there! OpenAI is also rolling out awesome upgrades for free ChatGPT users:
📈 Access to GPT-4o's groundbreaking multimodal capabilities
🔓 Higher message limits (up to 5x for ChatGPT Plus users)
✨ New advanced tools and AI-powered features
This innovative tech from OpenAI marks an incredible milestone in practical AI usability and human-centered design. The future of seamless AI interaction is finally here! 🤖💨
Sources:
https://lnkd.in/gRsRfvbG
https://lnkd.in/gThsT2eT
https://lnkd.in/gDfn4p5j
#AI #ArtificialIntelligence #OpenAI #GPT4o #Multimodal #NaturalLanguage #ComputerVision #AudioProcessing #LargeLanguageModels #FutureOfAI
Spring Update
openai.com
-
Wow, the new release from OpenAI called GPT-4o is amazing! This is going to be a game-changer for how we interact with AI models; I especially see myself talking with ChatGPT more than typing! Here are some incredible features that stood out to me:
1) You can now easily interrupt and converse with ChatGPT, making the experience more natural and dynamic.
2) You can show videos and pictures in real time to get help, bringing a whole new dimension to AI assistance.
3) OpenAI has also introduced a desktop app for an even better user experience; they really want to keep a simple UX and have done a great job so far.
And here are a couple of use cases that really impressed me:
📚 Math Homework Help: Perfect for students needing quick, reliable assistance.
🌐 Real-Time Translation: Effortlessly translate between languages on the fly.
💻 Code Review: Get instant feedback on code snippets.
📊 Graph Insights: Analyze graphs and get valuable insights into graphical data.
While not everyone will have access to the latest version yet, it's amazing that even those on the FREE version can now use GPT-4o! If you have 20+ minutes, I highly recommend checking out today's product demo: https://lnkd.in/gi77QZRx
#AI #OpenAI #GPT4o #TechInnovation #AITrends
Spring Update
openai.com
-
Getting confused with the AI frenzy? Here's a quick checklist for you... 🔥
So, less than 48 hrs ago, there was a major announcement from OpenAI. I'm sure you've seen some reels/shorts about this.
A few questions for you:
1️⃣ Have you personally tried ChatGPT yet? If yes, have you seen places where it fails? Or works awesome?
2️⃣ What's GPT-4o? How's it different from ChatGPT?
3️⃣ Did you watch OpenAI's webcast? It was 26 min, easily watchable in <15 min if you focus on only the demos.
4️⃣ What's actually an LLM? What's the difference between AI/ML/Deep Learning?
5️⃣ Who's Mira Murati? What was her controversy when Sam Altman was ousted from OpenAI a while back?
🚀 If you're not spending time tracking AI,
🚀 AI's spending time tracking you.
If not YOU, who? If not NOW, when?
🔄 If you liked this, follow Savinder Puri for more
📺 DevOps + Spirituality on YT: https://lnkd.in/dmArxVym
https://lnkd.in/dvNcHQD9
Spring Update
openai.com
-
Big updates from OpenAI this morning, especially for free users. Here's a summary.
The flagship news is GPT-4o, with more multimodal capability, better language support, better reasoning, and an Oct 2023 knowledge cutoff (GPT-4 is April 2023).
Free access to ChatGPT will now include:
- Access to GPT-4o (huge upgrade over GPT-3.5)
- Create charts
- Upload files (super useful)
- Access to custom GPTs
- Memory (ChatGPT will recall your interests across sessions)
Paid access to ChatGPT+ will now include:
- Access to GPT-4o (can't see much gain over free tier)
- Higher limits than free tier (marginal benefit)
Besides maybe rolling out a bit sooner, I can't see much gain here for paid ChatGPT+ users.
OpenAI API users get a 50% price drop for GPT-4o: the GPT-4o model is $5.00/million tokens vs $10.00/million tokens for GPT-4 Turbo. Still a ways to go to match $0.50/million tokens for GPT-3.5.
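The pricing comparison above is easy to sanity-check with a bit of arithmetic. A minimal sketch, using the per-million-token figures as quoted in the post (actual API pricing may change, and input/output tokens are billed at different rates):

```python
# Back-of-envelope API cost comparison, using the per-1M-token input
# prices quoted above (USD). Real pricing may differ over time.
PRICE_PER_MILLION = {
    "gpt-4o": 5.00,
    "gpt-4-turbo": 10.00,
    "gpt-3.5-turbo": 0.50,
}

def cost_usd(model: str, tokens: int) -> float:
    """Cost in USD of processing `tokens` input tokens with `model`."""
    return PRICE_PER_MILLION[model] * tokens / 1_000_000

# Example: a workload of 2 million input tokens.
print(cost_usd("gpt-4o", 2_000_000))        # -> 10.0
print(cost_usd("gpt-4-turbo", 2_000_000))   # -> 20.0 (2x the GPT-4o cost)
print(cost_usd("gpt-3.5-turbo", 2_000_000)) # -> 1.0  (still 10x cheaper)
```

So the 50% drop halves the bill versus GPT-4 Turbo at GPT-4-level quality, while GPT-3.5 remains the budget option for workloads that don't need that quality.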
Spring Update
openai.com
-
Is there a reason why no one is talking about GPT-4o? GPT-4o ("o" for "omni") is a step towards much more natural human-computer interaction: it accepts as input any combination of text, audio, image, and video, and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models. https://lnkd.in/deGwprbz
Hello GPT-4o
openai.com