Just had a firsthand encounter with AI reshaping our daily interactions. I needed to reach out to Notion’s support, so I quickly voiced my request using OpenAI’s Mac desktop app (highly recommend it, by the way). Seconds later, an AI MailBot replied, autonomously handling my inquiry. Basically, two AIs just chatted with each other indirectly :) AI is becoming the new layer between us, automating the mundane so we can focus on what truly matters. As more people hop on the AI train, our workflows are bound to transform. And yes, the AI did forward my ticket to a human in the end. So, human connection isn’t dead just yet ;)
Gary Jehan’s Post
More Relevant Posts
-
With the landscape of AI tools constantly expanding, it's worth keeping an eye on new entries like semanser/codel. This tool can autonomously perform tasks across terminals, browsers, and editors. Given the pace at which new technologies emerge, every week brings something new to explore. It's a dynamic scene, and while it remains to be seen how each tool will find its place in the market, or whether some will converge, the variety of innovation is truly fascinating. For those interested in the broad spectrum of AI developments, semanser/codel is one of the recent additions worth exploring. Discover more about it here: https://lnkd.in/dHbB7GeV
-
Gen AI Budget Planning Tool
These days, many are building generative AI apps or retrieval-augmented generation (RAG) apps, but figuring out the actual production costs can be quite tricky. It's also important to compare the pricing of different language models to find the best fit for your use case, especially as you scale. To simplify this process, we developed a tool that lets you compare various LLM models based on token, vector DB, and embedding costs, helping you determine which one suits your needs best. #genai #llm #pricing #openai #gpt4 #google #gemini #rag #pipeline #hidevs
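For a rough sense of the math a tool like this automates, here is a minimal sketch of a token-based cost estimate. The per-million-token prices, embedding figures, and vector DB fee below are placeholder assumptions, not the tool's actual rates.

```python
# Rough monthly cost estimate for an LLM + RAG pipeline.
# All prices below are placeholder assumptions; check each provider's current pricing.

def llm_cost(requests_per_month, in_tokens, out_tokens,
             price_in_per_m, price_out_per_m):
    """Generation cost: prompt and completion tokens priced per million tokens."""
    total_in = requests_per_month * in_tokens
    total_out = requests_per_month * out_tokens
    return total_in / 1e6 * price_in_per_m + total_out / 1e6 * price_out_per_m

def rag_overhead(requests_per_month, embed_tokens, embed_price_per_m,
                 vector_db_flat_fee):
    """Embedding cost for queries plus a flat vector DB subscription."""
    return requests_per_month * embed_tokens / 1e6 * embed_price_per_m + vector_db_flat_fee

if __name__ == "__main__":
    # Hypothetical model A vs. model B at 100k requests/month.
    for name, p_in, p_out in [("model-a", 5.00, 15.00), ("model-b", 0.50, 1.50)]:
        gen = llm_cost(100_000, in_tokens=1_500, out_tokens=400,
                       price_in_per_m=p_in, price_out_per_m=p_out)
        extra = rag_overhead(100_000, embed_tokens=200,
                             embed_price_per_m=0.10, vector_db_flat_fee=70.0)
        print(f"{name}: ${gen + extra:,.2f}/month")
```

Swapping in real per-token prices for each candidate model gives a like-for-like comparison as your request volume scales.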
-
An AI cursor that clicks, navigates, and gets stuff done for you??? The Browser Company is teasing a browser with that kind of AI magic. At Orango AI, we can do it TODAY—and you don’t even need a new browser. Our AI isn’t general-purpose (we didn’t raise $68M, sorry), but if your product has an Orango AI Assistant embedded, it can do any task within your app for your users. Have a complex product? Want to guide your users with ease? Let’s chat! Btw Josh Miller, your ADK looks awesome (and I love Arc). 😍 Let’s connect! P.S. video of the screen > screen recording 😅
-
Llama 3.2 feels like a game-changer! From my few preliminary tests, the 11B instruct vision model is very capable. It's going to be interesting to see how developers use these lightweight models, which are insanely fast. I am going to be developing mobile apps with these. With Llama now offering standard-size models, lightweight models, and vision models, the next move is a very capable reasoning model that matches or comes close to what o1 can do. The tooling and availability of all these different kinds of models are propelling us very fast into a new age of AI. By the way, you can test these models very easily on Fireworks AI. Full Llama 3.2 overview here: https://lnkd.in/d5TcMeSJ
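If you want to run the same quick test, here is a minimal sketch of calling a Llama 3.2 model through Fireworks AI's OpenAI-compatible endpoint. The exact model id string is an assumption on my part, so confirm the current name in the Fireworks model catalog.

```python
# Minimal sketch: query a Llama 3.2 model via Fireworks AI's OpenAI-compatible API.
# The model id below is an assumption; confirm the exact name in the Fireworks catalog.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v3p2-11b-vision-instruct",  # assumed id
    messages=[{"role": "user", "content": "Describe what a vision-instruct model can do."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```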
-
Recently, I shared a post about Anthropic’s new AI agent, Computer Use, and now OpenAI is entering the game with its upcoming tool, codenamed "Operator." According to the Bloomberg article, Operator (set to launch in January) aims to automate tasks like booking travel, completing multi-step actions in a web browser with minimal user supervision - very similar to Computer Use. There seems to be growing momentum behind agentic AI across the industry. OpenAI, Anthropic, Microsoft, and Google are all racing to bring these agents to life. The focus is shifting from building bigger models to creating practical, easier-to-use tools that deliver real-world value.
-
Tired of switching tools to use different AI models? 🚀 Those days are over 🚀 Licode's most recent feature lets you choose your favorite technology for your AI-powered app. With the click of a button, you can switch between:
👉 OpenAI’s ✅ GPT-3.5 ✅ GPT-4o mini ✅ GPT-4o
👉 Google’s ✅ Gemini 1.5 Pro ✅ Gemini 1.5 Flash
👉 Anthropic’s ✅ Claude 3 Sonnet ✅ Claude 3 Haiku
Enter your prompt, set your variables, insert instructions and knowledge, and create the AI app you’ve always dreamed of! #genAI #nocode #Licode
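Licode exposes this behind a no-code UI, but the underlying idea is a single interface routed to several providers. Here is a generic sketch of that pattern (not Licode's implementation); the model ids are examples that may need updating, and each provider requires its own API key.

```python
# Generic sketch of switching between LLM providers behind one function.
# This is NOT Licode's implementation; model ids below may need updating.
import os
from openai import OpenAI
import anthropic
import google.generativeai as genai

def generate(provider: str, model: str, prompt: str) -> str:
    """Route a prompt to the chosen provider and return plain text."""
    if provider == "openai":
        client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
        r = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}])
        return r.choices[0].message.content
    if provider == "anthropic":
        client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
        r = client.messages.create(
            model=model, max_tokens=512,
            messages=[{"role": "user", "content": prompt}])
        return r.content[0].text
    if provider == "google":
        genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
        r = genai.GenerativeModel(model).generate_content(prompt)
        return r.text
    raise ValueError(f"Unknown provider: {provider}")

# Switching models is then a one-line change:
print(generate("openai", "gpt-4o-mini", "Summarize RAG in one sentence."))
# print(generate("anthropic", "claude-3-haiku-20240307", "Summarize RAG in one sentence."))
# print(generate("google", "gemini-1.5-flash", "Summarize RAG in one sentence."))
```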
-
Interesting question... Spoiler: the talk claims that yes, it's usually easier for AI agents to hack their way through existing UIs rather than using current-generation APIs. To begin with, not all apps are "API-first"; lots of features relevant to users (and their agents) are "UI-only". And when the APIs do exist, it's developer hell to figure out how to orchestrate them. I can't disagree.
APIs for AI: Have We Failed?
https://www.youtube.com/
-
Back in February, our CPO Trevor Back and CTO Will Williams shared our belief that “Her”-like AI assistants lie in our future. Last week’s demos from OpenAI and Google have shown that Big Tech also recognizes this vision. You can read our reaction to their announcements here 👇 🔗 https://lnkd.in/eXTgUNkV The main takeaway? Interacting with technology via audio is now on the agenda more than ever. No keyboards. No touch screens. No mice. No eye tracking. Not even a brain interface. Just our most natural, most seamless, most human way of interacting with our world: our voice. We love to see it 😍 We believe that if you're innovating in the conversational AI space, or building an AI assistant, having world-class speech-to-text as an input is the only way you'll see widespread adoption.
Why Google and Open AI’s latest announcements don’t solve all the challenges of AI Assistants
speechmatics.com
-
At the heart of OpenAI's Swarm are the concepts of "routines" and "handoffs," the mechanisms the framework uses to coordinate agents: a routine bundles an agent's instructions with the tools it can call, and a handoff lets one agent pass the conversation to another.
OpenAI’s Swarm AI agent framework: Routines and handoffs
https://www.greenground.it
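To get a feel for how these two concepts look in code, here is a minimal sketch following the patterns in Swarm's public examples; treat the exact signatures as assumptions and check the repo's README for the current API.

```python
# Minimal sketch of Swarm-style routines and handoffs, based on the patterns
# in OpenAI's Swarm examples; exact signatures may differ from the current repo.
from swarm import Swarm, Agent

client = Swarm()

refunds_agent = Agent(
    name="Refunds Agent",
    instructions="Help the user process a refund.",  # this agent's routine
)

def transfer_to_refunds():
    """Handoff: returning another Agent hands the conversation to it."""
    return refunds_agent

triage_agent = Agent(
    name="Triage Agent",
    instructions="Decide which specialist agent should handle the request.",
    functions=[transfer_to_refunds],
)

response = client.run(
    agent=triage_agent,
    messages=[{"role": "user", "content": "I was double charged, can I get a refund?"}],
)
print(response.messages[-1]["content"])
```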
-
OpenAI does it again! 🤯 I was really excited when I heard about OpenAI's announcement of GPT-4o!! A new model that's twice as fast as GPT-4 and outperforms even GPT-4 Turbo. What caught my attention is that they're offering it for free, so most users like me can experience GPT-4-level capabilities! Along with the impressive speed boost, I find it amazing that GPT-4o brings features that were previously only for paid users, like browsing, data analysis, image & file uploads, and the GPT store. It'll also be available as an API at 50% lower pricing. But what really blew my mind are the voice abilities. From what I've seen, GPT-4o can have expressive conversations, emoting through its voice and even singing! The voice demo video showed almost zero delay, which makes it feel like a real face-to-face chat. For now, the desktop app is Mac-only. As a Windows user, I'm eagerly waiting for their version later this year. I believe doubling GPT-4's speed and affordability could massively boost AI application development. And those voice capabilities have immense potential for more natural interactions that I'm really excited about. In my opinion, this is an exciting step towards AI assistants that can truly multitask and augment our digital lives. I can't wait to get my hands on the GPT-4o API and see what innovative solutions we can build with it! #AI #Gamechanger #Artificialintelligence
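Once you have API access, calling GPT-4o works like any other chat model in the OpenAI Python SDK. Here is a minimal sketch; the prompt and token limit are just illustrative.

```python
# Minimal sketch: calling GPT-4o through the OpenAI Python SDK (v1.x).
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "In one sentence, what is new about GPT-4o?"},
    ],
    max_tokens=100,
)
print(response.choices[0].message.content)
```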