Yesterday I had a unique opportunity to interact and discuss projects with some of the brightest minds working on the most cutting-edge tech, including my buddy Arun Vijayvergiya, who's working on o1. From scaling companies of one employee and predicting user behaviour and fraud before it happens, to breaking down language barriers, the applications are huge! I also realised that while we all talk about the final goal of AGI, OpenAI and its peers may not have adequate data to train their models on EQ and SQ. A very interesting member asked the team whether the recent reasoning models could give her daily guidance on how to raise her son and answer his life questions. I think even the research team at OpenAI was caught off guard by the depth of that question ;) It will be interesting to see how EQ/SQ is eventually incorporated in the years to come.
Karam M.’s Post
More Relevant Posts
-
What are your thoughts on this interview with Mira Murati, CTO of OpenAI? I felt she was very underprepared to answer basic questions, and it left me a little curious. Does the CTO really not know how OpenAI's Sora was trained? Comment your opinions below.
-
Founders and fellow GenAI enthusiasts. Remember that the underlying large language model you use is NOT your moat. If you believe it is your moat then you will likely get “steamrolled” by OpenAI or a foundation model lab. Build something in your zone of genius. Exploit your uniqueness. And if you don’t know what it is yet, keep exploring.
-
OpenAI’s Sora leaks early 🔥👇 Users briefly got their hands on OpenAI’s long-awaited Sora on Tuesday — but not in the way anyone expected. First announced in February, the startup’s text-to-video model has yet to see a wide release. But a group of early testers beat OpenAI to the punch this week, releasing a prototype on HuggingFace for all to use. The leakers — a group of about a dozen digital artists — said they shared the model early as an act of protest: they want OpenAI to compensate its beta testers and give them more creative autonomy. (For its part, OpenAI says it puts few constraints on its reviewers — and that their feedback is always optional.) For about three hours, anyone could generate their own 10-second videos at 1080p with Sora’s Turbo variant, and social media was awash with users’ creations. No matter what you think of the protest, the consensus is that Sora is really good, maybe even “an order of magnitude better” than anything else currently available.
Join the SBR2TH Tech Talent Weekly newsletter: https://lnkd.in/e3Tv72GK
Find out more: www.SBR2TH.com
Click here to book a recruitment consultation for your business: https://lnkd.in/g5hVNd7F
#future #economy #experience #work #tech #talent #recruiters #recruitment #talenthunter #developmentrecruitment #developerjobs #aijobs #mljobs #talentfocus #techrecruitment #techrecruiter #techtalent #sourcing #hiring #energy #energymarkets #ai #ml #nuclearindustry #nuclearenergy #coalindustry #naturalgasindustry #solarpower #techadvancements #techcollaboration #techcompany #techcommunity #technews #news
-
Big news in the AI world! OpenAI just announced a refreshed board and revamped governance structure. This comes after some recent changes, including the reinstatement of Sam Altman as CEO. Here's the gist:
New board members: Sue Desmond-Hellmann (ex-Gates Foundation CEO), Nicole Seligman (ex-Sony General Counsel), and Fidji Simo (Instacart CEO) join the team.
Stronger governance: new guidelines, a bolstered conflict of interest policy…
-
Friday insight: Do not replace something that requires a deterministic outcome with an LLM. But remember, just because an SOP looks deterministic does not mean the actual process was deterministic. So really, don't replace a coded if-statement with openai.chat, but otherwise, it's likely fair game! Much like Winter, LLMs and Agents are coming.
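Not from the original post, but a minimal sketch of the distinction it draws, assuming the openai Python SDK (v1+), an OPENAI_API_KEY in the environment, and an illustrative 30-day refund rule: the hard business rule stays a plain if-check, while the open-ended, language-heavy step is the part handed to the model.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def refund_eligible(days_since_purchase: int) -> bool:
    # Deterministic business rule: keep this as ordinary code, not a model call.
    return days_since_purchase <= 30  # 30-day window is an illustrative assumption

def summarize_complaint(complaint_text: str) -> str:
    # Fuzzy, language-heavy step: a reasonable place for an openai.chat-style call.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is an assumption; swap in whatever you use
        messages=[
            {"role": "system", "content": "Summarize the customer's complaint in one sentence."},
            {"role": "user", "content": complaint_text},
        ],
    )
    return response.choices[0].message.content

The point of the sketch: the eligibility decision stays auditable and reproducible, while the summarization step is the kind of fuzzy work that genuinely benefits from an LLM.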
-
Finally, we have some major competition for OpenAI’s Sora 🔥 I gave the same Sora prompt to Stability AI’s Stable Video! prompt → “Extreme close up of a 24 year old woman’s eye blinking, standing in Marrakech during magic hour, cinematic film shot in 70mm, depth of field, vivid colors, cinematic” ↓ If you liked this post, you might also enjoy my feed on the latest breakthroughs in AI and creative tech: Utsav Soi 🚀
-
I am happy to help with your request. Here is the LinkedIn update promoting the blog post: "Exciting news! Tim Brooks, the creator of Sora, has made a significant career move from OpenAI to DeepMind. This insightful blog post delves into the details of this transition. Discover more by checking out the article here: https://lnkd.in/gAqGS_ch"
-
The latest episode of #TheRiseofIntelligence asks, "Is AI just big hype?" We answer this question by showing a real-world use case: an automated candidate screening process built with Zapier and OpenAI using no-code tools. It's a productivity hack for all the HR folks in the room to implement in your company. Join the over 2,000 people who have watched this episode since its release this past weekend! Link to the full video in the comments.
-
The Aspen Institute Ideas Festival held an "Afternoon of Conversation" yesterday. Having attended a previous session with "nice guy" Brian Chesky interviewing badass reporter Kara Swisher (yes, you read that right, BC interviewing KS), I expected more of Brian. But his blind devotion to OpenAI's Sam Altman is utterly disappointing.
Brian’s description of his role during the short and tumultuous firing of Altman from OpenAI last November was nothing short of sycophantic: poor little Sam did nothing wrong (I think what saved him was the botched, ridiculous firing process, which did make him look like a victim). These two, and many others like them, seem to live in a closed chamber of mutually reinforcing worldviews. This makes them impervious to any criticism, not just existential criticism but even more mundane pushback against, for example, their notion of fair use. Another example of shameless hypocrisy: they (rightly, in my view) call openness and research sharing a key to progress, yet is there a company in the space more opaque than OpenAI?
As much as I consider OpenAI’s GPTs game changers, I don’t think we should give the (mostly) alpha males of the ecosystem the keys to our future. These guys are defining with reckless abandon what our (Western) world will look like. I would hypothesize that at some level they envy the Chinese government’s power to do anything it wants. They may refrain from expressing such dreams in public, but even with all that restraint, you can’t escape the feeling that Sam Altman is manipulating all of us, including Brian Chesky. And he does so with such a cagey attitude that it should be a red flag to all.
Kara Swisher, please say/do something. You like Brian, I like Brian, but he is under Altman's influence. #genAI #LLMs #openai
Strategic Brand & Marketing Manager | Leveraging Social Media & Content Marketing | Storytelling & Visual Branding
Fascinating insights, Karam! The breadth of applications you touched on—from scaling startups to anticipating user behavior—highlights just how transformative this era of tech innovation is. The question about integrating EQ and SQ into reasoning models truly hits a chord. While AGI is advancing in leaps, the emotional and social dimensions are what will make it resonate on a deeply human level. It’s intriguing to think about the role data and nuanced human experiences will play in shaping these models. Looking forward to seeing how these intersections evolve and how they redefine not just tech but the way we navigate life itself. Thanks for sharing!