Clarifying the Confusion: "GPT Next" Is Not a New Model
OpenAI has clarified that "GPT Next" is not a new model but a figurative placeholder for the potential evolution of its models over time. The term was mistakenly read as a literal new model, causing confusion in the AI community.

The Current State of OpenAI's Models
OpenAI's most advanced model currently is GPT-4o, and details about GPT-5 remain limited. The company is continuously working on improving its models, but there is no new model called "GPT Next."

Fundraising Round and Partnerships
OpenAI is reportedly in talks with tech giants Apple, Nvidia, and Microsoft for its latest fundraising round, which could bring significant investments and partnerships that further accelerate OpenAI's research and development.

The Future of AI Models
The clarification from OpenAI highlights the importance of accurate communication in the AI community. As AI models continue to evolve, clear and transparent information about their development and capabilities is essential.

Join the Conversation: What do you think about the potential evolution of OpenAI's models? How do you think partnerships with tech giants will impact OpenAI's research and development? Share your thoughts!

Salesforce acquires data management firm Own for $1.9B in cash
Own, a provider of data management and protection solutions, is Salesforce's biggest deal since buying Slack in 2021. The acquisition strengthens Salesforce's commitment to secure data solutions and enhances its data protection and management capabilities. Own, previously valued at $3.35 billion, offers data backup tools and services that complement Salesforce's existing data management tooling.
-
The concept of Microsoft spending $100B on a data center just for AI is wildly audacious and almost inconceivable. A $1B data center is one of the larger hyperscale data centers around today, possible for only a few very wealthy companies. Further, a $100B facility implies a scale that only a few companies (e.g., Google, Microsoft, Amazon) could even utilize. For perspective, a $100B data center is effectively beyond the financial reach of such behemoths as Amazon and Walmart.

Beyond the capital investment, such a data center would also need unheard-of volumes of water and power. It would have to be carefully sited near almost limitless sources of nearly free water and power to be feasible.

In addition, building such a colossus might not even pay off. Will it really achieve a breakthrough in LLM cognition? Nobody knows. What we do know, however, is that we will never know unless somebody builds it and tries. Ironically, while this is arguably our best near-term chance at achieving AGI, the terms of Microsoft's contract with OpenAI return control of the technology to OpenAI if AGI is achieved. Then again, if we achieve AGI with something of this scale, discussions of which company controls the technology will likely be moot - the reality will be the technology controlling the companies!

One thing this will definitely achieve, if it is built, is to centralize control of the most advanced AI in the hands of three to six companies. No other entity on earth will have the resources to match them. AI will undoubtedly be widespread, with most enterprises running their own AI development, but none of those efforts will come anywhere near this giant. Nor, for the most part, will nations be able to match this scale on the timeframes involved. https://lnkd.in/g3RmvgMD
-
Another set of morning musings on AI. I think we're seeing an interesting trend starting to emerge here: https://lnkd.in/garc6Tuu

OpenAI is making it much easier to fine-tune a model. Why is this important when we've got larger-than-ever context windows for RAG-based architectures? Because RAG isn't a silver bullet; it's one of many tools we need to apply based on the use case.

I predict that over the next several months and years we'll continue to use RAG-based architectures as the predominant way of interacting with generative AI on bespoke data. But I also predict we'll see an emergence of specialty-tuned foundation models for industries. These predictions are complementary. Fine-tuning seems to be a great way to adjust weights for industry terms, lingo, and context that would be difficult, or expensive over the long run, to keep feeding into a model through a RAG-based architecture alone.

What's super cool about all of this is that if you have a good foundation (pun intended) for RAG, you can plug in a new model with your industry's context when you need it. You could also use a lower-cost model when the use case doesn't call for anything bespoke. (A rough sketch of this pluggable setup follows the article link below.)

What do y'all think the future of many different models looks like? Do we end up with many industry-specific models, or are we able to train one super-large-parameter model that can tap into industry-specific data?
OpenAI expands its custom model training program | TechCrunch
https://techcrunch.com
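To make the "plug in a different model" idea above concrete, here is a minimal, illustrative Python sketch. It is not OpenAI's API or any particular RAG framework; the retriever, the EchoModel stand-in, and every name here are hypothetical, and a real system would use embeddings and an actual model endpoint behind the same interface.

```python
# Minimal sketch (assumptions noted above): a RAG pipeline with a pluggable model,
# so a general-purpose model can be swapped for an industry fine-tuned one.
from dataclasses import dataclass
from typing import Protocol


class TextModel(Protocol):
    def generate(self, prompt: str) -> str: ...


@dataclass
class EchoModel:
    """Hypothetical stand-in for any hosted or fine-tuned model behind one interface."""
    name: str

    def generate(self, prompt: str) -> str:
        # A real implementation would call a model API here.
        return f"[{self.name}] answer based on: {prompt[:80]}..."


def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    q_terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:k]


def answer(query: str, documents: list[str], model: TextModel) -> str:
    context = "\n".join(retrieve(query, documents))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return model.generate(prompt)


docs = [
    "Basel III sets minimum capital requirements for banks.",
    "RAG retrieves relevant documents at query time.",
]

# Same retrieval layer, different models plugged in per use case.
print(answer("What are the capital rules?", docs, EchoModel("general-model")))
print(answer("What are the capital rules?", docs, EchoModel("finance-tuned-model")))
```

The design point is simply that the retrieval layer stays fixed while general-purpose, lower-cost, or industry fine-tuned models are swapped behind one interface.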
-
Microsoft and OpenAI are, according to a story in The Information, making plans for a massive AI supercomputing cluster, dubbed 'Stargate,' that will cost $100 billion and be used to support OpenAI's future advanced AI models. This is far more money than Microsoft currently spends each year on all of its capital infrastructure. It is more than companies in far more capital-intensive industries, like the oil sector, spend.

This has led some who think AI is overhyped to ask if this is the moment the industry has finally "jumped the shark" and whether Microsoft's investment, if it does in fact make it, will wind up looking foolish. It has led others to wonder if AI will end up like the oil sector: highly concentrated and highly regulated.

But in today's edition of Fortune's Eye on AI newsletter, I ask whether the right analogy might be to Ronald Reagan's 1983 announcement of the Strategic Defense Initiative, aka "Star Wars," and not just because the names of the two projects are a bit similar. Star Wars was a plan for a technology that was never built. But the actual building of it was sort of beside the point. It was the intention to build it that mattered. And despite later being scrapped, Star Wars nonetheless changed the course of history. Intrigued? Read on.
Why Microsoft's $100 billion 'Stargate' could be AI's 'Star Wars' moment
fortune.com
-
The deal is expected to bring strategic benefits to both companies, but the main goal of the acquisition is to boost OpenAI's enterprise AI capabilities. Acquiring Rockset strengthens OpenAI's position in the enterprise AI market by enhancing its real-time data processing and analytics capabilities, allowing it to offer more advanced, efficient, and scalable AI solutions to its enterprise clients.
ChatGPT maker OpenAI buys data analytics startup
codewr.blogspot.com
-
Elon Musk has been in a legal battle with OpenAI and sued the company earlier this month over the "betrayal" of its nonprofit AI goal. Since then, he has called out OpenAI and Sam Altman on X multiple times. Musk continues to shake up the AI world by applying his preference for open sourcing:

"Elon Musk's xAI has open-sourced the base code of the Grok AI model, but without any training code. The company described it as the “314 billion parameter Mixture-of-Expert model” on GitHub. 💻 In a blog post, xAI said that the model wasn't tuned for any particular application such as using it for conversations. The company noted that Grok-1 was trained on a “custom” stack without specifying details. The model is licensed under Apache License 2.0, which permits commercial use cases.

Last week, Musk noted on X that xAI intended to open-source the Grok model this week. The company released Grok in chatbot form last year, accessible to Premium+ users of the X social network. Notably, the chatbot could access some X data, but the open-source model doesn't include connections to the social network. 🐊

Many notable companies have open-sourced some of their AI models, including Meta's LLaMA, Mistral, Falcon, and AI2. In February, Google also released two new open models called Gemma 2B and Gemma 7B. 🤖

Some AI-powered tool makers are already talking about using Grok in their solutions. Perplexity CEO Aravind Srinivas posted on X that the company will fine-tune Grok for conversational search and make it available to Pro users."

(A toy sketch of the mixture-of-experts idea follows the article link below.)

#code #opensource #mentality #strategy #leadership #codewars #innovation #technology https://lnkd.in/gZ94jyVf
xAI open-sources base model of Grok, but without any training code | TechCrunch
https://techcrunch.com
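Since the quoted article describes Grok-1 as a mixture-of-experts model, here is a toy, illustrative Python sketch of the routing idea. It assumes nothing about Grok-1's actual code or architecture beyond the general technique: a gate scores the experts and only the top-k of them process each input.

```python
# Toy mixture-of-experts routing (illustrative only, not Grok-1's code).
import math
import random


def softmax(xs: list[float]) -> list[float]:
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]


def expert(idx: int, x: list[float]) -> list[float]:
    # Toy expert: a fixed random linear map, seeded per expert so it is deterministic.
    rng = random.Random(idx)
    weights = [[rng.uniform(-1, 1) for _ in x] for _ in x]
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]


def moe_layer(x: list[float], n_experts: int = 8, top_k: int = 2) -> list[float]:
    # Gate: score every expert for this input, then route to the top-k only.
    rng = random.Random(0)
    gate_w = [[rng.uniform(-1, 1) for _ in x] for _ in range(n_experts)]
    gates = softmax([sum(w * xi for w, xi in zip(row, x)) for row in gate_w])
    chosen = sorted(range(n_experts), key=lambda i: gates[i], reverse=True)[:top_k]
    out = [0.0] * len(x)
    for i in chosen:  # only the chosen experts do any work for this input
        y = expert(i, x)
        out = [o + gates[i] * yi for o, yi in zip(out, y)]
    return out


print(moe_layer([0.5, -1.0, 2.0]))
```

Real MoE layers use learned gate and expert weights and usually renormalize the top-k gate scores; the point here is only that a sparse model of this kind activates a fraction of its parameters per token.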
-
The Setup: Microsoft & OpenAI Partnership
Microsoft made a big investment in OpenAI, starting with $1 billion and eventually increasing to $10 billion by January 2023. The two were working closely together on AI advancements.

The Drama Begins: November 2023 – Sam Altman Gets Ousted
Out of nowhere, OpenAI's board fired CEO Sam Altman, claiming he wasn't being candid in his communications. This shocked the entire tech industry, because Sam had been leading OpenAI's success, especially with things like ChatGPT.

Microsoft Steps In
Satya Nadella's Reaction: Microsoft's CEO, Satya Nadella, stayed calm. He publicly supported Sam Altman and said that Microsoft was still fully committed to its partnership with OpenAI.
A Job Offer: Within just three days of Sam's firing, Nadella offered Altman a job at Microsoft to lead a new advanced AI research team, which showed how much confidence Microsoft had in him.

The Comeback
Altman Returns: Shortly after, due to overwhelming support from employees and others in the tech world, Sam Altman was reinstated as CEO of OpenAI. Greg Brockman, OpenAI's president, who had left in protest when Altman was fired, also returned.

Moving Forward
Satya's View on the Outcome: At a tech conference in 2024, Nadella said the whole incident felt like "a long-lost memory" and that everything had worked out well in the end. He praised how everyone came together for a good outcome.

What's Next
Microsoft Builds Its Own AI: While still working closely with OpenAI, Microsoft is also building its own AI model, called MAI-1, led by big names in AI including Mustafa Suleyman from DeepMind and Karén Simonyan from Inflection AI.
OpenAI's Latest Model: Meanwhile, OpenAI is advancing its own efforts, recently releasing a new model, o1, which is designed to reason more like humans by producing a "long chain of thought."

The Future
More Investments: OpenAI is in talks to raise an additional $6.5 billion, potentially valuing the company at a staggering $150 billion. Microsoft is reportedly considering investing again.

In simple terms, Microsoft and OpenAI went through a rough patch when Sam Altman was fired, but things settled quickly, and both companies have continued to push forward in AI research. Microsoft even used the opportunity to strengthen its own AI initiatives while maintaining its partnership with OpenAI.
-
I often see OpenAI models moving quickly into use even in risk-averse organizations like banks, which would normally require long and heavy clearance processes for such tools. Thanks to existing Microsoft relationships, and trust in Microsoft, these new models flow in very swiftly. That's the background I had in mind while reading Microsoft's new Responsible AI Transparency Report, and I commented on it to AI Business.

"As a primary player in this industry, [Microsoft] have a duty to be transparent, not least to the companies building solutions on top of the work they do," Siivonen said. "It is important to recognize the symbiotic relationship between OpenAI and Microsoft; Microsoft essentially lends its trust to OpenAI by giving the company access to its customers through the cooperation. Given the influence that both companies hold, building a culture of trust, transparency and accountability absolutely should be the number one priority for both Microsoft and OpenAI."

https://lnkd.in/gYzWjb22
Microsoft Highlights Responsible AI Efforts in New Report
aibusiness.com
-
AI won't be smart if we continue to be stupid: the case for self-sovereign AI

In my previous post, I mentioned that contrary to the belief that science has never been as innovative as it is now, discovery is actually less disruptive than it was 60 years ago. The main reason is that researchers, entrepreneurs, and explorers know less about the world, and this 'narrowing of our epistemic base' is making our discoveries more basic. In addition, as I often mention, lack of trust has eroded our ability to cooperate, and financial pressure to produce 'whatever' has allowed mediocre knowledge to flood the market.

Now, if the 'wisdom' of our intelligent agents depends on the quality of our inputs (i.e., knowledge of the world), then that knowledge is extremely limited. First, less than 5% of the internet's data is publicly scrapable for LLM training (some estimates put it as low as 0.5%). Second, we are only referring to data on the 'surface web', which represents only 5-10% of the total web (the rest being the deep and dark web). With such limited input, no amount of fine-tuning or reinforcement learning from human or AI feedback (RLHF/RLAIF) is going to prevent our bots from making sh!t up.

So what's the solution? How do we make sure the information that trains our LLMs is robust enough that their output isn't 'nonsense on steroids'? Well, I think you know where I am going with this (and the point of my post): we should decentralize AI, but also build models to be self-sovereign (user-owned). These models train LLMs on private data directly contributed by users, data that otherwise remains locked within its platform (email, text, WhatsApp, Meta, X). They have a roughly 1,000x larger 'knowledge base' (word count increases from 5T for GPT-4 to 5,500T for 100M users recording everything they say for a few years) and train roughly 60x faster in terms of total floating-point operations.

Users collect and own their data, build their own foundation AI, and can co-create a collective model (DAO) to best serve group data. Think about it this way: use all your personal data - texts, posts, Google Docs, medical records, and art on your personal server - to train your own personal LLM. Earn, share, and collaborate with other personal LLMs (like a patient support group). (A toy sketch of pooling user-owned models without sharing raw data follows the link below.)

Now this may sound weird, but we are living in weird times. We need to stop giving away our data for free and being the afterthought of monopolies and governments that hand us down their 'data leftovers'. In a digital world we are the main knowledge creators, curators, and consumers. And if we want our intelligent agents to be smart, we've got to stop being stupid.
User-Owned Foundation Models (2024 Update)
anna.kazlausk.as
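The post doesn't say how personal, user-owned models would be combined into a collective (DAO) model; one well-known option is federated averaging, where only model weights leave each user's device and the raw personal data never does. The Python sketch below is a toy illustration of that idea under those assumptions, not a description of the linked project.

```python
# Toy federated averaging: users train locally on private data, share only weights.
import random


def local_train(weights: list[float], personal_data: list[float], lr: float = 0.1) -> list[float]:
    # Toy "training": nudge every weight toward the mean of this user's own data.
    # A real system would run gradient steps on a local copy of the model.
    target = sum(personal_data) / len(personal_data)
    return [w + lr * (target - w) for w in weights]


def federated_average(user_models: list[list[float]]) -> list[float]:
    # The collective ("DAO") model: average the weights across users.
    # Only weights are shared; the raw personal data never leaves each device.
    n_users = len(user_models)
    return [sum(ws) / n_users for ws in zip(*user_models)]


# Three users, each with private data centred on a different value.
users_data = [[random.gauss(mu, 0.1) for _ in range(20)] for mu in (1.0, 2.0, 3.0)]
global_model = [0.0, 0.0, 0.0]

for _ in range(50):
    local_models = [local_train(global_model, data) for data in users_data]
    global_model = federated_average(local_models)

print(global_model)  # weights converge toward ~2.0, the mean across all users' data
```

Each round, users train on their own data and only the averaged weights form the shared model, which is the user-ownership property the post is arguing for.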
-
Microsoft and OpenAI have been discussing a project called "Stargate" that would see Microsoft spend $100 billion to build a massive supercomputing cluster to support OpenAI's future advanced AI models, The Information reported Friday.

To put this in context, Microsoft is known to have spent more than "several hundred million dollars" to build the clusters used to train OpenAI's current top-of-the-line model, GPT-4, which OpenAI CEO Sam Altman has also said cost more than $100 million to train. It's also known that OpenAI is already training a successor to GPT-4, likely called GPT-5, in one of Microsoft's existing data centers. Plus, Stargate would be anywhere from 10 to 100 times more expensive than any of the data centers Microsoft currently has on the books. Read more: bit.ly/4aGQCJw
Why Microsoft's $100 billion 'Stargate' could be AI's 'Star Wars' moment
fortune.com