Mihailo Backovic’s Post


Managing Partner @ B12 Consulting / part of Yuma | PhD | All things AI

Apple's annual developers' conference kicked off two days ago. As usual, the internet had a field day, with a tsunami of posts about how Apple just revolutionised AI and how nothing will ever be the same…

The boring news of day 1 — sorry, the boring news of day 1, the partnership with OpenAI, ended up being the media focus. There are already hundreds of apps that do this 😴. I also think it will be tricky to convince regulators and customers that letting Siri decide whether your entire phonebook should be sent to OpenAI is "safe". Then again, this is just my speculation, so make of it what you wish 😉 The stock market didn't like the news either: Apple stock dropped about 2% (but rebounded yesterday… we'll get to that later).

The most interesting part of the announcement was actually Apple's own line of foundation models. You can find the blog post here: https://lnkd.in/eC3r_JMH. I'll try to break down a few important points:

- Apple is aware that LLM benchmarks are getting saturated. There is not much to gain here, but they have to show that they can reach SOTA (and they do). I wouldn't expect the new models to perform better than the open-source models out there.
- Instead, Apple focuses on two important aspects: miniaturisation and UI/UX. What we saw yesterday is, to me, proof of a trend (first with Microsoft's Phi series of models, now with Apple) towards getting more performance out of smaller LLMs that can run on-device. This is significant because it breaks away from the dogma of "one LLM to rule them all", and it solves a whole plethora of issues around data privacy.
- Apple is focusing on the human experience. A lot of AI is really UX, and whoever gets this right will succeed in the AI race! This was the first time I've seen emphasis on evals like "Human Evaluation of Output Harmfulness" or "Human Satisfaction Score" in a large tech company's press release, suggesting this is central to Apple's foundation model program.
- Apple has been preparing for the "big AI move" for quite some time. There's been a lot of criticism of the company lagging behind the competition, but I think that's because they took their time to prepare the ground (e.g. the M series chips), which now gives Apple an edge.
- Apple invested a lot in the technical details of miniaturisation (e.g. custom LoRA, model quantisation...). The most interesting detail is the Talaria toolkit, which lets engineers simulate the effects of various optimisations on model performance and speeds up the optimisation process. To the best of my knowledge, there is no open-source equivalent of Talaria.

Apple is in a good place to pull ahead. They seem to be the only company that prepared the ground well before introducing AI into their products, they are doing impressive things with miniaturisation, and they are aware that whoever solves the "UI of AI" wins. The stock market yesterday seemed to agree (AAPL +6.5% at closing).
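To make the miniaturisation point concrete: the LoRA idea Apple cites lets a frozen base model be specialised per task by training only two small low-rank matrices, so each "adapter" is tiny compared to the full weights. A minimal sketch, assuming purely illustrative shapes and names (nothing here reflects Apple's actual architecture or the Talaria toolkit):

```python
import numpy as np

# Illustrative sizes: hidden dimension d, adapter rank r << d.
d, r = 1024, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))          # frozen base weight (shipped once)
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection (starts at 0)

def lora_forward(x, alpha=32.0):
    """y = W x + (alpha / r) * B (A x): base output plus a low-rank update."""
    return W @ x + (alpha / r) * (B @ (A @ x))

# Only A and B are trained and stored per task:
full_params = d * d            # 1,048,576 values for the full matrix
adapter_params = 2 * d * r     # 32,768 values for the adapter
print(full_params // adapter_params)  # → 32
```

At these (made-up) sizes the adapter is 32x smaller than the full weight matrix, which is why a device can keep one base model and hot-swap many per-feature adapters — the mechanism the press release describes at a high level.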

Introducing Apple’s On-Device and Server Foundation Models

machinelearning.apple.com

Godwin Josh

Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer

6mo

It's fascinating to see Apple's strategic approach to AI, prioritizing miniaturization and user experience. Their focus on human-centric evaluation metrics reflects a profound understanding of the importance of UX in AI development. You talked about Apple's foundational models and their emphasis on UX. How do you envision applying Apple's Talaria toolkit in scenarios where real-time adaptation of AI models is crucial, such as in autonomous vehicle navigation systems?
