Machine & Deep Learning Israel (MDLI) is the leading ML and data science community in Israel. Today they gathered top experts in the AI field to foster collaboration, featuring a talk from Run:ai's CTO & Co-Founder, Ronen Dar. Ronen talked about the AI industry's pace of innovation, how the GenAI revolution means more infrastructure, and how to handle the challenges that come with that. Thank you for having us, #MDLI, until next time 🚀 #aicommunity #techevent #aiinfrastructure #AIOps #mlops #AIDevOps #GPUComputing #AIScaling #MachineLearningInfrastructure #runai #aistack #ml #solutionsengineer #aiinnovation
Run:ai
Software Development
Tel Aviv, Israel · 26,142 followers
Squeeze more from your GPU Cluster and streamline AI development with Run:ai
About
Run:ai helps companies execute on their AI initiatives quickly while keeping budgets under control, virtualizing expensive hardware resources to pool, share, and allocate them efficiently.
- Website
- http://www.run.ai
- Industry
- Software Development
- Company size
- 51-200 employees
- Headquarters
- Tel Aviv, Israel
- Type
- Privately held
- Founded
- 2018
Locations
-
Primary
17 Ha'arbaa Street
Tel Aviv, Israel 60198, IL
-
85 Delancey st
The Yard
New York, NY 10002, US
Employees at Run:ai
-
Shahar Kaminitz
Tech Entrepreneur. Founder & CEO at Worklight, Insert. Author of the novel "Human Resources".
-
Maya Gordon
Sr. Partner Marketing Manager at Run:ai
-
Menny Hamburger
Software Engineer at Run:AI
-
Sam Heywood
Vice President Product Marketing @ Run:ai | Global Product Marketing Leader
Updates
-
Deploying LLMs at scale presents a dual challenge: ensuring fast responses during high-demand periods while trying to manage GPU costs! Striking this balance is no easy endeavor, so organizations often face a trade-off between provisioning additional GPUs for peak demand or risking SLA violations during traffic spikes 🚨 Whether you scale up aggressively from zero, with your users suffering through latency spikes, or deploy many GPU-backed replicas to handle worst-case traffic and pay for hardware that spends most of its time idling... neither approach is ideal! We've had the pleasure of introducing you to Run:ai's Model Streamer, and you've heard a little about our GPU memory swap! Now we're bringing you our latest blog by Ekin Karabulut and Yoed Ginzburg to see how it can help you cut costs without hindering performance--> https://hubs.li/Q02-RWtw0 #MLblog #aiblog #newblogalert #aiinfrastructure #AIOps #mlops #AIDevOps #GPUComputing #AIScaling #MachineLearningInfrastructure #runai #ml #LLM #opensource
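The trade-off described in the post can be made concrete with back-of-the-envelope arithmetic. A minimal sketch; all prices and per-replica throughput numbers below are illustrative assumptions, not Run:ai benchmarks or pricing:

```python
import math

# Illustrative assumptions -- not Run:ai benchmarks or real pricing.
GPU_HOUR_COST = 4.0      # assumed $/GPU-hour for an inference-class GPU
REQS_PER_REPLICA = 5.0   # assumed sustained requests/sec per model replica
HOURS_PER_MONTH = 730

def monthly_cost(reqs_per_sec: float) -> float:
    """Monthly GPU cost when provisioning enough replicas for this rate."""
    replicas = math.ceil(reqs_per_sec / REQS_PER_REPLICA)
    return replicas * GPU_HOUR_COST * HOURS_PER_MONTH

peak = monthly_cost(100.0)  # provision for worst-case traffic spikes
avg = monthly_cost(20.0)    # provision for typical load only
print(f"peak-provisioned: ${peak:,.0f}/mo")
print(f"avg-provisioned:  ${avg:,.0f}/mo (risks SLA misses during spikes)")
print(f"idle overhead of peak provisioning: ${peak - avg:,.0f}/mo")
```

The gap between the two numbers is hardware that sits idle most of the month; techniques like scale-from-zero and GPU memory swapping aim to shrink that gap without pushing latency spikes onto users.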
-
We're back with a brand new blog! 🚨 As the adoption of Generative AI (GenAI) accelerates across industries, more and more companies are choosing open-source GenAI models. With open source's flexibility, customization, and cost-effectiveness, these models can significantly improve performance and relevance for specific business use cases. Check out Run:ai's latest blog by Sam Heywood and the Run:ai team on the role open-source projects play in GenAI, and how to evaluate which ones you should be thinking about incorporating into your stack! https://hubs.li/Q02ZSY6z0 #MLblog #aiblog #newblogalert #aiinfrastructure #AIOps #mlops #AIDevOps #GPUComputing #AIScaling #MachineLearningInfrastructure #runai #ml #opensource #llms
-
We are back with another episode of The AI Spotlight, and this week we have a treat for you! 🚀 We sat down with Vultr's Senior Vice President of Engineering, Nathan Goulding. Vultr is on a mission to make high-performance cloud computing easy to use, affordable, and locally accessible, and they just happen to be one of our awesome partners. We chatted about the industry and the things teams should be looking out for in the coming year to ensure a seamless AI stack. We did a little trend predicting, discussed the differences between processing units for AI and the specialized chips created for specific use cases, talked about the pace of innovation, and so much more! Check out the full episode here--> https://hubs.li/Q02ZTp8M0 #aiinfrastructure #AIOps #mlops #AIDevOps #GPUComputing #CloudAI #AIScaling #aispotlight #runaiseries #MachineLearningInfrastructure #runai #aistack #ml
-
🥳 We can't believe it's already December, and over at Run:ai we're showing no signs of slowing down 🤾♂️🤸♀️ November was jam-packed with incredible events: from attending #Kubecon and #SuperCompute to all the fun happy hours we threw, the inspiring people we met, and the cool swag we gave away, it was definitely a month to remember! This month we're sponsoring MDLI, where Run:ai's CTO and Co-Founder, Ronen Dar, will be speaking on a panel. We're then back together with some of our favorite partners for a meetup with NVIDIA and VAST Data! 🚀 On top of all that, we'll be getting festive with our 12 Days of Christmas Content, kicking off December 12th. Join us as we look back at some of our best-performing (and still relevant) content, with some exciting new insights and benchmarking thrown into the mix. Watch this space! For now, Happy December 😎🎄☃️❄️ #Runaismonthlyrundown #runaievents #global #runai #aiinfrastructure #mlops #ai #futureofai #ml
-
Happy Thanksgiving to all our USA Runners and American friends around the globe 🌎 celebrating today 🦃 At Run:ai we're grateful for an amazing R&D and product team that ensures you can manage and orchestrate your GPUs and model deployments. A hard-working sales department that works tirelessly to make sure all our customers are well taken care of and our product integrates seamlessly into your stack. A creative marketing team that makes sure you know all about our latest releases and brings you insights from the AI industry, straight from the experts on the ground. And a kickass CTO and CEO office that keeps us innovative and at the cutting edge of AI infrastructure technology 🔥🚀 Wishing you all a very happy and fun Thanksgiving! #loverunai #runaithanksgiving #nationalholiday #happythanksgiving
-
This week on Run:ai BTS we're back with the product team, asking how industry trends are directly influencing our development roadmap 🚀 We sat down with Hagay Sharon, Jamie Weider, Alon Lavian, Lior Hilel, Omer Benedict, Oz Bar-Shalom and Shiri Arad to talk about how GenAI, and AI in general, has been changing what our customers need, and the interesting journey it's taking our product on! Check it out here for the inside scoop on what you should be considering in your AI development to make your life easier 🤘 #runaibts #aiinfrastructure #runaiproductteam #runaiproduct #aidev #mlops #genai #aitrends #aiinnovation #airoadmap #teamlove
-
Run:ai reposted this
This table from Meta's Llama 3.1 paper became legendary 🦄 🦄 419 unexpected failures in 54 days of training Llama-3.1-405B on 16,000 GPUs. That's close to 8 failures per day. Each failure causes the entire training process to hang until remediation 🤯 That's a huge problem. Millions of dollars are wasted when infrastructure worth hundreds of millions of dollars stops operating, even for a few minutes at a time, several times a day, and when recovery means resuming from the last checkpoint and losing the work done since then. The problem will probably get bigger as training scales to more compute: more GPUs means more frequent failures. xAI, Meta, and OpenAI are building clusters of 100,000 GPUs for AI training. Are they going to experience dozens of failures per day? In addition, failure rates of new GPU generations will probably not go down. New generations of GPU racks are denser, more capable, more power-hungry, and more technically complicated; will that mean higher failure rates? The solution, on the other hand, is complicated and spans the entire software stack: ⏭ infrastructure and workload monitoring and debugging tools, ⏭ predictive alerting, ⏭ self-healing and automatic recovery, ⏭ efficient checkpointing, and more. Google, in their Gemini paper, mentioned 97% goodput with in-memory checkpointing and redundant copies. Every percent of lost goodput means millions of dollars wasted! (Goodput is measured as the time spent computing useful new steps divided by the elapsed time of the training job.) 🚀 🚀 Is that the path forward? At Run:ai, helping our customers solve problems like these is at our core. Stay tuned for more to come on this topic 🚀 #infrastructurefailures #llama #gemini
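The goodput definition in the post turns into a quick sanity check on the dollars at stake. A hedged sketch of the arithmetic: the cluster size and run length are the figures quoted above, while the GPU-hour price is an illustrative assumption:

```python
def goodput(useful_hours: float, elapsed_hours: float) -> float:
    """Goodput = time spent computing useful new steps / elapsed job time."""
    return useful_hours / elapsed_hours

ELAPSED_HOURS = 54 * 24   # 54-day run, as reported in the Llama 3.1 paper
GPU_COUNT = 16_000        # GPUs used to train Llama-3.1-405B
GPU_HOUR_COST = 2.0       # assumed $/GPU-hour (illustrative, not a quoted price)

# e.g. the 97% goodput Google reported in the Gemini paper
g = goodput(0.97 * ELAPSED_HOURS, ELAPSED_HOURS)
wasted = (1 - g) * ELAPSED_HOURS * GPU_COUNT * GPU_HOUR_COST
print(f"goodput: {g:.0%}")
print(f"cost of the lost fraction: ~${wasted:,.0f}")
```

Even under these modest price assumptions, the 3% of lost goodput on a 16,000-GPU run is already on the order of a million dollars, which is why every percent matters at larger scales.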
-
Run:ai reposted this
Wow. Two days into #SC24 and my mind is swirling. The maturation of AI is outpacing even the boldest expectations. I had such great conversations with HPC leaders about their transition to managing their organization’s AI workloads. We’ve gone from “can we do this?” to “how do we do more, faster?” And now, every conversation I have is about scale—whether it’s moving from pilot projects to full-scale implementations or from full-scale to production-grade infrastructure. Here’s what’s standing out: - Efficiency is the new priority: Scaling AI isn’t just about adding more power; it’s about finding ways to do more with less—think model optimization, energy efficiency, and smarter cost management. - Specialization is everywhere: From domain-specific LLMs to custom-built AI infrastructure, organizations are leaning into bespoke solutions that solve real-world problems with precision. - Trust is non-negotiable: Governance, transparency, and ethical AI practices are at the heart of every implementation. Scaling responsibly is just as critical as scaling quickly. (More on this coming soon!) So many great conversations, and there’s still another day to go. Who else is here? Let’s meet up!
-
📣 Have you met Run:ai's #Streamer yet? Just a couple of weeks ago we launched our latest open-source project: the Run:ai Model Streamer 🌀 When it comes to deploying large machine learning models, speed is everything, and as models balloon in size, from a couple of billion parameters to hundreds of billions, so does the time it takes to move them from storage to memory and get them ready to serve, causing scaling issues... And that's why we created our Streamer! To learn more, check out the blog by Ekin Karabulut we shared here--> https://hubs.li/Q02Zhg1Z0, and check out the latest benchmarking on the project too, in case you missed it https://hubs.li/Q02ZhdhL0 To start using the Streamer now, head over to GitHub--> https://hubs.li/Q02Zh90q0 #aiinfrastructure #runaimodelstreamer #llm #opensource #aidevops #benchmarking #mlops #newproject