PyTorch has emerged as a top choice for researchers and developers thanks to its relative ease of use and continually improving performance 💌 ❤️ "Why PyTorch Gets All the Love," explains Alex Williams in a recent The New Stack article: https://lnkd.in/eQsp4x6E Soumith Chintala Adam Paszke Luca Antiga Adam Seligman Trevor Harvey Brian Granger William Falcon Yangqing Jia Ji Li Alban Desmaison Sunita Nadampalli #PyTorch #OpenSource #AI #ML
PyTorch
Research Services
San Francisco, California 271,971 followers
An open source machine learning framework that accelerates the path from research prototyping to production deployment.
About us
PyTorch is an open source project at the Linux Foundation.
- Website: http://www.pytorch.org
- Industry: Research Services
- Company size: 501-1,000 employees
- Headquarters: San Francisco, California
- Type: Public Company
- Specialties: Artificial Intelligence, Deep Learning, Machine Learning, and AI
Locations
- Primary: 548 Market St, San Francisco, California, US
Employees at PyTorch
- Wei Li: VP/GM, AI Software Engineering, Intel
- Ali Khosh: Product Management, PyTorch at Meta AI | Ex-Microsoft/Samsung/Yahoo. Adjunct Prof. at USC School of Law.
- Trevor Harvey: Principal, Generative AI @ AWS | Solutions Architect – Professional | PyTorch Board Member
- Cla Rossi: Data Scientist
Updates
-
GenAI won't replace open source, says AWS exec in an interview with The New Stack.

"Because it’s based on Python, which is already deeply integrated with machine learning, PyTorch is a natural fit for data scientists using deep learning models. It was adapted from the Torch library, an open source scientific computing project well regarded for its ease of use, neural network libraries and community-contributed libraries for machine learning, computer vision and more."

Open source has been foundational to AWS’s growth and innovation, Seligman told Williams. AWS has used open source since its early days, contributed to open source, and is involved in really big projects like OpenSearch, Valkey and PyTorch as a contributor.

"[Open source] lets creative, technical people bring their best ideas to life. And it’s done in a really delightful way where there’s both technical innovation and a lot of human knowledge sharing and best practices. These communities are bigger than the code itself."
GenAI Won't Replace Open Source, Says AWS Exec
https://thenewstack.io
-
Supercharging training using float8 and FSDP2 ⚡ Read our latest blog to find out how we achieved up to 50% throughput speedup while maintaining loss and evaluation benchmark parity in training: https://hubs.la/Q02Zmfvb0
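The core idea behind float8 training is dynamic scaling: each tensor is rescaled so its largest magnitude fits the narrow float8 range before the low-precision matmul. Here is a framework-free toy sketch of that idea (an illustrative assumption, not the torchao/FSDP2 implementation; mantissa rounding is not modeled, and `E4M3_MAX` is the max finite value of the e4m3 format):

```python
# Illustrative sketch of per-tensor dynamic scaling for float8 (e4m3) training.
# Real float8 training also rounds mantissas; this toy only models the scaling.

E4M3_MAX = 448.0  # largest finite value representable in float8 e4m3

def compute_scale(tensor):
    """Per-tensor dynamic scale: map the tensor's abs-max onto E4M3_MAX."""
    amax = max(abs(v) for v in tensor)
    return amax / E4M3_MAX if amax > 0 else 1.0

def quant_dequant(tensor):
    """Round-trip values through the float8 range (precision loss not modeled)."""
    scale = compute_scale(tensor)
    scaled = [max(-E4M3_MAX, min(E4M3_MAX, v / scale)) for v in tensor]
    return [v * scale for v in scaled], scale

values = [0.002, -1.5, 3.25, 1000.0]
dequant, scale = quant_dequant(values)
```

Because the scale tracks the tensor's abs-max, even a tensor whose values far exceed 448 survives the round trip; the real cost of float8 is the reduced mantissa precision, which this sketch deliberately omits.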
-
We’re excited to welcome Rebellions to the PyTorch Foundation as a General Member! 🎉 Rebellions is a South Korea-based semiconductor company specializing in the design and development of AI chips for data centers and edge devices. 🇰🇷 We look forward to collaborating with Rebellions to drive innovation and strengthen the PyTorch ecosystem for developers worldwide. Announcement here: https://hubs.la/Q02Z79V90 “By integrating our hardware innovations with PyTorch, we’re building native NPU support to accelerate diverse AI workloads,” said Hong-seok Kim, the Chief Software Architect at Rebellions. “We’re excited to contribute to the PyTorch community through community-driven initiatives and partnerships, advancing NPU architecture support for next-generation AI solutions. Together with the PyTorch community, we aim to pioneer new possibilities in AI acceleration and empower developers worldwide with efficient computing solutions.”
-
PyTorch leads the model training space with a 63% adoption rate 🚀 New research from The Linux Foundation, LF AI & Data Foundation & Cloud Native Computing Foundation (CNCF) shows how open source tools and frameworks play a critical role in #GenAI model building and inference. For more insights, download the research report on shaping the future of generative AI: https://lnkd.in/eGTGcFXT #OpenSource #AI #ML #PyTorch #GenAI #Inference #ModelBuilding
-
We are happy to announce the addition of knowledge distillation to torchtune, a PyTorch library for easily authoring, fine-tuning and experimenting with LLMs. Knowledge distillation is a technique for imparting knowledge from a larger teacher model to a smaller student model. This was a key step in Llama 3.2 pretraining. Check out how we can leverage knowledge distillation in post-training to distill Llama 3.1 8B to Llama 3.2 1B using torchtune: https://hubs.la/Q02YyqPP0
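The heart of knowledge distillation can be sketched without any framework: soften both models' logits with a temperature, then penalize the student for diverging from the teacher's soft targets. This is a toy illustration of the loss, not torchtune's recipe; the temperature value and logits here are made-up assumptions:

```python
# Minimal knowledge-distillation loss: forward KL divergence between
# temperature-softened teacher and student distributions.
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature = softer targets."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over softened distributions."""
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]   # illustrative logits from the larger model
student = [1.8, 1.1, 0.2]   # illustrative logits from the smaller model
loss = kd_loss(teacher, student)
```

In practice this KL term is combined with the ordinary cross-entropy on ground-truth labels, so the student learns both from data and from the teacher's richer probability distribution over wrong answers.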
-
PyTorch is powering the most powerful financial LLMs, explained Matt White, PyTorch Foundation Executive Director, today at the International Workshop on Multimodal Financial Foundation Models (MFFMs) at the ACM (Association for Computing Machinery) International Conference on AI in Finance in NYC. #LLMs #AI #FinTech #PyTorch #Finance #MFFMs
-
PyTorch Expert Exchange Webinar: How does batching work on modern GPUs? with Finbarr Timbers, an AI researcher who writes at Artificial Fintelligence and has worked at a variety of large research labs, including DeepMind and Midjourney. Batch inference is the most basic optimization that you can do to improve GPU utilization. It is often overlooked and misunderstood because of how common it is. Here, we walk through why, exactly, batching works, and help you develop intuition for what exactly is going on inside your GPU.
How does batching work on modern GPUs?
www.linkedin.com
-
We'll be live today in 1 hour ⏱️ Find out how you can develop intuition for what exactly is going on inside your GPU
How does batching work on modern GPUs?
www.linkedin.com
-
Batch inference is the most basic optimization that you can do to improve GPU utilization. It is often overlooked and misunderstood because of how common it is. AI researcher Finbarr Timbers will walk through why, exactly, batching works, and help you develop intuition for what exactly is going on inside your GPU. Join us live on Wednesday at 10am PT for our next PyTorch Expert Exchange: https://hubs.la/Q02XNSDB0
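The intuition for why batching helps can be sketched with back-of-the-envelope arithmetic intensity (FLOPs per byte moved) for a single linear layer: the weights are read from memory once whether they serve one request or a whole batch, so useful compute per byte grows with batch size. The layer size, fp16 element width, and cost model below are illustrative assumptions, not measurements from the webinar:

```python
# Toy cost model for one d x d linear layer during inference.
# Weights are read once per forward pass and shared across the batch,
# so arithmetic intensity (FLOPs / bytes moved) rises with batch size.

def arithmetic_intensity(d, batch, bytes_per_elem=2):
    flops = 2 * batch * d * d                    # one multiply-add per weight per input row
    weight_bytes = d * d * bytes_per_elem        # read once, amortized over the batch
    act_bytes = 2 * batch * d * bytes_per_elem   # read inputs, write outputs
    return flops / (weight_bytes + act_bytes)

d = 4096
for b in (1, 8, 64):
    print(f"batch={b:3d}  intensity={arithmetic_intensity(d, b):.1f} FLOPs/byte")
```

At batch size 1 the layer does roughly one FLOP per byte moved, far below what a modern GPU can sustain, so the kernel is memory-bound; larger batches push intensity toward the GPU's compute roofline, which is the core reason batched inference improves utilization.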