Exciting Opportunity for AI Researchers: Agora Compute Grants We are thrilled to announce the launch of Agora Compute Grants, an initiative designed to empower AI researchers with the computational resources they need to push the boundaries of multi-modal research. https://buff.ly/4clNtQF What is Agora Compute Grants? Agora Compute Grants is an initiative from Agora, an open-source AI research lab, aimed at providing AI researchers with access to powerful GPU clusters. These resources are essential for conducting advanced multi-modal research, which integrates diverse data types such as text, images, and audio to create more sophisticated and capable AI models. Why Focus on Multi-Modal Research? Multi-modal research represents the future of AI. By combining various data types, researchers can develop more robust and versatile models, leading to innovations that can revolutionize industries and improve lives. At Agora, we are committed to supporting this cutting-edge research and helping you achieve breakthroughs. How to Get Started: -> Join our Discord community: https://buff.ly/4clNtQF -> Sign up and introduce yourself to the community. -> Share your project in the Creator Showcase to be considered for the compute grants. We invite all passionate AI researchers to take advantage of this incredible opportunity. By joining Agora Compute Grants, you'll gain access to the computational power needed to drive your research forward and make significant contributions to the field of AI. Let's shape the future of AI together. Join our Discord today and start your journey with Agora Compute Grants. For more details and to sign up, visit our Discord community: https://buff.ly/4clNtQF We look forward to seeing the innovative projects you bring to life!
Kye G.’s Post
More Relevant Posts
-
They Claim That Computational Chemistry 👩‍🔬 Isn’t AI 🤖 - Is It That Simple? 🤔 ... ... ... I recently shared some thoughts on the Schrödinger-Novartis deal, where a big part of the valuation centered on accessing Schrödinger’s “AI platform.” This sparked a debate, with one commenter rightly pointing out that Schrödinger doesn’t call itself an AI company: they use physics-based models rather than machine learning. It’s a good reminder that traditional computational chemistry relies on physics, such as quantum mechanics, to predict structures, molecular interactions, and dynamics. But is this enough to say it’s not AI? Let’s unpack this: Is Computational Chemistry AI or Just Algorithms? 📌 Physics-Based Models: Traditional computational chemistry relies on physics-based equations to model structures, molecular dynamics, molecular interactions, and more with precision - deterministic rather than adaptive. 📌 AI’s Broader Scope: AI isn’t just neural networks; it’s any system designed to mimic aspects of human intelligence. By this definition, any predictive algorithm that aids decision-making could fit under AI’s umbrella. 📌 Augmentation with Machine Learning: With the integration of machine learning, computational chemistry is evolving. These systems now incorporate probabilistic, data-driven approaches, blending their classic physics-based roots with modern AI (I would love to hear from people working at Schrödinger: are you really using NO AI/ML tools at all??) Are We Headed Toward a Merge? 📌 AI could soon combine machine learning, probabilistic models, and physics-based equations into cohesive systems, blurring the line further. Imagine "AI agents" that adaptively switch between data-driven patterns and physics-based rigor for complex calculations. 📌 In drug discovery, this merging of techniques could enable faster, more flexible models that combine first-principles physics with AI’s pattern-recognition strengths. Let’s not forget the perspective of Alan Turing, often called "the father of AI": he envisioned intelligence as the ability to perform tasks requiring human-like decision-making and calculation, decades before machine learning emerged. By this standard, even advanced computational systems in chemistry could fit within his early vision of AI when performing complex, predictive tasks. So, is computational chemistry really separate from AI? It’s not as clear-cut as it might seem. The fields are evolving together, with computational chemistry gaining new capabilities thanks to AI. As they merge, the distinction between them may blur even more. ________________ Love ❤️ this post? Add your comments 💭, re-share ♻️ it with your network, follow me 🔔 for more posts like this, and DM me 📩 if you want to deep dive on this topic!
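To make the "augmentation with machine learning" point concrete, here is a toy sketch of one common hybrid pattern, often called delta-learning: a physics-based baseline energy plus an ML model fitted to the residual against higher-accuracy reference data. The Lennard-Jones baseline, the synthetic reference data, and the regressor choice are all invented for illustration; this is not a claim about Schrödinger's actual platform.

```python
# Toy "delta-learning" hybrid: a physics-based baseline energy (a Lennard-Jones
# pair potential here) plus an ML model fitted to the residual against synthetic
# "reference" energies. Everything is invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def lj_energy(r, epsilon=1.0, sigma=1.0):
    """Classic Lennard-Jones pair energy -- the deterministic, physics-based part."""
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

rng = np.random.default_rng(0)
r_train = rng.uniform(0.9, 2.5, size=200)                      # pair distances
e_reference = lj_energy(r_train) + 0.05 * np.sin(5 * r_train)  # pretend high-level data

# Fit an ML correction to the residual between reference and physics baseline.
correction = RandomForestRegressor(n_estimators=100, random_state=0)
correction.fit(r_train.reshape(-1, 1), e_reference - lj_energy(r_train))

def hybrid_energy(r):
    """Physics baseline + learned correction = data-augmented prediction."""
    return lj_energy(r) + correction.predict(np.array([[r]]))[0]

print(hybrid_energy(1.2))
```

The design point is that the physics term carries the rigor and the ML term only learns what the physics misses, which is one plausible way the two camps end up merging.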
-
10/10/2024 Artificial Intelligence Google's Nobel prize winners stir debate over AI research October 10, 2024, 4:11 PM GMT+2, updated an hour ago LONDON, Oct 10 - The award this week of Nobel prizes in chemistry and physics to a small number of artificial intelligence pioneers affiliated with Google (GOOGL.O) has stirred debate over the company's research dominance and how breakthroughs in computer science ought to be recognized. Google has been at the forefront of AI research, but has been forced on the defensive as it tackles competitive pressure from Microsoft-backed (MSFT.O) OpenAI and mounting regulatory scrutiny from the U.S. Department of Justice. On Wednesday, Demis Hassabis – co-founder of Google's AI unit DeepMind – and colleague John Jumper were awarded the Nobel prize in chemistry, alongside U.S. biochemist David Baker, for their work decoding the structures of microscopic proteins. Former Google researcher Geoffrey Hinton, meanwhile, won the Nobel prize for physics on Tuesday, alongside U.S. scientist John Hopfield, for earlier discoveries in machine learning that paved the way for the AI boom. Professor Dame Wendy Hall, a computer scientist and advisor on AI to the United Nations, told Reuters that, while the recipients’ work deserved recognition, the lack of a Nobel prize for mathematics or computer science had distorted the outcome. "The Nobel prize committee doesn't want to miss out on this AI stuff, so it's very creative of them to push Geoffrey through the physics route," she said. "I would argue both are dubious, but nonetheless worthy of a Nobel prize in terms of the science they’ve done. So how else are you going to reward them?" Noah Giansiracusa, an associate maths professor at Bentley University and author of "How Algorithms Create and Prevent Fake News", also argued that Hinton’s win was questionable. "What he did was phenomenal, but was it physics? I don't think so. Even if there's inspiration from physics, they're not developing a new theory in physics or solving a longstanding problem in physics." The Nobel prize categories for achievements in medicine or physiology, physics, chemistry, literature and peace were laid down in the will of Swedish inventor Alfred Nobel, who died in 1896. The prize for economics is a later addition established with an endowment from the Swedish central bank in 1968. [Photo caption: Demis Hassabis and John M. Jumper, two of the three laureates awarded the Nobel Prize in Chemistry for 2024, look on at the offices of Google DeepMind UK in London, Britain, October 9, 2024.]
-
Paper accepted in TMLR Journal (https://jmlr.org/tmlr/) Heartiest congratulations on the acceptance of the following work in Transactions on Machine Learning Research. The title, list of authors, and abstract of the work are as follows: Title: A Greedy Hierarchical Approach to Whole-Network Filter-Pruning in CNNs Authors: Kiran Purohit, Anurag Parvathgari, Sourangshu Bhattacharya Abstract: Deep convolutional neural networks (CNNs) have achieved impressive performance in many computer vision tasks. However, their large model sizes require heavy computational resources, making pruning redundant filters from existing pre-trained CNNs an essential task in developing efficient models for resource-constrained devices. Whole-network filter pruning algorithms prune varying fractions of filters from each layer, hence providing greater flexibility. State-of-the-art whole-network pruning methods are either computationally expensive due to the need to calculate the loss for each pruned filter using a training dataset, or use various heuristic/learned criteria for determining the pruning fractions for each layer. Hence there is a need for a simple and efficient technique for whole-network pruning. We propose a two-level hierarchical approach for whole-network filter pruning which is efficient and uses the classification loss as the final criterion. The lower-level algorithm (called filter-pruning) uses a sparse-approximation formulation based on linear approximation of filter weights. We explore two algorithms: orthogonal matching pursuit-based greedy selection and a greedy backward pruning approach. The backward pruning algorithm uses a novel closed-form error criterion for efficiently selecting the optimal filter at each stage, thus making the whole algorithm much faster. The higher-level algorithm (called layer-selection) greedily selects the best-pruned layer (pruned using the filter-selection algorithm) using a global pruning criterion. We propose algorithms for two different global-pruning criteria: (1) layerwise-relative error (HBGS), and (2) final classification error (HBGTS). Our suite of algorithms outperforms state-of-the-art pruning methods on ResNet18, ResNet32, ResNet56, VGG16, and ResNext101. Our method reduces the RAM requirement for ResNext101 from 7.6 GB to 1.5 GB and achieves a 94% reduction in FLOPS without losing accuracy on CIFAR-10.
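As context for the lower-level "filter-pruning" step, here is a rough, self-contained sketch of what OMP-style greedy filter selection over flattened filter weights could look like. It is a generic illustration under simplifying assumptions, not the paper's exact formulation or its closed-form backward-pruning criterion.

```python
# Rough sketch of OMP-style greedy selection of a filter subset whose flattened
# weights best reconstruct all filters in the layer by linear combination.
# Generic illustration only, not the paper's exact formulation.
import numpy as np

def omp_select_filters(W, k):
    """W: (num_filters, filter_dim) flattened filter weights; keep k filters."""
    selected = []
    residual = W.copy()
    for _ in range(k):
        # Score each filter by how strongly it correlates with the current residuals.
        scores = np.linalg.norm(residual @ W.T, axis=0)
        if selected:
            scores[selected] = -np.inf          # never re-select a filter
        selected.append(int(np.argmax(scores)))
        # Re-project every filter onto the span of the selected ones (least squares).
        basis = W[selected].T                   # (filter_dim, len(selected))
        coeffs, *_ = np.linalg.lstsq(basis, W.T, rcond=None)
        residual = W - (basis @ coeffs).T       # what the kept filters cannot explain
    return selected

W = np.random.randn(64, 16 * 3 * 3)             # e.g., 64 filters of shape 16x3x3
print(sorted(omp_select_filters(W, k=16)))
```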
-
🎉 Exciting News: Our Paper Accepted in TMLR! 🚀 I'm thrilled to announce that our work on efficient AI has been accepted in the prestigious Transactions on Machine Learning Research (TMLR)! 🏆 🧠 What's it all about? We've developed an approach to make deep learning models smaller and faster without sacrificing accuracy. Our method: * Reduces RAM usage for ResNext101 from 7.6 GB to just 1.5 GB 💾 * Achieves a whopping 94% reduction in computations (FLOPS) 💨 * Maintains accuracy on datasets like CIFAR-10, CIFAR-100, and Tiny-ImageNet 🎯 🧩 Key Methodological Details: Two-level hierarchical approach for whole-network filter pruning: 1. Lower-Level Algorithm (Filter-Pruning): - Utilizes sparse approximation based on linear approximation of filter weights - Implements two algorithms: a) Orthogonal matching pursuit-based greedy selection b) Novel greedy backward pruning with a closed-form error criterion 2. Higher-Level Algorithm (Layer-Selection): - Greedily selects the best-pruned layer using global pruning criteria - Two proposed algorithms: a) Layerwise-relative error (HBGS) b) Final classification error (HBGTS) This approach outperforms state-of-the-art pruning methods on popular models like ResNet18, ResNet32, ResNet56, VGG16, and ResNext101. 🙏 Acknowledgments A huge thanks to my supervisor, Dr. Sourangshu Bhattacharya, for his invaluable guidance and support throughout this journey, and to Anurag Parvathgari, BTech IIT Kharagpur, for his contribution to this research. 📢 Stay Tuned! Preprint and code coming soon. Follow me for updates! #MachineLearning #EfficientAI #ComputerVision #DeepLearning #Research #TMLR #ModelCompression
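To give a feel for the higher-level layer-selection loop described above, here is a small, self-contained toy sketch: at every step it tentatively prunes one filter from each layer, scores each trial with a stand-in global criterion, and commits the least-damaging choice. The pruning rule and scoring function are invented placeholders, not the paper's HBGS/HBGTS criteria.

```python
# Toy sketch of greedy layer selection: tentatively prune one filter from each
# layer, score every trial with a stand-in global criterion, keep the best.
import numpy as np

rng = np.random.default_rng(0)
layers = {name: rng.standard_normal((32, 32)) for name in ("conv1", "conv2", "conv3")}

def prune_one_filter(W):
    """Stand-in pruner: drop the row (filter) with the smallest L2 norm."""
    return np.delete(W, np.argmin(np.linalg.norm(W, axis=1)), axis=0)

def global_criterion(layer_dict):
    """Stand-in global criterion (lower is better): negative total weight norm,
    so the trial that preserves the most weight energy wins."""
    return -sum(np.linalg.norm(W) for W in layer_dict.values())

for step in range(5):
    best_name, best_score, best_trial = None, float("inf"), None
    for name, W in layers.items():
        trial = dict(layers, **{name: prune_one_filter(W)})
        score = global_criterion(trial)
        if score < best_score:
            best_name, best_score, best_trial = name, score, trial
    layers = best_trial                         # commit the least-damaging prune
    print(f"step {step}: pruned one filter from {best_name}")
```

In the actual method the criterion would be something like validation classification error and the per-layer pruner the OMP or backward-pruning routine; the greedy outer loop is the part this sketch illustrates.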
-
The article introduces PatternBoost, a new AI tool developed by François Charton and the FAIR (Fundamental AI Research) team at Meta. This tool is designed to identify unintuitive mathematical patterns, potentially solving problems that have stumped mathematicians for decades. While previous AI models have struggled with high-level mathematical reasoning, PatternBoost represents a significant advancement, demonstrating broader applicability and requiring less effort to implement. Its success may encourage more mathematicians to incorporate AI into their research, despite historical reluctance in the field. Traditionally, mathematicians have hesitated to use AI due to its limited success on unsolved problems and the additional effort required to adapt these tools to their research. However, PatternBoost stands out as a more generalized and user-friendly approach. The tool can tackle a wide variety of mathematical challenges, making it a promising resource for addressing long-standing gaps in knowledge. François Charton notes the surprising effectiveness of this general method, which offers mathematicians new ways to approach complex problems that have defied traditional techniques. The key to PatternBoost’s success lies in its ability to detect patterns that might elude human intuition, a crucial aspect of advanced mathematics. By analyzing datasets and identifying underlying relationships, the AI can propose novel pathways for solving problems, which may lead to breakthroughs in fields ranging from number theory to applied mathematics. This ability to uncover hidden structures in data demonstrates AI’s potential to complement human ingenuity in mathematical discovery. If widely adopted, PatternBoost could revolutionize mathematical research, bridging the gap between human and machine intelligence in problem-solving. Its development also highlights the growing intersection of AI and academic disciplines that have traditionally been seen as purely human endeavors. By unlocking solutions to previously intractable problems, tools like PatternBoost may not only advance mathematics but also inspire new applications in physics, engineering, and computer science. This marks a turning point in how AI can assist in the expansion of human knowledge.
-
Last year, at the Heidelberg Laureate Forum, I attended a panel discussion on Generative AI, moderated by Anil Ananthaswamy, featuring esteemed experts Sanjeev Arora, Sebastien Bubeck, Björn Ommer, and Margo I. Seltzer. While at #HLF2023, I spoke with Anil about his upcoming book, "Why Machines Learn: THE ELEGANT MATH BEHIND MODERN AI," which I'm reviewing. 📚 As algorithms powered by machine learning increasingly control critical aspects of our lives, from detecting early signs of sepsis and saving lives 🩺, to analyzing DNA in complex criminal cases 🔬 and predicting recidivism ⚖️, to analyzing vast amounts of satellite imagery to predict climate change impact, democratizing AI has never been more important 🌍. Understanding the technology and the context of its history is crucial. "Why Machines Learn" provides an accessible yet substantial introduction to the key ideas shaping the machine learning revolution. By delving into the mathematical concepts underpinning machine learning and connecting them to human stories, Anil's book equips readers with the conceptual framework they need to make sense of the field. The Heidelberg Laureate Forum not only celebrated the achievements of laureates but also highlighted the importance of perseverance in the face of adversity. Just as Bob Metcalfe's PhD thesis was initially rejected at Harvard before he co-developed Ethernet 🌐, the pioneers of machine learning faced numerous challenges and setbacks. "Why Machines Learn" traces the winding path of progress in AI, reminding us of the importance of questioning assumptions and learning from the stories of researchers who paved the way. Generative AI, with its foundations in machine learning, is on par with the creation of cell phones 📱, the microprocessor 🖥️, and the Internet 🌍. However, only a handful of specialists truly understand how contemporary machine learning works. This knowledge gap has led to a polarized discourse, with AI doomsayers on one side and evangelists on the other. Anil's book bridges this divide by providing an accessible introduction to the elegant math behind modern AI. Understanding the math behind machine learning isn't just some abstract academic pursuit - it's absolutely essential to navigating this AI revolution. When you dive into the equations and algorithms, you start to see the true potential and pitfalls of these systems. It's like being handed a map to the future. With that knowledge, we can steer this technology in a direction that doesn't just benefit a select few, but uplifts ALL. That's why I'm so excited about Anil's book and the conversations it will spark: not just about demystifying the technicalities, but about empowering all of us to be active participants in shaping our AI-driven world. 🚀 #AIForGood #EthicalAI #MachineLearning #AILiteracy #InnovationStories #ResilienceInSTEM #DemocratizeAI #ShapingTheFuture #HLF https://lnkd.in/g2dTZEbP
-
Alan Turing’s work in the 1930s further developed Ada Lovelace’s foundational ideas about the limits of computation. His famous Halting Problem showed that there are certain computational tasks that no algorithm can solve, even if we had a machine capable of performing any computable function. The problem involves determining whether a given algorithm will eventually halt or continue running indefinitely. Turing proved that no universal algorithm can solve this problem, which reinforced Lovelace’s idea that machines are inherently limited by the instructions humans provide. 1. Turing Machine: Turing introduced the concept of a universal machine, a theoretical construct capable of simulating any other machine’s logic. The Turing Machine formalized the idea of computation, showing that any process that can be algorithmically described can be computed by a machine. Despite its theoretical power, the Turing Machine is still bound by the limits of computation, such as the Halting Problem, where some problems cannot be solved, no matter how advanced the machine. 2. Computability and Limits: Turing’s work solidified the idea of computability—that there are problems that are solvable by machines and those that are unsolvable. This directly relates to Lovelace’s insight that machines can only do what they’re programmed to do. Even modern AI systems, despite their apparent capabilities, still adhere to these limits—they can analyze data, generate predictions, or create content, but they cannot originate new, independent goals. 3. Artificial Intelligence and Creativity: In the field of AI, Turing’s famous Turing Test (proposed in 1950) suggests that a machine could be considered intelligent if it can converse with humans in a way that is indistinguishable from human behavior. However, even advanced AI systems like GPT-3 and others are limited to generating responses based on the data they have been trained on. While they may appear to “understand” or “create,” their actions are still rooted in human input, data, and instruction. This limitation echoes Lovelace’s belief that machines cannot originate their own ideas—AI can only function based on what it’s been programmed or trained to do. 4. AI’s Role in Creativity: While AI systems can generate creative outputs, like art or poetry, these creations are not truly “original” in the human sense. They are the result of patterns in the training data—this aligns with Lovelace’s assertion that a machine is not capable of original thought or creativity. AI can replicate, remix, and generate novel combinations, but it lacks the intrinsic creativity that humans possess. In sum, Turing’s theoretical work in the 1930s and 1940s built upon Ada Lovelace’s insights, addressing the formal boundaries of computation and reinforcing the idea that machines, no matter how advanced, can only execute tasks they are explicitly directed to perform. Modern AI, despite its complex capabilities, remains bound by these limits.
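For readers who want the mechanics behind the Halting Problem claim, here is the standard diagonalization argument written as Python-flavoured pseudocode: assume a total `halts` decider exists, build a program that contradicts it, and conclude that no such decider can exist.

```python
# Standard diagonalization sketch of Turing's proof, in Python-flavoured pseudocode.
# Assume a total decider `halts(program, data)` existed; build a program that
# contradicts it. The stub below is deliberately unimplementable -- that is the point.

def halts(program, data):
    """Hypothetical universal halting decider. Turing proved none can exist."""
    raise NotImplementedError("no total halting decider is possible")

def diagonal(program):
    # If `program` would halt when run on its own source, loop forever;
    # otherwise halt immediately.
    if halts(program, program):
        while True:
            pass
    return

# Feeding `diagonal` to itself is contradictory either way:
# - if halts(diagonal, diagonal) answers "halts", then diagonal loops forever;
# - if it answers "loops forever", then diagonal halts.
# Therefore no correct, always-terminating `halts` can be written.
```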
-
PyTorch 2.5 Released: Advancing Machine Learning Efficiency and Scalability PyTorch 2.5: Enhancing Machine Learning Efficiency Key Improvements The PyTorch community is dedicated to improving machine learning frameworks for researchers and AI engineers. The new PyTorch 2.5 release focuses on: Boosting computational efficiency Reducing startup times Enhancing performance scalability Practical Solutions This release introduces several valuable features: CuDNN backend for Scaled Dot Product Attention (SDPA): Optimizes performance for transformer models on H100 GPUs, improving speed and reducing latency. Regional compilation of torch.compile: Allows for faster compilation of repeated neural network components, speeding up development cycles. TorchInductor CPP backend: Offers optimizations like FP16 support, enhancing computational efficiency. Benefits for Users With these updates, users can expect: Faster training and inference: Significant speed improvements for large-scale models. Reduced cold startup times: Quicker iterations during model development. Greater control over performance: Enhanced tools for developers in both research and production environments. Conclusion PyTorch 2.5 is a major advancement for the machine learning community. By addressing GPU efficiency, compilation latency, and overall speed, it remains a top choice for ML practitioners. The focus on SDPA optimizations and improved backend tools ensures that users can work more efficiently on complex AI projects. Get Involved For more details, check out https://lnkd.in/drBV-pVT. Follow us on Twitter, join our Telegram Channel, and connect with our LinkedIn Group. If you appreciate our efforts, subscribe to our newsletter and join our 50k+ ML SubReddit. Upcoming Webinar Upcoming Live Webinar - Oct 29, 2024: Learn about the best platform for serving fine-tuned models with the Predibase Inference Engine. Leverage AI for Your Business To stay competitive with AI, consider the following steps: Identify Automation Opportunities: Find areas in customer interactions that can benefit from AI. Define KPIs: Ensure measurable impacts from your AI initiatives. Select an AI Solution: Choose tools that fit your needs and allow for customization. Implement Gradually: Start with a pilot project, gather data, and expand wisely. For AI KPI management advice, contact us at . For ongoing insights, follow us on Telegram or Twitter. Transform Your Sales and Customer Engagement Discover how AI can redefine your sales processes and customer engagement at https://itinai.com. List of Useful Links: http://t.me/itinai https://lnkd.in/e7mfJSf5 #VirtualReality #Blockchain #CognitiveComputing #AutomationTools #CyberSecurity #TechnologyNews #TechInnovation #ArtificialIntelligence #MachineLearning #AI #DeepLearning #Robotics https://lnkd.in/dV5SRC-V
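A minimal sketch of the two headline ideas, written against the documented torch 2.x APIs: `F.scaled_dot_product_attention` inside a small repeated block, and "regional compilation" approximated by applying `torch.compile` to that block rather than to the whole model. Shapes, module names, and sizes are invented, and which SDPA backend (cuDNN, Flash, memory-efficient, math) actually runs depends on your PyTorch build and GPU, so treat this as illustrative rather than definitive.

```python
# Minimal sketch of SDPA + block-level ("regional") torch.compile usage.
# Names and sizes are invented; the SDPA backend is chosen by PyTorch at runtime
# (on suitable builds it can also be constrained via torch.nn.attention.sdpa_kernel).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Block(nn.Module):
    """A small repeated attention block -- the unit we compile regionally."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.heads = heads

    def forward(self, x):
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (batch, heads, seq, head_dim) for scaled dot product attention.
        q, k, v = (z.view(b, t, self.heads, d // self.heads).transpose(1, 2) for z in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v)
        return self.proj(out.transpose(1, 2).reshape(b, t, d))

# Regional compilation: compile the repeated block instead of the full model so the
# compiled artifact is reused across layers and cold-start time stays small.
blocks = nn.Sequential(*[torch.compile(Block()) for _ in range(4)])
x = torch.randn(2, 16, 64)
print(blocks(x).shape)  # torch.Size([2, 16, 64])
```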
Comment from an AI Expert / Consultant, IT Expert / Consultant (6mo): can I get a meeting