“Not Everything Fits into an Algorithm” is the title of the contribution by Rubén Abadía Funes, from the motor vehicle research institute CENTRO ZARAGOZA, to the 20th issue of our digital magazine focused on artificial intelligence. In the article, he reviews the possibilities of this technology for assessing motor vehicle damage. #AI The article is available in English👉 https://n9.cl/b4lfm and also in Spanish👉 https://n9.cl/4cq7a
Consorcio de Compensación de Seguros’ Post
More Relevant Posts
-
AI this, AI that... In short, artificial intelligence refers to a computational design that uses massive datasets to produce more accurate output, at the cost of consuming proportionally more electricity. In my opinion, AI is only good for statistical work. That's about it.
-
🤖 AI Insight: The Bitter Lesson in 70 Years of AI Research

I recently came across Rich Sutton's essay "The Bitter Lesson." Here are the key takeaways for AI professionals:

1. General methods that leverage computation consistently outperform approaches built on human knowledge in the long run.
2. Examples across AI domains (chess, Go, speech recognition, computer vision) show that scaling computation through search and learning drives breakthrough progress.
3. The "bitter lesson": building in how we think we think doesn't work long-term. Instead, we should focus on meta-methods that can discover and capture complexity.
4. As AI researchers, we should embrace general-purpose methods that scale with increased computation.
5. The goal should be to create AI agents that can discover as we do, not agents that contain what we have already discovered.

Recent research also challenges this view, proposing alternative approaches that achieve similar results without scaling, which matters if AI is to be adopted economically.

Thoughts? Has your experience in AI aligned with or challenged this perspective? Link in the first comment.
-
Popular approaches to complex problem-solving, such as neural networks, often require a large number of calculations to produce a prediction. The time it takes to produce a prediction varies with the complexity of the input data, the size of the network, and the hardware, which can make inference computationally expensive and hard to run at the edge. This is a significant issue in applications that require real-time predictions or where response time is critical.

Real-time, computable AI can be used across industries such as defense, healthcare, and finance to provide quick, accurate insights and responses in high-pressure, time-sensitive situations, improving decision-making and transforming operations. The ability to calculate exact response times, combined with the ability to adapt to new data sets in the field on a low-SWaP-C system, allows highly dynamic and advanced decision-making for next-generation systems and greatly increases the value of implementing AI for high-stakes predictions.

Truly computable AI requires determinism that eliminates randomness in both the definition and the execution of its algorithms. While computability may seem like a significant roadblock to the widespread use of AI, advances in computational power and an industry focus on real-time, lifelong learning hold promise for the future.

Share your thoughts in the comments. #intelligentartifacts #ai #ml #transparentai
______________________________
👉 I regularly share content on Deterministic AI/ML & Reasoning Solutions, AI/AGI (Artificial General Intelligence), and Transparent and Responsible AI.
🔔 Follow me and hit the bell icon so you don't miss those insights.
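To make the determinism and response-time points concrete, here is a minimal sketch in Python. The tiny fixed-weight network is an illustrative assumption standing in for a deployed edge model (it is not any vendor's actual system); the idea is simply that with all randomness fixed, the same input always yields the same output, and repeated timing gives an empirical worst-case latency to compare against a real-time budget.

```python
# A minimal sketch, assuming a toy fixed-weight MLP as a stand-in for the
# deployed model. It illustrates two points from the post: (1) fixing all
# sources of randomness so every run of the same input gives the same output,
# and (2) empirically bounding worst-case inference latency.
import time
import numpy as np

rng = np.random.default_rng(seed=0)          # fixed seed: reproducible weights
W1 = rng.standard_normal((64, 32))
W2 = rng.standard_normal((32, 4))

def predict(x: np.ndarray) -> np.ndarray:
    """Deterministic forward pass: no dropout, no sampling, fixed weights."""
    h = np.maximum(x @ W1, 0.0)              # ReLU
    return h @ W2

x = rng.standard_normal(64)
latencies = []
for _ in range(1000):
    t0 = time.perf_counter()
    y = predict(x)
    latencies.append(time.perf_counter() - t0)

# Identical inputs must yield identical outputs, and the observed worst case
# gives a measured (not proven) latency bound for a real-time budget.
assert np.array_equal(predict(x), y)
print(f"worst-case latency over 1000 runs: {max(latencies) * 1e6:.1f} µs")
```

On real hardware the bound would of course come from profiling the actual network on the target low-SWaP-C device, not from a toy like this.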
-
Researchers from Harvard University and the University of Michigan found that AI models can grasp complex concepts such as color and size before they demonstrate those abilities in standard tests. The study shows that AI models often have hidden skills that only emerge in specific situations, which could change how we think about AI safety and development.

The research highlights that measuring an AI's abilities is more complicated than it seems: models may appear less capable with standard prompts despite having advanced skills. Anthropic researchers used "dictionary learning" to connect neural pathways in AI to specific concepts, aiming to make AI learning less of a "black box." They found that AI models learned concepts much earlier than traditional testing could detect, with strong concepts emerging around 6,000 training steps. The team used techniques like "linear latent intervention" to access hidden capabilities, showing that these abilities can be tapped into early with the right methods.

This hidden emergence suggests that AI models are more powerful than we thought, but it also makes it harder to fully understand what they can do. For AI companies, it means revisiting how they test models, as traditional benchmarks might not catch these hidden capabilities.

Read more: https://lnkd.in/ePDAb4-f

#ai #artificialintelligence
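As a rough illustration of what a "linear latent intervention" could look like, here is a minimal sketch. The toy hidden state, the concept direction, and the `intervene` helper are all illustrative assumptions, not the researchers' code: the idea is only that if a concept corresponds to a direction in activation space, shifting the activations along that direction can surface a capability that standard prompting does not elicit.

```python
# A minimal sketch of a linear intervention on a hidden state, under the
# assumption that a concept (e.g. "color") corresponds to a direction in
# activation space. Everything here is synthetic and purely illustrative.
import numpy as np

hidden_dim = 128
rng = np.random.default_rng(1)

# Stand-in for one layer's activations on some prompt.
hidden_state = rng.standard_normal(hidden_dim)

# Stand-in for a concept direction found, e.g., by dictionary learning
# over many cached activations.
concept_direction = rng.standard_normal(hidden_dim)
concept_direction /= np.linalg.norm(concept_direction)

def intervene(h: np.ndarray, direction: np.ndarray, strength: float) -> np.ndarray:
    """Shift the hidden state along the concept direction (linear intervention)."""
    return h + strength * direction

steered = intervene(hidden_state, concept_direction, strength=5.0)

# The projection onto the concept direction grows by exactly `strength`,
# which is the amplified signal downstream layers would see.
print(hidden_state @ concept_direction, steered @ concept_direction)
```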
-
GreenLightning AI: The sustainable AI revolution is here. ⚡

We have developed a system that offers a unique set of advantages over traditional AI solutions based on Deep Neural Networks (DNNs). Train and deploy your AI model while reducing compute costs and energy consumption, without sacrificing model accuracy.

+ Efficient model (re)training --> 100x to 1000x faster
+ Sustainable: energy savings proportional to the reduction in retraining time
+ Transparent: no black-box issue; know how the model inputs directly affect the output
+ Accurate: no catastrophic forgetting; improve your model quality with every retraining without losing any knowledge from old data

GreenLightning AI is the solution of the future for companies looking to:
✅ Reduce costs and energy consumption while increasing efficiency.
✅ Contribute to a more sustainable future.
✅ Develop accurate and transparent ML models.

Visit our website to learn more about the potential of Qsimov and GreenLightning AI.

#AI #Sustainability #Qsimov #GreenLightningAI #Efficiency #GreenAlgorithm
-
Having completed the “Introduction to Artificial Intelligence” course by Doug Rose, I have come away with a deeper understanding of this transformative field, inspired by the vast potential of AI to reshape our world and guide us into the future. Check it out: https://lnkd.in/gwB3_7gQ #artificialintelligenceforbusiness #artificialintelligence
Certificate of Completion
linkedin.com
-
Can Artificial Intelligence Understand the Incomprehensible?

As a scientist and researcher, I am increasingly fascinated and concerned by the growing complexity of artificial intelligence (AI) systems. Modern AI models, particularly those based on neural networks, are becoming true "black boxes," incomprehensible even to their creators. David Bau, a computer scientist at Northeastern University, said, "Here’s what really terrifies me about the current breed of AI: there is no such understanding, even among the people building it." This opacity is particularly unsettling when AI is used in crucial areas like medical diagnosis or judicial decisions.

Explainable AI (XAI) is an emerging field that seeks to unravel these mysterious processes. Techniques such as highlighting the parts of an image that lead an algorithm to classify it, or building simplified decision trees, are steps in the right direction. These efforts are fundamental not only to improve user trust and AI safety but also to meet the demands of regulators who require transparent explanations for the safe use of these technologies.

However, we are still far from fully understanding the internal complexities of advanced AI. I firmly believe that it is necessary to continue developing explainability tools and to hold companies that produce these technologies accountable. It is not enough to build powerful systems; they must also be transparent and understandable.

What do you think? How can we balance innovation with the need for transparency and safety? Share your thoughts in the comments!

#ArtificialIntelligence #Explainability #Innovation #Safety #Research
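As a concrete example of the "simplified decision trees" technique mentioned above, here is a minimal global-surrogate sketch using scikit-learn. The Iris dataset and the random-forest "black box" are illustrative assumptions, not any particular production system; the point is that a shallow tree trained to imitate the black box's predictions gives a structure a human can read end to end.

```python
# A minimal sketch of a global surrogate: approximate a black-box model
# with a shallow, readable decision tree. Dataset and models are illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# Black-box model whose behaviour we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Surrogate: a shallow tree trained to imitate the black box's *predictions*,
# trading some fidelity for a structure a human can inspect directly.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("surrogate fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The fidelity score shows how faithfully the readable surrogate mimics the black box; in real deployments one would check it on held-out data before trusting the explanation.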
-
Ai4 - Artificial Intelligence Conferences

Geoffrey Hinton: If you have many copies of the same neural network, you can get a thousand times more knowledge than from one copy of it. As soon as AI realizes that control is everything, it will take control over the data centers, and evolution will be left in the dust. AI can share knowledge and compute better than humans. If we do not take measures, it is a very real and serious threat.

#AI #LLM #NeuralNetworks
-
Google DeepMind's team has released a revolutionary tool that lets us peek inside AI models, advancing the emerging field of mechanistic interpretability, which seeks to reverse-engineer neural networks to understand their inner workings. By using techniques like sparse autoencoders—unsupervised tools that discover data features on their own—researchers can now map how concepts evolve within AI models, layer by layer, leading to interesting discoveries about how models break down human concepts.

About Gemma Scope:
- Uses sparse autoencoders to break down neural network components
- Provides insight into the model's decision-making process
- Helps researchers understand how models generate outputs

What does this mean for AI development?
- Improved model explainability
- Enhanced model control and reliability
- Identification of potential biases and errors
- Development of more efficient and effective models

The future of AI is transparent and accountable! Let's discuss the implications of mechanistic interpretability for AI safety and ethics. Share your thoughts below.

#AI #MechanisticInterpretability #GemmaScope #GoogleDeepMind #ArtificialIntelligence #MachineLearning #Innovation #Transparency #Accountability
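For readers who want a feel for what a sparse autoencoder does, here is a minimal sketch. The synthetic "activations," dimensions, and training loop are illustrative assumptions, not Gemma Scope's actual configuration: the idea is simply to learn an overcomplete dictionary with an L1 penalty so each activation is reconstructed from a handful of features.

```python
# A minimal sparse-autoencoder sketch under toy assumptions: random vectors
# stand in for cached model activations, and an L1 penalty encourages each
# vector to be explained by only a few dictionary features.
import torch
import torch.nn as nn

d_model, d_dict = 64, 256                    # dictionary is wider than the model stream
activations = torch.randn(4096, d_model)     # stand-in for cached activations

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, x):
        features = torch.relu(self.encoder(x))   # sparse, non-negative feature activations
        return self.decoder(features), features

sae = SparseAutoencoder(d_model, d_dict)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_coeff = 1e-3

for step in range(200):
    recon, features = sae(activations)
    loss = ((recon - activations) ** 2).mean() + l1_coeff * features.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Each row of `features` says which dictionary entries "fire" for an activation;
# in interpretability work those entries are the candidate human-readable concepts.
with torch.no_grad():
    _, features = sae(activations)
    print("mean active features per example:", (features > 0).float().sum(dim=1).mean().item())
```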
-
The new method, known as Linear-Complexity Multiplication (L-Mul), optimizes the mathematical operations that run AI. https://lnkd.in/d8Ss7_Pp
L-Mul algorithm breakthrough slashes AI energy consumption by 95%
interestingengineering.com
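The linked article is the authoritative source for how L-Mul itself works. As a rough feel for this family of tricks, here is a sketch of Mitchell's classic logarithmic approximation, which likewise replaces floating-point multiplication with integer addition on bit patterns; it is explicitly not the L-Mul algorithm, and the `approx_mul` helper is purely illustrative.

```python
# A minimal sketch of Mitchell's logarithmic approximation (NOT L-Mul):
# because the IEEE-754 bit pattern of a positive float is roughly proportional
# to its log2 (plus a constant), adding bit patterns approximates multiplication.
import numpy as np

BIAS = 0x3F800000  # bit pattern of 1.0f (exponent bias shifted into place)

def approx_mul(x: float, y: float) -> float:
    """Approximate x*y for positive floats by adding their raw bit patterns."""
    bx = np.array(x, dtype=np.float32).view(np.uint32).astype(np.int64)
    by = np.array(y, dtype=np.float32).view(np.uint32).astype(np.int64)
    return float(np.array(bx + by - BIAS, dtype=np.uint32).view(np.float32))

for a, b in [(2.0, 2.0), (3.0, 5.0), (0.7, 1.9)]:
    print(f"{a} * {b} ≈ {approx_mul(a, b)} (exact: {a * b})")
```

The approximation is exact when both mantissas are 1.0 and is off by at most a few percent otherwise, which hints at why addition-based schemes can preserve model accuracy while cutting energy use.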