STUCHLIKOVA & PARTNERS, Law Firm’s Post
⚠️ New Regulation on Liability for Defective Software and AI 👉 The European Union has introduced new rules for digital technologies. In November 2024, the new Directive on Liability for Defective Products was published in the Official Journal of the EU, and it now treats software and artificial intelligence systems as products. ❓ When is software or AI considered defective? ❓ Who will be liable for defective software or AI? ❓ Is it possible to be exempt from liability? ☎️ If you have any questions about the new directive, contact us at [email protected].
More Relevant Posts
-
Put Service Agreement Reviews into Turbo-Charge with AI: Faster than Brewing Your Morning Coffee! 🚀 Introducing AI for Service Agreement Reviews with Screens and Node.Law!

We've engineered the AI Playbook to streamline your Service Agreement reviews. In minutes, it assesses your agreement against 26 risk standards for comprehensive risk insight and analysis of crucial clauses, and it answers 8 key questions to clarify the agreement's risk and context. Just upload your document and click 'Screen'; it's that fast.

Our AI Playbook has undergone rigorous testing across numerous Service Agreements, achieving 100% average accuracy! Give it a try and share your thoughts; your feedback is crucial for further enhancements. The tool is ideal for individuals or companies receiving services, helping ensure your agreement offers the necessary protection.

This tool supports, not replaces, legal advice. Accuracy may vary; always verify AI outcomes and use it in conjunction with a lawyer. Feedback is welcome at [email protected].

A big shout-out to the team that worked on this project with me: Aman G. (Partner at Node.Law) and Parth Singh (Of Counsel at Node.Law). #AI #ServiceAgreement #ContractReview #nodelaw #Screens https://lnkd.in/ddMBDsgj
-
A very interesting use case of AI is for financial companies to estimate the likelihood that their policies and plans may violate regulations, so they can reduce the cost of program management and of developing new products in new markets. Droit, very impressive.
-
#ProductLiability #AI #EU — An update on the revised EU Product Liability Directive, which is significant in its own right and will also play an important role alongside the AI Act (AIA). #artificialintelligence #aia #aiact #regulation
*This is not a post on the AI Act* But yes, we're still talking about #artificialintelligence. Last week, on March 12, the European Parliament adopted its final version of the new directive on #liability for defective products. The new text revises the previous directive, which dates back to 1985, by introducing new rules for damage caused by technological products (including #software and #AIsystems).

In particular, the new directive provides that national courts should #presume the defectiveness of a product (or the causal link between the #damage and the defectiveness, or both) where, due to the technical or scientific complexity of the product, it would be "excessively difficult" for the claimant to prove the defectiveness or the causal link, or both. Recital 48 clarifies: "For example, in a claim concerning an AI system, the claimant should, for the court to decide that excessive difficulties exist, neither be required to explain the AI system’s specific characteristics nor how those characteristics make it harder to establish the causal link."

It seems that, while we were all busy wrapping our heads around the latest updates on the #AIAct, most players in the tech sector missed the massive implications this proposal will have for #innovation. Echoing Luca Bertuzzi's words (from an article published on Euractiv almost a year ago; link in the first comment): "Due to a glaring misconception or simple lack of capacity, the tech sector has overlooked, or largely underestimated, one of the EU’s legislative proposals that should define the liability regime for the decades to come and might open the door to mass claims."

Now that the approved text only needs to be adopted by the Council before being published in the Official Journal, it will be exciting (mostly in lawyer terms, I guess…) to see which measures tech companies will be willing to implement to safeguard themselves against potentially disruptive damage claims. Maybe #explainableAI or new log registration techniques? Happy to discuss if you have any views on this!
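The closing question about #explainableAI and new log registration techniques can be made a little more concrete. Below is a purely illustrative Python sketch, not anything prescribed by the directive, of the kind of append-only decision log an AI provider might keep so that, if defectiveness is later disputed, there is a record of which model version produced which output and when; all names here (DecisionRecord, log_decision, the model id) are hypothetical.

```python
# Illustrative sketch only: one way a provider might record AI decisions for later traceability.
# Nothing here is mandated by the Product Liability Directive; names and fields are hypothetical.
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    model_id: str      # which model/version produced the output
    input_hash: str    # fingerprint of the input, so raw (possibly personal) data need not be stored
    output: str        # the decision or prediction returned to the user
    timestamp: float   # when the decision was made (Unix time)

def log_decision(model_id: str, raw_input: str, output: str,
                 logfile: str = "decisions.jsonl") -> None:
    """Append one decision record as a JSON line to an append-only log file."""
    record = DecisionRecord(
        model_id=model_id,
        input_hash=hashlib.sha256(raw_input.encode("utf-8")).hexdigest(),
        output=output,
        timestamp=time.time(),
    )
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical usage: log a single automated decision
log_decision("credit-scorer-v2.3", raw_input="applicant features", output="declined")
```

Hashing the input keeps a verifiable fingerprint without retaining the underlying data, which could matter if such a log ever has to be disclosed in proceedings.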
-
Whenever people stress that there was not enough about human rights or sustainability in the AI Act, I point out that we were able to strip roughly a third of the HLEG's whitepaper, the part concerning liability, out of the Act and just modify a few lines of this Product Liability Directive instead. That's the way legislation should be: clear and to the point. If you modularise it correctly, you get more power with less confusion. I hope we just flush the parallel AI liability legislation in the works; I don't think we really need it. #AIAct #AILiability #productLaw #digitalGovernance cc Matthias Spielkamp Kai Wegrich Meeri Haataja
-
I welcome Jag Lamba from Certa.ai to explore the integration of AI into compliance frameworks. The discussion demonstrates how AI can improve current teams and processes when combined with adaptable and scalable software. However, implementing guardrails is essential, especially in areas like third-party compliance, to prevent misuse of this new technology. Learn how AI can improve efficiency in compliance programs while maintaining necessary control measures. Don’t miss this valuable discussion on the future of compliance! https://bit.ly/4dByeCN #Compliance #AI #FCPA
-
The new EU Product Liability Directive, which comes into force this year, has arguably been lying in the shadow of the EU AI Act, which has dominated the headlines in recent months. Innovators in the consumer products sphere should now be turning their attention to the new #PLD, which is expected to generate increased exposure to cross-border product liability claims, including collective actions, in the years to come. Once it is in force, Member States will have two years to transpose the PLD into their national laws. #PLD #productliability #kennedyslaw
-
Artificial intelligence (AI) is currently one of the most important topics for law firms. We have been using #LegalTech for many years and focus on integrating technical innovations into our daily work – for example, in the creation and optimization of our consulting products or in project and capacity planning. We started with the automated analysis of rental contracts many years ago. Today, thanks to generative #AI, we have successfully developed and established our own assistant: Chat@GT. At its core, Chat@GT is an #AI assistant developed specifically by and for #GreenbergTraurig. It is designed to streamline our daily operations in numerous ways – from taking over standard tasks such as text creation and improvement, translations and transcriptions, to knowledge management and chatbot functions – much of it faster and in full compliance with data protection requirements. This project has now landed us in the final round of the #PMNManagementAwards2024. We are looking forward to the #Awards ceremony in September and are eager to discover what other technical advancements will revolutionize the legal market. https://lnkd.in/eQ9GnRhB #Innovation #PMNAwards2024 #ArtificialIntelligence
-
Many in Legal have been quick to anoint generative AI as the ultimate game changer when it comes to delivering legal services faster and cheaper, and finally automating painstaking workflows for increased efficiency. However, not all that glitters is gold, and there has been an equal amount of GenAI adoption hesitation, largely centered around security, ethics, solution complexity and availability, and overall costs. During the recent NetDocuments Inspire Conference, the cloud document management platform pioneer introduced the intelligent DMS, focused on leveraging document management for everything, including the practical, secure use of AI based on ready access to a firm’s entire corpus of content. Per NetDocuments CEO Josh Baxter, “it’s not about bringing content to your AI, but about bringing AI to your content” – an approach that could well address all the aforementioned concerns and displace the expensive one-off/point solutions that are starting to clog firms’ tech stacks. As Above the Law stated in a related article, the era of lawyers taking material and feeding it into external AI-driven applications must end. It sounds like the narrative around legal industry AI has moved toward improving the user experience and embedding AI in ways that deliver meaningful, practical results … with the advent of intelligent document management. Inspire Conference: https://lnkd.in/g4wfqf_M Above the Law Article: https://lnkd.in/gwdczTqR #GenAI #LegalTechnology #Innovation #InHouse #Efficiency #ALSP
-
Today the European Union passed the first-ever major law regulating the impacts of AI technology. This law, and many more like it, will be needed to make sure that AI has a positive impact on society. At On3 we have already adopted our own AI bylaw: “AI will be used to enhance human performance and not replace employees.” Your company should consider a similar policy as you expand your use of this game-changing technology. https://lnkd.in/gHARbsGY
World’s first major law for artificial intelligence gets final EU green light (cnbc.com)
-
Thomas Fox welcomes back Jag Lamba from Certa.ai to explore the integration of AI into compliance frameworks. The discussion demonstrates how AI can improve current teams and processes when combined with adaptable and scalable software. However, implementing guardrails is essential, especially in areas like third-party compliance, to prevent misuse of this new technology. Learn how AI can improve efficiency in compliance programs while maintaining necessary control measures. Don’t miss this valuable discussion on the future of compliance! https://bit.ly/4dByeCN #Compliance #AI #FCPA