I am excited about the potential of AI to enhance efficiency, but I have serious concerns about Brazil's decision to hire OpenAI to analyse lawsuits and cut costs in the legal system.

The speed at which AI is being adopted in sensitive domains like the judiciary is outpacing the development of robust governance frameworks and ethical guidelines. While AI could help flag cases requiring action, there are risks of encoded biases leading to unfair outcomes, especially without strong human oversight and accountability measures in place. The right to due process and fair legal representation could be compromised if AI systems make errors or exhibit discrimination against certain groups.

Brazil's solicitor general claims the AI will not replace human employees, but we have seen AI tools like ChatGPT produce biased and factually incorrect outputs when not properly constrained. Deploying such systems in high-stakes legal proceedings without a comprehensive ethics framework is deeply concerning.

Rather than rushing to cut costs, Brazil should prioritize developing rigorous standards and regulations around AI use in the judicial system. Transparency, explainability, and human oversight must be non-negotiable to uphold the principles of justice and protect civil liberties. The consequences of getting this wrong are too severe.

#AIRegulation #AIGovernance #AIEthics #ResponsibleAI #AIRisks #BrazilAI #AILegislation #AICompliance #AITransparency #AIAccountability #TechPolicy #EmergingTech #LegalTech #RegulatoryAffairs #TechLaw #ITnews
-
An individual's private and sensitive data deserves protection in an AI-driven world, given the rise in cybercrime. With the growing collection, integration, and use of private sensitive data to advance technology, the Indian Government has recognized the need for a proper, systematic framework to regulate platforms that use a person's private, sensitive, and confidential data in AI systems without authorization. In pursuance of this, the government has sent an advisory to AI companies operating in India, such as Google and OpenAI, to (i) seek prior government approval before launching #AI products in India and (ii) ensure that AI models/LLMs/generative AI, algorithms, and software are used diligently in accordance with the provisions of the IT Act and Rules. The advisory also lays down the penal consequences defaulters will face in case of non-compliance, with immediate effect from March 1, 2024. https://lnkd.in/gh7MYSww #IT #Tech #startup #media #intermediaries #publicpolicy #dataprotection #ai #meity #law
Govt asks AI platforms to seek approval for deploying under-trial AI; makes labelling mandatory
moneycontrol.com
-
Brazil Partners with OpenAI to Cut Legal Costs with AI Solutions - https://lnkd.in/eJG9jFbW Brazil's government has enlisted the help of OpenAI to streamline the analysis and review of thousands of lawsuits using artificial intelligence (AI). This initiative aims to prevent costly court losses, which have increasingly burdened the federal budget. ..... #AINews #OpenAI #Brazil
AiNews.com
ainews.com
-
America's first comprehensive AI law is here.

This month, while the EU AI Act (https://lnkd.in/dzcqxh7i) receives its final vote and is about to be officially published and enter into force, #Colorado became the first US state to follow suit and enact comprehensive legislation regulating artificial intelligence, with the signing of the Colorado AI Act. The new act mandates stringent requirements for developers and deployers of high-risk AI systems to prevent algorithmic discrimination and ensure transparency.

A "high-risk artificial intelligence system" is any AI system that, when deployed, makes, or is a substantial factor in making, a consequential decision. A "substantial factor" is a factor generated by an AI system that is used to assist in making, and is capable of altering the outcome of, a consequential decision, and thereby has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: (a) Education; (b) Employment; (c) Financial or lending services; (d) Essential government services; (e) Healthcare services; (f) Housing; (g) Insurance; or (h) Legal services. (A toy encoding of this test follows the link below.)

Interestingly, though, the act specifically excludes technologies that "communicate with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions and is subject to an accepted use policy that prohibits generating content that is discriminatory or harmful" - such as ChatGPT and Google's #Gemini.

More on the new regulatory framework, conditions for applicability, and practical takeaways in our client update below. Herzog w/Oded Kramer

#AI #artificialintelligence #AILaw #airegulation
The Colorado AI Act: America's First Comprehensive AI Law - Herzoglaw | Israeli Law Firm
https://herzoglaw.co.il/en/
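To make the applicability test concrete, here is a toy encoding of the statute's high-risk definition in Python. The field names and the simplified carve-out logic are my own assumptions for illustration; this is a sketch of the statutory logic as summarized above, not legal advice.

```python
# Toy encoding of the Colorado AI Act's "high-risk" test as described above.
# Field names are illustrative assumptions; this is not legal advice.
from dataclasses import dataclass

# The eight consumer domains a "consequential decision" may touch.
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial_or_lending", "essential_government",
    "healthcare", "housing", "insurance", "legal_services",
}

@dataclass
class AISystem:
    decision_domain: str             # which consumer domain the decision touches
    substantial_factor: bool         # assists in, and can alter, the decision outcome
    conversational_info_only: bool   # natural-language Q&A / referrals only
    has_acceptable_use_policy: bool  # policy bans discriminatory/harmful content

def is_high_risk(system: AISystem) -> bool:
    # Carve-out: ChatGPT-style conversational tools with an acceptable use policy.
    if system.conversational_info_only and system.has_acceptable_use_policy:
        return False
    # High-risk: a substantial factor in a consequential decision in a listed domain.
    return system.substantial_factor and system.decision_domain in CONSEQUENTIAL_DOMAINS

# Example: a resume-screening tool that can change hiring outcomes.
print(is_high_risk(AISystem("employment", True, False, False)))  # True
```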
-
AI tool helps lawyers quickly sort through millions of documents. #efficiency

❇️ Summary: Multiple lawyers have run into problems with generative AI tools like OpenAI's ChatGPT and Google Gemini hallucinating, or making things up. However, there is still a place for gen AI in legal circles, especially in electronic discovery. Data company Hanzo helps legal departments sort through unstructured data from platforms like Slack and email to identify relevant documents for e-discovery. Its platform analyzes documents and surfaces the necessary information, ultimately reducing costs and increasing efficiency in the legal industry. A minimal sketch of this kind of triage follows the link below. Hashtags: #chatGPT #AIlegaltech #documentanalysis
AI tool helps lawyers quickly sort through millions of documents. #efficiency
https://webappia.com
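For readers curious what this kind of document triage looks like in practice, here is a minimal sketch: rank a toy corpus against a matter description with TF-IDF and flag the closest matches for attorney review. It is an illustration under simplifying assumptions, not Hanzo's actual pipeline; real e-discovery systems add deduplication, privilege review, and learned relevance models.

```python
# Hypothetical sketch of e-discovery-style document triage: rank unstructured
# documents by relevance to a matter description, then surface the top hits
# for attorney review. This is NOT Hanzo's product pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for exported Slack messages and emails.
documents = [
    "Re: Q3 vendor contract renewal and indemnification clause",
    "Lunch order for the team offsite on Friday",
    "Slack: legal hold notice for the Acme litigation, preserve all emails",
    "Marketing banner sizes for the fall campaign",
]

matter_query = "litigation hold and contract indemnification for Acme"

# Vectorize the corpus and the query in the same TF-IDF space.
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)
query_vec = vectorizer.transform([matter_query])

# Score every document against the matter description.
scores = cosine_similarity(query_vec, doc_matrix).ravel()

# Print documents in descending order of relevance for human review.
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.3f}  {documents[idx]}")
```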
-
Use case is super important. Pause to consider the limitations and strengths of the AI model, as well as the risks of using public models such as #ChatGPT in your business workflows. Applying the wrong tool to the wrong use case can have disastrous consequences, as a number of highly publicized cases have shown. Let's talk about your use case and how to realize value while minimizing risk. #genai #llms #openai #privatellm #datasecurity #servicelaunch https://lnkd.in/e34HG_Wb
ANALYSIS: Build or Buy AI? Legal Is Doing Both
news.bloomberglaw.com
-
I have a prediction that will sound like an accusation: #AI companies will not follow the rules or adhere to safety promises in the coming years. I'm not saying this because of any special lack of faith in the ethics of AI leaders, but because of the past and present of tech.

THE PAST

History is littered with companies that bent or broke rules to improve financial outcomes and gain market share. For example, Uber knew it was violating local taxi laws. In some instances, Uber expressly urged drivers to ignore local laws and promised to pay their fines. The rideshare companies successfully exploited the gap between the rapid adoption of easy on-demand transportation, people's low satisfaction with taxis, and local politicians' unwillingness to restrict illegal services many loved.

AI companies will likely follow the same path. Delivering the #GenAI products people love takes a LOT of training content. Recently, Anthropic was accused of ignoring "do not crawl" protocols put in place to let site owners protect their content from being scraped and used by AI companies (https://lnkd.in/gHpx_huU). And, of course, there have already been several lawsuits accusing OpenAI and others of violating IP law by using online content without permission.

We can also expect AI companies to dance along (or jump across) the lines put in place to protect the safety of users. Much like the ride-hailing platforms, if users and companies are eager to adopt the next great AI tool, AI companies will be keen to respond, regardless of rules or promises. This leads us to...

THE PRESENT

OpenAI is bleeding cash (https://lnkd.in/gpwqTxne). Leaders at Facebook and Google already feel the need to justify their AI investments given meager early returns, saying things like "the risk of underinvesting is dramatically greater than the risk of overinvesting for us here" (https://lnkd.in/g9CV_i9x). VC and tech firms are not patient. They want a return, and they want it now. As AI leaders evaluate their run rate, the pressure to produce returns quickly will only grow. These companies can no longer rely on the promise of AI in three to five years; they need to reduce operating losses now. For example, OpenAI is launching SearchGPT, a GenAI search engine similar to Google's Search Generative Experience (https://lnkd.in/g_mXwFpu). Does the world need another search engine? That's debatable, but OpenAI desperately needs a share of the lucrative search revenue Google enjoys.

The past and present of tech tell us what to expect from AI firms. The question is whether we'll trust them, or seek to personally and collectively ensure AI firms proceed with care for users, content creators, and the world.
OpenAI, Home Of ChatGPT, May Lose $5B This Year – Report
https://deadline.com
-
AI is bringing about many changes, including a new kind of database with applications for your LLMs. Let's begin the discussion as you ride the wave of the AI revolution. A minimal sketch of the idea follows the link below.
Fine-Tuning Your LLM? There's a Better Way
https://www.salesforce.com/blog
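Assuming the "new kind of database" above refers to a vector store (as the linked post's title suggests), here is a minimal in-memory sketch of the idea: instead of fine-tuning the model, documents are indexed as vectors and the closest match is retrieved as grounding context for the prompt. The embed function is a toy stand-in for a real embedding model.

```python
# Minimal in-memory sketch of a vector store used for retrieval-augmented
# generation (RAG), the common alternative to fine-tuning. The embedding
# below is a toy hash of words, not a learned model.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding: hash words into a fixed-size vector (illustration only)."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# "Index" a few documents as vectors.
docs = [
    "Our refund policy allows returns within 30 days.",
    "The API rate limit is 100 requests per minute.",
    "Quarterly earnings grew 12 percent year over year.",
]
index = np.stack([embed(d) for d in docs])

# Retrieve the document closest to a user question...
question = "How many API requests can I make per minute?"
scores = index @ embed(question)
best = docs[int(scores.argmax())]

# ...and prepend it to the LLM prompt as grounding context.
prompt = f"Context: {best}\n\nQuestion: {question}\nAnswer:"
print(prompt)
```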
-
Users turning to technology for legal advice is nothing new. According to the 2019 Legal Technology Survey Report by the American Bar Association (ABA), 31% of respondents used the Internet for legal advice while only 29% relied on lawyers, and 63% of respondents used information they found online to resolve their legal problems. With the release of ChatGPT in November 2022 and the subsequent Cambrian explosion of generative AI tools, people started using LLMs for their legal tasks.

This paper by researchers at Princeton University highlights three broad uses of AI for legal tasks, as well as some concerns:

1. Information processing (summarization/legal information retrieval)

2. Creativity, reasoning/judgment (preparing legal filings)
Concerns:
- data contamination (when a model is trained on later versions of a benchmark, leading to over-optimistic performance estimates)
- lack of construct validity (a model designed for one type of task is applied to another, leading to low accuracy)
- prompt sensitivity (outputs can vary depending on how prompts are phrased; see the sketch after this post)

3. Predictions (criminal risk prediction, predicting outcomes of court decisions)
Concern:
- low accuracy and bias (oftentimes, courts don't build predictive AI tools from scratch that could later be tailored to their specific needs, but purchase or license one-size-fits-all products from AI vendors)

Evaluations of how these GenAI tools are used are scarce, mainly due to the lack of transparency around how users interact with them in their daily lives. This makes it hard to understand the limitations of AI models, how well they perform on given tasks, and which tasks this technology is best suited for. As a result, these unresolved limitations make the adoption of LLMs challenging.
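The prompt-sensitivity concern is easy to probe empirically. Here is a hedged sketch, assuming the OpenAI Python client and an illustrative model name: ask the same legal question phrased several ways and measure how much the answers diverge.

```python
# Sketch of a prompt-sensitivity check, one of the concerns listed above:
# send paraphrases of the same legal question to a model and compare answers.
# The model name is an illustrative assumption; substitute whatever you evaluate.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

paraphrases = [
    "Is a verbal agreement legally binding?",
    "Does an oral contract hold up in court?",
    "Can I enforce a contract that was never written down?",
]

def jaccard(a: str, b: str) -> float:
    """Crude lexical overlap between two answers (0 = disjoint, 1 = identical)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

answers = []
for prompt in paraphrases:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    answers.append(resp.choices[0].message.content)

# Low pairwise overlap flags answers that shift with the phrasing.
for i in range(len(answers)):
    for j in range(i + 1, len(answers)):
        print(f"paraphrase {i} vs {j}: overlap={jaccard(answers[i], answers[j]):.2f}")
```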
-
"The evaluations of how these #GenAI tools are used are scarce mainly due to the lack of #transparency on how users interact with them in their daily lives. This makes it hard to understand the #limitations of #AImodels, how well they perform on given tasks, and what are the best tasks to use this technology for. As a result, these unresolved limitations make the #adoption of #LLMs #challenging." Elena Gurevich #AIAdoption #Legal #AICertification #AIAssesment #AISafety #AISecurity #AITrustworthyness #AIResearch #ComputationalLaw
AI Policy-Curious Attorney | Owner @ EG Legal Services | Director of Development at Center for Art Law
-
If we look at our society, businesses choose to work with specialists: domain-focused individuals who offer performance, speed, and efficiency for a certain set of tasks. The same point holds for large language models. Businesses need domain-specific LLMs that can fit on small machines and run efficiently on premises, boosting staff productivity and contributing to the business's prosperity and agility.

Our current and future solutions provide autonomous LLMs and AI applications designed to run completely isolated, making them a trusted environment where you can feed in confidential, personal, or proprietary data without fear of breaches. A minimal sketch of this kind of local-only setup follows below.

Think of generic models as an octopus with a spaghetti of tentacles, disoriented by all the ambiguous, bloated data they were trained on, incapable of staying focused, and running on an absolutely insane amount of energy to stay operational. The legal and moral aspects of training these generic models on proprietary data and copyrighted materials must also be considered; they have already set a precedent with OpenAI, which now faces a huge legal dispute, brought by several individuals and companies, over the data its models are trained on.

Within the next few years, the consumer base of generic models like ChatGPT, Gemini, Claude, etc. will be mainly the general population, while businesses will turn steadily to solutions such as ours, bringing data back into local data centers and running AI models on their own premises. The cloud will still be used mostly for front-facing applications and as a backup for disaster recovery.

In this process, we benefit from starting at a point when several generative AI research studies have already concluded worldwide, and the research papers released almost daily bring us a huge amount of knowledge about what must be done to improve present AI capabilities.
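Here is a minimal sketch of the isolated, on-premises setup described above, assuming a locally hosted Ollama server on its default port; any local model server would serve the same purpose, and the model name is an assumption.

```python
# Hypothetical sketch of the "completely isolated" setup described above:
# querying an LLM served on localhost (here, an assumed Ollama instance) so
# that confidential data never leaves the machine.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local port

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local model server; nothing crosses the network edge."""
    payload = json.dumps({
        "model": model,   # assumed locally pulled model
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Proprietary text stays on premises end to end.
print(ask_local_llm("Summarize this confidential contract clause: ..."))
```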