📢 ⚖️ 🏛️ Italian Privacy Agency Concludes OpenAI Investigation, Issues 15m EUR Fine, Imposes Public Disclosure Obligations ❗ (Dec. 20, 2024)
From the Italian Data Protection Authority:
➡️ OpenAI used personal data to train ChatGPT "without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users".
➡️ OpenAI also didn't provide an "adequate age verification system" to prevent users under 13 years old from being exposed to inappropriate AI-generated content, the investigation found.
➡️ Last year, the Italian Data Protection Authority banned the use of ChatGPT in Italy over breaches of EU privacy rules. The service was reactivated after OpenAI established the right of users to refuse consent for the use of personal data to train the algorithms.
Around the world, many other investigations of OpenAI are underway. In the United States, the Center for AI and Digital Policy filed a detailed complaint with the Federal Trade Commission in March 2023, urging the US consumer protection agency to open an investigation of OpenAI and to establish guardrails for AI services. In July 2023, The New York Times and The Wall Street Journal reported that the FTC had opened the investigation CAIDP requested. But a year later, there was still no outcome from the US agency. So the Center for AI and Digital Policy issued the report "ChatGPT and the Federal Trade Commission: Still No Guardrails" (CAIDP, July 2024). CAIDP is still urging US policymakers to take immediate action to safeguard US consumers and establish clear guidelines and enforcement mechanisms for AI companies.
The Center for AI and Digital Policy Europe presented an AI Policy Leader award to Guido Scorza, on behalf of the Italian Data Protection Authority, at the 2024 Computers, Privacy and Data Protection conference in Brussels.
Christabel R. Marc Rotenberg https://lnkd.in/e6P7GxNs
Center for AI and Digital Policy
Public Policy Offices
Washington, DC · 63,397 followers
"Filter coffee. Not people."
About us
The Center for AI and Digital Policy aims to ensure that artificial intelligence and digital policies promote a better society, more fair, more just, and more accountable – a world where technology promotes broad social inclusion based on fundamental rights, democratic institutions, and the rule of law. As an independent non-profit corporation, the Center for AI and Digital Policy will bring together world leaders, innovators, advocates, and thinkers to promote established frameworks for AI policy – including the OECD AI Principles and the Universal Guidelines for AI – and to explore emerging challenges.
- Website: https://caidp.org
- Industry: Public Policy Offices
- Company size: 11-50 employees
- Headquarters: Washington, DC
- Type: Educational
- Founded: 2021
- Specialties: Public Policy, Artificial Intelligence, Privacy, and AI
Locations
- Primary: Washington, DC, US
Employees at Center for AI and Digital Policy
- Alice Liu – Responsible AI | Partnerships | Digital Development | Policy | Business
- Ren Bin Lee Dixon – AI policy, governance, and ethics
- Gry Hasselbalch – Author & Scholar | Cofounder DataEthics.eu | PhD | AI/Data Ethics & EU/Global Tech Policy & Diplomacy | Author of Human Power (2025), Data Pollution…
- Sedef Akinli Kocak, PhD, MBA – Top 100 Women Advancing AI | Women in AI North America Finalist | DMZ Women of the Year Nominee | Applied AI projects & Tech Translation with…
Updates
-
📢 UK Office Adopts CAIDP Recommendations on Generative AI
The Center for AI and Digital Policy provided extensive comments to the Information Commissioner's Office public consultation on generative AI. The UK consultation covered five key areas:
➡️ The lawful basis for web scraping to train generative AI models.
➡️ Purpose limitation in the generative AI lifecycle.
➡️ Accuracy of training data and model outputs.
➡️ Engineering individual rights into generative AI models.
➡️ Allocating controllership across the generative AI supply chain.
The ICO stated that legitimate interest is the only lawful basis for using web-scraped data for AI training, and that societal interests alone are not sufficient, in line with CAIDP's recommendations to the ICO. The Information Commissioner's Office also adopted CAIDP's recommendations that:
✅ Developers should not avoid the use of personal data to train generative AI models
✅ Transparency, safeguard mechanisms, and rigorous training data documentation are essential to uphold information rights
✅ Developers should demonstrate a sufficiently detailed and specific purpose when training generative AI
✅ AI actors should use accurate, reliable, and representative data across the lifecycle
✅ Organizations must implement mechanisms to fulfill information rights requests for training data and the model itself (if a model contains personal data)
✅ Data protection by design and by default are legal requirements that must be upheld for generative AI models
✅ The Data Minimization Principle is the most effective standard for upholding the individual rights of data subjects with generative AI
🙏 🙏 CAIDP applauds the ICO for undertaking a public consultation on generative AI and for incorporating recommendations provided by the Center for AI and Digital Policy.
CAIDP looks forward to the ICO's adoption of other recommendations:
➡️ Independent external audits
➡️ Ex-ante human rights impact assessments
➡️ Redress mechanisms
#aigovernance #PublicVoice Grace S. Thomson Selin Ozbek Cittone Ren Bin Lee Dixon Caroline Friedman Levy https://lnkd.in/eYJuEpbX
-
📢 House Bipartisan Task Force on Artificial Intelligence Delivers Report
➡️ Objective of the report: To offer the U.S. government guiding principles, recommendations, and policy proposals for regulating and advancing AI innovation.
🔎 Key Findings
💡 The federal government should utilize core principles and avoid conflicting with existing laws.
💡 The federal government should be wary of algorithm-informed decision-making.
💡 The federal government should provide notification of AI’s role in governmental functions.
💡 Agencies should pay attention to the foundations of AI systems.
💡 Roles and associated AI knowledge and skills are unclear and highly varied across the federal workforce.
💡 Skills-based hiring is critical for filling the demand for AI talent in the federal workforce.
❗ Recommendations
✅ Take an information and systems-level approach to the use of AI in the federal government.
✅ Support flexible governance.
✅ Reduce administrative burden and bureaucracy using AI.
✅ Require that agencies provide notification of AI’s role in governmental functions.
✅ Facilitate and adopt AI standards for federal government use.
✅ Support NIST in developing guidelines for federal AI systems.
✅ Improve cybersecurity of federal systems, including federal AI systems.
✅ Encourage data governance strategies that support AI development.
✅ Congress and the government must understand the federal government’s AI workforce needs.
✅ Support different pathways into federal service for AI talent.
h/t Ren Bin Lee Dixon Center for AI and Digital Policy #PolicyTeam
-
📢 ✍ 📜 CAIDP Urges Türkiye to Endorse International AI Treaty 🇹🇷
➡️ "The Center for AI and Digital Policy, a global network of AI experts and human rights advocates, urges Türkiye to sign and ratify the Council of Europe Framework Convention on Artificial Intelligence. The aim of the AI Treaty is to safeguard human rights, democracy, and the rule of law. The AI Treaty is open to all nations. More than 37 nations have already signed the AI Treaty."
➡️ "Türkiye aims to govern AI with ethical principles. To solidify its position as a responsible AI leader, Türkiye should sign and ratify the Council of Europe Framework Convention on AI and Human Rights. This endorsement would affirm Türkiye's dedication to ethical AI governance, prevent policy fragmentation, and ensure that Turkish priorities are reflected in international AI development." 🇹🇷
📜 The AI Treaty makes clear that activities within the lifecycle of AI systems must comply with these fundamental principles:
► Human dignity and individual autonomy
► Equality and non-discrimination
► Respect for privacy and personal data protection
► Transparency and oversight
► Accountability and responsibility
► Reliability
► Safe innovation
📜 The AI Treaty sets out foundational safeguards for the operation of AI systems. These requirements include:
► Public documentation regarding the use of AI systems
► Contestability mechanisms to review adverse decisions
► Redress mechanisms and procedural guarantees
► Notice that one is interacting with an AI system and not with a human being.
📜 Parties to the AI Treaty should implement relevant impact assessments concerning the actual and potential impacts of AI systems on human rights, democracy, and the rule of law throughout the AI lifecycle.
➡️ "By endorsing the Framework Convention on Artificial Intelligence and Human Rights, Türkiye will become a leader in the development of ethical AI. This endorsement underscores the country's commitment to human rights and will promote international policies that reflect regional interests. Endorsing the treaty will facilitate collaboration among Middle Eastern and Eurasian nations on AI governance." 🇹🇷
Merve Hickok Marc Rotenberg Sharvari Dhote Amir Noy Artem Kobrin 🇺🇦 International Bar Association Center for AI and Digital Policy #Türkiye #aigovernance Council of Europe
-
📢 CAIDP Opens Nominations for Global AI Policy Leaders 🏅 🏅 🏅 🏅
Each year, the Center for AI and Digital Policy presents awards to Global AI Policy Leaders in Academia, Business, Civil Society, and Government.
➡️ Individuals are recognized who have made outstanding contributions to the development and implementation of AI policies that advance fundamental rights, democratic values, and the rule of law.
➡️ Following the CAIDP mission statement, we will also consider their support for inclusion, fairness, and justice. And we will highlight their notable achievements.
💡 We seek your help to identify, recognize, and elevate these truly exceptional people.
❗ 📅 The deadline for nominations is January 20, 2025.
2024 CAIDP AI Policy Leader Awardees
🏅 Anu Bradford
🏅 Gabriela Ramos
🏅 AI Now Institute
🏅 Linda Bonyo
2023 CAIDP AI Policy Leader Awardees
🏅 Stuart Russell
🏅 Jan Kleijssen
🏅 Tawana Petty
🏅 Beena Ammanath
#AIgovernance #AIleaders Merve Hickok https://lnkd.in/e5QRDghn
-
📢 CAIDP to Host Conversation on Public Procurement of AI - Jan 2, 2025
2025's first CAIDP Conversations focuses on the critical importance of public procurement of AI. Christabel Randolph moderates a discussion between Gus Rossi (Omidyar Network) and Merve Hickok (CAIDP).
Gus Rossi recently authored the article "Public Procurement as AI Industrial Policy," arguing for a model of leveraging public procurement to drive sustainable practices in AI. Merve Hickok published the book "From Trustworthy AI Principles to Public Procurement Practices," in which she provides actionable recommendations for practitioners and policymakers on how to implement trustworthy AI principles in the procurement of AI systems and why such implementation is a necessity.
Join us!
➡️ Gus Rossi, Omidyar Network
➡️ Merve Hickok, Center for AI and Digital Policy
➡️ Christabel R., Center for AI and Digital Policy
📅 January 2, 2025
🕐 1:00 pm EST
🕖 7:00 pm CET
💻 Online - https://lnkd.in/ehw3vbcw
-
📢 "EDPB opinion on AI models: GDPR principles support responsible AI"
"The European Data Protection Board (EDPB) has adopted an opinion on the use of personal data for the development and deployment of AI models. This opinion looks at 1) when and how AI models can be considered anonymous, 2) whether and how legitimate interest can be used as a legal basis for developing or using AI models, and 3) what happens if an AI model is developed using personal data that was processed unlawfully.
➡️ Anonymity. The Opinion says that whether an AI model is anonymous should be assessed on a case-by-case basis. For a model to be anonymous, it should be very unlikely (1) to directly or indirectly identify individuals whose data was used to create the model, and (2) to extract such personal data from the model through queries.
➡️ Legitimate Interest. The Opinion provides general considerations that DPAs should take into account when they assess if legitimate interest is an appropriate legal basis for processing personal data for AI models. . . "The EDPB gives the examples of a conversational agent to assist users, and the use of AI to improve cybersecurity. These services can be beneficial for individuals and can rely on legitimate interest as a legal basis, but only if the processing is shown to be strictly necessary and the balancing of rights is respected.
➡️ "The Opinion also includes several criteria to help DPAs assess if individuals may reasonably expect certain uses of their personal data. These criteria include: whether or not the personal data was publicly available, the nature of the relationship between the individual and the controller, the nature of the service, the context in which the personal data was collected, the source from which the data was collected, the potential further uses of the model, and whether individuals are actually aware that their personal data is online.
➡️ "If the balancing test shows that the processing should not take place because of the negative impact on individuals, mitigating measures may limit this negative impact.
➡️ Unlawfully processed personal data. "This could have an impact on the lawfulness of its deployment, unless the model has been duly anonymized."
➡️ Several key AI deployments, such as AI decision-making (Art. 22), are excluded from the analysis (p. 12).
h/t Dr. Gabriela Zanfir-Fortuna Georgetown University Law Center Eleni Kyriakides
-
📢 CAIDP Provides Advice to US OMB about AI and Personal Data
The Center for AI and Digital Policy has provided detailed comments to the Office of Management and Budget regarding the use of personal data in commercial services deployed by federal agencies. CAIDP said the Request for Information is timely as more agencies are developing AI systems. A recent report from the Office of the Director of National Intelligence warned that commercial services reveal sensitive personal information that increases the risk of surveillance.
Highlighting the risks of data aggregation and profiling, re-identification, sensitive inferences, and bias and discrimination, CAIDP recommended that the OMB require the following in its guidance to federal agencies:
➡️ Privacy impact assessments at the time of procuring commercial services
➡️ Implement purpose limitations, retention periods, and data minimization requirements for all commercial services
➡️ Enhanced transparency of agency data practices in line with NIST’s AI Risk Management Framework
➡️ Maintain registries of commercial services containing personal data obtained by federal agencies
➡️ Require vendors to disclose data sources and maintain privacy risk mitigation measures, including the quality and purpose of data sets, anonymization, encryption, and audit mechanisms
➡️ Implement oversight and reporting mechanisms in accordance with the US Government Accountability Office 2022 report "Privacy: Dedicated Leadership Can Improve Programs and Address Challenges"
➡️ Train agency IT professionals on integrating privacy-enhancing technologies (PETs) into agency data practices
➡️ Implement a uniform mechanism for handling commercial data, reporting on PIAs and privacy risk mitigation, agency commercial data registries/inventories, and oversight and reporting
#RMF #aigovernance Merve Hickok Marc Rotenberg Christabel R. Rupali Lekhi Sophie Nantanda
-
CAIDP Reunions in Montenegro and Riyadh 🥳 🎉
Center for AI and Digital Policy Advanced and Policy leads Snežana Nikčević and Paola Gálvez Callirgos 🌐 represented CAIDP at the Tech4Rights Summit in Montenegro, and Policy Group team member Heramb Podar represented CAIDP at the Internet Governance Forum in Riyadh.
CAIDPers spoke about the work of CAIDP, including the need for clear red lines, our advocacy urging states such as South Africa 🇿🇦, Brazil 🇧🇷, and the USA 🇺🇸 to ratify the CoE treaty, and the AIDV report, and highlighted UGAI principles such as transparency, impact assessments, and the termination obligation. Center for AI and Digital Policy Europe