Join us on December 2-3 at the National University of Singapore 🇸🇬 , where Concordia AI and Singapore’s AI Verify Foundation will be hosting three panels on AI safety at the International AI Cooperation and Governance Forum 2024! 🌟 Speakers include leadership of the Singaporean and UK AI Safety Institutes, top AI companies such as Zhipu.AI, representatives from the EU AI Office and General-Purpose AI Code of Practice, plus distinguished academics from Tsinghua University, the National University of Singapore, and more. 🔗 Event details: https://lnkd.in/gBsmnxk4 🔗 Registration: https://lnkd.in/gefva7Yg #AISafetyTesting #AISafetyInstitutes #ScienceofEvaluations #AIGovernance #InternationalCooperation
Concordia AI 安远AI
Technology, Information and Internet
Guiding the governance of emerging technologies for a long and flourishing future
About us
AI is likely the most transformative technology that has ever been invented. Controlling and steering increasingly advanced AI systems is a critical challenge for our time. Concordia AI aims to ensure that AI is developed and deployed in a way that is safe and aligned with global interests. We provide expert advice on AI safety and governance, support AI safety communities in China, and promote international cooperation on AI safety and governance.
- Website
-
https://concordia-ai.com/
External link for Concordia AI 安远AI
- Industry
- Technology, Information and Internet
- Company size
- 2-10 employees
- Type
- Privately Held
- Specialties
- Consulting, Artificial Intelligence, Technology, Strategy, AI Safety, AI Governance, Information Technology, Policy Analysis, and Conferences
Employees at Concordia AI 安远AI
Updates
-
The deadline for our International AI Governance position is November 27 -- don't forget to apply if you're interested in the position!
📢 Concordia AI is Hiring! We’re looking for talented professionals to join our teams in Beijing and Singapore across several roles. See our full job posting here (https://lnkd.in/gh5uuG6M). We’re hiring for: 🌐 International AI Governance Research Manager/Researcher (Singapore, 50% on-site) – Requires native English and proficient Mandarin. This role will publish research on Chinese and Southeast Asian approaches to AI safety, provide policy recommendations for international summits, engage with Singapore’s AI governance ecosystem, and plan AI governance events in Singapore. Deadline: 27 November 2024. Apply here (https://lnkd.in/gJrZjJSk). 👩🔬 Foundation Model Safety Research Engineer (Beijing; Singapore-based role possible) – Requires native Mandarin and proficient English. This role will develop and conduct safety evaluations and red teaming of large models to identify safety risks and propose mitigation strategies. Deadline: 24 November 2024. Apply here (https://lnkd.in/gNqU-P6C). See the Chinese listing here (https://lnkd.in/g87uUVvf). 📊 Operations Manager/Associate (Beijing) – Requires native Mandarin and proficient English. This role will manage Concordia AI’s finances, human resources, risk management, and overall office operations. Deadline: 24 November 2024. Apply here (https://lnkd.in/gNqU-P6C). See the Chinese listing here (https://lnkd.in/g87uUVvf). Upcoming Info Sessions: 7 Nov, 8–9 PM (China Time) – Mandarin session for Beijing-based roles (https://lnkd.in/gpFErWn4). 13 Nov, 9–10 PM (China Time) – English session for Singapore-based roles (subject to interest) (https://lnkd.in/gNvTZChH). Start date is March 2025. #hiring #AISafety #AI #China #ConcordiaAI
-
Reminder that Concordia AI's information session for our Singapore-based open positions will be held on November 13 at 9 PM China time. Please join the Zoom link here: https://lnkd.in/gufyBN9H For more information on the positions, see our full post here: https://lnkd.in/gw-a2bQh
Join our Cloud HD Video Meeting
us06web.zoom.us
-
📢 Concordia AI is Hiring! We’re looking for talented professionals to join our teams in Beijing and Singapore across several roles. See our full job posting here (https://lnkd.in/gh5uuG6M). We’re hiring for: 🌐 International AI Governance Research Manager/Researcher (Singapore, 50% on-site) – Requires native English and proficient Mandarin. This role will publish research on Chinese and Southeast Asian approaches to AI safety, provide policy recommendations for international summits, engage with Singapore’s AI governance ecosystem, and plan AI governance events in Singapore. Deadline: 27 November 2024. Apply here (https://lnkd.in/gJrZjJSk). 👩🔬 Foundation Model Safety Research Engineer (Beijing; Singapore-based role possible) – Requires native Mandarin and proficient English. This role will develop and conduct safety evaluations and red teaming of large models to identify safety risks and propose mitigation strategies. Deadline: 24 November 2024. Apply here (https://lnkd.in/gNqU-P6C). See the Chinese listing here (https://lnkd.in/g87uUVvf). 📊 Operations Manager/Associate (Beijing) – Requires native Mandarin and proficient English. This role will manage Concordia AI’s finances, human resources, risk management, and overall office operations. Deadline: 24 November 2024. Apply here (https://lnkd.in/gNqU-P6C). See the Chinese listing here (https://lnkd.in/g87uUVvf). Upcoming Info Sessions: 7 Nov, 8–9 PM (China Time) – Mandarin session for Beijing-based roles (https://lnkd.in/gpFErWn4). 13 Nov, 9–10 PM (China Time) – English session for Singapore-based roles (subject to interest) (https://lnkd.in/gNvTZChH). Start date is March 2025. #hiring #AISafety #AI #China #ConcordiaAI
-
🚀 Issue #17 of AI Safety in China! Key Takeaways: 🌐 China announced an AI capacity building project directed at Global South countries at the UN Summit of the Future. 🇨🇳 🇺🇸 The Chinese and US governments indicated that a second round of intergovernmental dialogue on AI is likely after the US national security advisor’s trip to China. 📘 A Chinese standards body issued China’s first AI Safety Governance Framework with substantial treatment of frontier AI risks. 🔎 Recent Chinese technical AI safety papers include work on “weak-to-strong deception,” benchmarking LLM risks in science, and assessing which layers at the parameter level are most important for AI safety. 🎤 A Chinese academician and former cybersecurity official spoke on the need for further technical AI safety research. 💡 Subscribe to AI Safety in China to get biweekly updates on ✅ China's positions on international AI governance ✅ China's governance and policy initiatives to mitigate AI risks ✅ Technical safety and alignment research in China ✅ Views on AI risks by Chinese experts. Read the full issue here: https://lnkd.in/gBJx-xpW #AISafety #AI #China #InternationalCollaboration #ConcordiaAI #Newsletter
AI Safety in China #17
aisafetychina.substack.com
-
🌟Concordia AI was honoured to participate in events surrounding the UN Summit of the Future, including signing the Manhattan Declaration alongside AI luminaries such as Yoshua Bengio and Alondra Nelson—check out more in our Substack! 🌟 https://lnkd.in/g8_Uf6kY
Concordia AI at UN Summit of the Future, Signing the Manhattan Declaration (Full Declaration Text Included)
aisafetychina.substack.com
-
📣 New report: China’s AI Safety Evaluations Ecosystem📣 💡 With growing interest around the world towards evaluating AI models for dangerous risks, Concordia AI is publishing the first database of Chinese AI safety evaluations and comprehensive analysis of this ecosystem to appear in English. Highlights: 🏛️ The Chinese government already requires pre-deployment testing and evaluation of certain AI systems for ideology, discrimination, commercial violations, violations of individual rights, and application in higher risk domains. There are signs that this could expand in the future to incorporate testing for frontier or catastrophic AI safety risks. 🧠 The risk areas that received the most testing by Chinese AI safety benchmarks are bias, privacy, robustness to adversarial and jailbreaking attacks, machine ethics, and misuse for cyberattacks. 💻 Chinese evaluations tested for all categories defined as frontier AI risks, with misuse for cyberattacks as the most tested frontier risk. 📏 Chinese AI safety evaluations primarily comprise static benchmarks, with a small number of open-source evaluation toolkits, agent evaluations, and domain red teaming efforts. Chinese institutions do not appear to have conducted human uplift evaluations. 🏫 Shanghai AI Lab, Tianjin University NLP Lab, and Microsoft Research Asia Societal AI team are the only research groups that have published two or more frontier AI safety evaluations in China. However, many other government-backed, academic, and private industry research groups have also published evaluations covering a broad spectrum of AI safety and social impacts concerns. Read our Substack post (https://lnkd.in/gGZh7JK7) and check out our database (https://lnkd.in/gbzQqb5G) to learn more! We welcome engagement and outreach with other organizations interested in fostering internationally interoperable AI safety evaluation practices and standards. #Evaluations #AISafety #AI #China #InternationalCollaboration #ConcordiaAI #Newsletter
China's AI Safety Evaluations Ecosystem
aisafetychina.substack.com
-
📣 Announcement: Concordia AI to join Singapore’s AI Verify Foundation 📣 Concordia AI is honoured to be joining Singapore’s AI Verify Foundation together with leading global technology companies such as AWS, Google, IBM, Microsoft, and Ant Group to participate in Singapore’s efforts to advance responsible AI 🎉 https://lnkd.in/eR9ZfYTW AI Verify is an AI governance testing framework 🔍 first developed by the Infocomm Media Development Authority (IMDA) of Singapore in 2022. The AI Verify Foundation, established in 2023, brings together researchers, companies, and policymakers to support the AI Verify framework. 🌐 The Foundation will help to foster an open-source community to contribute to AI testing frameworks, code base, standards and best practices to ensure the safety and trustworthiness of AI systems. 🛡️ As a member of the AI Verify Foundation, Concordia AI looks forward to: 💡 Leveraging our expertise in AI safety and governance to to advise relevant testing and evaluation projects. 🔧 Developing and using cutting-edge tools, such as the LLM testing platform Project Moonshot, with other members of the Foundation to promote cross-language and cross-cultural AI evaluations. 🤝 Promoting exchanges and cooperation to foster internationally recognized and interoperable AI safety standards and evaluation frameworks.
-
📣 The Third Plenum of the Communist Party of China included the goal of “instituting oversight systems to ensure the safety of AI.” This is the strongest sign so far that top leaders in China are concerned about AI safety. But, what does this goal actually entail? 📕 Concordia AI has translated authoritative official study materials co-edited by President Xi and other top leaders expounding in greater detail the Chinese leadership’s views on AI safety. Key points: 🎯 Motivations for creating AI safety oversight systems are explained in terms of responding to rapid AI development, promoting high-quality development, and participating in global governance. 🔭 AI safety oversight should involve “forward-looking prevention and constraint-based guidance,” which suggests an active and potentially precautionary approach. ⚖️ The text argues against putting development ahead of governance. Instead, it suggests that both should go hand in hand, progressing at the same time. 🌏 The section is supportive of AI governance efforts globally, referencing China’s Global AI Governance Initiative, the UK’s Global AI Safety Summit, EU AI safety legislation, and American AI safety standards. Read our full translation on Substack (https://lnkd.in/gJVmS_wu) and subscribe for future updates! #AISafety #AI #China #InternationalCollaboration #ConcordiaAI #Newsletter
What does the Chinese leadership mean by "instituting oversight systems to ensure the safety of AI?"
aisafetychina.substack.com
-
🚀 Issue #16 of AI Safety in China! Key Takeaways: 🎤 AI safety was included in China’s Third Plenum decision document laying out top domestic priorities for the next five years, the highest-level document in which this concept has been mentioned. 🌐 The 2024 World AI Conference (WAIC) included a strong safety and governance theme and featured participation of China’s Premier, the Shanghai Party Secretary, and four additional ministerial or vice-ministerial level officials. ⚖️ A top researcher for China’s legislature cautioned against excessive focus on AI safety in a recent speech, advocating for an incremental approach to lawmaking. He also noted AI’s risks in cybersecurity and automated decision-making. 📝 Over the past two months, Chinese researchers published one of the first papers in China on mechanistic interpretability, as well as papers on unlearning, risks in superalignment, and benchmarking honesty. 💡 Subscribe to AI Safety in China to get biweekly updates on ✅ China's positions on international AI governance ✅ China's governance and policy initiatives to mitigate AI risks ✅ Technical safety and alignment research in China ✅ Views on AI risks by Chinese experts. Read the full issue here: https://lnkd.in/gVX-JJTZ #AISafety #AI #China #InternationalCollaboration #ConcordiaAI #Newsletter
AI Safety in China #16
aisafetychina.substack.com