Concordia AI 安远AI’s Post

🚀 Issue #11 of AI Safety in China! Key Takeaways:
📑 Chinese research groups published over a dozen new preprints on AI safety in February, including papers on sociotechnical alignment and benchmarks for AI safety.
💻 AI safety and security standards were listed as a priority for 2024 by China’s top standards body.
🛡️ A think tank at Tsinghua University ranked AI as one of China’s eight key external security risks in 2024.
💵 Influential academic ZHANG Ya-Qin reiterated his support for frontier AI companies and AI-related government funds to spend a minimum of 10% on AI risk research.
💡 Subscribe to AI Safety in China to get biweekly updates on
✅ China's positions on international AI governance
✅ China's governance and policy initiatives to mitigate AI risks
✅ Technical safety and alignment research in China
✅ Views on AI risks by Chinese experts.
#AISafety #AI #China #InternationalCollaboration #ConcordiaAI #Newsletter

AI Safety in China #11

aisafetychina.substack.com

Rufo Guerreschi

Towards a global constituent assembly for AI and digital communications

9mo

Great work! One comment, though. I believe it is largely impossible to separate work on AI capabilities from work on their safety, so the 10% requirement is good, but caution and safety need to be ensured by other, more far-reaching measures. Don’t you agree?
