We've been weirdly quiet about AI at Meantime Studio. It's been the elephant in the room for a while, if the elephant were made of fire and shooting bullets around the office with its trunk. We can't really ignore it, but not ignoring it doesn't have to mean embracing it with open arms. We recognise the potential for some areas of AI to help us in our day-to-day work, whilst keeping a watchful eye on the ethics behind other tools. So we're building an AI policy/framework/code of ethics (whatever cool name we think of) to share our stance on AI. It's about being transparent with our clients on what we do, how we've done it, and advising on what acceptable usage looks like. It's about setting a standard for ourselves as creatives: keeping our talents sharp by knowing how to use acceptable tools without falling into a lazy rut. I believe an AI disclosure will be an important asset for studios in the future, perhaps earning a space next to the privacy policy on websites. I've written a short piece about what we're doing here: https://lnkd.in/eNkF3BBk We've conducted staff surveys to gather a range of views and ideas on AI at Meantime, and will be discussing the results in the next few weeks. Hopefully not too long after that, we'll have a shiny new policy to share!
Jack P.’s Post
More Relevant Posts
-
What do you think about when you think about technology? In the middle of the last century, science fiction author and scientist Isaac Asimov saw the future and, in his works, created three laws for artificial intelligence that are still referenced today. The roboticization of society or humanity is a trope beloved by science fiction movies and authors, but Asimov tried to see how these new beings, these new ideas, might operate best in a human world. First appearing in his 1942 short story "Runaround", later collected in I, Robot, they state:
1. First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later added a "Zeroth Law" as well:
0. Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
The Zeroth Law takes precedence over the other laws, indicating that the well-being of humanity as a whole is of utmost importance. These laws form a fundamental aspect of Asimov's fictional universe and have also sparked real-world discussions in ethics and the field of artificial intelligence regarding potential guidelines for AI systems. But are these ideas even right? Are these laws actually enforceable in the code we write today? I find that when we look at these laws we are thinking very specifically about physical, or at the very least immediate, harm, and they can be argued against and loopholed very easily. Expressed like this they work well for us humans, with our contextual understanding and ability to interpret; but for an AI, for a program running on logic, I think they might be in trouble. So if we are unable to place boundaries on technology moving faster than we can legislate for, what is left to us?
The answer is that we have to manage ourselves. We have to impose boundaries on our own approach to technology. I argue, through a few short chapters, that the principles of Context, Collaboration, and Critical thinking can create a framework for how we use AI: one that provides a way to approach a fast-moving space with confidence and creates the best outcomes for us all. Humans are indeed, at times, simple. But we are also perfectly unique, irreplaceable pieces in the great machine we call society.
The new Laws of Technology (Gen AI version 1.0):
1. First Law: Technology must be available to all.
2. Second Law: Technology must be a collaborator.
3. Third Law: Any use of technology must be based on a user's expert contextual understanding of the topic being investigated.
0. Zeroth Law: Technology is not perfect and must be used within a framework of critical thinking.
-
🚨 Fascinating AI paper alert: Ian Ayres & Jack M. Balkin publish "The Law of AI is the Law of Risky Agents without Intentions." A must-read for everyone in AI. Quotes: "A recurrent problem in adapting law to artificial intelligence programs is how the law should regulate the use of entities that lack intentions. Many areas of the law, including freedom of speech, copyright, and criminal law, make liability turn on whether the actor who causes harm (or creates a risk of harm) has a certain intention or mens rea. But AI agents—at least the ones we currently have—do not have intentions in the way that humans do. If liability turns on intention, that might immunize the use of AI programs from liability." (page 1) - "The two strategies of ascribing intention and imposing standards of behavior based on an imagined intention are mirror images of each other. The first strategy says 'regardless of your intentions, the law will treat you as if you had a particular intention and regulate or penalize you accordingly.' The second strategy says 'regardless of your actual intentions, the law will measure your conduct by the standard of a hypothetical person with a particular mental state and regulate or penalize you if you do not live up to that standard.' We propose that the law regulate the use of AI programs through these two strategies. (...)" (page 3) - "The spread of AI technology will likely require changes in many different areas of the law. In this essay we’ve argued for viewing AI technology not in terms of its independent agency but in terms of the people and companies that design, deploy, offer and use the technology. To properly regulate AI, we need to keep our focus on the human beings behind it." (page 10) ➡ Read the full paper below. ➡ Never miss my AI policy, regulation, and excellent papers updates: subscribe to my weekly newsletter. #AI #AIlaw #AIregulation #AIagents #liability #AIAct #AIpolicy
-
'Regulating the Synthetic Society: Generative AI, Legal Questions and Societal Challenges' by Bart van der Sloot, Tilburg Law School: 'Experts predict that in five years' time, more than 90 per cent of all digital content will be wholly or partially AI generated. In a synthetic society, it may no longer be possible to establish what is real and what is not. Although they are only in their relative infancy, these technologies can already produce content that is indistinguishable from authentic material. The impact of this new reality on democracy, the judicial system, the functioning of the press, as well as on personal relationships, might be unprecedented.' – Book refers to #EPRS Briefing 'Artificial Intelligence ante portas: Legal & ethical reflections' (https://lnkd.in/eFPNhwe6) and study on Liability of online platforms (https://lnkd.in/dDzzZ6Vd)
Regulating the Synthetic Society (bloomsbury.com)
-
#Humane and #Ethical dimensions of #AIDesign are among the most demanded and debated aspects of the contemporary world. This research-based paper asks how to incorporate #FundamentalRights protection into the architecture of #AI systems. The present report tackles this challenge by merging the results of #LegalResearch with the findings achieved through participation in the macro-project "#MetricsForEthics", carried out in the context of WP5 of the HumanE AI Net project. It covers the following dimensions of #Ethics:
· Explainability
· Transparency
· Fairness
· Bias
· Trustworthiness
The macro-project "Metrics for Ethics" aims to contribute to research on technical approaches to monitoring #Ethical, #Legal, and #Social requirements in #AIDesign, development, and deployment. To this end, the participants in the macro-project conducted a case study that, by addressing a use case classified as high-risk under the AI Act, i.e., creditworthiness evaluation, sought to develop a dashboard that allows its users to explore different aspects of the use of AI metrics to measure requirements such as bias, fairness, robustness, explainability and transparency. The cross-disciplinary dialogues with the team members of the macro-project have offered a hands-on perspective on the question of whether and, if so, how and to what extent, AI metrics can contribute to Legal Protection by Design, a concept coined by Hildebrandt to characterise an approach aimed at articulating fundamental rights protection and the checks and balances of the Rule of Law in the design of digital technologies. In this perspective, the topic of AI metrics has proved particularly apt for investigating the delicate interplay between legal and technical requirements in the practices of designing, testing and documenting AI systems and models.
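To make the idea of an "AI metric" concrete, here is a minimal sketch of one fairness measure such a dashboard might surface: the demographic parity difference, i.e. the gap in approval rates between two groups in a credit-approval setting. The function name and the toy data are illustrative assumptions, not taken from the HumanE AI Net project.

```python
# Hypothetical sketch: demographic parity difference for a
# binary credit-approval model (1 = approved, 0 = denied).
# Not the project's actual dashboard code.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in approval rates between two groups."""
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)
    a, b = rates.values()
    return abs(a - b)

# Toy data: group "a" is approved 3/4 of the time, group "b" 1/4.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # prints 0.5
```

A value of 0 would mean both groups are approved at the same rate; the closer to 1, the starker the disparity a reviewer would want to investigate.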
-
Can AI do all your heavy lifting? The use of AI has skyrocketed over the past five years. However, as you leverage its capabilities, it's crucial to consider the legal implications, such as copyright and plagiarism. Engaging thoughtfully with AI not only enhances your productivity but also ensures you navigate these important issues responsibly. Let's harness the power of AI while staying mindful of our legal and ethical obligations! #kleymansolicitors #lawyersoflinkedin #lawfirm #ai #technology #legalimplications #newtechnology #safeonline
-
With the #AI Revolution upon us and gaining speed daily, its impact on industry and society continues to know no bounds.⏩ 🤔 But what about the ethical boundaries of AI? The power of AI has raised growing ethical concerns related to: ⚖️ Bias and Discrimination 📋 Transparency and Accountability 🔏 Data Privacy and Protection ©️ Creative Ownership and Copyright 📉 The Impact on the Job Market ♻️ Environmental Sustainability 🔗 Want to learn more about the current ethical implications of AI? Check the comments for the link! 👇
-
🚀 The European Commission this week approved a groundbreaking AI Act, marking a significant milestone in the regulation of Artificial Intelligence! 🌐💡 🌟 This act aims to foster innovation while ensuring AI technologies are safe, transparent, and adhere to ethical standards. It sets clear guidelines for the development and deployment of AI systems across various sectors, prioritizing human well-being and rights. 🔍 Key highlights of the AI Act include: 1️⃣ Strict rules for high-risk AI applications to guarantee safety and accountability. 2️⃣ Transparency requirements to ensure users understand when they're interacting with AI systems. 3️⃣ Clear criteria for assessing the risk level of AI applications, promoting trust and reliability. 4️⃣ Emphasis on data privacy and protection, reinforcing user trust in AI technologies. 🌐 This forward-thinking legislation positions the EU as a global leader in shaping the future of AI, promoting responsible innovation that benefits society as a whole. 💬 Let's engage in conversations and collaborations to harness the power of AI responsibly and ethically. Together, we can create a future where AI enriches lives and drives positive change! #AIAct #EthicalAI #EUCommission #Innovation #TechRegulation 🤝🌍💡 https://lnkd.in/ekgDFa55
EU AI Act: first regulation on artificial intelligence | Topics | European Parliament (europarl.europa.eu)
-
The passing of California’s AI Digital Replica Bill brings up important questions about how we balance technological advancement with individual rights. This bill tackles a growing issue—the unauthorized creation of digital replicas, which can easily be exploited through AI. The legislation aims to curb the misuse of AI-generated content, especially in scenarios where individuals' likenesses are replicated without their consent. As someone deeply involved in both technology and compliance, I see this as a much-needed regulatory move. It reflects the growing necessity for IT professionals to not only understand the capabilities of AI but also the ethical responsibilities that come with deploying it. This isn’t just about catching up to technology—it’s about setting boundaries to ensure AI develops in a way that respects privacy and personal agency. The implications for the future of AI governance and compliance are vast, and this bill could be the first of many as other states and countries begin to wrestle with similar issues. #AIRegulation #DigitalEthics #TechnologyAndCompliance #AI #ITGovernance
A bill to protect performers from unauthorized AI heads to California governor (npr.org)
-
In the US, EU, and China, the race to regulate AI isn't just about technology; it's about shaping the future of humanity. The EU has taken steps forward with sweeping laws, setting a global example. The AI Act categorises applications by risk, imposing strict requirements on high-risk systems like those in law enforcement and healthcare. Developers must ensure non-discrimination, transparency, safety, and privacy compliance. Generative AI models must be clearly labelled, respecting copyright laws. China enforces strict controls, requiring government approvals for AI technologies, ensuring the state remains in the driver's seat. Meanwhile, the US treads cautiously, with cities and states leading restrictions on AI in policing and hiring, while the federal government deliberates its move. This isn't just about preventing a sci-fi scenario of AI rebellion. It's about addressing real risks: misinformation, bias, privacy violations, and more. When a fake AI-generated image can shake the stock market, it's clear the impact of unregulated AI reaches far beyond the digital realm. Tech giants like Google and IBM are calling for oversight, recognising the need for balance between innovation and safety. The question we must ask ourselves: Are we prepared to navigate the ethical minefield that AI presents? Let's discuss: What role should governments play in regulating AI, and how can we ensure these technologies enhance rather than endanger our societies? Check out the full article below: https://lnkd.in/eCxxkMRW
Regulate AI? How US, EU and China Are Going About It (bloomberg.com)