Barry Cooper’s Post


Founding Principal | Making a Difference Through Education | I Love Start-Ups | Ed Tech | AI | Cultural Education | Entrepreneurship | Building Strong Communities | Podcast Host

What do you think about when you think about technology? In the middle of the last century, science fiction author and scientist Isaac Asimov saw the future and, in his works, created three laws for artificial intelligence that are still referenced today. The roboticization of society is a trope beloved by science fiction movies and authors, but Asimov tried to see how these new beings, these new ideas, might operate best in a human world. Appearing first in the 1940s, in the stories later collected as ‘I, Robot’, they stated:

1.     First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.     Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3.     Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added a "Zeroth Law" as well:

0.     Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

The Zeroth Law takes precedence over the other laws, indicating that the well-being of humanity as a whole is of utmost importance. These laws form a fundamental aspect of Asimov's fictional universe and have also sparked discussions in real-life ethics and in the field of artificial intelligence about potential guidelines for AI systems. But are these ideas even right? Are these laws actually enforceable in the code we write today? I find that when we look at these laws we are thinking very specifically about physical, or at the very least immediate, harm, and they can be argued against and loopholed very easily. Expressed like this they work well for us humans, with our contextual understanding and ability to interpret; but for an AI, for a program running on logic, I think they might be in trouble.
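To see why a logic-based program struggles here, consider a minimal sketch (purely illustrative, with hypothetical names) of a naive attempt to encode the First Law as a pre-action check. The rule itself is trivial to write; the hard part is the harm predicate, which in this sketch is simply a hand-labelled flag, and that labelling is exactly where the loopholes live.

```python
from dataclasses import dataclass


@dataclass
class Action:
    """A candidate action the agent might take (hypothetical type)."""
    description: str
    # A real system has no reliable way to compute this flag from first
    # principles; here it is hand-labelled, which is the loophole: the
    # 'law' only ever encodes someone's prior human judgment.
    predicted_harm: bool


def first_law_permits(action: Action) -> bool:
    """Naive First Law check: forbid any action predicted to cause harm."""
    return not action.predicted_harm


# Direct, immediate harm is easy to label and block...
assert not first_law_permits(Action("strike a person", predicted_harm=True))

# ...but indirect, delayed, or contested harm slips straight through,
# because nobody labelled it as harmful in the first place.
ambiguous = Action("recommend content that slowly erodes attention",
                   predicted_harm=False)
assert first_law_permits(ambiguous)
```

The check runs fine, yet it decides nothing on its own: every interesting case is settled by whoever assigns `predicted_harm`, which is the contextual, interpretive work the post argues machines cannot do for us.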
So if we are unable to place boundaries on technology that moves faster than we can legislate for, what is left to us? The answer is that we have to manage ourselves. We have to impose discipline on our own approach to technology. I argue, through a few short chapters, how the principles of Context, Collaboration, and Critical Thinking can create a framework for how we use AI: one that lets us approach a fast-moving space with confidence and creates the best outcomes for us all. Humans are indeed, at times, simple. But we are also perfectly unique, irreplaceable pieces in the great machine we call society.

The New Laws of Technology (Gen AI version 1.0)

1.     First Law: Technology must be available to all.
2.     Second Law: Technology must be a collaborator.
3.     Third Law: Any use of technology must be based on a user's expert contextual understanding of the topic being investigated.
0.     Zeroth Law: Technology is not perfect and must be used within a framework of critical thinking.

Isobel Lee

BSc (Hons) Biology (Exeter). PGCE (Oxford). MEd Educational Leadership and Policy (Monash). Global Schools Advocate

1mo

The third law is interesting: in education it aligns with students not using AI passively to ‘do the thinking for them’, but as co-constructors providing tools? Interesting how that will be developed for, for example, the TOK teacher. 🤔
