Values like ours
- Unless we are foolish in the extreme, there are two coming realities that we, as a human species, must prepare for. First, that artificial intelligence will eventually exceed human intelligence on just about every metric. And second, that it will eventually not be fully controllable by human beings. If you think about it, we're actually in a pretty exciting place. We know this is coming, and we have a window of time, however small, to think about how to make this happen properly. The apes, on the other hand, had no idea humans were coming, and I bet, if they had it to do over again, they'd wish they could have done some planning. We humans have been just terrible at sharing the world with them. We kind of just took over the place once we got here.

So how do we avoid the fate of our evolutionary ancestors? The general consensus is that we want these superintelligent machines to have values similar to human values. Now, you're probably already thinking, hey, there's a lot of evil in the world, so maybe humans aren't such a great model. And yes, there are terrorists and other kinds of evildoers out there. But if you think about the numbers, there are seven billion of us, and almost all of those seven billion just want to achieve our goals peacefully, without doing serious harm to other people. And so, if these superintelligent beings think in those basic terms, that's actually a pretty decent future. The extreme opposite possibility is that we'll program some machine to solve a problem, and it will run some calculation and determine that the best way to solve that problem is eliminating the human race. In the academic community, we call that a suboptimal outcome.

So, how can we design the kind of future we want? Nick Bostrom, Director of the Future of Humanity Institute at Oxford University, has established a framework that seems basically right. He notes that designing superintelligence is obviously extremely difficult, but designing superintelligence that shares our values necessarily includes the difficulty of the first task and then adds a layer of complexity. And the specific danger is that we'll figure out how to design superintelligence, but before we can figure out how to program it with values, we get that suboptimal outcome I described. Bostrom calls this the problem of value loading.

It turns out that my full-time job for over a decade now has been value loading. And like most people who have been working at something for a long time, I've had many successes and a chance to learn from many failures, and so I have a few opinions on best practices here. Now, I should probably mention, as a disclaimer, that I am paid to load values not into AI systems, but into organic college-student intellects, because I teach Applied Ethics at CL University. But I have a feeling that there are some lessons that can be transferred.

Here are what seem to be the two most basic. First, value loading must be something general, not directed toward specific situations. It would probably make me more likely to succeed in my job as a professor if I could train my students with a list of if-then statements: if someone says or does X, then the ethical thing to do is Y. But situations are like snowflakes; no two are exactly the same. So however value loading works, there has to be a moral capacity available for every contingency. Second, value loading in humans is only possible if there is first a basic concern with human well-being in general.
We know this from module one, when I talked about ethics in general, and if you remember, I was careful to distinguish this basic attitude from the five specific ethical values. On the first day of my college courses, I ask my students a question: can ethics be taught? I give some time for the awkward silence to wash over people as, one by one, they start realizing that if the answer is no, we are all wasting our time. The truth is that it depends. Specifically, it depends on whether the student cares about human well-being in the first place. Because if they do, then talking about different values is very helpful. But if they don't, then yeah, talking about ethics is a waste of time. I wonder if the same thing will be true of superintelligence. We can attempt value loading, but if the basic concern with human well-being isn't there, the values won't stick. The problem of creating superintelligence with values like ours is a fascinating problem, and one that may turn out to influence the future of humanity more than any other problem we must confront.