Towards a computational methodology in ethics?

Draft – do not cite

As machine learning and artificial intelligence filter into more and more complex areas of human action, the natural question that follows is how we deal with the many ethical and moral issues that arise. Examples abound: should a self-driving car be driven by utilitarian or deontological principles? In what order do we prefer that it hits property and people? What about systems that help make medical decisions – how do we ensure that someone is liable for those decisions? These questions are all important and central to the waves of automation and man-machine collaboration that are coming. Technology is, as Hans Jonas noted, a form of exercise of power, and so must be subject to ethical discussion; the more powerful the technology, the more pressing the ethical concerns.

Coding ethics is not a simple task, and the field of computational ethics is just opening up. Even so, it is valuable to think about what this new set of questions could imply for the study of ethics _generally_. Here I would like to suggest and start to explore a few of the possible consequences of introducing a computational methodology for the study of ethics.

This idea is not new, but bear with me. I hope to make a few points in what is at least a novel way, and to arrange them into a few tentative conclusions about the usefulness of such a methodology.

First, what, exactly, is it that we would mean by a computational methodology for ethics? Let me state the strongest possible version this way:

(I) In order to be a valid object for study, an ethical proposition must be expressed in code or meta-code that can be unambiguously implemented in a computer system.

What (I) says is that a proposition of ethics is only valid as an object for study – or indeed as a proposition of ethics (although the difference between the two is interesting) – if it is expressed as code that can be implemented in a computer. The categorical imperative or the golden rule would thus have to be written out in code in order for us to understand what they actually mean as ethical propositions (doing this is a useful exercise that is left to the reader). Any proposition that cannot be implemented would be excluded from study and could be classified as too unclear to be an ethical proposition.

(I) resembles a requirement that a proposition be expressed logically in order to be studied, so how is this not just a rerun of extreme logicism? The belief that propositions need to be qualified this way – by turning them into another form or translating them into another formal language – makes no sense, we could argue, because it is the real-world ambiguity that is the subject of study in ethics. Hence we need to study ordinary language propositions and understand them instead of formalizing them.

This is an entirely possible position to take, and my suggestion is not that a computational methodology is the only possible one. So I subscribe to a weaker version of (I):

(II) Expressing an ethical proposition in code or meta-code that could be implemented in a computer will reveal interesting aspects of that proposition and the problem it refers to.

The idea here is that formalization offers insight into a proposition's qualities and challenges. I think this is clearly true, and furthermore I think it is also true that expressing an ethical proposition as code is different from formalizing it in logic. Expressing something in running code forces a focus on things that never come up in logical formalization, and those things actually help us see a situation more clearly.

A few examples may help to illustrate this.

Code executes in time. That means that in order to express an ethical principle in code we need to express it in time. Time is an essential part of ethics that is usually abstracted away from thought experiments like the trolley experiment. This experiment, where an observer gets to choose different courses of action as he sees a trolley careening towards different groups of people in different scenarios, is obviously dependent on time. If the trolley is moving very fast, the reactions are constrained by that speed. If the trolley is slow, other options present themselves. It seems obvious that in situations with extreme time pressure we become Kantian, and if we have time we can afford the luxury of utilitarianism – to put it simply.

When we express an ethical proposition in code we need to factor this in, and that arguably makes the study of the proposition more useful, since time is taken into account in an interesting way.
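To make this concrete, here is a minimal sketch of what such a time-aware dispatcher might look like. Everything in it – the threshold for "extreme time pressure", the `duties` rules, the `simulate` function – is an assumption introduced for illustration, not part of any existing system.

```python
import time

def consequence_search(options, simulate, deadline):
    """Utilitarian-style evaluation: score as many options as the
    remaining time allows by simulating their consequences."""
    scored = []
    for option in options:
        if time.monotonic() >= deadline:
            break  # out of time: settle for whatever has been scored so far
        scored.append((simulate(option), option))
    if not scored:
        return None
    return max(scored, key=lambda pair: pair[0])[1]

def choose(options, duties, simulate, time_budget_s):
    """Hypothetical dispatcher: under extreme time pressure fall back on
    fixed duty-like rules; with more time, simulate consequences instead."""
    deadline = time.monotonic() + time_budget_s
    if time_budget_s < 0.1:  # assumed threshold for "extreme time pressure"
        for rule in duties:  # each rule maps an option to permissible / not
            permitted = [o for o in options if rule(o)]
            if permitted:
                return permitted[0]
        return options[0]
    best = consequence_search(options, simulate, deadline)
    return best if best is not None else options[0]
```

Even this toy dispatcher makes the role of time explicit: which ethical theory governs the decision is itself a function of the clock.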

A corollary to this is that code actualizes the notion of computational cost. An ethical principle that requires massive computation is very hard to implement and may be completely impractical. Let’s take a very silly but interesting example: the exercise left to the reader above. How would we implement the categorical imperative in code?

The idea is simple enough: act as if your actions could be elevated to a general rule or law. But how do you test for that? In order to do so you need to simulate a social system with a base set of laws, add the behaviour you are contemplating as a law, and calculate the outcomes of that complex system across a number of probabilistic scenarios. The computational cost of at least this implementation of the categorical imperative would be enormous, not to mention that the outcomes may not be decisive. In some cases the principle could be encoded in a software program, but executing that program may require more computational power and take more time than is available for making the decision.
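A sketch of this universalization test might look as follows. The `simulate_society` function, the welfare threshold and the scenario count are all placeholder assumptions, not a settled formalization of Kant.

```python
import random

def universalize(candidate_rule, base_rules, simulate_society, n_scenarios=10_000):
    """Crude universalization test under one assumed reading of the
    categorical imperative: add the contemplated behaviour to the shared
    rule set and estimate outcomes across many randomized scenarios."""
    rules = base_rules + [candidate_rule]
    outcomes = [simulate_society(rules, seed=random.random())
                for _ in range(n_scenarios)]
    mean_welfare = sum(outcomes) / len(outcomes)
    # A single averaged number is not decisive: it hides distributional
    # questions and everything the society model leaves out.
    return mean_welfare >= 0.0
```

Even this toy version makes the cost structure visible: the price of one decision is the number of scenarios multiplied by the price of simulating an entire society, which quickly becomes prohibitive.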

Ethical principles that are too costly from a computational perspective would thus be weeded out and discarded, or we could discuss the possible limitations of the simulations we run in order to implement a principle. Take Adam Smith’s idea of an independent third-party observer – a reasonable man: how would we code such a principle for ethical decision making? How complex must a simulation of a reasonable man be in order to be reasonable? And are the computational cost of that simulation and the time required to run it within decision parameters?
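One way to pose that question in code is to treat the spectator's fidelity as a dial and ask how far it can be turned before the simulation no longer fits the decision window. The cost model below is a pure guess, made only to expose the trade-off.

```python
def spectator_cost(fidelity):
    """Assumed cost model: simulating the spectator at higher fidelity
    (more perspectives, longer horizons) costs exponentially more
    operations. The exponent is a guess chosen to expose the trade-off."""
    return 10 ** (3 + 2 * fidelity)

def usable_fidelity(ops_per_second, deadline_s, max_fidelity=10):
    """Highest spectator fidelity whose simulation still fits inside the
    decision window, i.e. stays 'within decision parameters'."""
    budget = ops_per_second * deadline_s
    feasible = [f for f in range(1, max_fidelity + 1)
                if spectator_cost(f) <= budget]
    return max(feasible) if feasible else 0
```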

Ethics with bounded computational power would also raise interesting questions about situations where the computer system actually has much more computational capacity than a human being for considering different alternatives. Would we accept an ethical proposition that relies on a system’s computational capacity being orders of magnitude larger than our own?

What would this look like? The simplest example would be the computational categorical imperative: one could argue that the categorical imperative requires a complexity of simulation that is only possible in a computer system of a certain capacity X, and that the principle is then a fair and good ethical principle because the possible outcomes are examined in a way that makes it reasonably certain that the action is acceptable to undertake should it be elevated to a general rule or law. And we could equally say that this complexity of simulation cannot be run by a human, so the categorical imperative could never apply to humans, only to computer systems.
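The argument can be reduced to a simple feasibility check. The numbers below are illustrative guesses, not measurements; they only show how a principle could be computable for one class of agents and not for another.

```python
HUMAN_OPS_PER_SECOND = 1e9        # rough stand-in for human deliberative capacity
DATACENTER_OPS_PER_SECOND = 1e18  # rough stand-in for a large computer system
UNIVERSALIZATION_OPS = 1e15       # assumed cost of a full universalization simulation

def can_apply(principle_ops, agent_ops_per_second, deadline_s):
    """Can this agent evaluate the principle before the decision is due?"""
    return principle_ops <= agent_ops_per_second * deadline_s

print(can_apply(UNIVERSALIZATION_OPS, HUMAN_OPS_PER_SECOND, deadline_s=2.0))       # False
print(can_apply(UNIVERSALIZATION_OPS, DATACENTER_OPS_PER_SECOND, deadline_s=2.0))  # True
```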

So, does there exist a class of ethical propositions such that they can only be implemented in artificial intelligence systems? Or in a combination of man and machine? That would be another interesting question for examination in a computational methodology discussion.

This leads us to a final consideration that is increasingly important, and that is the general consideration of opacity and transparency. In machine learning research a key work stream is coming to grips with the question of how we audit, review and understand systems that evolve and teach themselves to become better and better. As Arbesman and others have shown, our systems are increasingly becoming opaque not through design, but through the evolved complexity that they acquire by drawing on large data sets and repeated iterative learning of different kinds.

This raises a number of interesting questions for us. One of them is whether it is possible to build an ethics engine that would operate the same way a game engine for, say, chess or go operates. In essence you would formalize a series of ethical situations as games, feed them to the system and then have it play “master ethicians” in order to acquire an ethical capacity. It could then play itself millions of times in more and more complex situations and acquire an ethical capacity that would be much greater than any human being’s.
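A structural sketch of what such an engine could look like is given below, assuming we could formalize dilemmas as game-like instances. `Dilemma`, `EthicsEngine` and the "expert" judgments are all hypothetical names introduced here for illustration; nothing like this exists.

```python
import random
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Dilemma:
    """One 'ethical game' instance: a situation, the available actions and,
    during training, a judgment from a panel of 'master ethicians'."""
    situation: str
    actions: List[str]
    expert_choice: Optional[str] = None

class EthicsEngine:
    """Skeleton of the imagined engine: it imitates expert judgments and
    would, in principle, refine itself through self-play."""
    def __init__(self):
        self.policy = {}  # situation -> preferred action, learned from examples

    def learn_from_experts(self, dilemmas):
        for d in dilemmas:
            if d.expert_choice is not None:
                self.policy[d.situation] = d.expert_choice

    def decide(self, dilemma):
        # Unfamiliar situations get a random action: the gap that
        # self-play would be meant to fill.
        return self.policy.get(dilemma.situation, random.choice(dilemma.actions))

    def self_play(self, generate_dilemma, rounds):
        # Placeholder loop: without an agreed way to "win" an ethical
        # game there is no reward signal to learn from here.
        for _ in range(rounds):
            self.decide(generate_dilemma())
```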

What would need to hold true for this to happen? This in itself is an interesting question, and in order to explore it we would need to ask things like how ethics is like a game, how instances of the game can be played, and how we can capture ethical behaviour and teach it to a computer. Our instinctive reaction is that this is silly and borders on the nonsensical – but then the interesting question becomes why we cannot do this. If we can teach ethics to a human, why can we not teach it to a computer? What is special about learning ethical rules? Here the methodology opens up a series of interesting questions as well.

If we find that we could do this, we end up with the first problem again: how do we deal with the fact that this system would be so complex that we could not easily explain how it arrives at the ethical conclusions it arrives at? It will make what we presume are supreme ethical decisions, but we do not know how it arrives at those decisions! (Here one of the interesting differences between games and ethics is of course that there are agreed-upon ways to _win_ a game, but not an ethical decision.) Even if we have narrowed the scope and allow a machine to make only very limited decisions, we need to deal with the situation where we face a system that we cannot interpret correctly at what Dennett calls the design stance, as a system. We would then have to examine the intention of the system instead, and adopt an intentional stance – something that requires that we find intentional explanations for ethical behavior implemented in machines.

Here we seem to face a series of difficult problems about whether there are ethical decisions that we can recognize as superior or right only on the basis of knowing that the system that made them was designed in a certain way, even if we do not understand, or are unable to explain, how the individual decision itself was arrived at.

*

What I hope these different examples and questions have shown – what these different points are trying to illustrate – is that the notion of a computational methodology for ethics would present an intriguing project and a worthwhile exploration. It would also support and guide the inevitable inclusion of machines in more and more ethically relevant action. Hopefully it could also teach us interesting things about ethics itself.

Nicklas Lundblad, September 2017
