Draft: do not cite.
A recurring question in the current discussions about technology, the economy and society is that of jobs, and of our role in a world where machines threaten to replace us. There is a series of interesting and important questions here that deserve our attention and should be taken seriously. In this short essay I try to lay some of them out and examine them more closely.
So, here is the thesis: machines will replace us and create economic and social havoc.
We should ask a number of different questions when we unpack this idea.
The first is this: will machines really be able to do everything that we can do? This key question has been around for quite some time. The debate about what computers cannot do has followed the evolution of artificial intelligence since its inception. In books like Hubert Dreyfus’ What Computers Can’t Do the lines around human uniqueness are drawn up and defended vigorously, and even if the development of artificial intelligence has put some serious dents in the belief that there are things computers cannot do, we cannot exclude the possibility that there are things humans are uniquely capable of. There are challenges here, of course, not least the fundamental one of what this even means. If we assume that anything that can be described algorithmically can be done by a machine, the proposition we have to prove seems to be:
i) There are things humans do that are not possible to describe algorithmically.
That is, to put it mildly, a hard proposition to prove, since it is difficult to think of anything we do that cannot be described as a series of steps executed in due order. Another version of the same challenge is to come up with something indeterministic that we do (something it still makes sense to say that we actually do, and that does not just happen to us). Roger Penrose has argued, with considerable rigor, that what we are looking for here needs to be sought at the quantum level, but his views are as yet unsubstantiated.
If we move away from this hard version of the substitution challenge, however, we find several much more plausible softer versions. To imagine total substitution we need to believe not just that machines will be able to do all that we can do, but also that they can do it cheaper and faster.
This may very well not be true. Here several other aspects of cognition – such as its being embodied – come into play. The truth is probably that for any sufficiently complex project, the tasks in that project – if allocated on a cost basis – would be distributed between man and machine. This might not be very heartening if we fear that all menial tasks will fall to humans, but it still argues for complementarity. We have reason to believe that any sufficiently complex set of tasks is probably solved most efficiently by a combination of man and machine.
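The cost-based allocation described above can be sketched in a few lines. This is an illustration, not anything from the essay itself: the tasks and cost figures below are invented placeholders, and the point is only that minimizing total cost task by task tends to produce a mixed, complementary plan rather than total substitution.

```python
def allocate(tasks):
    """Assign each task to whichever performer – human or machine –
    does it cheaper, and return the resulting split."""
    split = {"human": [], "machine": []}
    for name, human_cost, machine_cost in tasks:
        who = "human" if human_cost <= machine_cost else "machine"
        split[who].append(name)
    return split

# A made-up project: (task, cost if done by a human, cost if done by a machine)
project = [
    ("sort records",     40, 1),
    ("negotiate terms",  10, 500),
    ("label edge cases",  5, 200),
    ("scan documents",   60, 2),
]

print(allocate(project))
```

Running this on almost any realistic cost table yields a mix: the cheapest overall plan uses both human and machine, which is exactly the complementarity argument.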
If we compare the efficiency of human chess players and Deep Blue, for example, we might surmise that the cost, time and energy required by a human chess player are far less than those of a supercomputer. Is this true? Maybe. It is unclear exactly what we would be comparing. The number of hours the human player spent learning the game and playing it well? That may be in the range of the famous 10 000 hours, and at 100 dollars an hour that is a million dollars in total for the human chess player. What is the energy requirement? A chess player burns roughly 133 calories per hour, so we would have to calculate the total energy consumption through that as well – so how do we compare? Still fairly favorable compared with one cost assessment for Watson, IBM’s Jeopardy-playing computer, which puts the research alone at somewhere between 900 million and 1.8 billion dollars. If we add staff and energy consumption we end up with an even more interesting calculation. Furthermore, narrow AI cannot simply be reused for other purposes: AlphaGo cannot cook, read to kids or drive a bus.
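The arithmetic in the paragraph above can be made explicit. The human-side figures (10 000 hours, 100 dollars an hour, roughly 133 kcal per hour) and the Watson research range come from the text; the kcal-to-kWh conversion factor is standard, and the whole thing is of course a rough back-of-envelope sketch rather than a measurement.

```python
# Human side: training cost and lifetime chess energy, using the
# essay's own rough figures.
hours_to_mastery = 10_000
hourly_rate = 100                                    # dollars
human_training_cost = hours_to_mastery * hourly_rate # 1,000,000 dollars

kcal_per_hour = 133
human_energy_kcal = hours_to_mastery * kcal_per_hour # 1,330,000 kcal
human_energy_kwh = human_energy_kcal * 0.001163      # 1 kcal ≈ 0.001163 kWh

# Machine side: the cited research-cost range for Watson, in dollars.
watson_research_cost = (900e6, 1.8e9)

print(f"human: ${human_training_cost:,} and ~{human_energy_kwh:,.0f} kWh")
print(f"machine: ${watson_research_cost[0]:,.0f} to "
      f"${watson_research_cost[1]:,.0f} in research alone")
```

Even on these crude numbers the human player comes in around three orders of magnitude cheaper, which is the point of the comparison.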
A broader question is what the metabolism of a general artificial intelligence would look like: how much energy would it require, and how energy efficient would it be? We could imagine a world in which we can only run an AGI for a very short while, and in which our first question to it should be where to find enough energy to run it for longer. Generally, the cost of applying machine learning to a set of tasks across a society adds up, and needs to be compared with the benefit vis-à-vis the alternative cost of asking humans to do the same tasks (and remember that the loss of purpose and the possible social unrest that come with unemployment are costs too).
This seems to imply that substitution is either impossible or inefficient for a set of cases, and that complementarity is the natural consequence. Other factors add to that assumption as well.
The changing programming paradigms also suggest that complementarity is a real alternative. As we move from exhaustive search of databases to neural networks and probabilistic models, adding yet another neural network – in the form of a human – could often be a great way of complementing the model. In fact, as the models of cognition converge we could even say that complementarity is natural: neural networks already complement one another in some of our current models, so why not add one with slightly different characteristics? In this world, where machines think more like us, the distinction between substitution and complementarity blurs.
Maybe we are being led astray by our metaphors here. We speak of man and machine, of robots taking our jobs. There is a very real possibility that these metaphors are wrong. A better set of metaphors may be found in music, and the image we could explore is that of the piano and the pianist. Does the piano substitute for the pianist? Of course not: the piano is an instrument with which the pianist makes music. Together they play and create and explore. We have been locked into a metaphor that sees the computer as a tool and a machine. We speak of Turing machines, but maybe we should speak of Turing instruments?
Substitution and complementarity melt away in that metaphor and leave room for a rather attractive image of us playing the world with technology.
Stockholm, October 2017.