
Autonomy, technology and prediction I: some conceptual remarks

“How would you feel if a computer could predict what you would buy, how you would vote and what kinds of music, literature and food you would prefer with an accuracy greater than your partner’s?” Versions of this question have been thrown at me in different fora over the last couple of months. It contains much to be unpacked, and it turns out to be a really interesting entry point into a philosophical analysis of autonomy. Here are a few initial thoughts.

We don’t want to be predictable. There is something curious to me about the negative charge of that quality. When we do praise predictability, we call it reliability, not predictability. Reliability is a relational concept: we feel we can rely on someone. Predictability, I think, has nothing to do with relationships at all. If you are predictable, you are in some sense a thing, a machine, a simple system. Predictable people lose some of their humanity. Take an example from popular culture: the hosts in Westworld. They are caught in loops that make them easy to predict, and in a key scene Dr Ford expresses his dislike for humanity by saying that the same applies to us: we, too, are caught in our loops.

The flip side, of course, is that no one would want to be completely unpredictable. Someone who at any point may throw themselves out the window, start screaming, steal a car or disappear into the wilderness to…
