Philosophy

Agency and autonomy I: Agency and an attitude to a soul

The notion of agency is essential to understanding our society. If we cannot say who did something, or what it means to be the one actually acting in a specific case, then all of the language games of legal liability, contractual freedom and intellectual property – to name but a few subjects – falter and fail. Agency lies at the core of our legal philosophies; it is a concept so deeply entrenched that it is easy to miss.

What, then, does it mean to be an agent, to act, to have agency? There is no simple answer here, but there is a simplistic one: we believe that we all act with agency, and that whatever we aim towards, whatever we will, is what we should be held responsible for. It is also fairly obvious that we never hold artefacts or systems responsible for their actions. Indeed, we do not think that systems act at all; they simply function.

We could make an observation here about language games and action. Wittgenstein captures the essence of action in saying that “my attitude towards him is an attitude towards a soul”, and in that simple sentence he also captures a lot of the complexity around agency and intention. We treat as responsible those beings towards which we have an attitude to a soul. If there is no soul, there can be no agency, and hence no responsibility. We do not arrest the machine that kills a worker. We examine it for flaws, and if necessary we fix it, but when looking for who is responsible we look to whoever was charged with keeping the machine working safely. Indeed, we even hesitate to say that the machine “killed” someone, because killing implies an intention a machine lacks.

This is about to change, however. We are now entering an age where systems are becoming so complex as to effectively have some agency, or at least to act as delegates for our agency. Consider an automated system for evaluating university applications. Such a system will decline or approve applications. Now, if an application is denied erroneously we still do not fire the machine, but look to the programming and possibly examine the implementation of the software used. But what if the machine is learning, if it is evolving?

The problem here may be one of agency and states. We assume that if the original state of the machine is set by someone, then that someone is where we would look for agency. Now, what if we have a machine that evolves a system for handling applications, and where every state S(1)…S(N) after the initial one is produced by the software itself? Do we then hold the system responsible? Reset it if it has “learned the wrong things”? Or do we turn to whoever set the initial state S(0) and then designed the algorithm by which subsequent states were evolved? Our attitude to a soul seems to hinge not only on individual states of a system, but on something else.
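The division of labour described above can be sketched in a few lines of code. The sketch below is purely illustrative, and every name in it is hypothetical: a designer fixes only the initial state S(0) and an update rule, while every later state S(1)…S(N) is computed by the system itself, which is precisely what makes the locus of agency hard to place.

```python
def evolve(initial_state, update, steps):
    """Return the sequence of states S(0)..S(steps).

    The designer supplies only `initial_state` (S(0)) and `update`;
    each subsequent state is produced by the system from its own
    previous state.
    """
    states = [initial_state]
    for _ in range(steps):
        states.append(update(states[-1]))
    return states

# Toy example: a hypothetical admissions system that adjusts its own
# acceptance threshold over time. The designer never sets S(1)..S(N)
# directly, only the rule by which they emerge.
s0 = {"threshold": 0.5}

def update(state):
    # A stand-in for "learning": the system nudges its own threshold.
    return {"threshold": state["threshold"] * 0.9}

history = evolve(s0, update, steps=3)  # S(0), S(1), S(2), S(3)
```

The point of the sketch is not technical but conceptual: after step zero, no human hand has touched any individual state, yet every state is a deterministic consequence of choices a human made.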

Agency remains elusive. But the importance of agency is not. It seems quite clear that agency is the basis on which we design legal systems of responsibility and liability, and that we assign moral value to acts in a way that only works when the object of our judgment is something towards which, or whom, we have an attitude to a soul. Now, can we ever have an attitude to a soul towards a computational system? If not, how do we meet the challenge of ever more autonomous systems?
