
Digital legal persons? Fragments (Man / Machine XI and Identity / Privacy III)

The following are notes ahead of a panel discussion this afternoon, where we will consider the need for a legal structure for digital persons in the wake of the broader debate about artificial intelligence.

The idea of a digital assistant seems to suggest a world in which we will see new legal actors. These actors will buy, order, negotiate and represent us in different ways, and so will have a massive impact on the emerging legal landscape. How do we approach this in the best possible way?

One strawman suggestion would be to introduce a new legal category alongside natural and legal persons – people and companies – for digital persons. The construct could be used to answer questions like:

  • What actions can a digital person perform on behalf of another person and how is this defined in a structured way?
  • How is the responsibility of the digital person divided across the four Aristotelian causes? Hardware error, software error, coder error and objective error all seem to point to different responsible actors behind the digital person. Hardware manufacturers would be responsible for malfunctions, software producers for errors in the software, and coders for errors that could not be seen as falling within the scope of the software companies; finally, the one asking the assistant to perform a task would be responsible for giving it a clearly defined task and objective.
  • In n-party interactions between digital persons, where failures are complex, who is then responsible?
  • Should responsibility fall preferentially on humans or on digital persons?
  • What legal rights and legal capacities does a digital person have? This one may still seem to be in the realm of science fiction – but remember that legal rights can also include the right to incur a debt on behalf of a non-identified actor, and we may well see digital persons that perform institutional tasks rather than just representative tasks.
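To make the first two questions above concrete, here is a minimal sketch of what a "structured" definition of a digital person's mandate, and a mapping from the four failure categories to responsible actors, might look like. All the names, fields and limits are hypothetical illustrations, not a proposal for an actual legal schema.

```python
from dataclasses import dataclass
from enum import Enum

class Cause(Enum):
    """The four failure categories sketched above, loosely after Aristotle."""
    HARDWARE = "hardware error"
    SOFTWARE = "software error"
    CODER = "coder error"
    OBJECTIVE = "objective error"

# A hypothetical mapping from cause of failure to the responsible actor.
RESPONSIBLE = {
    Cause.HARDWARE: "hardware manufacturer",
    Cause.SOFTWARE: "software producer",
    Cause.CODER: "coder",
    Cause.OBJECTIVE: "principal",  # the one who defined the task and objective
}

@dataclass
class Mandate:
    """A structured definition of what a digital person may do on whose behalf."""
    principal: str              # the natural or legal person represented
    permitted_actions: set      # e.g. {"order", "negotiate"}
    spending_limit: float       # cap on debts incurred on the principal's behalf

    def may(self, action: str, amount: float = 0.0) -> bool:
        """Check an action against the mandate before it is performed."""
        return action in self.permitted_actions and amount <= self.spending_limit

# Usage: an assistant acting for a (hypothetical) principal "Alice".
mandate = Mandate("Alice", {"order", "negotiate"}, spending_limit=500.0)
```

The point of the sketch is only that both the mandate and the division of responsibility can be stated explicitly and checked mechanically, which is what a legal construct for digital persons would seem to require.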

There are multiple other questions here as well that would need to be examined more closely. There are also questions that can be raised about the idea itself, and that complicate things somewhat. Here are a few that occur to me.

Dan Dennett has pointed out that one challenge with artificial intelligence is that we are building systems that have amazing competence without the corresponding comprehension. Is comprehension not a prerequisite for legal capacity and legal rights? Perhaps not, but we would do well to examine the nature of legal persons – of companies – when we dig deeper into the need for digital persons in law.

What is a company? It is a legal entity defined by a founding document of some kind, with a set of responsible natural persons identified clearly under the charter and operations of that company. In a sense that makes it a piece of software. A legal person, as identified today, is at least an information processing system with human elements. It has no comprehension as such (in fact legal persons are reminiscent of Searle’s Chinese room in a sense: they can act intelligently without us being able to locate the intelligence in any specific place in the organization). So – maybe we could say that the law already recognizes algorithmic persons, because that is exactly what a legal entity like a company is.

So, you can have legal rights and legal capacity based on a system of significant competence but without individual comprehension. The comprehension in the company resides in the specific institutions where responsibility is located, e.g. the board. The company is held responsible for its actions through holding the board responsible, and the board is made up of natural persons – so maybe we could say that legal persons have derived legal rights, responsibilities and capacity?

Perhaps, but it is not crystal clear. In the US there is an evolving notion of corporate personhood that actually situates the rights and responsibilities within the corporation as such, and affords it constitutional protection. At the center of this debate in the last few years has been the issue of campaign finance, and Citizens United.

At this point it seems we could suggest that the easiest way to deal with the issue of digital persons would be to simply incorporate digital assistants and AIs as they take on more and more complex tasks. Doing this would also allow for existing insurance schemes to adapt and develop around digital persons, and would resolve many issues by “borrowing” from the received case law.

Questions around free expression for digital assistants would be resolved by reference to Citizens United, for example, in the US. Now, let’s be clear: this would be tricky. In fact, it would mean, arguably, that incorporated bot networks had free speech rights, something that flies in the face of how we have viewed election integrity and fake news. But incorporation would also place duties on these digital persons in the shape of economic reporting, transparency and the possibility of legal dissolution if there was illegal behavior on the part of the digital persons in question. Turning digital persons into property would also allow for a market in experienced neural networks in a way that could be intriguing to examine more closely.

An interesting task, here, would also be to examine how rights would apply – such as privacy – to these new corporations. Privacy, purely from an instrumental perspective here, would be important for a digital person to be able to conceal certain facts and patterns about itself to retain the ability to act freely and negotiate. Is there, then, such a thing as digital privacy that is distinct from natural privacy?

This is, perhaps then, a track worth exploring more – knowing full well the complexities it seems to imply (not least the proliferation of legal persons and what that would do to existing institutional frameworks).

Another, separate, track of investigation would be to look at a different concept – digital agency. Here we would not focus on the assistants as “persons”, but would instead admit that the “person” framing flows only from analogy and not from any closer analysis. When we speak of artificial intelligence as a separate thing, as some entity, we are lazily following along with a series of unchallenged assumptions. The more realistic scenarios are all about augmented intelligence, and so about an extended penumbra of digital agency on top of our own human agency, and the real question then becomes one about how we integrate that extended agency into our analysis of contract law, tort law and criminal law.

There is – we would say – no such thing as a separate digital person, but just a person with augmented agency, and the better analysis would be to examine how that can be represented well in legal analysis. This is no small task, however, since a more and more networked agency dissolves the idea of legal personhood to a large degree, in a way that is philosophically interesting.

Much of the legal system has required the identification of a responsible individual. Where no such individual could be identified, no one has been held responsible, even if it is quite possible to say that a class of people, or a network, carries distributed responsibility. We have, for classical liberal reasons, been hesitant to accept any criminal judgment based on joint responsibility in cases where the defendants identify each other as the real criminal. There are many different philosophical questions that need to be examined here – starting with the differences between augmented agency, digital agency, individual agency, networked agency, collective agency and similar concepts. Other issues revolve around whether we believe that we can pulverize legal rights and responsibility: can we say that someone is 0.5451 responsible for a bad economic decision? A distribution of responsibility equal to the probability that you should have caught the error multiplied by the cost for you to do so would introduce an ultra-rational approach to legal responsibility that would, perhaps, be fairer from an economic standpoint, but more questionable in criminal cases.
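The probability-times-cost distribution mentioned above can be made concrete with a toy calculation. The actors and numbers below are entirely hypothetical, and the normalisation (so that the shares sum to one) is an added assumption, not part of the original formula.

```python
# Each actor's raw responsibility is taken, per the formula above, as
# (probability they should have caught the error) x (their cost of catching it),
# then normalised so the fractional shares sum to 1.
actors = {
    # name: (p_should_have_caught, cost_of_catching) -- hypothetical values
    "board":     (0.9, 0.2),
    "auditor":   (0.7, 0.5),
    "assistant": (0.4, 0.1),
}

raw = {name: p * cost for name, (p, cost) in actors.items()}
total = sum(raw.values())
shares = {name: value / total for name, value in raw.items()}
```

Even this toy version shows the oddity of the approach: responsibility becomes a continuous quantity attached to everyone at once, which is exactly the pulverization of responsibility that sits uneasily with criminal law.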

And where an entire network has failed a young person subsequently caught for a crime – could one sentence the entire network? Are there cases where we are all somewhat responsible because of our actions or inactions? The dissolution of agency raises questions an order of magnitude more complex than simply introducing a new kind of person, but it is still an intriguing avenue to explore.

As the law of artificial intelligence evolves, it is also interesting to take its endpoint into account. If we assume that we will one day reach artificial general intelligence, then what we will most likely have done is create something towards which we have what Wittgenstein called an attitude towards a soul. At that point, any such new entities are, in a legal sense, likely human if we interact with them as human. And then no legal change at all is needed. So what do we say about the intermediate stages and steps, and the need for a legal evolution that ultimately – we all recognize – will just bring us back to where we are today?

 
