Books: Semiosis by Sue Burke

Just finished this excellent and surprising science fiction book. It explores several different themes – our ability to start anew on a new planet, our inherent nature, our relationship to nature and plants (!) and the growing suspicion that we are always doing someone else’s bidding. It is also beautifully written, with living characters and original ideas.

One of the themes that will stay with me is how nature always plays a dominance game, and that the Darwinian struggle in some way is a ground truth that we have to understand and relate to. I have always felt somewhat uneasy with that conclusion, but I think that is ultimately because there is a mono-semiosis assumption there: all things must be interpreted in light of this fact. They need not be, and Burke highlights how dominance strategies may evolve into altruistic strategies, almost in an emergent fashion. I found that striking, and important.

Overall, we should resist the notion that there are ground truths that are more true than other things; truth is a coherence space of beliefs and interpretations. Not in a postmodern way, but in a much more complicated way — this is why I often return to the Wittgensteinian notion of a “form of life”. Only within that can sense be made of anything.

(Is this not also then a “ground truth”? You could make that argument, I suppose, but at some point you reach not truths but the event horizon of axiomatic necessity. We are not infinite and cannot extend reason infinitely.)

So – a recommended read, and an interesting set of issues and questions.

Posted in books, Read

Computational vs Biological Thinking (Man / Machine XII)

Our study of thinking has so far been characterised by a need to formalize thinking. Ever since Boole’s “Laws of Thought”, the underlying assumption and metaphor for thinking has been mathematical or physical – even mechanical and always binary. Logic has been elevated to the position of pure thought, and we have even succumbed to thinking that if we deviate from logic or mathematics in our thinking, then that is a sign that our thinking is flawed and biased.

There is great value to this line of study and investigation. It allows us to test our own thinking in a model and evaluate it from the perspective of a formal model for thinking. But there is also a risk associated with this project, a risk that may become more troubling as our surrounding world becomes more complex, and it is this: that we neglect the study of biological thinking.

One way of framing this problem is to say that we have two different models of thinking: computational and biological. The computational is mathematical and follows the rules of logic – and the biological is different: it forces us to ask questions about how we think that are taken for granted in computational thinking.

Let’s take a very simple example – the so-called conjunction fallacy. The simplest rendition of this fallacy is a case often called “Linda the bank teller”.

This is the standard case:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

Linda is a bank teller.

Linda is a bank teller and is active in the feminist movement.

https://en.wikipedia.org/wiki/Conjunction_fallacy

What computational thinking tells us is that the first proposition can never be less probable than the second. This follows from the conjunction rule: the probability of “A and B” is p × q, where p is the probability of A and q is the probability of B given A, and p × q can never exceed p – it is strictly smaller whenever q is less than 1.
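
A minimal arithmetic sketch of the conjunction rule, with probabilities invented purely for illustration:

```python
# Conjunction rule: P(A and B) = P(A) * P(B given A), which can never exceed P(A).
# The numbers below are invented for illustration only.
p_teller = 0.05                  # P(Linda is a bank teller)
p_feminist_given_teller = 0.90   # P(active in the feminist movement | bank teller)

p_teller_and_feminist = p_teller * p_feminist_given_teller

print(p_teller)                  # 0.05
print(p_teller_and_feminist)     # 0.045 -- the conjunction can only be smaller or equal
assert p_teller_and_feminist <= p_teller
```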

Yet, a surprising number of people seem to think that it is more likely that Linda is a bank teller and active in the feminist movement. Are they wrong? Or are they just thinking in a different mode?

We could argue that they are simply chunking the world differently. The assumption underlying computational thinking is that it is possible to formalize the world into single statement propositions and that these formalizations are obvious. We thus take the second statement to be a compound statement – p AND q – and so we end up saying that it is necessarily less probable than just p. But we could challenge that and simply say that the second proposition is as elementary as the first.

What is at stake here is the idea of atomistic propositions or elementary statements. Underlying the idea of formalized propositions is the idea that there is a hierarchy of statements or propositions starting from “single fact”-propositions like “Linda is a bank teller” and moving on to more complex compound propositions like “Linda is a bank teller AND active in the feminist movement”.

Computational thinking chunks the world this way, but biological thinking does not. One way to think about it is to say that for computational thinking a proposition is a statement about the state of affairs in the world for a single variable, whereas for biological thinking it is a statement about the state of affairs for multiple related variables that are not separable nor possible to chunk into individuals.

What sets up the state space we are asked to predict is the premises, and they define that space as one that contains facts about someone’s activism. The premises determine the chunking of the state space, and the proposition “Linda is a bank teller and active in the feminist movement” is a singular, elementary proposition in the state space set up by the premises — not a compound statement.

What we must challenge here is the idea that chunking state spaces into elementary propositions is the same as chunking them into the smallest possible propositions. For computational thinking this holds true – but not for biological thinking.

The result of this line of arguing is intriguing: it suggests that what is commonly identified as a bias here is in fact only a bias if you assume that computational thinking is the ideal to which we are all to be held — but that assumption is itself a value judgment. Why is one way of chunking the state space better than another?

Another version of this argument is to say that the premises set up a proposition chunk that contains a statement about activism, so that the suppressed second part of “Linda is a bank teller” is “and NOT active in the feminist movement”, and that part cannot be excluded. That you do not write it out does not mean that the chunk does not contain it: the premises set up a chunking in which every elementary proposition contains a statement about activism, and that is the natural chunking of the state space we are asked to predict.

The real failure, then, is to assume that “Linda is a bank teller” is the most probable statement – and that is not a failure of bias as such, but an interesting kind of thinking-frame failure: the inability to move away from the computational thinking instilled through study and application.

It has been argued that economists become more “rational” than others, that they are infected with mathematical rationality through their studies. Maybe there is a larger distortion in psychology, where the tests themselves are infected with computational thinking? Are there other biases that are just examples of being unable to move out of the biological frame of thinking?

Posted in Philosophy of Questions, Philosophy of thinking

Digital legal persons? Fragments (Man / Machine XI and Identity / Privacy III)

The following are notes ahead of a panel discussion this afternoon, where we will discuss the need for a legal structure for digital persons in the wake of the general discussion of artificial intelligence. 

The idea of a digital assistant seems to suggest a world in which we will see new legal actors. These actors will buy, order, negotiate and represent us in different ways, and so will have a massive impact on the emerging legal landscape. How do we approach this in the best possible way?

One strawman suggestion would be to introduce a new legal category, in addition to natural persons (people) and legal persons (companies): digital persons. The construct could be used to answer questions like:

  • What actions can a digital person perform on behalf of another person and how is this defined in a structured way?
  • How is the responsibility of the digital person divided across the four Aristotelian causes? Hardware error, software error, coder error and objective error all seem to suggest different responsible actors behind the digital person: hardware manufacturers would be responsible for malfunctions, software producers for errors in the software, and coders for errors that could not be seen as falling within the scope of the software company — finally, the one asking the assistant to perform a task would be responsible for giving it a clearly defined task and objective (see the sketch after this list).
  • In n-person interactions between digital persons with complex failures, who is then responsible?
  • Is there a preference for placing responsibility on the human or on the digital person?
  • What legal rights and legal capacities does a digital person have? This one may still seem to be in the realm of science fiction – but remember that by legal rights we can also mean the right to incur a debt on behalf of a non-identified actor, and we may well see digital persons that perform institutional tasks rather than just representative tasks.
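
As a purely hypothetical sketch of what a structured allocation of responsibility along these lines might look like – the categories and actor names are invented here, not part of any existing scheme:

```python
# Hypothetical sketch: mapping failure categories for a digital person's error
# to a presumptively responsible actor. Categories and actors are illustrative
# assumptions only, loosely following the division suggested above.
FAILURE_RESPONSIBILITY = {
    "hardware_error": "hardware_manufacturer",   # malfunctioning hardware
    "software_error": "software_producer",       # defects in the software itself
    "coder_error": "individual_coder",           # errors outside the producer's scope
    "objective_error": "principal",              # the person who defined the task
}

def responsible_party(failure_category: str) -> str:
    """Return the actor presumed responsible for a given failure category."""
    return FAILURE_RESPONSIBILITY.get(failure_category, "unresolved")

print(responsible_party("software_error"))  # software_producer
```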

There are multiple other questions here as well that would need to be examined more closely. Now, there are also questions that can be raised about this idea, and that seem to complicate things somewhat. Here are a few of the questions that occur to me.

Dan Dennett has pointed out that one challenge with artificial intelligence is that we are building systems that have amazing competence without the corresponding comprehension. Is comprehension not a prerequisite for legal capacity and legal rights? Perhaps not, but we would do well to examine the nature of legal persons – of companies – when we dig deeper into the need for digital persons in law.

What is a company? It is a legal entity defined by a founding document of some kind, with a set of responsible natural persons identified clearly under the charter and operations of that company. In a sense that makes it a piece of software. A legal person, as identified today, is at least an information processing system with human elements. It has no comprehension as such (in fact, legal persons are reminiscent of Searle’s Chinese room in a sense: they can act intelligently without us being able to locate the intelligence in any specific place in the organization). So – maybe we could say that the law already recognizes algorithmic persons, because that is exactly what a legal entity like a company is.

So, you can have legal rights and legal capacity based on a system of significant competence but without individual comprehension. The comprehension in the company is located in the specific institutions where the responsibility is located, e.g. the board. The company is held responsible for its actions through holding the board responsible, and the board is made up of natural persons – so maybe we could say that legal persons have derived legal rights, responsibilities and capacity?

Perhaps, but it is not crystal clear. In the US there is an evolving notion of corporate personhood that actually situates the rights and responsibilities within the corporation as such, and affords it constitutional protection. At the center of this debate in the last few years has been the issue of campaign finance, and Citizens United.

At this point it seems we could suggest that the easiest way to deal with the issue of digital persons would be to simply incorporate digital assistants and AIs as they take on more and more complex tasks. Doing this would also allow for existing insurance schemes to adapt and develop around digital persons, and would resolve many issues by “borrowing” from the received case law.

Questions around free expression for digital assistants would be resolved by reference to Citizens United, for example, in the US. Now, let’s be clear: this would be tricky. In fact, it would arguably mean that incorporated bot networks had free speech rights, something that flies in the face of how we have viewed election integrity and fake news. But incorporation would also place duties on these digital persons in the shape of economic reporting, transparency and the possibility of legal dissolution if there was illegal behavior on the part of the digital persons in question. Turning digital persons into property would also allow for a market in experienced neural networks in a way that could be intriguing to examine more closely.

An interesting task, here, would also be to examine how rights would apply – such as privacy – to these new corporations. Privacy, purely from an instrumental perspective here, would be important for a digital person to be able to conceal certain facts and patterns about itself to retain the ability to act freely and negotiate. Is there, then, such a thing as digital privacy that is distinct from natural privacy?

This is perhaps, then, a track worth exploring more – knowing full well the complexities it seems to imply (not least the proliferation of legal persons and what that would do to existing institutional frameworks).

Another, separate, track of investigation would be to look at a different concept – digital agency. Here we would not focus on the assistants as “persons”, but would instead admit that the person framing flows only from analogy, not from any closer analysis. When we speak of artificial intelligence as a separate thing, as some entity, we are lazily following along with a series of unchallenged assumptions. The more realistic scenarios are all about augmented intelligence, and so about an extended penumbra of digital agency on top of our own human agency, and the real question then becomes one about how we integrate that extended agency into our analysis of contract law, tort law and criminal law.

There is – we would say – no such thing as a separate digital person, but just a person with augmented agency, and the better analysis would be to examine how that can be represented well in legal analysis. This is no small task, however, since a more and more networked agency dissolves the idea of legal personhood to a large degree, in a way that is philosophically interesting.

Much of the legal system has required the identification of a responsible individual. Where no such individual could be identified, no one has been held responsible, even if it is quite possible to say that there is a class of people or a network that carries distributed responsibility. We have, for classical liberal reasons, been hesitant to accept any criminal judgment that is based on joint responsibility in cases where the defendants identify each other as the real criminal. There are many different philosophical questions that need to be examined here – starting with the differences between augmented agency, digital agency, individual agency, networked agency, collective agency and similar concepts. Other issues would revolve around whether we believe that we can pulverize legal rights and responsibility and say that someone is 0.5451 responsible for a bad economic decision. A distribution of responsibility that equates to the probability that you should have caught the failure multiplied by the cost for you to do so would introduce an ultra-rational approach to legal responsibility that would, perhaps, be more fair from an economic standpoint, but more questionable in criminal cases.
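
A toy rendering of that “pulverized responsibility” idea, reading the suggestion literally – each actor’s share is the probability that they should have caught the failure multiplied by their cost of doing so, normalized so the shares sum to one. All actors and numbers are invented for illustration:

```python
# Toy "pulverized responsibility" calculation. Actors and numbers are invented.
actors = {
    "assistant_vendor": {"p_catch": 0.7, "cost": 0.2},
    "network_operator": {"p_catch": 0.4, "cost": 0.5},
    "user":             {"p_catch": 0.1, "cost": 0.9},
}

# Literal reading: weight = probability of catching the failure x cost of doing so.
weights = {name: a["p_catch"] * a["cost"] for name, a in actors.items()}
total = sum(weights.values())
shares = {name: weight / total for name, weight in weights.items()}

for name, share in shares.items():
    print(f"{name}: {share:.4f} responsible")
```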

And where an entire network has failed a young person subsequently caught for a crime – could one sentence all of the network? Are there cases where we all are somewhat responsible because of actions or inactions? The dissolution of agency asks an order of magnitude more complex questions than simply focusing on the introduction of a new person, but it is still an intriguing avenue to explore.

As the law of artificial intelligence evolves, it is also interesting to take its endpoint into account. If we assume that we will one day reach artificial general intelligence, then what we will most likely have done is to create something towards which we have what Wittgenstein called an attitude towards a soul. At that point, any such new entities are likely, in a legal sense, human if we interact with them as human. And then no legal change at all is needed. So what do we say about the intermediate stages and steps, and the need for a legal evolution that ultimately – we all recognize – will just bring us back to where we are today?


Posted in Artificial Intelligence, Legal philosophy, Man / Machine, The man / machine series

The free will to make slightly worse choices (Man / Machine XI)

In his chapter on intellectronics – his word for what most closely resembles artificial intelligence – Stanislaw Lem suggests an insidious way in which the machine could take over. It would not be, he says, because it wants to terrorize us, but more likely because it will try to be helpful. Lem develops the idea of the control problem, and the optimization problem, decades before they were rediscovered by Nick Bostrom and others, and he runs through the many different ways in which a benevolent machine may simply manipulate us in order to get better results for us.

This, however, is not the worst scenario. At the very end of the chapter, Lem suggests something much more interesting, and – frankly – hilarious. He says that another, more credible, version of the machines taking over would look like this: we develop machines that are simply better at making decisions for us than we would be at making those very same decisions ourselves.

A simple example: your personal assistant can help you book travel, and knowing your preferences, and being able to weigh them against those of the rest of the family, the assistant has always booked top-notch vacations for you. Now, you crave your personal freedom, so you book it yourself, and naturally – since you lack the combinatorial intelligence of an AI – the result is worse. You did not enjoy it as much, and the restaurants were not as spot on as they usually are. The book stores you found were closed, or not very interesting, and of the three museums you went to, only one really captured all of the family’s interests.

But you made your own decision. You exercised your free will. But what happens, says Lem, when that free will is nothing but the free will to make decisions that are always slightly worse than the ones the machine would have made for you? When your autonomy always comes at the cost of less pleasure? That – surmises Lem – would be a tyranny as insidious as any control environment or Orwellian surveillance state.

A truly intriguing thought, is it not?

*

As we examine it closer we may want to raise objections: we could say that making our own decisions, exercising our autonomy, in fact always means that we enjoy ourselves a little bit more, and that there is utility in the choice itself – so we will never end up with a benevolent dictator machine. But does that ring true? Is it not rather the case that a lot of people feel that there is real utility in not having to choose at all, as long as they feel that they could have made a choice? Have we not seen sociological studies arguing that we live in a society that imposes so many choices on us that we all feel stressed by the plethora of alternatives?

What if the machine could tell you which breakfast cereal, out of the many hundreds on the shelf in the supermarket, will taste best for you and at the same time be healthy? Would it not be great not to have to choose?

Or is there value in self-sabotage that we are neglecting to take into account here? That thought – that there is value in making worse choices, not because we exercise our will, but because we do not like ourselves, and are happy to be unhappy – well, it seems a little stretched. For sure, there are people like this – but as a general rule I don’t find that argument credible.

Well, we could say, our preferences change so much that it is impossible for a machine to know what we will want tomorrow – so the risk is purely fictional. I am not so sure that is true. I would suggest we are much more patterned than we like to believe. We live, as Dr Ford in Westworld notes, in our little loops – just like his hosts. We are probably much more predictable than we would like to admit, in a large set of cases – although not all. It is unlikely, admittedly, that a machine would be better at making life choices around love, work and career – these are choices that are hard to establish a pattern in (in fact, we arguably only establish those patterns in retrospect, when we tell ourselves autobiographical stories about our lives).

There is also the possibility that the misses would be so unpleasant that the hits would not matter. This is an interesting argument, and I think there is something to it. If you knew that your favorite candy tasted fantastic 9 times out of 10 and tasted like garbage every tenth time, without any chance of predicting when that would be, would you still eat it? Where would you draw the line? Every second piece of candy? 99 out of 100? There is such a thing as disappointment cost, and if the machine is right on the money in 999 out of 1000 cases — is the miss such that we would stop using it, or prefer our own slightly worse choices? In the end – probably not.
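
A back-of-the-envelope version of that trade-off, with utilities and rates invented purely for illustration: even a fairly heavy disappointment cost on the rare misses leaves the machine’s 999-in-1000 hit rate ahead of our own slightly worse but reliable choices.

```python
# Expected utility with a "disappointment cost" on misses. All numbers invented.
hit_rate = 0.999        # the machine is right on the money 999 times out of 1000
u_hit = 1.0             # enjoyment of a spot-on recommendation
u_miss = -5.0           # a miss, including the extra sting of disappointment
u_own_choice = 0.8      # our own, slightly worse but predictable, choice

machine_expected_utility = hit_rate * u_hit + (1 - hit_rate) * u_miss
print(machine_expected_utility)                  # ~0.994
print(machine_expected_utility > u_own_choice)   # True -- in the end, probably not
```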

*

The free will to make slightly worse choices. That is one way in which our definition of humanity could change fundamentally in a society with thinking machines.

Posted in Artificial Intelligence, Man / Machine, The man / machine series

Stanislaw Lem, Herbert Simon and artificial intelligence as broad social technology project (Man / Machine X)

Why do we develop artificial intelligence? Is it merely because of an almost Faustian curiosity? Is it because of an innate megalomania that suggests that we could, if we wanted to, become gods? The debate today is ripe with examples of risks and dangers, but the argument for the development of this technology is curiously weak.

Some argue that it will help us with medicine and improve diagnostics, others dutifully remind us of the productivity gains that could be unleashed by deploying these technologies in the right way, and some even suggest that there is a defensive aspect to the development of AI — if we do not develop it, it will lead to an international imbalance where the nations that have AI will be akin to those nations that have nuclear capabilities: technologically superior and capable of dictating the fates of the countries that lag behind (some of this language is emerging in the ongoing geopoliticization of artificial intelligence between the US, Europe and China).

Things were different in the early days of AI, back in the 1960s, and the idea of artificial intelligence was actually more connected then with the idea of a social and technical project, a project that was a distinct response to a set of challenges that seemed increasingly serious to writers of that age. Two very different examples support this observation: Stanislaw Lem and Herbert Simon.

Simon, in attacking the challenge of information overload – or information wealth, as he prefers to call it – suggests that the only way we will be able to deal with the complexity and rich information produced in the information age is to invest in artificial intelligence. The purpose of that, to him, is to help us learn faster – and if we take into account Simon’s definition of learning, which is very close to classical Darwinian adaptation, we realize that for him the development of artificial intelligence was a way to ensure that we can continue to adapt to an information-rich environment.

Simon does not call this out, but it is easy to read between the lines and see what the alternative is: a growing inability to learn and adapt that generates increasing costs and vulnerabilities – the emergence of a truly brittle society that collapses under its own complexity.

Stanislaw Lem, the Polish science fiction author, suggests a very similar scenario (in his famously unread Summa Technologiae), but his is more general. We are, he argues, running out of scientists, and we need to ensure that we can continue to drive scientific progress, since the alternative is not stability but stagnation. He views the machine of progress as a homeostat that needs to be kept in constant operation in order to produce, in 30-year increments, a doubling of scientific insights and discoveries. Even if, he argues, we force people to train as scientists, we will not be able to grow their numbers fast enough to meet the need for continued scientific progress.

Both Lem and Simon suggest the same thing: we are facing a shortage of cognition, and we need to develop artificial cognition or stagnate as a society.

*

The idea of a scarcity or shortage of cognition as a driver of artificial intelligence is much more fundamental than any of the ideas we quickly reviewed at the beginning. What we find here is an existential threat against mankind, and a need to build a technological response. The lines of thought, the structure of the argument, almost remind us of the environmental debate: we are exhausting a natural resource and we need innovation to help us continue to develop.

One could imagine an alternative: if we say that we are running out of cognition, we could argue that we need to ensure the analogue of energy efficiency. We need cognition efficiency. That view is not completely insane, and in a certain way that is what we are developing through stories, theories and methods in education. The connection with energy is also quite direct, since artificial intelligence will consume energy as it develops. A lot of research is currently being directed into the question of the energy consumption of computation. There is a boundary condition here: a society that builds out its cognition through technology does so at the cost of energy at some level, and the cognition / energy yield will become absolutely essential. There is also a more philosophical point around all of this, and that is the question of renewable cognition, sustainable cognition.

Cognition cost is a central element in understanding Simon’s and Lem’s challenge.

*

But is it true? Are we running out of cognition? How would you measure that? And is the answer really a technological one? What about educating and discovering the talent of the billions of people who today live in poverty, or without any chance of an education to grow their cognitive abilities? If you have 100 dollars – what buys you the most cognition (all other moral issues aside): investing in development aid or in artificial intelligence?

*

Broad social technological projects are usually motivated by competition, not by environmental challenges. One reason – probably not the dominating one, but perhaps a contributing factor nonetheless – that climate change seems to inspire so little action in spite of the threat is this: there is no competition at all. The whole world is at stake, and so nothing is at stake for anyone relative to anyone else. The conclusion usually drawn from that observation is that we should all come together. What ends up happening is that we get weak engagement from all.

Strong social engagement in technological development – what are the examples? The race for nuclear weapons, the race for the moon. In one sense the early conception of the project to build artificial intelligence was as a global, non-competitive project. Has it slowly changed to become an analogue of the space race? The way China is now approaching the issue is, to some, reminiscent of the Manhattan Project. [1]

*

If we follow that analogy a bit further — what comes next? What is the equivalent of the moon landing for artificial intelligence? Surely not the Turing test – it has been passed multiple times in multiple versions, and as such has lost a lot of its salience as a test of progress. What would then be the alternative? Is there a new test?

One quickly realizes that it probably is not the emergence of an artificial general intelligence, since that seems to be decades away, and a questionable project at best. So what would be a moon landing moment? Curing cancer (too broad, many kinds of cancer)? Eliminating crime (a scary target for many reasons)? Sustained economic growth powered by both capital investment strategies and deployment of AI in industry?

An aside: far too often we talk about moonshots without talking about what the equivalent of the moon landing would be. It is one thing to shoot for the moon, another to walk on it. Defined outcomes matter.

*

Summing up: we could argue that artificial intelligence was conceived of, early on, as a broad social project to respond to a shortage of cognition. It then lost that narrative, and today it is becoming more and more enmeshed in a geopolitical, competitive narrative. That will likely increase the speed with which a narrow set of applications develops, but there is still no single moon landing moment associated with the field that stands out as the object of competition between the US, the EU and China. But maybe we should expect the construction of such a moment in medicine, military affairs or economics? So far, admittedly, it has been games that have provided the defining moments – tic-tac-toe, chess, go – but what is next? And if there is no single such moment, what does that mean for the social narrative, the speed of development and the evolution of the field?


[1] https://www.technologyreview.com/s/609038/chinas-ai-awakening/

Posted in Artificial Intelligence, Man / Machine, Technology, The man / machine series

Law, technology and time (Law and Time I)

I just got a copy of the latest Scandinavian Studies in Law, no. 65. I contributed a small piece on law, technology and time — examining how the different ways in which time is made available by technology change the demands on law and legislation. It is a first sketch of a very big area, and something I aim to dig deeper into. I am very grateful for the chance to start laying out thoughts here, and especially so since it was on the occasion of celebrating that the Swedish Law and Informatics Research Institute is now 50 years young!

As I continue to think about this research project, I would like to think about things like “Long law” for contracts and legal rules that extend into the Long Now (-10 000 years to +10 000 years), as well as different new modes of time – concurrent time, sequenced time, etc. There may also be connections here to the average age of legal entities, the changing nature of law in cities and corporations, foundations and other similar ideas. I really like the idea of exploring time and law thoroughly and from a number of different angles.

Stay tuned.

Posted in Law and time

A small note on cost and benefit

I have picked up Cass Sunstein’s latest book on cost/benefit trade-offs, and am enjoying it. But it seems to me that there is a fundamental problem with the framing. The model being put forward is one in which we straightforwardly calculate costs and benefits for any choice and then make the right, informed and rational choice. Yet we know that this model breaks down in two significant cases – when the costs or the benefits become very large.

At that point, the probability is subsumed by the gravity of the cost or benefit and deemed unimportant. These decision spaces – let’s call them the “rights” space and the “risk” space – are spaces where we navigate in a mostly rule-based fashion, and where deontological and Kantian methods apply.

We will not calculate the benefit of sacrificing human lives, because people have a right to their own lives and the individual benefit of that is vast. We will not calculate the cost of a nuclear breakdown accurately, because if it happens it has such a great potential cost. Even if the probability is minuscule, and the expected cost and benefit could be calculated well, we don’t. Rationality breaks down at the event horizon of these two decision singularities.
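
A stylized illustration of the point, with made-up numbers: the expected-cost arithmetic is trivial to write down, yet it is precisely the calculation we refuse to let settle the matter when the potential cost is catastrophic.

```python
# Stylized expected-cost calculation for a low-probability, catastrophic event.
# All figures are made up; the point is only that the arithmetic is easy.
p_meltdown = 1e-6               # minuscule annual probability (assumed)
cost_meltdown = 5e11            # catastrophic cost if it happens (assumed)

expected_annual_cost = p_meltdown * cost_meltdown
print(expected_annual_cost)     # 500000.0 -- a number we could budget against,
                                # but not one we actually let decide the question
```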

Now, you could argue that this is just a human thing, and that we need to get over it. Or you could say that this is a really interesting characteristic of decision space and study it. I find that far fewer take the second approach, and so expose an interesting trait: rationality blindness – a striving for rationality that leads to a blindness to human nature.

If we were to develop a philosophy of decisions, one thing we would need to do is to show that not all decisions are the same. That there is a whole taxonomy of decisions that needs to be explicated, examined and explored. As this example shows, there are decisions that do not admit of a probability calculus in the normal way.

(Is this not Kahneman’s and Tversky’s project? No, it is in fact the opposite. Showing that the idea of decisional bias actually reveals a catalogue of different categories of decisions – not weaknesses in human rationality.)

Posted in The philosophy of decisions

Memory, Ricoeur, History, Identity, Privacy and Forgetting (Identity and Privacy II)

In the literature on memory it is almost mandatory to cite the curious case of the man who, after an accident, could remember no more than a few minutes of his life before resetting and then forgetting everything again. He had retained long-term memory from before the accident, but lacked the ability to form any new long-term memories at all.

His was a tragic case, and it is impossible to read about it and not be gripped by both a deep sorrow for the man, and a fear that something like this could happen to anyone close to us, or to ourselves. Memory is an essential part of identity.

The case also highlights a series of complexities in the concept of privacy that are interesting to consider more closely.

First, the obvious question is this: what does privacy mean for someone who has no long-term memory? There are the obvious answers – that he will still care about wearing clothes, that he will want to sleep in solitude, that there are conversations that he will want to have with some and not others – but does the lack of any long-term memory change the concept of privacy?

What this question brings out, I think, is that privacy is not a state, but a relationship. Not a new observation as such, but one often underestimated in the legal analysis of privacy-related problems. Privacy is a negotiation of narrative identity between individuals. That negotiation breaks down completely when one party has no long-term memory. We end up with a strange situation in which everyone around the person in question may feel that his or her privacy is being infringed upon, but no such infringement is felt or experienced by the subject himself. Privacy is, in this sense, perception.

This follows from our first observation, that identity is collective narration (that may be a pleonasm, how could narration be individual?) and that privacy is about the shaping of that story. When one lacks the ability to hold the story in memory, both identity and privacy fade out.

Second, the case asks an interesting question about privacy and time. We can bring that to a point and ask — how long is privacy? European legislation has a peculiar answer – it seems to argue that privacy is only held by natural, living persons, and that death is a breaking point where privacy no longer applies. But if there was ever a case for a right to extend beyond the end of life, privacy is probably a good candidate. Should it be possible to reveal all about an individual at the very moment of that person’s death? Why is death a relevant moment in the determination of the existence of the right at all? And what would a society look like that entertained eternal privacy? What shared history could such a society have?

We run into another aspect of privacy here – that it is limited by legitimate interests: journalism, art, literature. So in a very real sense, privacy cannot be used to protect against an unauthorized biography or against infringements on the story we tell about ourselves. This is also a peculiar thing; it seems to fly in the face of the realization that identity is story, and suggests that if anyone really tells a story about you through the established vehicles of storytelling, then you are defenseless from a privacy perspective. There is an inconsistency here, born out of the realization that storytelling may well be a value that is more important than privacy in our societies. That the value of history is greater than the value of privacy, and that the control over narrative ultimately needs to give way to the transformation of individual memory into history.

Time, memory, identity and history. All of them are essential in the language game of privacy, and need to be explored more deeply. Ricoeur’s thinking and ideas are key here, and his exploration of these themes more and more appears as a prolegomenon to any serious discussion of privacy.

Much of what has been written here has been written on the right to be forgotten, but that is just a narrow application of the body of thought that Ricoeur has offered on these themes. So we will need to return to this anew in a later post.

Posted in Identity / Privacy Series, Philosophy of Privacy

The Narrated Self (Identity and Privacy I)

The discussions and debates about privacy are key to trust in the information society. Yet our understanding of the concept of privacy is still in need of further exploration. This short essay is an attempt to highlight one aspect of the concept that seems to be crucial, and to offer a few observations about what we could conclude from studying this aspect.

Privacy is not a concept that can be studied in isolation. It needs to be understood as a concept strongly related to identity. Wittgenstein notes that doubt is impossible to understand without having a clear concept of belief, since doubt as a concept is dependent on first believing something. You have to have believed something to be able to doubt something. 

The same applies for privacy. You have to have an identity in order to have privacy, and in order to have that privacy infringed upon in some way. Theories of identity, then, are key to theories of privacy. 

So far nothing new or surprising. As we then turn to theories of identity, we find that there are plenty to choose from. Here are a few, eclectically collected, qualities of identity that I think are rather basic.

1. Identity is not a noun, but a verb: it exists not as a quality in itself but as a relationship with someone else. You find yourself strewn in the eyes of the Others, to paraphrase (badly) Heidegger. Your identity is constructed, changed and developed over time. A corollary of this is that if you were the last human in the universe you would have no identity. And you would not enjoy any privacy.

2. The means through which we create identity are simple, and were best laid out by philosopher Paul Ricoeur. We narrate our identity, we tell stories about ourselves, and others tell stories about us. That is how our identity is constituted. 

These two qualities then imply a few interesting observations about privacy. 

First, privacy is also relational: it is the negotiation of identity with different audiences and constituencies. At least this is how it has been. One of the key challenges with technology is that it flattens the identity landscape, unifying the islands of identity that you could previously enjoy. What once was a natural fragmentation of identity is flattened and clustered as the information sphere grows larger and information about us becomes more prevalent. Our ability to tell different stories to different people almost disappears.

An aside: this observation that privacy is the telling of different stories about ourselves has led some economists like Richard Posner to state that privacy enables lying, and so that transparency would be preferable, since it would allow people to minimize risk. The flaw in the argument is that it assumes that there is a single true identity, and that this identity is revealed in the flattening of the information space, and the transparency that this brings about. This is not necessarily true: there may not be any “true identity” in any meaningful way. Just as there is no absolute privacy. An infringement of privacy is not so much revealing a truth about you as negating your ability, your autonomy, in telling stories about yourself. 

Second, this means that any right to privacy is synonymous with a right to the narration of our identities. This is what several writers have observed when they have equated privacy and autonomy, I think, but the focus on autonomy easily devolves into a discussion about the autonomy of will, rather than the autonomy of identity narration. 

Third, a society with the strongest privacy protections would be one in which no one is allowed to narrate your identity other than yourself. It seems self-evident that this creates a tension with free expression in different ways, but it highlights the challenging and changing nature of privacy infringements in an age where everyone is telling stories about us on social media.

To sum up, then: privacy is a concept secondary to identity, and identity is best understood as the narratives of the self. Privacy then becomes the right to narrate yourself, to tell your own story. The political control and power over the stories we tell is a key problem in the study of the information society. One could even imagine a work written entirely focusing on the power over stories in a technological world, and such a work could encompass controversial content, fake news, hate speech, defamation, privacy and perhaps even copyright — we have here a conceptual model that allows us to understand and study our world from a slightly different vantage point. 

*

Posted in Identity / Privacy Series, Philosophy of Privacy

Sad songs (Notes on Culture I)

A cursory examination of the landscape of sad songs suggests that they fall into a number of categories: break-up songs, songs about missing someone, songs about falling apart — but the best ones probably mix all of these categories and are about the sudden loss of meaning. Think of “Disintegration” by The Cure, and its despair:

[…]But I never said I would stay to the end
I knew I would leave you and fame isn’t everything
Screaming like this in the hope of sincerity
Screaming it’s over and over and over
I leave you with photographs, pictures of trickery
Stains on the carpet and stains on the memory
Songs about happiness murmured in dreams
When we both of us knew how the end always is
How the end always is

How the end always is
How the end always is
How the end always is
How the end always is

A good sad song allows for the complete collapse of concepts and truths around us, and captures that feeling of semantic uncertainty, our inability to assign meaning to what is happening, our lack of pattern. There is something there – the lack of patterns, the inability to make sense of the world, and the feeling that meaning is seeping away.

I think one of the best examples of this feeling in general – a kind of Weltschmerz – is Nine Inch Nails’ “Right Where It Belongs”. Here the world is slipping away, the interpretations like claws on a smooth rock surface (this version is even scarier than the album one):

[…]What if all the world’s inside of your head?
Just creations of your own
Your devils and your gods all the living and the dead
And you really oughta know
You can live in this illusion
You can choose to believe
You keep looking but you can’t find the ones
Are you hiding in the trees?
What if everything around you
Isn’t quite as it seems?
What if all the world you used to know
Is an elaborate dream?
And if you look at your reflection
Is it all you want it to be?
What if you could look right through the cracks
Would you find yourself, find yourself afraid to see? 

The calmness with which the lyrics are delivered and the understated use of questions make the doubt all the more personal and close. As the song slides into the last verse it comes closer and is drowned in the noise of a concert in the background, and we are invited to share the doubt carefully constructed throughout the song.

A variation on this theme of uncertainty, but brought home to a much more personal setting and therefore so much worse in a sense, is found in the National’s “About Today” (this version is perhaps the best one – but beware, it is 8 minutes long). The lyrics sketch out, in the darkest possible way, the uncertainty – and it is a lack of certainty about exactly what the title says, about today. What happened, how it will affect us all, what it means for the future. The breakup is there, but radiating from it are the cracks and fault lines throughout our lives:

Today
You were far away
And I
Didn’t ask you why
What could I say
I was far away
You just walked away
And I just watched you
What could I say
How close am I
To losing you
Tonight
You just close your eyes
And I just watch you
Slip away
How close am I
To losing you
Hey, are you awake
Yeah I’m right here
Well can I ask you
About today
How close am I
To losing you
How close am I
To losing 

The haunting drummer’s rhythm and drifting violin just add to the uncertainty, the first beginnings of fear in the way the singer almost doesn’t dare to ask, but murmurs the words.

There is a difference between sad songs and songs of sorrow that is hard to articulate, but it can be clearly discerned in some of Nick Cave’s works. His “Push the Sky Away” is fundamentally a sad song:

[…]And if you feel you got everything you came for
If you got everything and you don’t want no more
You’ve got to just keep on pushing it
Keep on pushing it
Push the sky away
And some people say it’s just rock and roll
Ah but it gets you right down to your soul
You’ve got to just keep on pushing it
Keep on pushing it
Push the sky away
You’ve got to just keep on pushing it
Keep on pushing it
Push the sky away

This song is all the more horrible because it deals with a variation of the theme of meaninglessness – the feeling of being finished. All has been done. There is a certain kind of sadness that follows on completing complicated tasks or reaching one’s goals, and the sense one gets from this song is that for this one person that sadness has spilled over, and now everything seems finished. They got everything they came for. They don’t want no more.

The album following this was the one Cave made after a horrendous personal tragedy, and I find it almost impossible to listen to – because those songs are not sad, they are filled with sorrow – and it is not that they are too private, it is just that that sorrow is so real that it cuts through. “Girl in Amber” is one example.

Songs of sorrow are songs that seek to construct meaning; sad songs are about meaning slipping away. Songs of sorrow have thin strands of hope in them. Sad songs come from a point of hopelessness, of determinism. Songs of sorrow look backward, and sad songs look forward.

The grammar of sadness is fundamentally distinct from that of sorrow.

Posted in Notes on culture

Artificial selves and artificial moods (Man / Machine IX)

Philosopher Galen Strawson challenges the idea that we have a cohesive, narrative self that lives in a structurally robust setting, and suggests that for many, the self will be episodic at best and that there is no real experience of a self at all. The discussion of the self – from a stream of moments to a story to deep identity – is relevant in any discussion of artificial general intelligence for a couple of different reasons. Perhaps the most important one is that if we want to create something that is intelligent, or perhaps even conscious, we need to understand what in our human experience constitutes a flaw or a design inefficiency, and what is actually a necessary feature.

It is easy to suspect that a strong, narrative and cohesive self would be an advantage – and that we should aim for that if we recreate man in machine. That, however, underestimates the value of change. If our self is fragmented, scattered and episodic, it is able to navigate a highly complex reality much better. A narrative self would have to spend a lot of energy integrating experiences and events into a schema in order to understand itself. An episodic and fragmented self just needs to build islands of self-understanding, and these don’t even need to be coherent with each other.

A narrative self would also be very brittle, unable to cope with changes that challenge the key elements and conflicts in the narrative governing self-understanding. Our selves seem able to absorb even the deepest conflicts and challenges in ways that are astounding and even seem somewhat upsetting. We associate identity with integrity, and something that lacks strong identity feels undisciplined, unprincipled. But again: that seems a mistake – the real integrity is in your ability to absorb and deal with an environment that is ultimately not narrative.

We have to make a distinction here. Narrative may not be a part of the structure of our internal selves, but that does not mean that it is useless or unimportant. One reason narrative is important – and any AGI needs a strong capacity to create and manage narratives – is that narratives are tools, filters, through which we understand complexity. Narrative compresses information and reduces complexity in a way that allows us to navigate a world that is increasingly complex.

We end up, then, suspecting that what we need here is an intelligence that does not understand itself narratively, but can make sense of the world in polyphonic narratives that will both explain and organize that reality. Artificial narrativity and artificial self are challenges that are far from solved, and in some ways we seem to think that they will emerge naturally from simpler capacities that we can design.

This “threshold view” of AGI, where we accomplish the basic steps and then the rest emerges from them, is just one model among many, and arguably needs to be both challenged and examined carefully. Vernor Vinge notes, in one of his Long Now talks, that one way in which we may fail to create AGI is by not being able to “put it all together”. Thin slices of human capacity, carefully optimized, may not gel together to create a general intelligence at all – and may not form the basis for capacities like our ability to narrate ourselves and our world.

Back to the self: what do we believe the self does? Dennett suggests that it is a part of a user illusion, just like the graphic icons on your computer desktop, an interface. Here, interestingly, Strawson lands in the other camp. He suggests that to believe that consciousness is an illusion is the “silliest” idea and argues forcefully for the existence of consciousness. That suggests a distinction between self and consciousness, or a complexity around the two concepts, that also is worth exploring.

If you believe in consciousness as a special quality (almost like a persistent musical note) but do not believe in anything but a fragmented self, and resist the idea of a narrated or narrative life – then you are stuck in an ambient atmosphere as your identity and anchor in experience. There is a there there, but it is going nowhere. While challenging, I find that an interesting thought – that we are stuck in a Stimmung, as Heidegger called it, a mood.

Self, mood, consciousness and narrative – there is no reason to think that any of these concepts can be reduced to constituent parts and so should be seen as secondary to any other human mental capacities – and so we should think hard about how to design and understand them as we continue to develop theories of the human mind. That emotions play a key part in learning (pain is the motivator) we already knew, but these more subtle nuances and complexities of human existence are each as important. Creating artificial selves with artificial moods, capable of episodic and fragmented narratives through a persistent consciousness — that is the challenge if we are really interested in re-creating the human.

And, of course, at the end of the day that suggests that we should not focus on that, but on creating something else — well aware that we may want to design simpler versions of all of these in order to enhance the functionality of the technologies we design. Artificial Eros and Thanatos may ultimately turn out to be efficient software to allow robots to prioritize.

Douglas Adams, a deep thinker in these areas as in so many others, of course knew this as he designed Marvin, the Paranoid Android, and the moody elevators in his work. They are emotional robots with moods that make them more effective, and more dysfunctional, at the same time.

Just like the rest of us.

Posted in Man / Machine, The man / machine series

My dying machine (Man / Machine VIII)

Our view of death is probably key to exploring our view of the relationship between man and machine. Is death a defect, a disease to be cured, or is it a key component of our consciousness and a key feature of nature’s design of intelligence? It is in one sense a hopeless question, since we end up reducing it to things like “do I want to die?” or “do I want my loved ones to die?”, and the answer to both of these questions should be no, even if death may ultimately be a defensible aspect of the design of intelligence. Embracing death as a design limitation does not mean embracing one’s own death. In fact, any society that embraced individual death would quickly end. But it does not follow that you should also resist death in general.

Does this seem counter-intuitive? It really shouldn’t. We all embrace social mobility in society, although we realize that it goes two ways – some fall and others rise. That does not mean that we embrace the idea that we should ourselves move a lot socially in our lifetime — in fact, movement both up and down can be disruptive to a family and so may actually be best avoided. We embrace a lot of social and biological functions without wanting to be at the receiving end of them, because we understand that they come with a systemic logic rather than being individually desirable.

So, the question should not be “do you want to die?”, but rather “do you think death serves a meaningful and important function in our forms of life?”. The latter question is still not easy to answer, but “memento mori” does focus the mind, and provides us with momentum and urgency that would otherwise perhaps not exist.

In literature and film the theme has been explored in interesting ways. In Iain M Banks’ Culture novels people can live for as long as they want, and they do, but they live different lives, and eventually they run out of individual storage space for their memories, so they do not remember all of their lives. Are they then the same? After a couple of hundred years the old paradox of Theseus’ ship really starts to apply to human beings as well — if I exchange all of your memories, are you still you? In what sense?

In the recently released TV series Altered Carbon, death is seen as the great equalizer, and the meths – named after the biblical character Methuselah, who lived a very long life – are seen to degrade into inhuman deities that grow bored, and in that fertile boredom a particular evil grows that seeks sensation and the satisfaction of base desires at any cost. A version of this exists in Douglas Adams’ Hitchhiker trilogy, where Wowbagger the Infinitely Prolonged fights the boredom of infinite life with a unique project – he sets out to insult everyone in the universe, in alphabetical order.

Boredom, insanity – the projected consequences of immortality are usually the same. The conclusion seems to be that we lack the psychological constitution and strength to live forever. Does that mean that there are no beings that could? That we could not change and be curious and interested and morally much more efficient if we lived forever? That is a more interesting question — is it inherently impossible to be immortal and ethical?

The element of time in ethical decision making is generally understudied. In the famous trolley thought experiments the ethical decision maker has oodles of time to make decisions about life and death. In reality, in any situation like the ones described in the thought experiments, these decisions are made in split seconds, and generally we become Kantian when we have no time and act on baseline moral principles. To be utilitarian requires, naturally and obviously, the time to make your utility calculus work out the way you want it to. Time should never be abstracted away from ethics in the way we often tend to do today (in fact, the answer to the question “what is the ethical decision?” could vary as t varies in “what is the ethical decision if you have t time?”).

But could you imagine time scales at which ethics cannot exist? What if you cut time up really thickly? Assume a being that acts only once every hundred years – would it be able to act ethically? What would that mean? The cycle of action does imply different kinds of ethics, at least, does it not? A cycle of action of a million years would be even more interesting and harder to decipher with ethical tools. Perhaps ethics can only exist at a human timescale? If so – do infinite life and immortality count as a human timescale?

There is, from what my admittedly shallow explorations hint at, a lot of work done in ethics on the ethics of future generations and how we take them into account in our decisions. What if there were no future generations, or if it was a choice whether new generations should appear at all? How would that affect the view of what we should do as ethical decision makers?

A lot of questions and no easy answers. What I am digging for here is probably even more extreme: the question of whether immortality and ethics are incompatible – whether death, or dying, is a prerequisite for acting ethically. I intuitively feel that this is probably right, but that is neither here nor there. When I outline this in my own head, the question I keep coming back to is what motivates action – why we act. Scarcity of time – death – seems to be a key motivator in decision making and creativity overall. When you abstract away death, it seems as if there is no longer an organizing, forcing function for decision making as a whole. Our decision making becomes more arbitrary and random.

Maybe the question here is actually one of the unit of meaning. Aristotle hints at the fact that a life can only be called happy or fulfilled once it is over, and judged as good or bad only when the person who lived it has died. That may be where my intuition comes from – that a life that is not finished never acquires ethical completeness? It can always change, and the result is that we have to suspend judgment about the actions of the individual in question?

Ethics require a beginning and an end. Anything that is infinite is also beyond ethical judgment and meaning. An ethical machine would have to be a dying machine.

Posted in The man / machine series