Philosophy of Questions, Philosophy of thinking

Computational vs Biological Thinking (Man / Machine XII)

Our study of thinking has so far been characterised by a need to formalize thinking. Ever since Boole's "Laws of Thought", the underlying assumption and metaphor for thinking have been mathematical or physical – even mechanical, and always binary. Logic has been elevated to the position of pure thought, and we have even succumbed to thinking that if we deviate from logic or mathematics in our thinking, then that is a sign that our thinking is flawed and biased. There is great value to this line of study and investigation: it allows us to test our own thinking against a formal model and evaluate it from that perspective. But there is also a risk associated with this project, a risk that may become more troubling as our surrounding world becomes more complex, and it is this: that we neglect the study of biological thinking. One way of framing this problem is to say that we have two different models of thinking, computational and biological. The computational is mathematical and follows the rules of logic – the biological is different, and it forces us to ask questions about how we think that are taken for granted in computational thinking. Let's take a very simple example – the so-called conjunction fallacy. The simplest rendition of this fallacy is a case often called "Linda the bank teller". This is the standard case: Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned…
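
For reference, a one-line statement of the probabilistic rule at stake (the notation here is mine, not the post's): for any two events A and B, P(A ∧ B) ≤ P(A) – a conjunction can never be more probable than either of its conjuncts on its own, however representative the conjunction may feel.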

Artificial Intelligence, Legal philosophy, Man / Machine, The man / machine series

Digital legal persons? Fragments (Man / Machine XI and Identity / Privacy III)

The following are notes ahead of a panel discussion this afternoon, where we will discuss the need for a legal structure for digital persons in the wake of the general discussion of artificial intelligence. The idea of a digital assistant seems to suggest a world in which we will see new legal actors. These actors will buy, order, negotiate and represent us in different ways, and so will have a massive impact on the emerging legal landscape. How do we approach this in the best possible way? One strawman suggestion would be to propose a new legal construct in addition to natural and legal persons, people and companies, and introduce a new legal category for digital persons. The construct could be used to answer questions like: What actions can a digital person perform on behalf of another person, and how is this defined in a structured way? How is the responsibility of the digital person divided across the four Aristotelian causes? Hardware error, software error, coder error and objective error all seem to suggest different responsible actors behind the digital person. Hardware manufacturers would be responsible for hardware malfunctions, software producers for errors in the software, and coders for errors that could not be seen as falling within the scope of the software companies — and finally, the one asking the assistant to perform a task would be responsible for clearly defining that task and its objective. In n-person interactions between digital persons with complex failures, who is then responsible? Is there a…

Artificial Intelligence, Man / Machine, The man / machine series

The free will to make slightly worse choices (Man / Machine XI)

In his chapter on intellectronics, his word for what most closely resembles artificial intelligence, Stanislaw Lem suggests an insidious way in which the machine could take over. It would not be, he says, because it wants to terrorize us, but more likely because it will try to be helpful. Lem develops the idea of the control problem, and the optimization problem, decades before they were rediscovered by Nick Bostrom and others, and he runs through the many different ways in which a benevolent machine may manipulate us simply in order to get better results for us. This, however, is not the worst scenario. At the very end of the chapter, Lem suggests something much more interesting, and – frankly – hilarious. He says that another, more credible, version of the machines taking over would look like this: we develop machines that are simply better at making decisions for us than we would be at making those very same decisions ourselves. A simple example: your personal assistant can help you book travel, and knowing your preferences, and being able to weigh them against those of the rest of the family, the assistant has always booked top-notch vacations for you. Now, you crave your personal freedom, so you book it yourself, and naturally – since you lack the combinatorial intelligence of an AI – the result is worse. You did not enjoy it as much, and the restaurants were not as spot on as they usually are. The book stores you found were…

Artificial Intelligence, Man / Machine, Technology, The man / machine series

Stanislaw Lem, Herbert Simon and artificial intelligence as broad social technology project (Man / Machine X)

Why do we develop artificial intelligence? Is it merely because of an almost Faustian curiosity? Is it because of an innate megalomania that suggests that we could, if we wanted to, become gods? The debate today is rife with examples of risks and dangers, but the argument for the development of this technology is curiously weak. Some argue that it will help us with medicine and improve diagnostics, others dutifully remind us of the productivity gains that could be unleashed by deploying these technologies in the right way, and some even suggest that there is a defensive aspect to the development of AI — if we do not develop it, it will lead to an international imbalance where the nations that have AI will be akin to those nations that have nuclear capabilities: technologically superior and capable of dictating the fates of the countries that lag behind (some of this language is emerging in the ongoing geopoliticization of artificial intelligence between the US, Europe and China). Things were different in the early days of AI, back in the 1960s, when the idea of artificial intelligence was more closely connected with a social and technical project, a project that was a distinct response to a set of challenges that seemed increasingly serious to writers of that age. Two very different examples support this observation: Stanislaw Lem and Herbert Simon. Simon, in attacking the challenge of information overload – or information wealth, as he prefers to call it…

Law and time

Law, technology and time (Law and Time I)

I just got a copy of the latest Scandinavian Studies in Law, no. 65. I contributed a small piece on Law, Technology and Time — examining how the different ways in which time is made available by technology change the demands on law and legislation. It is a first sketch of a very big area, and something I aim to dig deeper into. I am very grateful for the chance to start laying out my thoughts here, and especially so since the occasion was celebrating that the Swedish Law and Informatics Research Institute is now 50 years young! As I continue to think about this research project, I would like to think about things like "Long law" — contracts and legal rules that extend across the Long Now (-10 000 years to +10 000 years) — as well as different new modes of time: concurrent time, sequenced time, etc. There may also be connections here to the average age of legal entities, and to the changing nature of law in cities, corporations, foundations and other similar structures. I really like the idea of exploring time and law thoroughly and from a number of different angles. Stay tuned.

The philosophy of decisions

A small note on cost and benefit

I have picked up Cass Sunstein's latest book on cost/benefit trade-offs, and am enjoying it. But it seems to me that there is a fundamental problem with the framing. The model being put forward is one in which we straightforwardly calculate the costs and benefits of any choice and then make the right, informed and rational choice. Yet we know that this model breaks down in two significant cases – when the costs or the benefits become very large. At that point, the probability is subsumed by the gravity of the cost or benefit and deemed unimportant. These decision spaces – let's call them the "rights" space and the "risk" space – are spaces where we navigate in a mostly rule-based fashion, and where deontological and Kantian methods apply. We will not calculate the benefit of sacrificing human lives, because people have a right to their own life, and the individual benefit of that is vast. We will not calculate the cost of a nuclear breakdown accurately, because if it happens the potential cost is so great. Even if the probability is minuscule, and the expected cost and benefit could be calculated well, we don't. Rationality breaks down at the event horizon of these two decision singularities. Now, you could argue that this is just a human thing, and that we need to get over it. Or you could say that this is a really interesting characteristic of decision space and study it. I find that far…
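
A minimal gloss on the calculation the post says we refuse to complete (the notation is mine, not Sunstein's): expected-value reasoning weighs magnitude by probability, E[cost] = p · C, where p is the probability of the event and C its cost. In the "rights" and "risk" spaces the sheer size of C settles the question on its own, however small p is – which is exactly the point at which the calculation stops being done.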

Identity / Privacy Series, Philosophy of Privacy

Memory, Ricoeur, History, Identity, Privacy and Forgetting (Identity and Privacy II)

In the literature on memory it is almost mandatory to cite the curious case of the man who, after an accident, could remember no more than a few minutes of his life before resetting and forgetting everything again. He had retained his long-term memory from before the accident, but lacked the ability to form any new long-term memories at all. His was a tragic case, and it is impossible to read about it and not be gripped both by a deep sorrow for the man, and by a fear that something like this could happen to someone close to us, or to ourselves. Memory is an essential part of identity. The case also highlights a series of complexities in the concept of privacy that are interesting to consider more closely. First, the obvious question is this: what does privacy mean for someone who has no long-term memory? There are the obvious answers – that he will still care about wearing clothes, that he will want to sleep in solitude, that there are conversations that he will want to have with some and not others – but does the lack of any long-term memory change the concept of privacy? What this question brings out, I think, is that privacy is not a state, but a relationship. Not a new observation as such, but it is often underestimated in the legal analysis of privacy-related problems. Privacy is a negotiation of narrative identity between individuals. That negotiation breaks down…

Identity / Privacy Series, Philosophy of Privacy

The Narrated Self (Identity and Privacy I)

The discussions and debates about privacy are key to trust in the information society. Yet our understanding of the concept of privacy is still in need of further exploration. This short essay is an attempt to highlight one aspect of the concept that seems to be crucial, and to offer a few observations about what we could conclude from studying this aspect. Privacy is not a concept that can be studied in isolation. It needs to be understood as a concept strongly related to identity. Wittgenstein notes that doubt is impossible to understand without having a clear concept of belief, since doubt as a concept is dependent on first believing something. You have to have believed something to be able to doubt something. The same applies to privacy. You have to have an identity in order to have privacy, and in order to have that privacy infringed upon in some way. Theories of identity, then, are key to theories of privacy. So far nothing new or surprising. As we turn to theories of identity, we find that there are plenty to choose from. Here are a few, eclectically collected, qualities of identity that I think are rather basic. 1. Identity is not a noun but a verb: it exists not as a quality in itself but as a relationship with someone else. You find yourself strewn in the eyes of the Others, to paraphrase (badly) Heidegger. Your identity is constructed, changed and developed over time. A corollary of this…

Notes on culture

Sad songs (Notes on Culture I)

A cursory examination of the landscape of sad songs suggests that they fall into a number of categories: break-up songs, songs about missing someone, songs about falling apart — but the best ones probably mix all of these categories and are about the sudden loss of meaning. Think of "Disintegration" by The Cure, and its despair: "[…] But I never said I would stay to the end / I knew I would leave you and fame isn't everything / Screaming like this in the hope of sincerity / Screaming it's over and over and over / I leave you with photographs, pictures of trickery / Stains on the carpet and stains on the memory / Songs about happiness murmured in dreams / When we both of us knew how the end always is / How the end always is / How the end always is / How the end always is / How the end always is / How the end always is / How the end always is". A good sad song allows for the complete collapse of the concepts and truths around us, and captures that feeling of semantic uncertainty, our inability to assign meaning to what is happening, our lack of pattern. There is something there – the lack of patterns, the inability to make sense of the world, the feeling that meaning is seeping away. I think one of the best examples of this feeling in general – a kind of Weltschmerz – is Nine Inch Nails' "Right Where It Belongs". Here the world is slipping away, the interpretations like claws on a smooth…

Man / Machine, The man / machine series

Artificial selves and artificial moods (Man / Machine IX)

Philosopher Galen Strawson challenges the idea that we have a cohesive, narrative self that lives in a structurally robust setting, and suggests that for many the self will be episodic at best, and that there is no real experience of self at all. The discussion of the self – from a stream of moments to a story to a deep identity – is relevant to any discussion of artificial general intelligence for a couple of reasons. Perhaps the most important one is that if we want to create something that is intelligent, or perhaps even conscious, we need to understand what in our human experience constitutes a flaw or a design inefficiency, and what actually is a necessary feature. It is easy to suspect that a strong, narrative and cohesive self would be an advantage – and that we should aim for that if we recreate man in machine. That, however, underestimates the value of change. If our self is fragmented, scattered and episodic, it can navigate a highly complex reality much better. A narrative self would have to spend a lot of energy integrating experiences and events into a schema in order to understand itself. An episodic and fragmented self just needs to build islands of self-understanding, and these don't even need to be coherent with each other. A narrative self would also be very brittle, unable to cope with changes that challenge the key elements and conflicts in the narrative governing its self-understanding. Our selves seem…

The man / machine series

My dying machine (Man / Machine VIII)

Our view of death is probably key to exploring our view of the relationship between man and machine. Is death a defect, a disease to be cured, or is it a key component of our consciousness and a key feature in nature's design of intelligence? It is in one sense a hopeless question, since we end up reducing it to things like "do I want to die?" or "do I want my loved ones to die?", and the answer to both of these questions should be no, even if death may ultimately be a defensible aspect of the design of intelligence. Embracing death as a design limitation does not mean embracing one's own death. In fact, any society that embraced individual death would quickly end. But it does not follow that you should also resist death in general. Does this seem counter-intuitive? It really shouldn't. We all embrace social mobility in society, although we realize that it goes two ways – some fall and others rise. That does not mean that we embrace the idea that we should ourselves move a lot socially in our lifetime — in fact, movement both up and down can be disruptive to a family and so may actually be best avoided. We embrace a lot of social and biological functions without wanting to be at the receiving end of them, because we understand that they come with a systemic logic rather than being individually desirable. So, the question should not be "do you…

Man / Machine, The man / machine series

Consciousness as – mistake? (Man / Machine VII)

In the remarkable work The Conspiracy Against the Human Race, horror writer Thomas Ligotti argues that consciousness is a curse that traps mankind in eternal horror. This world, and our consciousness of it, is an unequivocal evil, and the only possible response to this state of affairs is to snuff it out. Ligotti's writings underpin a lot of the pessimism of the first season of True Detective, and the idea that consciousness is a horrible mistake comes back a number of times in the dialogue as the season unfolds. At one point one of the protagonists suggests that the only possible response is to refuse to reproduce and consciously decide to end humanity. It is intriguing to consider that this is a choice we have as humanity, every generation. If we collectively refuse to have kids, humanity ends. Since that is a possible individual, and collective, choice, we could argue that it should be open to debate. Would it be better if we disappeared, or is the universe better with us around? Answering such a question seems to require that we assign a value to the existence of human beings and humanity as a whole. Or does it? Here we could also argue that the values we discuss only apply to humanity as such, and that in a world where we do not exist these values, or the very idea of values, become meaningless — they only exist in a certain form of life. If what it means for…
