Attention, Commentary, Fake news, The Fake News Notes, Writing

Notes on attention, fake news and noise #3: The Noise Society 10 years later

This February it is 10 years since I defended my doctoral thesis on what I then called the Noise Society. The main argument was that the idea of an orderly, domesticated and controllable information society – modeled on the post-industrial visions of Bell and others – probably was wrongheaded, and that we would instead see a much wilder society characterized by an abundance of information and a lack of control; in fact, we would see information grow to a point where its value collapsed as the information itself collapsed into noise. Noise, I felt then, was a good description not only of individual disturbances in the signal, but also of the overall cost of signal discovery. A noise society would face very different challenges than an information society. Copyright in a noise society would not be an instrument for encouraging the production of information so much as a tool for controlling and filtering information in different ways. Privacy would not be about controlling data about us as much as about having the ability to consistently project a trusted identity. Free expression would not be about the right to express yourself, but about the right not to be drowned out by others. The design of filters would become key in many different ways. Looking back now, I feel that I was right in some ways and wrong in many, but that the overall conclusion – that the increase in information and the consequences of this information wealth are at…

Continue Reading

Fake news, The Fake News Notes, Writing

Notes on attention, fake news and noise #2: On the non-linear value of speech and freedom of dialogue or attention

It has become more common to denounce the idea that more speech means better democracy. Commentators, technologists and others have come out to say that they were mistaken – that their belief that enabling more people to speak would improve democracy was wrong, or at the very least simplistic. It is worth analyzing what this really means, since it is a reversal of one of the fundamental hopes that the information society vision promised. The hope was this: that technology would democratize speech and that a multitude of voices would disrupt and displace existing, incumbent hierarchies of power. If the printing press meant that access to knowledge exploded in western society, the Internet meant that the production of knowledge, views and opinions was now almost free and frictionless: anyone could become a publisher, a writer, a speaker and an opinion maker. To a large extent this is what has happened. Anyone who wants to express themselves today can fire up their computer, comment on a social network, write a blogpost or tweet and share their words with whoever is willing to listen – and therein lies the crux. We have, historically, always focused on speech because the scarcity we fought was one of voice: it was hard to speak, to publish, to share your opinion. But the reality is that free speech or free expression forms just one point in a relationship – for free speech to be worth anything someone has to listen. Free speech alone is the freedom of…

Continue Reading

Fake news, Philosophy, Technology, The Fake News Notes

Notes on attention, fake news and noise #1: scratching the surfaces

What is opinion made from? This seems a helpful question to start off a discussion about disinformation, fake news and similar challenges that we face as a society. I think the answer is surprisingly simple: opinion is ultimately made from attention. In order to form an opinion we need to pay attention to issues and to the questions we are facing as a society. Opinion should not be equated with emotion, even if it certainly draws on emotion (to which we also pay attention); it also needs a reasoned view in order to become opinion. Our opinions change, again through the allocation of attention, when we decide to review the reasons underlying them and the emotions motivating us to hold them. You could argue that this is a grossly naive and optimistic view of opinion, and that what forms opinion is fear, greed, ignorance and malice – that opinions are just complex emotions, nothing more, and that they have become even more so in our modern society. That view, however, leads nowhere. The conclusion for someone who believes it is to throw themselves, exasperated, into intellectual and physical exile. I prefer a view that is plausible and that also allows for the strengthening of democracy. A corollary of the above is that democracy is also made from attention – from the allocated time we set aside to form our opinions and contribute to democracy. I am, of course, referring to an idealized and ideal version of democracy in which citizenship is an accomplishment…

Continue Reading

Algorithmic transparency, Philosophy, Technology

What are we talking about when we talk about algorithmic transparency?

The term “algorithmic transparency”, with variants and variations, has become more and more common in the many conversations I have with decision makers and policy wonks. It remains somewhat unclear what it actually means, however. As a student of philosophy I find that there is often a lot of value in examining concepts closely in order to understand them, and in the following I want to open up a coarse-grained view of this concept in order to understand it further. At a first glance it is not hard to understand what is meant by algorithmic transparency. Imagine that you have a simple piece of code that manipulates numbers, and that when you enter a series it produces an output that is another series. Say you enter 1, 2, 3, 4 and that the output generated is 1, 4, 9, 16. You have no access to the code, but you can infer that the code probably takes the input and squares it. You can test this with a hypothesis – you decide to see if entering 5 gives you 25 in response. If it does, you are fairly certain that the code is something like “take input and print input times input” for the length of the series. Now, you don’t _know_ that this is the case. You merely believe so, and for every new number you enter that seems to confirm the hypothesis your belief may be slightly corroborated (depending on what species of theory of science you subscribe to).…
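To make the black-box probing described above concrete, here is a minimal sketch in Python – my own hypothetical illustration, not code from the post – in which a hidden function is observed only through its outputs, and a new input is used to test the squaring hypothesis. A match corroborates the hypothesis; it never proves it.

```python
# Hypothetical sketch of probing a black box whose code we cannot read.
# We observe outputs for chosen inputs and test the hypothesis that the
# hidden code squares its input.

def hidden_code(series):
    # Stands in for the code we have no access to; only its outputs are visible.
    return [n * n for n in series]

def hypothesis(series):
    # Our guess: "take input and print input times input".
    return [n * n for n in series]

observed = hidden_code([1, 2, 3, 4])       # -> [1, 4, 9, 16]
predicted = hypothesis([5])                # we predict [25]
matches = hidden_code([5]) == predicted    # True corroborates, but does not prove

print(observed, predicted, matches)
```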

Continue Reading

Artificial Intelligence, Data, Writing

Data is not like oil – it is much more interesting than that

So, this may seem to be a nitpicking little note, but it is not intended to belittle anyone or even to deny the importance of having a robust and rigorous discussion about data, artificial intelligence and the future. Quite the contrary – this may be one of the most important discussions we need to engage in over the coming ten years or so. But when we do so our metaphors matter. The images that we convey matter. The philosopher Ludwig Wittgenstein notes in his works that we are often held hostage by our images, that they govern the way we think. There is nothing strange or surprising about this: we are biological creatures brought up in three-dimensional space, and our cognition did not come from the inside but from the world around us. Our figures of thought are inspired by the world, and they carry a lot of unspoken assumptions and conclusions. There is a simple and classical example here. Imagine that you are discussing the meaning of life, and that you picture the meaning of something as hidden, like a portrait behind a curtain – so that discovering the meaning naturally means revealing what is behind that curtain and understanding it. Now, the person you are discussing it with instead pictures it as a bucket you need to fill with wonderful things, so that meaning means having a full bucket. You can learn a lot from each other’s images here. But they represent two very different…

Continue Reading

Artificial Intelligence, Technology, Writing

A note on complementarity and substitution

One of the things I hear the most in the many conversations I have on tech and society today is that computers will take jobs, or that man will be replaced by machine. It is a reasonable and interesting question, but, I think, ultimately the wrong one. I have tried to collect a few thoughts about that in a small essay here for reference. The question interests me for several reasons – not least because I think that it is partly a design question rather than something driven by technological determinism. This in itself is a belief that could be challenged on a number of fronts, but I think there is a robust defense for it. The idea that technology has to develop in the direction of substitution is simply not true if we look at all existing systems. Granted: when we can automate not just a task but cognition generally this will be challenged, but there remain strong reasons to believe that we will not automate fully. So, more of this later. (Image: Robin Zebrowski)

Artificial Intelligence, Philosophy, Philosophy of mind, Wittgenstein

Reading Notes I: Tegmark and substrate independence

Tegmark (2017:67) writes: “This substrate independence of computation implies that AI is possible: intelligence doesn’t require flesh, blood or carbon atoms.” How should we read this? The background is that he argues that computation is independent of what we use for hardware and software, and that all that is required is that the matter we compute in fulfills some very simple conditions, like sufficient stability (what would intelligence look like if it were based on gases rather than more solid matter, one could ask – remembering the gas giants in Banks’s novels, by the way – might sufficiently large bodies of gas be stable enough to support computation?). But what is more interesting here is the quick transition from computation to intelligence. Tegmark does not violate any of his own assumptions here – he is exceptionally clear about what he thinks intelligence is and builds on a Simonesque notion of attaining goals – but there still seem to be a lot of questions that could be asked about the move from computation to intelligence. The questions this raises for me are the following: (i) Is computation the same as intelligence (i.e. is intelligence a kind of computation – and if it is not, what is it then)? (ii) It is true that computation is substrate agnostic, but it is not substrate independent. Without any substrate there can be no computing at all, so what does this substrate dependence mean for intelligence? Is it not possible that the nature of the matter used for…

Continue Reading

Philosophy, Philosophy of mind, Wittgenstein

Aspect seeing and consciousness I: What Vampires Cannot Do

In the novel Blindsight by Peter Watts, mankind has resurrected vampires (no, not a good idea) – in the book they are real predators that went extinct. One difference between vampires and humans is that vampires can see both aspects of a Necker cube at the same time – they are able to hyper-thread and hold several thoughts at once. In other words, vampires are capable of seeing two aspects of something – or more – simultaneously. Wittgenstein studies this phenomenon in the second part of the Philosophical Investigations, and one interpretation of his remarks is that he sees aspect seeing as a way to show how language can confound us. When we see only one aspect of something we forget that it could equally be something else, and this is how we are confused. The duck-rabbit is not either duck or rabbit; it is ultimately both, since it can be seen as either animal. But maybe we can learn even more from his discussion of aspect seeing by examining the device Watts uses? The duck-rabbit, the Necker cube and the old woman/young woman are all interesting examples of how we see one or the other aspect of something. But what would it mean to see both? Let’s assume for the moment that there is a being – a vampire, as Watts has it – that can see both aspects at the same time. What would that be like? Trivially we can imagine _two_ people who look…

Continue Reading

Artificial Intelligence, Philosophy, Writing

“Is there a xeno-biology of artificial intelligence?” – draft essay

One of the things that fascinate me is the connections we can make between technology and biology in exploring how technology will develop. It is a field that I enjoy exploring, and where I am slowly focusing some of my research work and writing. Here is a small piece on the possibility of a xeno-biology of artificial intelligence. All comments welcome to nicklas.berildlundblad at gmail.com.

Philosophy, Technology

Autonomy, technology and prediction I: some conceptual remarks

“How would you feel if a computer could predict what you would buy, how you would vote and what kinds of music, literature and food you would prefer with an accuracy that was greater than that of your partner?” Versions of this question have been thrown at me in different fora over the last couple of months. It contains much to be unpacked, and it turns out to be a really interesting entry into a philosophical analysis of autonomy. Here are a few initial thoughts. We don’t want to be predictable. There is something negative about that quality, and that is curious to me. While we sometimes praise predictability, we then call it reliability, not predictability. Reliability is a relational concept – we feel we can rely on someone – but predictability has nothing to do with relationships, I think. If you are predictable, you are in some sense a thing, a machine, a simple system. Predictable people lose some of their humanity. Take an example from popular culture – the hosts in Westworld. They are caught in loops that make them easy to predict, and in a key scene Dr Ford expresses his dislike for humanity by saying that the same applies to humans: we are also caught in our loops. The flip side of that, of course, is that no one would want to be completely unpredictable. Someone who at any point may throw themselves out the window, start screaming, steal a car or disappear into the wilderness to…

Continue Reading

Reading Notes

Simon I: From computers to cognicity

In the essay “The steam engine and the computer” Simon makes a number of important and interesting points about technological revolutions. It is an interesting analysis and worth reading – it is quite short – but I will summarize a few points and throw out a concept idea. Simon notes that revolutions – their name notwithstanding – take a lot of time. The revolution based on the steam engine arguably took more than 150 years to really change society. Our own information revolution is not even halfway there. We have sort of assumed that the information revolution is over, and innovation and productivity pessimism have become rampant in our public debate. Simon’s view would probably be that it is far too early to say – and he might add that the more impactful change comes in the second half of a revolution (an old truth that John McCarthy reminded me of when I interviewed him back in 2006, when AI celebrated 50 years. We still hovered at the edge of the AI winter then, and I remember asking him if he was not disappointed. He looked at me as if I were a complete idiot and said: “Look, 50 years after Mendel discovered the idea of inheritance, genetics had gotten nowhere. Now we have sequenced the genome. Change comes in the second half of a hundred years for human discoveries.”) I must say that, looking at the field now, the curmudgeonly comment seems especially accurate. It also makes me think that maybe…

Continue Reading

Philosophy

Agency and autonomy IV: Agency and religion

So, let’s go back to Wittgenstein’s quote. In the second part of the Investigations, now called Philosophy of Psychology – A Fragment, chapter iv, section 22, he writes: “My attitude towards him is an attitude towards a soul. I am not of the opinion that he has a soul.” In section 23 he continues: “Religion teaches us that the soul can exist when the body has disintegrated. Now do I understand what it teaches? – Of course I understand it – I can imagine various things in connection with it. After all, pictures of these things have been painted. And why should such a picture be only an imperfect rendering of the idea expressed? Why should it not do the same service as the spoken doctrine? And it is the service that counts.” Here, I believe, Wittgenstein is trying to point out that there is after all something fishy about this notion. The “of course” shows how we slip, how we let ourselves be led astray, and he sort of confirms that in section 25, where he simply states: “The human body is the best picture of the human soul” – something that can be read to mean that it is towards bodies that we have an attitude towards a soul. Now, the importance of these sections to our examination of agency is two-fold. First, they show how agency is determined by an attitude towards a soul – that we in many ways ascribe agency not through analytical approaches, but in the adoption of an attitude (which…

Continue Reading