Energy and complexity (Philosophy of Complexity II)

A brief note today, about something to look into more.

Could the energy consumption of a civilization be a measure of its complexity? If so, we could easily say that our civilization is becoming more and more complex – since we are consuming more energy all the time. There is something intriguing about this measure – it relates the complexity of a phenomenon to the amount of heat it produces, and so the entropy it drives.

It seems an obvious metric, but it also seems to suggest that there is nothing structural about complexity – by this metric, the sun is more complex than we are. But then again, we could argue that there is a difference here between natural phenomena like the sun and a constructed artifact.

Can we say, then, that for artifacts it is a good proxy to think about the heat they generate? A car generates more heat than a computer, does it not? Consumes more energy? So again, it seems, the measure is shaky. But the attraction in this kind of metric seems to remain: our civilization is more complex than that of the Egyptians, and we consume much more energy.

A variation on this theme is to look at the energy we can produce and harness — that would connect this measure to the Kardashev scale. Maybe there is something there.

Progress and complexity (Philosophy of Complexity I)

I have heard it said, and have argued myself, that complexity is increasing in our societies, and that evolution leads to increasing complexity. I also know that this is an imprecise statement that needs some examination – or a lot of examination – in order to understand exactly how it can be corroborated or supported.

The first, obvious, problem is how we measure complexity. There are numerous mathematical proposals, such as algorithmic metrics (how long is the shortest program that describes system A? If that program length grows over time, then A is becoming more complex), but they require quite some modeling: how do you reduce society or evolution to a piece of software? Suddenly you run into other interesting problems, such as whether society and evolution are indeed algorithmic at all.
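To make the algorithmic intuition concrete: Kolmogorov complexity itself is uncomputable, but compressed length is a standard computable stand-in for it. A minimal sketch – the "descriptions" below are toy strings, purely illustrative, not a claim about how one would actually encode a society:

```python
import zlib

def complexity_proxy(description: str) -> int:
    """Length of the compressed description: a crude, computable
    stand-in for the (uncomputable) shortest-program length."""
    return len(zlib.compress(description.encode("utf-8")))

# A highly regular "description" compresses well; one with more
# distinct elements compresses less well, i.e. scores as more complex.
simple = "farm trade farm trade " * 50
varied = "farm trade temple court market guild press rail telegraph code " * 20

print(complexity_proxy(simple), complexity_proxy(varied))
```

The hard part, of course, is exactly the modeling step the text points at: nothing in the sketch tells us how to turn a society or an evolutionary lineage into such a string in the first place.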

The second problem is to understand if this increase in complexity is constant and linear or if it is non-linear. It seems as if it could be argued that human society plateaued for thousands of years after having organized around cities, leaving our nomadic state – but is this true? And if it is true, what makes a society suddenly break free from such plateaus? This seems to be a question of punctuated equilibria?

So, let’s invert and ask what we would like to say – what our intuition tells us – and then try to examine if we can find ways of falsifying it. Here are a few things that I think I believe:

(I) Human society becomes more complex as it progresses economically, socially and technologically.

(II) Evolution leads to increasing complexity.

(III) Technology is the way we manage complexity, and technological progress internalizes complexity in new devices and systems – not stopping the overall increase, but redistributing it across different systems.

These guesses are just that, guesses, but they deserve examination and exploration, so that is what we will spend time looking at in this series of blog posts. The nature of any such investigation is that it meanders, and finds itself stalled or locked into certain patterns — we will learn from where this happens.

This seems important.

A good, but skeptical, note on Sandboxes

The idea of regulatory sandboxes is getting more traction as legislators try to grapple with regulating new technology while still allowing it to develop in unexpected ways. These sandboxes present a number of problems (among them: how do you graduate from them?), but are worth thinking about. This is a useful piece of criticism with which to start exploring the idea in more detail.

One thought, though: innovation hubs – suggested as an alternative – are really in a different category and seem incommensurable with the sandbox concept.

What a year it has been

As I return to this notebook, or collection of musings, I find that everything has changed. Not in the dramatic way of everything has changed but in the rather more subtle way of everything has changed.

A shift in the way we see ourselves and the societies we are in.

The day before yesterday I celebrated my 49th birthday, and I reflected on what I would like to do with my life in general going forward. And one of the things I would really like to do is to write more, simply because writing is thinking. I do write a fair bit in my day job, but that writing is often concentrated on the issues and challenges facing the tech industry in different ways – albeit from a futures studies standpoint – and so not something that I can always share.

But what I would like to do is to write more widely, think through things and develop this as a commonplace book, a Zibaldone. A place to collect thoughts and find over time if there are patterns in them, ideas, stories, theories – a good example is H.P. Lovecraft’s collection of story ideas, sketching other uncharted parts of the Mythos.

So – a commitment to write, then. We will see how that goes.

The epistemology of fake news

Doubt comes too cheap. The idea that you are allowed to suspend belief and doubt anything without effort or valid reason is key to understanding the challenge to our democratic discourse. When you doubt something you should really have to show why you doubt it, and not just why you believe something.

Hartmut Rosa and the acceleration of our lives (Rosa I)

Hartmut Rosa has observed, in numerous essays and texts, that it is useful to analyze our age with a mental model built around acceleration. He finds that we accelerate along three different axes — technological, social and subjective — and that this acceleration has profound impact on the way we can live our lives.

It is, for example, hardly viable to have a life plan if you know that the world is changing so fast that you will have to change jobs four or five times over your active career. It also seems hard to innovate in a world where the future is a moving target and you are not sure how to invest your energies. Any intergenerational projects will seem vain and increasingly all of our thinking becomes intragenerational.

This will, among other things, make it harder for us to tackle long term problems like climate change since the future horizon we operate against is closing in on the present all the time.

Rosa’s model is compelling and probably resonates with most of us, but there are a couple of questions that we need to ask when we start to examine it closer.

First, it seems that any claim of acceleration needs to be qualified by a metric of some kind. What is it that is getting faster? And relative to what? If we only look at technology, we find that there are competing claims here: while a lot of voices will argue that things are changing faster than ever before, it is also true that a growing set of voices now claim that innovation has all but died down in the West (Thiel et al). So which is it? And by what metric?

Let’s first eliminate a few metrics that we know are quite useless. No-one should get away with measuring the speed of innovation by looking at the number of patents filed. This was always a noisy signal, but with the increase in defensive and performative patents (where the patent is filed to give the impression of great waves of innovation in official statistics from some countries) the signal is now almost completely useless.

The other set of metrics that should at least be viewed with suspicion are all metrics that have to do with the increase in a particular technology’s capacity. If we argue that we should be seeing speed reductions in, say, international flights, we assume that the pace of technology needs to be measured in relation to individual technologies, not to how technology changes overall. This ignores things like the possibility of being connected to the Internet while flying – technical change that is related to, but not confined to, a specific technology.

Connectivity is interesting because it happens across the board, it is a “horizontal” innovation in the sense that it affects all technology across the technosphere. The improvements in an engine are vertical to that technology (even if the web of technologies related to an engine will be affected in different ways).

This raises the more complex question of whether we should speak of the pace of innovation or if it is more accurate to speak of the pace of Innovation as the sum total of different innovation vectors. The latter is not easy to even approximate, however, and so we end up as lost as if we were asked what the pace of evolution is. This should not surprise us, since technology is closely connected to evolution in different ways and can indeed be described as a kind of evolving system (see W. Brian Arthur’s work).

What all of this means is that the notion of acceleration is not as clear as Rosa’s model seems to assume. Of the three kinds of acceleration he studies it is the third that is most clearly evident: the subjective feeling of acceleration, of things speeding up. Many people clearly seem to share a sense of increasing speed all around them. But could we find other causes for that?

One strong candidate that I feel Rosa should have looked closer at is complexity. Our world is increasingly connected and complexity is increasing. This can be perceived as acceleration, but is very different. Imagine that you are playing a tune. Now, acceleration would be asking you to play it faster. Complexification would be asking you to play a second and third melody at the same time.

So is the change we are experiencing more like being asked to play a tune faster, or like being asked to play a fugue?

This matters when we start looking at the broader social consequences and how they play out.

Models of speech (Fake News Notes XI)

One thing that has been occupying me recently is the question of what speech is for. In some senses this is a heretical question – many would probably argue that speech is an inalienable right, and so it really does not have to be for anything at all. I find that unconvincing, especially in a reality where we need to balance speech against a number of other rights. I also find it helpful to think through different mental models of speech in order to really figure out how they come into conflict with each other.

Let me offer two examples of such models and the functions they assign to speech – they are, admittedly, simplified, but they tell an interesting story that can be used to understand and explore part of the pressure that free expression and speech is under right now.

The first model is one in which the primary purpose of speech is discovery. It is through speech we find and develop different ideas in everything from art to science and politics. The mental model I have in mind here is a model of “the marketplace of ideas”. Here the discovery and competition between ideas is the key function of speech.

The second model is one in which speech is the means through which we deliberate in a democracy. It is how we solve problems, rather than how we discover new ideas. The mental model I have in mind here is Habermas’ public sphere. Here speech is collaborative and seeks solutions from commonly agreed facts.

So we end up with, in a broad strokes, coarse grained kind of way, these two different functions: discovery and deliberation.

Now, as we turn to the Internet and ask how it changes things, we can see that it really increases discovery by an order of magnitude – but that it so far seems to have done little (outside of the IETF) to increase our ability to deliberate. If we now generalise a little bit and argue that Europeans think of speech as deliberative and Americans think of speech as discovery, we see a major fault line open up between those different perspectives.

This is not a new insight. One of the most interesting renditions of this is something we have touched on before – Simone Weil’s notion of two spheres of speech. In the first sphere anything would be allowed, with absolutely no limitations. In the second sphere you would be held accountable for the opinions you really intended to advance as your own. Weil argued that there was a clear, and meaningful, difference between what one says and what one means.

The challenge we have is that while technology has augmented our ability to say things, it has not augmented our ability to mean them. The information landscape is still surprisingly flat, and no particular rugged landscapes seem to be available for those who would welcome a difference between the two modes of speech. But that should not be impossible to overcome – in fact, one surprising option that this line of argument seems to suggest is that we should look to technical innovation to see how we can create much more rugged information landscapes, with clear distinctions between what you say and what you mean.


The other mental model that is interesting to examine more closely is the atomic model of speech, in which speech is considered mostly as a set of individual propositions or statements. The question of how to delineate the rights of speech then becomes a question of adjudicating different statements and determining which ones should be deemed legal and which ones illegal – or, with a more fine-grained resolution, which ones should be legal, which ones should be removed out of moral concerns and which ones can remain.

The atom of speech in this model is the statement or the individual piece of speech. This propositional model of speech has, historically, been the logical way to approach speech, but with the Internet there seems to be an alternative and complementary model of speech that is based on patterns of speech rather than individual pieces. We have seen this emerge as a core concern in a few cases, and then mostly to identify speakers who through a pattern of speech have ended up being undesirable on a platform or in a medium. But patterns of speech should concern us even more than they do today.

Historically we have only been concerned with patterns of speech when we have studied propaganda. Propaganda is a broad-based pattern of speech where all speech is controlled by a single actor, and the resulting pattern is deeply corrosive, even if individual pieces of speech may still be fine and legitimate. In propaganda we care about that which is being suppressed as well as what is being fabricated. And, in addition to that, we care about the dominating narratives that are being told, because they create the background against which all other statements are interpreted. Propaganda, Jacques Ellul teaches us, always comes from a single center.

But the net provides a challenge here. The Internet makes possible a weird kind of poly-centric propaganda that originates in many different places, and this in itself lends the pattern credibility and power. The most obvious example of this is the pattern of doubt that is increasingly eroding our common baseline of facts. This pattern is problematic because it contains no single statement that is violative, but it opens up our common shared baseline of facts to completely costless doubt. That doubt has become both cheap to produce and distribute is a key problem that precedes that of misinformation.

The models we find standing against each other here can be called the propositional model of speech and the pattern model of speech. Both ask hard questions, but in the second model the question is less about which statements should be judged to be legal or moral, and more about what effects we need to look out for in order to understand the sum total of the ways speech affects us.

Maybe one reason we focus on the first model is that it is simpler; it is easier to debate and discuss whether something should be taken down based on qualities inherent in that piece of content, than to debate whether there are patterns of speech that we need to worry about and counteract.

Now, again, coming back to the price of doubt, I think we can say that doubt is cheap because we operate in an entirely flat information landscape where doubt costs the same for all statements. There is no one imposing a cost on you for doubting that we have been to the moon, that vaccines work, or any other thing that used to be fairly well established.

You are not even censured by your peers for this behaviour anymore, because we have, oddly, come to think of doubt as a virtue in the guise of “openness”. Now, what I am saying is not that doubt is dangerous or wrong (cue the accusations about a medieval view of knowledge), but that when the pendulum swings the other way and everything is open to costless doubt, we lose something important that binds us together.

Patterns of speech – perhaps even a weaker version, such as tone of voice – remain interesting and open areas to look at more closely as we try to assess the functions of speech in society.


One last model is worth looking at more closely, and that is the model of speech as a monologic activity. When we speak about speech we rarely speak about listeners. There are several different possibilities here to think carefully about the dialogic nature of speech, as this makes speech into an n-person game, rather than a monologic act of speaking.

As we do that we find that different pieces of speech may impact and benefit different groups differently. If we conceive of speech as an n-person game we can, for example, see that anti-terrorist researchers benefit from pieces of speech that let them study terrorist groups more closely, that vulnerable people who have been radicalised in different ways may suffer from exposure to that same piece of speech, and that politicians may gain in stature and importance from opposing that same piece of speech.

The pieces of speech we study become more like moves on a chess board with several different players. A certain speech act may threaten one player, weaken another and benefit a third. If we include counter speech in our model, we find that we are sketching out the early stages of speech as a game that can be played.

This opens up interesting ideas: can we find an optimisation criterion for speech, and perhaps build a joint game with recommendation algorithms, moderator functions and different pieces of consumer software – and play that game a million times to find strategies for moderating and recommending content that fulfil that optimisation criterion?

Now, then, what would that criterion be? If we wanted to let an AI play the Game of Speech – what would we ask that it optimise? How would we keep score? That is an intriguing question, and it is easy to see that there are different options: we could optimise for variance in the resulting speech, or for agreement, or for solving any specific class of problems, or for learning (as measured by accruing new topics and discussing new things?).

Speech as Game is an intriguing model that would take some fleshing out to be more than an interesting speculative thought experiment – but it could be worth a try.

Gossiping about AI (Man / Machine XII)

There are plenty of studies of gossip as a social phenomenon, and there are computer science models of gossiping that allow for information distribution in systems. There are even gossip learning systems that compete with or constitute alternatives to federated learning models. But here is a question I have not found any serious discussion of in the literature: what would it mean to gossip about an artificial intelligence? I tend to think that this would constitute a really interesting social Turing test – and we could state it thus:

(i) A system is only socially intelligent and relevant if it is the object of gossip or can become the object of gossip.

This would mean that it is only when we confide in each other what we have heard about an AI that it has some kind of social existence. Intelligence, by the way, is probably the wrong word here — but the point remains. To be gossiped about is to be social in a very human way. We do not gossip about dogs or birds, we do not gossip about buildings or machines. We gossip about other subjects.
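As an aside on the computer-science sense of gossip mentioned above: the way a rumour spreads through a network of machines can be sketched in a few lines. This is a toy push-gossip protocol; the names and parameters are illustrative, not taken from any particular system.

```python
import random

def push_gossip(n_nodes: int, rounds: int, seed: int = 0) -> int:
    """Toy push-gossip: each round, every informed node tells one
    peer chosen uniformly at random. Returns the number of nodes
    that have heard the rumour after the given number of rounds."""
    rng = random.Random(seed)
    informed = {0}  # node 0 starts the rumour
    for _ in range(rounds):
        for _node in list(informed):
            informed.add(rng.randrange(n_nodes))
    return len(informed)

# Spread is roughly exponential at first, which is why O(log n)
# rounds reach most of the network.
print(push_gossip(1000, 15))
```

The contrast with the social sense of gossip is the point: the protocol distributes information, but nothing in it confides anything about anyone.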


This connects with a wider discussion about the social nature of intelligence, and how the model of intelligence we have is somewhat simplified. We tend to talk about intelligence as individual, but the reality is that it is a network concept, your intelligence is a function of the networks you exist in and are a part of. Not only, but partly.

I feel strongly, for example, that I am more intelligent in some sense because I have the privilege to work with outstanding individuals, but I also know that they in turn get to shine even more because they work with other outstanding individuals. The group augments the individual’s talents and shapes them.

That would be another factor to take into account if we are designing social intelligence Turing tests: does the subject of the test become more or less intelligent with others? Kasparov has suggested that man and machine always beats machine – but that is largely because of the ability of man to adapt and integrate into a system. Would machine and machine beat machine? Probably not — in fact, you could even imagine the overall result there being negative! This quality – additive intelligence – is interesting.


I have written elsewhere that we get stuck in language when we speak of artificial intelligence. That it would be better to speak of sophisticity or something like that – a new word that describes certain cognitive skills bundled in different ways. I do believe that would allow us a debate that is not so hopelessly anthropocentric. We are collectively sometimes egomaniacs, occupied only with the question of how something relates to us.

Thinking about what bundles of cognitive skills I would include, then, I think the social additive quality is important, and maybe it is a cognitive skill to be able to be gossiped about, in some sense. Not a skill, perhaps, but a quality. There is something there, I think. More to explore, later.

The noble and necessary lie (Fake News Notes X)

Plato’s use of the idea of a noble lie was oppressive. He wanted to tell the people a tale of their origin that would encourage them to bend and bow to the idea of a stratified society, and he suggested that this would make everyone better off — and we clearly see that today for what it was: a defense of a class society that kept a small elite at the top, not through meritocracy or election, but through narrative.

But there is another way to read this notion of a foundational myth, and that is to read it as that “common baseline of facts” that everyone is now calling for. This “common baseline” is often left unexplained and taken for granted, but the reality is that with the amount of information and criticism and skepticism that we have today, such a baseline will need to be based on a “suspension of disbelief”, as William Davies suggests:

Public life has become like a play whose audience is unwilling to suspend disbelief. Any utterance by a public figure can be unpicked in search of its ulterior motive. As cynicism grows, even judges, the supposedly neutral upholders of the law, are publicly accused of personal bias. Once doubt descends on public life, people become increasingly dependent on their own experiences and their own beliefs about how the world really works. One effect of this is that facts no longer seem to matter (the phenomenon misleadingly dubbed “post-truth”). But the crisis of democracy and of truth are one and the same: individuals are increasingly suspicious of the “official” stories they are being told, and expect to witness things for themselves.

[…] But our relationship to information and news is now entirely different: it has become an active and critical one, that is deeply suspicious of the official line. Nowadays, everyone is engaged in spotting and rebutting propaganda of one kind or another, curating our news feeds, attacking the framing of the other side and consciously resisting manipulation. In some ways, we have become too concerned with truth, to the point where we can no longer agree on it. The very institutions that might once have brought controversies to an end are under constant fire for their compromises and biases.

The challenge here is this: if we are to arrive at a common baseline of facts, we have to accept that there will be things treated as facts that we will come to doubt and then to disregard as they turn out to be false. The value we get for that is that we will be able to start thinking together again, we will be able to resurrect the idea of a common sense.

So, maybe the problem underlying misinformation and disinformation is not that we face intentionally false information, but that we have indulged too much in a skepticism fueled by a wealth of information and a poverty of attention? We lack a mechanism for agreeing on what we will treat as true, rather than a mechanism for agreeing on what is – in any more ontological sense – true.

The distinction between a common baseline of facts and a noble lie is less clear in that perspective. A worrying idea, well expressed in Mr Davies’ essay. But the conclusion is ultimately provocative, and perhaps disappointing:

The financial obstacles confronting critical, independent, investigative media are significant. If the Johnson administration takes a more sharply populist turn, the political obstacles could increase, too – Channel 4 is frequently held up as an enemy of Brexit, for example. But let us be clear that an independent, professional media is what we need to defend at the present moment, and abandon the misleading and destructive idea that – thanks to a combination of ubiquitous data capture and personal passions – the truth can be grasped directly, without anyone needing to report it.

But why would the people cede the mechanism of producing truth back to professional media? What is the incentive? Where the common baseline of facts or the noble lie will sit in the future is far from clear, but it seems unlikely that it will return to an institution that has once lost grasp of it so fully. If the truth cannot be grasped directly – if that indeed is socially dangerous and destructive – we need to think carefully about who we allow the power to curate that new noble lie (and no, it should probably not be corporations). If we do not believe that the common baseline is needed anymore, we need new ways to approach collective decision making — an intriguingly difficult task.


Jottings III: the problem with propositions

In a previous post we discussed computational vs “biological thinking” and the question of why we assume that chunking the world in a specific way is automatically right. The outcome was that it is not obvious why the sentence

(i) Linda is a bank teller and a feminist

should always be analysed as containing two propositions that can each be assessed for truth and probability. It is quite possible that, given the description we are given, the sentence is actually indivisible and should be assessed as a single proposition. When asked, then, to assess the probability of this sentence and the sentence

(ii) Linda is a bank teller

we would argue that we do not compare p & q with p, but x with p, where both sentences carry a probability and where the probability of x is higher than the probability of p. Now, this raises the question of why the probability of x – Linda is a bank teller and a feminist – is higher.

One possibility is that our assessment of probability is multidimensional – we assess fit rather than numerical probability. Given the story we are told in the thought experiment, the fit of x is higher than that of p.

A proposition’s fit is a compound of probability and connection with the narrative logic of what preceded it. So far, so good: this is in fact where the bias lies, right? That we consider narrative fit rather than probability, and hence we are being irrational – right? Well, perhaps not. Perhaps the idea that we should try to assess fragmented propositions for probability without looking at narrative fit is irrational.
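The contrast between bare probability and fit can be made concrete. The fit formula below is a made-up placeholder (a simple weighted blend) and the numbers are arbitrary; the sketch only shows how narrative match can invert a probability ranking, not how people actually weigh the two.

```python
def fit(probability: float, narrative_match: float, w: float = 0.5) -> float:
    """Illustrative 'fit': a weighted blend of numerical probability
    and how well the proposition matches the preceding narrative.
    The formula is a placeholder, not a psychological claim."""
    return (1 - w) * probability + w * narrative_match

# By probability alone, the conjunction can never beat its conjunct:
p_teller = 0.3                   # p: Linda is a bank teller
p_conjunction = p_teller * 0.9   # x: ...and a feminist (always <= p)

# But 'fit' can rank x above p when the narrative match dominates:
fit_p = fit(p_teller, narrative_match=0.1)
fit_x = fit(p_conjunction, narrative_match=0.9)
print(fit_x > fit_p)  # True: 0.585 > 0.2
```

The usual reading of the Linda experiment is that the inversion is the bias; the suggestion here is that it may instead be the rational response to a narrative context.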

There is something here about propositions necessarily being abbreviations, answers and asymmetric.

Jottings II: Style of play, style of thought – human knowledge as a collection of local maxima

Pursuant to the last note, it is interesting to ask the following question: if human discovery of a game space like the one in go centers around what could be a local maximum, and computers can help us find other maxima and so play in an “alien” way — i.e. a way that is not anchored in human cognition and ultimately perhaps in our embodied, biological cognition — should we then not expect the same to be true for other bodies of thought?

Let’s say that a “body of thought” is the accumulated games in any specific game space, and that we agree we have discovered that human-anchored “bodies of thought” seem to be quietly governed by our human nature — is the same then true for philosophy? Anyone reading a history of philosophy is struck by the way concepts, ideas, arguments and methods of thinking remind you of different games in a vast game space. We don’t even need to deploy Wittgenstein’s notion of language games to see the fruitful application of that analogy across different domains of knowledge.

Can, then, machine learning help us discover “alien” bodies of thought in philosophy? Or is there a requirement that a game space can be reduced to a set of formalized rules? If so – imagine a machine programmed to play Herman Hesse’s glass bead game, how would that work out?

In sum: have we underestimated the limiting effect that our nature has on thinking across domains? Is there a real risk that what we hail as human knowledge and achievement is a set of local maxima?


Jottings I: What does style of play tell us?

If we examine the space of all possible chess games we should be able to map out all games actually played and look at how they are distributed in the game space (what are the dimensions of a game space, though?). It is possible that these games cluster in different ways, and we could then term these clusters “styles” of play. We at least have a naive understanding of what this would mean.

But what about the distribution of these clusters overall in a game space – are they equally distributed? Are they parts of mega clusters that describe “human play”, clusters that orient around some local optimum? And if so, do we now have tools to examine other mega clusters around other optima?
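How would one even begin to look for such clusters? One naive sketch: encode each game as a feature vector and run k-means over the collection. The two features below (aggression, material imbalance) and the data are entirely hypothetical — the point is only the shape of the method, not any claim about real chess games.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: cluster feature vectors and call the
    resulting clusters 'styles'. Purely illustrative."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        # Recompute centroids; keep the old one if a cluster is empty.
        centroids = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl
                     else centroids[i]
                     for i, cl in enumerate(clusters)]
    return clusters

# Imaginary two-feature encoding of each game: (aggression, material imbalance).
rng = random.Random(1)
games = [(rng.gauss(0.8, 0.05), rng.gauss(0.7, 0.05)) for _ in range(25)] \
      + [(rng.gauss(0.2, 0.05), rng.gauss(0.3, 0.05)) for _ in range(25)]
styles = kmeans(games, k=2)
print([len(s) for s in styles])
```

The hard question the post raises is of course prior to any of this: what the dimensions of a real game space should be, and whether the clusters we find are human mega-clusters around a local optimum.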

Is there a connection to non-ergodicity here? A flawed image: game style as collections of non-ergodic paths (how could paths be non-ergodic?) in a broader space? No. But there is something here – a question about why we traverse probabilities in certain ways, why we cluster, the role of human nature and cognition. The science fiction theme of cognitive clusters so far apart that they cannot connect. Styles that are truly, and necessarily, alien.

How would we answer a question about how games are distributed in a game space? Surely this has been done. Strategies?