Herbert Simon, complexity and the end of blind evolution

Draft only, not to be quoted without author’s permission. 

Introduction

Herbert Simon is sure to awaken deep intellectual envy in anyone who starts to study his amazing career. Present at the birth of behavioral economics, artificial intelligence and design science – and with a reasonable claim to having been early in complexity science as well – he sits at the roots of much of the scientific progress we have seen in recent decades.

And not only is he a founding figure in these sciences, he also brings a perspective that is sorely needed today: a relentless focus on the human center of artificial systems. Simon’s perspective is consistently that of human beings dealing with systems and tools of different kinds. Take the example of bounded rationality: here he is simply saying that if you really look at what people do, you will see that they do not behave as rational maximizers, and you may want to ask why that is. One reason is that they couldn’t. They do not have infinite time, and the utility of maximizing sinks rapidly after an initial period, so obviously they look to satisfice – not maximize – utility.

I simplify, but it is a good example of a model of thinking that I think we need more of. We could call it, with a term that already exists with a variety of interpretations, human-centric, and it would be composed of at least three main pillars:

  • It respects our biological nature – realizes that we are not idealized theoretical constructs, but real beings of flesh and blood, with instincts and limitations.
  • It is focused on overcoming those limitations through the design of systems of different kinds. It asks not what a system can do, but what we can do with a system.
  • It brings fundamental concepts back into play that have often been reduced out – like attention, thinking, decision making and other concepts that are, at a basic level, descriptions of human action.

Simon has a lot to teach us, and he asks questions that are becoming more acute for us as an information society – and his answers (or sketches of answers) seem more and more relevant. In this essay we will look at one fundamental talk that he sketched out and gave in 1969 – Designing Organizations for an Information-Rich World – and what we can learn from that paper about the evolution and future of our information society.

The challenge of information overload

The problem of information overload is not new. In fact, it has probably been around for as long as we have had information. There is a certain sense of information fatigue that probably has nothing to do with the absolute amount of information in a society, but rather with how much information an individual has recently had to take in. We have all suffered from information overload at some point.

Simon was asked to discuss this issue in a talk given at the Brookings Institution, which would later become a well-known paper – often quoted but, as you will see, I think poorly understood. In that paper he outlines how we can meet the challenge of information overload by strengthening human attention.

The central quote in the paper is this:

“a wealth of information creates a poverty of attention and a necessity to allocate that attention efficiently…”

It is worthwhile to step back a little and look at the gap he is sketching out between a “wealth of information” and “a poverty of attention”. In between these two poles much of today’s information discovery industry resides. As Tim Wu has pointed out, attention is a commodity today, something that more and more people want to access and allocate in order to make money from it. We have heard of the attention economy, and the idea that “eyeballs” are money is not new – it was widely bandied about in the late 1990s and early 2000s, during the first tech bubble.

But the true value of this quote, I would argue, is that it really articulates a market: the market for attention allocation – and augmentation. Now, Simon does not stop here. He does not sit back, declare that this is our problem, and then admire it – he goes on to say that what we need are filters that can help us solve this particular problem, and that the only way we will really make progress on those filters is through the use of artificial intelligence. See the following series of quotes:

“an information processing subsystem will reduce net demand on the rest of the organization only if it absorbs more information, previously received by others, than it produces”

“if it listens and thinks more than it speaks”

“it can transform (“filter”) information into an output that demands fewer hours of attention than the input information”

“Our capacity to analyse data will progress at an adequate pace only if we are willing to invest in […] artificial intelligence.”

It is a remarkable set of observations. From information overload to artificial intelligence – more or less giving us the arc of the information discovery market from AltaVista to Facebook, Google, Amazon, Microsoft and Apple. Everyone with an ounce of self-respect is today looking at artificial intelligence, machine learning and similar solutions to deal with the quickly developing information space.

Information wealth – attention poverty – allocation problems – AI. The evolution that he sketches is then completed when he makes his final point about why this is important, and why he feels we should have a new Manhattan project devoted to this. He writes:

“The exploration of the Moon is a great adventure, after the Moon there are other objects still further out in space. But Man’s inner space, his mind, has been less well known than the space of the planets. It is time we establish a National policy to explore that space vigorously, and that we establish goals, time-tables and budgets.” (my emphasis).

Learning – individual and social – is the objective of this whole field of research. It is a very simple argument, but one that captures our information society in an intriguing way:

As information increases we need to allocate attention through the use of artificial intelligence to support individual and social learning. Artificial intelligence is a response to information overload – not a parallel phenomenon. A nice narrative to explore, and think about!

Generalizing Simon

As we take Simon’s narrative and start thinking about what it means for our information society, we need to look at the statement and argument and see if we can generalize it for greater clarity. Simon was focused on the issue of information wealth and attention poverty, and rightly so as that was the subject of his talk. What I would submit to you is that this tension can be generalized into one between complexity and cognition. Our generalized statement would then be:

(Simon Generalized): With a wealth of complexity comes a poverty of cognition and a need to learn efficiently.

This proposition is just a slight tweak of Simon’s original statement, but it is a useful tweak. With it we can start looking at the relationship between complexity, cognition and artificial intelligence – or learning. And this is also what Simon had in mind:

“Will you think me whimsical or impractical if I propose, as one of those goals […] an order of magnitude increase in the speed with which a human being can learn a difficult school subject?”

Think about this for a second – increases of an order of magnitude in learning difficult subjects. What would that mean? To learn algebra not in 6 months, but in 6 weeks – imagine the kind of social change this would enable! It is important to realize that there is a loop here: the pace of learning we are capable of forms the upper boundary of the pace of technological change that we can absorb. This means that if we want faster technical change (and yes, I think that is what we need), we also need to be able to learn faster. The “Learning Boundary” of technical development is perhaps the most important social restraint on technical evolution.
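
Stated as a rough inequality (my own formalization of this point, not something Simon writes down), the Learning Boundary simply says that the rate of technological change a society actually absorbs cannot exceed its rate of learning:

    \frac{d(\text{change absorbed})}{dt} \;\le\; \frac{d(\text{learning})}{dt}

Speeding up the right-hand side is then a precondition for speeding up the left.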

What I would like to suggest is the following emphasis: artificial intelligence is not just a response to the explosion of information around us, but also a response to the increasing complexity of our world, and it ultimately needs to augment learning. In itself almost obvious, right? But I think it has interesting consequences.

How do we measure world-complexity?

Is the world more complex today than it was ten years ago? We intuitively feel that it must be (at least I do), but when we are asked to figure out exactly what metric we are using to justify that belief it becomes much harder.

The idea of complexity is, not surprisingly, complex. There are multiple different definitions, but no single one that is more canonical than the others. If we wanted to give a general view of the concept, we would say that it is about many different pieces interacting in many different ways; as the interactions and the actors or nodes increase in number and intricacy, complexity also increases.

There are mathematical definitions of the concept too, like the idea of algorithmic complexity (Kolmogorov complexity), where – simplified – the length of the shortest computer program required to reproduce a phenomenon is used as a measure of the complexity of that phenomenon. Writing out an endless list of zeroes requires only a program containing a trivial printing loop, but a program that generates the prime numbers needs somewhat more code, and so we would, trivially, say that the prime number series is more complex than the endless row of zeroes.
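
To make the intuition concrete, here is a minimal sketch in Python (my own illustration, not anything from Simon): two small generators, one that reproduces the endless zeroes and one that reproduces the primes. The only point is that the second needs noticeably more code than the first, which is the sense in which the prime series is algorithmically more complex.

    # Algorithmic complexity, informally: the shorter the program that can
    # reproduce a stream, the less complex the stream is taken to be.

    def zeroes():
        # The endless row of zeroes: a one-line loop reproduces it.
        while True:
            yield 0

    def primes():
        # The prime numbers: reproducing the stream takes more code
        # (a counter and a divisibility test), hence higher complexity.
        n = 2
        while True:
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                yield n
            n += 1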

Using such measures to gauge complexity in society is messy, however, and if we are speaking about social complexity, the reality is that there is no good measure. So the argument in what follows will be based on the hypothesis that complexity in our world – social, political, technical and economic complexity – is increasing.

Cognition absorbs complexity

We need cognition to deal with complexity, and by deal with I mean that we need cognition to operate in, use and adapt to increasingly complex environments – to learn. If we lack cognitive resources we will quickly find ourselves unable to learn about our environment, and the long-term consequences of being unable to learn are increasingly dangerous.

One way to think of learning is to equate it with Darwinian adaptation. When we learn, we become able to survive in an environment. Some environments require more learning to survive in than others, but learning determines fitness. This is a slightly broader definition of learning, but, I think, a useful one.

Humans develop solutions and tools to deal with environmental complexity, and these are developed through cognition. We learn about the mammoth, and realize that rocks, spears and axes may help us get the food we need in a certain kind of ecosystem with a certain kind of complexity.

As our environment becomes more and more complex, we need to use our cognition not just to create tools but to create more cognition. As we start doing that, we begin building out our own cognition in tools around us. An abacus, a pocket calculator and a supercomputer are all examples of distributed cognition that help us solve problems and produce new solutions and new tools of different kinds.

Cognition helps us absorb complexity, and when the environment reaches a certain level of complexity we need distributed cognition: as we reach the absolute biological limits of our own cognition, we need to build out new cognition.


Designing systems that withhold cognition

Simon makes another observation in his discussion about information wealth and overload:

“It is conventional to begin the design of an IPS by considering the information it will supply. In an information-rich world, that is doing things backwards. The crucial question is how much information it will allow us to withhold from the attention of other parts of the system.”

This is crucial. The systems that were designed to provide information should now be rethought as systems that withhold information, filtering much of it out. In our generalized version, this means that we should think of our cognitive support systems not as providing cognition for us but, at a certain point, as ensuring that we do not have to think about certain things.
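
As a toy illustration of this design criterion (my own sketch, with made-up relevance scores and a hypothetical threshold, not anything Simon specifies), a withholding filter earns its keep only if its output demands fewer attention-hours than its input would have:

    # Toy sketch of a withholding filter: the subsystem reduces net demand on
    # the organization only if its output costs fewer attention-hours than
    # its input. Relevance scores and the threshold are hypothetical.

    def withhold(items, threshold=0.8):
        # Pass through only the items judged relevant enough; withhold the rest.
        return [item for item in items if item["relevance"] >= threshold]

    def attention_hours(items, hours_per_item=0.1):
        # Crude proxy: every item costs the same fixed slice of attention.
        return len(items) * hours_per_item

    incoming = [{"id": i, "relevance": i / 100} for i in range(100)]
    outgoing = withhold(incoming)

    # Simon's criterion: the filter must absorb more attention than it produces.
    assert attention_hours(outgoing) < attention_hours(incoming)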

This is when we need to start paying attention to the distribution of cognition in society. Let’s call our distributed cognition “machine cognition” and note that the sum total of cognition in society is human cognition plus machine cognition. But what about the human share? Let’s call this the Simon quotient: the ratio of human to machine cognition. What happens when that becomes 0.1? 0.0000001? (What is the right measure or unit of cognition, by the way?).
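
To fix ideas, a back-of-the-envelope version of the quotient might look like the sketch below (the unit of cognition is deliberately left as a placeholder, since the right measure is exactly the open question):

    # Hypothetical "Simon quotient": the ratio of human to machine cognition,
    # in whatever unit of cognition we eventually settle on.

    def simon_quotient(human_cognition, machine_cognition):
        return human_cognition / machine_cognition

    print(simon_quotient(1.0, 10.0))          # 0.1: machines do ten times our thinking
    print(simon_quotient(1.0, 10_000_000.0))  # 1e-07: machine cognition dominates utterly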

What happens when the majority of all complexity absorbed is absorbed by machine cognition?

Einstein and fundamental equations of cognition

Is there a good way to express the relationship between cognition and complexity? It is tempting to see them as matter and energy, to see complexity as cognition at rest, or some such metaphor. Let’s play around with that notion a bit and see where it leads us.

If we assume that complexity and cognition are two sides of the same phenomenon, two different phases (a bit like ice and water, or perhaps more radically as energy and matter) of the same thing, then we could imagine that there is a series of analogue ideas we could test.

Is the complexity in a system constant, just as energy is constant? What is the relationship between complexity in thermodynamic terms and cognition?

At this point it is fair to protest that we are not thinking rigorously, and I will admit that this is true. We are not looking for the maths here; we are teasing out connections to build a model or an analogy that we can use for thinking about how our society will change when the majority of complexity/cognition is dealt with outside of ourselves. Or, put differently, how we deal with environments where the complexity levels are orders of magnitude higher than what we can absorb through biological cognition alone.

Our question can then be simplified: what happens when almost all thinking in the universe is done by machines?

Machines and mechanical metaphors

One of the first things we have to discuss is what it means to say that a machine thinks. Computers are machines, but they also contain software, and so the idea that we should use mechanistic metaphors to describe them is not necessarily one that we should put too much faith in. This is the view of computer scientist and complexity researcher Sam Arbesman, who in his book Overcomplicated suggests that the complexity of our computer systems has reached a level where we need to ditch the mechanical metaphors and instead adopt biological ones. As our distributed computing systems absorb more and more complexity, they slowly start to resemble organisms rather than machines.

Black boxes and black bugs

In our current debate there is a lot of discussion about opacity. The idea is that companies and others that develop machine learning systems cannot be allowed to develop them as black boxes, but must make sure that there are ways to open, audit and review the contents of the box – for social, legal and political reasons. The power inherent in controlling the black boxes is simply too large.

There is some truth to this, if we believe that “black box” is the right metaphor here. But what if it is more likely that these systems should be described, as we have said, as organisms – not black boxes, but black bugs?

I use the word bug intentionally, although I know that in computer software it is synonymous with something not working the way it was intended to. The reason is not that I expect these systems to malfunction, but rather that I want to draw our attention to a fundamental fact – that we have always assumed we will be able to review, understand and audit software. So when our systems become impossible to audit, that is in a sense also the very last bug, because all bugs that come after it will be impossible to detect.

Today, there is a lively debate and a lot of research going into the idea of auditability as a design requirement, and there are many who would argue that we will never reach this point. I grant that this is a possibility, but am interested in following the line of thought that assumes that auditability is a competitive disadvantage, and that perhaps the technology completion hypothesis (“all that can be built will be built”) holds true for systems of increasing complexity.

The end of the fork

Fast forward a few thousand years, and imagine a world in which we have developed cognitive ecologies, with organisms that absorb and eat complexity, and organisms like ourselves, who live to tend to and grow these complexity eaters. What is it that has happened here?

One thing that has happened is that we have closed the fork between nature and technology. One of the distinctions we make in our world view – and a rather shaky one – is that between the natural and the artificial, between technology and nature. It seems that as technology becomes more complex and cognitive, we will no longer need that distinction. We will go back to a unitary concept of nature, and to an ecological classification of different kinds of organisms on the basis of what they eat and what they do.

We will say that “there was a short time in history when we believed that technology and nature were different things, but after a few thousand years that chasm was closed and nature healed again”.

Maybe that is what the singularity is – not a singularity of exponentially increasing speed, but a singularity of being, an end to the nature-technology divide.

Designing organisms

One of the more challenging problems we will face along the road here is the question of what kind of organisms we design. Do we, for example, implement drives? Sexual attraction? Aggression? Do we replicate the way organisms have been designed by evolution or do we try to design these things out?

One could argue that designing Eros and Thanatos into these machines would spell the end of us, and that this would be an existential mistake – since it would mean that we ended up creating powerful beings with all the flaws of evolved creatures, all the wrath and love of humankind. Why would we ever do that?

I can imagine at least one reason: that we find the best way to quickly generate new answers to shifts and increases in complexity is to use evolution and sex to match the changing environment. When we reach a point where we no longer understand the complexity in the environment, and need our organisms to continue dealing with it in different ways, we need to find a way for them to design themselves.

It may well be that the most effective way of enabling that is to make them full members of evolution – with all that this entails. And if we do – how will they feel about us? This is a version of the “control problem”, but it is an interesting version, because it assumes not that there is a single entity we need to control, but that what we need to do is design an ecology that will enable that control for us.

We need to build good predators that prey on the complexity eaters.

Evolution evolved

Will anything have changed at all here? Will we just have reverted to an evolved version of evolution? Or is there a real shift that happens when we reunite technology and nature? Perhaps that we shift from a blind evolution to one that actually has a stated and malleable purpose?

The idea of a purpose to evolution is complex and difficult, and filled with traps (it may well be that the strength of evolution is that it lacks purpose, as many have argued), but maybe, just maybe, this is the end result of the arc of technological development. Evolution with a purpose, an objective – a goal?

20 March 2017, Stockholm.
