This autumn I have spent some time thinking about Simon’s excellent 1969 paper on designing organizations in information rich environments. It is a wonderful piece of writing, and it contains so much more than it is usually remembered for. The key insight of the paper is simple to express – that with a wealth of information comes a poverty of attention – but how this dilemma plays out, and how Simon suggests we think about it, is more complex than I think we have given him credit for. This short essay contains a re-reading of Simon’s paper with some reflections that I used for a series of lectures and talks.
Attention is increasingly becoming a key concept for informatics. It has always figured as a core idea, and as witnessed by Simon’s treatment it is a concept that is tied to at least two key trends in the information society: the fast growth of data and the evolution of artificial intelligence. The first is obvious to us, the second was obvious to Simon. That artificial intelligence – the necessity for AI – flows from the growth of information is an insight that is rather interesting, and one that I think can be complemented by noting that with the growth of information we also see a growth of complexity.
So, spending some time re-reading Simon is definitely worthwhile. Let’s walk through his paper in excerpts and discuss a few key quotes. We should start with the key proposition in his argument, and carefully note the wording:
“a wealth of information creates a poverty of attention and a necessity to allocate that attention efficiently…”
I think it is important to note that Simon is making two assertions here. The first is that we in fact are seeing a wealth of information growing around us (I will refrain from arguing that this is the case – I believe it to be true, and will leave the argument for a later piece of writing), and the second that this growth is causally connected with the poverty of attention that we all seem to experience on an almost personal level. But what he then notes is far more complex: that this creates a necessity to allocate attention efficiently.
What does this mean? What is efficient allocation of attention? It seems reasonable to assume that we can only determine this if we relate our allocation of attention to certain objectives, but we could go further than that. We could argue that since we use attention to understand even what objectives we formulate – we need to consume attention to form objectives – the allocation of attention has to flow from something more fundamental. It needs to reflect some basic values.
If this is true, it means that our allocation of attention expresses our basic values. We could, in fact, read Simon as saying that our allocation of attention – individually and collectively – is a choice of fundamental moral importance. Is this over-reading Simon? I actually do not think it is – especially because of the way that he closes his argument (and we will return to this).
Having stated that we are in a state of poverty of attention, Simon realizes that he has to define attention. Information, he assumes, is already defined (this is a weakness in the argument), and he feels the need to also allow us to think operationally about attention by quantifying it. Here we see one of the great strengths of Simon’s thinking overall, I think: he translates his often abstract arguments into concrete and operationally available parameters that we can work with in different ways. So, then, what is attention? Simon offers a deceptively easy definition:
“we can measure how much scarce resource is consumed by a message by noting how much time the recipient spends on it.”
Attention, then, is time. But is this true? Intuitively we probably agree with the idea that attention is something that exists in time — but we also know that we can spend time in a rather inattentive way. So just staring at a piece of paper for a given period of time is not devoting attention to it. Attention seems to be a peculiar activity, something that we do. It is not just something that happens – or passes – as time seems to be. So why does Simon ignore this basic intuition in his definition of attention?
There are, I think, two possible answers to this question.
The first is that he is introducing a definition that is only as complex as the rest of his argument requires – if we are looking at the design of organizations we have no realistic way of measuring “attention”, but we can measure time. In fact, time as a proxy for attention may offer the most efficient way of understanding attention without entering into a deep, psychological analysis of attention as a concept. Here we would argue that Simon is simplifying the concept in order to be able to use it effectively.
The second possible answer is that Simon disagrees – that he believes attention is only time, and nothing more. If someone stares at a message for three hours while someone else comprehends it after two minutes, that does not mean the first could have used only two minutes – it just means the two recipients had attention of different quality, honed or trained or developed in different ways. Such a reductive view of attention would assume that attention is perception, and nothing more. If we assume that attention and perception are synonymous we could state that a wealth of information creates a poverty of perception, and I think that shows why this is an untenable interpretation. There is no poverty of perception – perception is constant, but attention is not.
Now, this is interesting because if we assume scarce resources we seem to have two scarce resources here: perception and attention, and of the two attention is the more scarce. Perception is the upper boundary of attention. Reading Simon as saying this (and assuming that his rather hasty definition is just an operational move) we can make more sense of his suggested solutions – and I think that he would agree with this interpretation; not least because it is necessary to solve the problem he has posed. If we have a poverty of perception there is nothing we can do.
When approaching the solution, Simon makes a few very simple observations. Remember that he is studying organizations in information rich environments. So what he is describing is simply an organizational response to the abundance of information. Still, we can learn a lot from him in thinking about individual responses as well. In fact, we may even entertain the hypothesis that there is no difference between an organization and an individual at all — an individual can be thoughtfully modeled as an organization in just the same way a corporation can. This fractal nature of organization is another aspect of Simon’s thinking that we will not have time to dig deeper into here, but it remains an interesting line of investigation.
In looking at the solution, then, Simon says this:
“an information processing subsystem will reduce net demand on the rest of the organization only if it absorbs more information, previously received by others, than it produces”
At first blush this is deeply trivial and very disappointing. Of course we need an asymmetrical relationship between reception and production if we are to deal with information wealth, but Simon expands on this in a way that immediately makes it more interesting:
“if it listens and thinks more than it speaks”
And then he notes that how this is done is “largely independent of specific hardware, automated or human”.
This is where we see the link between artificial intelligence – or machine learning – and information abundance. Because what we need to build is something that transforms information:
“it can transform (“filter”) information into an output that demands fewer hours of attention than the input information”
This subsystem – automated or human – is a filter. This is a key insight — the response to information wealth is filtering, and filtering is something that requires, as the information abundance grows, more and more intelligence. The ultimate filter is able to reduce vast amounts of information to insights. It “listens and thinks more than it speaks”. So as we are thrown into a world of ever-increasing information, the need for artificial intelligence is an inevitable consequence of that information avalanche.
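Simon’s criterion can be made concrete in a small sketch. Using his own proxy – attention as time spent on a message – a filter “listens more than it speaks” when the attention-time of its output is less than the attention-time of the inputs it absorbs. Everything here (the reading speed, the word counts, the function names) is my own illustrative assumption, not anything Simon specifies:

```python
# Toy sketch of Simon's filter criterion, with reading time as the
# proxy for attention. All parameters are illustrative assumptions.

WORDS_PER_MINUTE = 200  # assumed average reading speed


def attention_minutes(text: str) -> float:
    """Estimate the attention a message consumes as time spent reading it."""
    return len(text.split()) / WORDS_PER_MINUTE


def reduces_net_demand(inputs: list[str], output: str) -> bool:
    """The subsystem helps only if it absorbs more attention-time
    (the inputs it reads on our behalf) than its output demands."""
    absorbed = sum(attention_minutes(msg) for msg in inputs)
    produced = attention_minutes(output)
    return produced < absorbed


# A subsystem that "listens and thinks more than it speaks":
# ten 400-word reports in, one 150-word digest out.
reports = ["word " * 400] * 10
summary = "word " * 150
print(reduces_net_demand(reports, summary))  # True: ~20 min absorbed, ~0.75 min produced
```

The point of the sketch is only to show why the criterion is about a ratio of absorbed to produced attention, not about the cleverness of the filter itself; a filter that emitted more than it read would add to the organization’s net demand however intelligent it was.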
It is worthwhile thinking about this for a moment. What I am saying here is that we can read Simon as making the argument that information rich environments – information abundance – leads to the necessity to build new systems that think, listen and then speak. The data explosion necessitates the development of thinking, listening and speaking technologies.
“Our capacity to analyse data will progress at an adequate pace only if we are willing to invest in […] artificial intelligence.”
“In a knowledge-rich world, progress does not lie in the direction of reading information faster, writing it faster, and storing more of it.”
“Progress lies in the direction of extracting and exploiting the patterns of the world […] so that far less information needs to be read, written or stored.”
Extrapolating here we could say that “Big Data” (oh, that horrid term) or data analytics is not merely a tool – it is an evolutionary response strategy without which we could not navigate the complex and information rich environments we have created – and if we fail at it, the information wealth we enjoy will devolve into complexity and generate increasing costs for us as a civilization.
Fine: so we need ML or AI to deal with information avalanches. So what? Well, imagine a world where the majority of the information you consume has been filtered heavily to give you a chance to consume it within your biological boundaries. In such a world the information systems will be designed following this peculiar reversal that Simon notes:
“It is conventional to begin the design of an IPS by considering the information it will supply. In an information-rich world, that is doing things backwards. The crucial question is how much information it will allow us to withhold from the attention of other parts of the system.”
Simon seems to say that we will live in a society – an organization – where information withholding will be more important than information access. The shape, design and architecture of information withholding – filtering – will be more interesting than the design of information access.
There is a risk in all of this, of course. We could imagine Simon outlining an apocalyptic risk quite different from all the scenarios in which we end up in a war against the machine: a world in which the AIs simply create a massive disinformation society, in which we are given anodyne information and data opiates to keep us calm, and where any knowledge we can claim will be filtered and dependent – by necessity, because of the enormity of the data available – on the AIs that provide it for us. Not so much an information society defined by the access to information as an ignorance society defined by what has been withheld. (I have lectured elsewhere about the ethics of ignorance, and we may want to return to that theme later).
Simon, however, is optimistic. And his point is that the response to information abundance needs to be a deeper and fuller understanding of the human mind, and that this needs to be the dominating science project of our time. He contrasts our view of the human mind with that of – in 1969 – the moon:
“The exploration of the Moon is a great adventure, after the Moon there are other objects still further out in space. But Man’s inner space, his mind, has been less well known than the space of the planets. It is time we establish a National policy to explore that space vigorously, and that we establish goals, time-tables and budgets.”
To Simon, AI was then at least as important a project as the moon landing. And his purpose was simple: it was to augment man:
“Will you think me whimsical or impractical if I propose, as one of those goals […] an order of magnitude increase in the speed with which a human being can learn a difficult school subject?”
There is a humanism in this that is quite important. Simon was never interested in technology for its own sake, or because it would help solve some minor problem. He felt that it was the way that we would be able to augment ourselves. And perhaps that also provides the answer to the question about trust – if we integrate this technology in ourselves, as attention augmentation technologies, we will have become the filters we need to make sense of the world.