Black boxes I: Different kinds of black boxes

We sometimes hear that we are living in a Black Box Society (see e.g. Frank Pasquale’s book on this theme). The immediate assumption may then be that we are dependent on a black box that is being kept closed, on purpose, and that we should ask for it to be opened. This, at least, seems a reasonable conclusion when we are faced with the concept as such.

But what if there are different kinds of black boxes? What if some boxes are black because they are closed, and some are black because the complexity of their content makes them cognitively black, in the sense that they cannot be understood by a single individual human being? As systems become more and more complex, some of them are going to go beyond the biological, cognitive boundaries of transparency – surely?

If so, how do we tell different kinds of black boxes from each other?

Posted in Uncategorized

Attention V: What we focus on shapes us

Consider this quote from Robert Nozick:

The ability and opportunity to focus our attention, to choose what we will pay attention to, is an important component of our autonomy.

And the footnote attached to it:

What we presently focus upon is affected by what we are like, yet over the long run a person is molded by where his or her attention continually dwells. Hence the great importance of what your occupation requires you to be sensitive to, and what it ignores de jure or de facto, for its pattern of sensitivities and insensitivities — unless a continuing effort is made to counterbalance this — will eventually become your own.

The implication is clear: we become what we pay attention to. There is probably a lot of truth to this, but I think there is a nuance: what we pay attention to and the way we pay attention both matter. The latter is not included in Nozick’s quote, but I like to imagine that he would agree. If we pay deep attention, continuous and focused, we become different people than if we flit from subject to subject. Our patterns of attention are actually fundamental to our identity.

In “Looking for My Self: Identity-Driven Attention Allocation”, researchers Nicole Verrochi Coleman, of the University of Pittsburgh, and Patti Williams, of the University of Pennsylvania, suggest the same thing. They write:

Drawing from identity-based motivation (Oyserman, 2009; Reed et al., 2012) we suggest individuals use attention to enhance identity-fit; selectively focusing on cues and stimuli that are identity-consistent.

We use “attention to enhance identity-fit”. We pay attention to what confirms our identity, and what strengthens this identity, even if it is one that has just been triggered by a simple exercise like writing about ourselves in a particular role:

Attention is a basic component of cognition; it determines what we see, and how we view the world. If individuals’ identities change attention processes, then the world we inhabit and experience is influenced by that identity. In our studies, people wrote about themselves as an athlete or as a volunteer—triggering the self-labeling process, and making participants’ athletic or charitable identities active.

So we can revisit Nozick and suggest that yes, we are molded by what we pay attention to, but what we pay attention to is also determined by our identities. The result is a feedback loop: the two tendencies will reinforce our identity and lock us into an identification process.

But, remembering the last post, we can note that this means that what distracts us may well stop that effect, and help us create moments where we can shape ourselves. Perhaps distraction is the only thing standing between us and an ever more focused identity?

Perhaps an ethics of distraction has to allow for the positive effects of distraction: the way it lets us break out of the vortex of identification?

Posted in Attention

Attention IV: Ethics of attention?

As we proceed in studying attention we note that it is a genuinely scarce resource. We can probably enhance our ability to pay attention, and we can strengthen our investment in attention in different ways, but we are also constantly under pressure to shift attention to other things. All around us we see attention sinks that try to attract us.

In a world of information abundance the ability to attract attention becomes crucial for survival, and not only for an organization’s ability to adapt to its environment. Attention can, in different ways, be converted into money or other kinds of utility, so how do you attract it?

Let’s pause and ask instead what kind of question that is. There are researchers who argue that this is not a business question or an informatics question, but a deeply ethical one.

Brian Dijkema writes in a review of Matthew Crawford:

Most of us think of distraction as a mere annoyance—a mild humming in the background while you work, a fly landing on your book while you read. We don’t usually think of it carrying the weight of a serious moral problem. In his new book, The World Beyond Your Head, Matthew Crawford argues that our inability to pay attention does carry such moral weight, and he argues this precisely because it dissolves our individuality and our freedom.

This perspective allows for a further question about how we choose to distract people, and what the moral implications of such distractions are. Former Googler Tristan Harris examines these issues in his writings and suggests that we need a new kind of labelling for information resources, a labelling that he calls “Time Well Spent”.

Let’s look at what that suggests to us: the way we pay attention determines who we are, individually and as a society, and so wasting somebody’s attention is an unethical thing to do. This is an interesting statement, and worth exploring more – but we can see that there is a problem here. If we take the currency analogy and extend it, we could argue that shops should come with labels that say “money well spent”. But how do we determine that? Our consumption of resources – money or attention – is deeply individual. Some people buy books by Nietzsche, and others prefer to watch movies by Herzog – is the money well spent, and is the time well spent? The question quickly becomes one of external validation of individual choice, and we run the risk of ending up in a carefully constructed and well-intentioned paternalism.

But that objection is itself not entirely satisfying. We do feel that there are better and worse ways to spend money, and hence also time and attention. And luring people into spending money in certain ways is hardly acceptable to us as a society either. So why should luring someone into spending their attention badly be allowed?

Let us do a thought experiment – assume that you are asked to vote on a new law that would prohibit link bait, and make it an offense to use link bait to lure people to a site. How would you vote?

You could argue that link bait is simply an exercise of free expression and should not be regulated at all. In fact, you could say, there is no good definition of what link bait is, so any legislation here would be horribly arbitrary.

You could also argue that, yes, we have legislation that protects consumers against fraudulent advertising, and we have self-regulatory bodies and institutions that monitor the advertising business, and link bait is nothing other than advertising, so it should be included — it does not matter that it is not luring people to buy something. It lures them into paying attention, and that attention has distinct and important value to the individual as well – and it is less well spent as a consequence of the link bait.

Where do you land?

Posted in Attention

Reading Notes II: Harcourt on the expository society

In Exposed: Desire and Disobedience in the Digital Age (Harvard University Press, 2015) we find author Bernard Harcourt making the point that ours is not a surveillance society as much as an expository society, in which we willingly and consciously expose ourselves. His argument feels incomplete, though. He notes the desire that drives our exposing ourselves in new media, but he does not explore it in depth. He places almost all responsibility for this trend outside of the individual, painting a picture of victims in a machine of technology geared towards generating information for control — but he does not explore why this machine exists. Is it only because the state wants power and corporate bodies want money? That almost seems to be what the argument boils down to. But his observations are interesting, and often thoughtful, and perhaps asking for more analysis is unfair, since he is writing about our own time, the time we live in, and does not necessarily have the distance or data needed to dig deeper? Harcourt writes:

To the contrary: we crave exposure and knowingly surrender our privacy and anonymity in order to tap into social networks and consumer convenience—or we give in ambivalently, despite our reservations. But we have arrived at a moment of reckoning. If we do not wish to be trapped in a steel mesh of wireless digits, we have a responsibility to do whatever we can to resist. Disobedience to a regime that relies on massive data mining can take many forms, from aggressively encrypting personal information to leaking government secrets, but all will require conviction and courage.

But there is so much more to discuss. Is this a stable state, for example? What happens when the amount of data doubles every 70 days? How can noise injected into these processes change everything? If we want to change the future we need to describe it as changing, and look at what the most likely changes are, I think. But these are difficult things.

Posted in Uncategorized

Reading Notes I: Luttwak on postheroic war

In his book on strategy, Strategy: The Logic of War and Peace (Harvard University Press, 2001), Luttwak makes a number of observations about modern war that are all very thought-provoking.

On why the nature of war has changed he writes:

If the significance of new family demographic is accepted, it follows that none of the advanced low-birth-rate countries of the world can play the role of a classic Great Power anymore, not the United States or Russia, not Britain or France, least of all Germany or Japan.

He calls these limitations “post-heroic”, and uses the concept to describe the complete unwillingness to risk lives that now characterizes modern states in the West. This trend pushes towards the mechanization of war, and we could read Luttwak as presaging the rise of drone warfare. The de-personalization of war is driven not by technology, but by the increased perceived value of human lives. Luttwak writes:

…their societies are so allergic to casualties that they are effectively “de-bellicized,” or nearly so.

Is this de-bellicization a driving force in shaping the future of war?

Posted in Reading Notes

Attention III: Attention and time

We saw that Simon thinks that attention can be measured as the time someone spends on a message, and that this is a pretty good approximation of attention overall. However, we also suggested that attention is focused time, and that we have more perception than attention, so we need to figure out whether there are other dimensions we could add to attention. I think there are, and I will try to explore a few here.

The first, and most obvious, is the length of your attention. For how long can you focus on something? We talk often about attention spans and how they are changing, and research does seem to suggest that our attention span is shrinking in the information age. A study from Microsoft Research found that our attention spans may be down to 8 seconds from 12 seconds just a few years ago. This would not be so depressing if it were not for the fact that the average goldfish has a 9-second attention span, according to the same study.

Attention spans are at the centre of one of our age’s big issues in education – the attention deficit discussion. With more and more children suffering from ADD or ADHD, we see that attention spans matter for the way we teach and train in the modern knowledge society. But how, then, can it be that some have such short attention spans? Surely evolution would have selected for those with the longest attention span and the deepest ability to concentrate? Not necessarily. According to some writers we are seeing the evolved responses of two different groups play out against each other in the information society, and they trace the difference back to the hunter/gatherer distinction. For hunters, a short attention span that allows quick scanning of an environment would be an asset. But for gatherers, slowly examining a field, it would be a weakness. One hypothesis, then, could be that we are descended from different genetic lineages, and that our attention spans are indeed inheritances from our ancestors – depending on whether they were hunters or gatherers. This may well be bunk – it is hard to imagine how we could test this hypothesis – but it highlights a more difficult question: what kinds of attention are selected for in nature, and how does that affect us?

Now, attention span is not the only way to think about and measure attention. We can also look at how it changes and shifts over time and how our investment in attention is translated into mental fruits. Lanham (1994) suggests that we need more and more attention to gain deeper insights, and that the relationship is not linear. In a simple graph:

(Figure: a simple graph of the exchange rate between attention and insight.)

Attention gives us sense and information quickly, knowledge more slowly, and ultimately wisdom at a very different exchange rate (and perhaps through different processes in addition).
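As a rough stand-in for the graph, we can sketch the claim numerically. The exponential schedule below is my own assumption, made purely for illustration – Lanham offers no such numbers; the only point is that the exchange rate worsens non-linearly with depth:

```python
import math

# Toy sketch of a non-linear attention-to-insight exchange rate.
# The levels and the exponential schedule are assumptions for
# illustration only, not Lanham's figures.
LEVELS = {"information": 1, "knowledge": 2, "wisdom": 3}

def attention_cost(level):
    """Attention units needed per unit of insight at a given depth."""
    return int(math.exp(2 * LEVELS[level]))

for name in LEVELS:
    print(f"{name}: ~{attention_cost(name)} attention units per unit gained")
```

Any convex schedule would make the same point: each further step up the hierarchy costs disproportionately more attention than the last.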

We can also ask how attention shifts and is directed over time. One interesting attempt at understanding this is found in the paper “Competition among memes in a world with limited attention” by L. Weng, A. Flammini, A. Vespignani & F. Menczer, published in Nature Scientific Reports 2, Article number 335 (2012), doi:10.1038/srep00335.

We find, in this paper, that our ability really is limited – as the system increases in “entropy” and more and more subjects are introduced, an individual’s entropy is flat and bounded – we cannot add a lot of new things to our attention without dropping others. The authors write:

The key observation here is that a user’s breadth of attention remains essentially constant irrespective of system diversity. This is a clear indication that the diversity of memes to which a user can pay attention is bound. With the continuous injection of new memes, this indirectly suggests that memes survive at the expense of others.

Attention as the constant is then a key limiting factor in the design of information systems. Looking at this we can expand Simon’s observation by noting that attention does indeed seem to be time – but in a few different dimensions: the time needed to transform a message into information, then knowledge and then…as well as the span of attention and the breadth of attention in time.
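The paper’s agent-based model is richer than this, but the bounded-breadth observation can be illustrated with a toy simulation (all parameters here are invented):

```python
import random

def simulate(steps=10_000, memory=5, p_new=0.4, seed=1):
    """Toy model of memes competing for a user's limited attention.

    The user attends to at most `memory` memes at once (a fixed
    breadth of attention). Each step, with probability `p_new`, a
    brand-new meme is injected; once memory is full it displaces a
    randomly chosen old one. Otherwise an existing meme is merely
    re-exposed and nothing changes.
    """
    rng = random.Random(seed)
    attention = []   # memes the user is currently attending to
    next_meme = 0
    for _ in range(steps):
        if rng.random() < p_new or not attention:
            if len(attention) >= memory:
                # a new meme survives only at the expense of another
                attention.pop(rng.randrange(len(attention)))
            attention.append(next_meme)
            next_meme += 1
    return len(attention)

print(simulate())  # breadth of attention, after thousands of injected memes
```

However many new memes the system injects, the user’s breadth of attention stays pinned at the memory limit – which is the bounded-diversity point the authors make.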



Posted in Attention

Attention II: Small, big and vast data – and the Goldilocks hypothesis

The idea that we are now living in the age of big data has become common wisdom, and it is not uncommon to see big data referenced as one of the “key trends” and “environment factors” in any analysis of our society and age. There is truth to this – we have more data available for analysis and innovation than we have ever had before (it is an open, and perhaps more philosophical, but interesting, question whether this data has always existed and is only now being made available for our use through technology – to simplify Heidegger’s analysis of technology). But this view is also complicated, mostly because of the adjective in big data – the “big”.

There are at least two problems with this.

The first is that if we use “big” now, we seem at a loss for words for what we will see in a few decades. The driving trends of data production seem to indicate that we will see enormous data growth in the coming years, and if data today is “big” we seem to lack a good qualifier for the data sets of the future. I will suggest that we call these “vast”, and try to argue another point about them later in this post, but it is worth noting that this is a problem.

The second is that the size of a data set matters only relative to something – relative to, say, our ability to consume it. We could call a data set big, for example, if it creates what Herbert Simon calls an “information rich environment”. This means that we judge the size of the data set relative to the environment, as well as to the ability of agents in that environment to use it productively. While a reasonable interpretation, we usually seem to default to other, less meaningful measures, like the byte size of a data set or its nested complexity. In fact, we could suggest that a data set is big if we cannot make use of it using our individual, biologically bounded attention. This is not a perfect measure by any means, but it makes more sense than just looking at a data set and judging it big because it is 2.4 TB large – because it also allows us to talk about the adjacent sets – small and vast.

Small data sets then become data sets that we can consume using our individual, biologically bounded attention – and vast data sets become data sets that we cannot consume or use even with tools for different reasons.

We see the outline here of a possible theory of two crucial thresholds in data society. The first is when the data at hand exceeds our individual, biological attention – and we have seen that Simon uses time as a good proxy for how much of that we have. The second is equally interesting, and occurs when we cannot make sense of the data set even with the augmented attention technologies at our disposal. When that happens we enter the world of vast data sets, where the rapid growth of spurious correlations and noise in the data sets makes it impossible to use them.

Let us unpack this a bit. Assume that for any data set (I am using data set and information in very loose ways, and am aware of it – and it is partly because I am not too comfortable with the attempts to rigorously define and distinguish between them) we can state the following:

(a) Data set X consumes attention A.

What we are arguing then is that something happens with vast data sets which makes it impossible to also state the following:

(b) Any data set X can be broken down into smaller sets x(1)…x(N) that each consume less attention than A.

We are not arguing that data sets cannot be broken down into smaller (more manageable) data sets, but that this process requires attention. It is not costless. This is a variant of the transaction cost argument, and it means that there are data sets whose transaction costs are such that analysis is impossible, since the attention needed to break the data set into analysable subsets is too costly.

Such data sets we will call vast data sets. They resist analysis and thus become useless to us. What we find, then, is that there are two boundaries to gaining knowledge from information or data sets. Vast data sets essentially collapse under the weight of their own complexity into the informatics version of black holes.
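The transaction cost point can be made concrete with a toy model. All the numbers below are invented; the only claim is structural – partitioning itself consumes attention, so beyond some size the cost of splitting alone exhausts the budget:

```python
def attention_to_analyse(size, budget, split_cost, chunk=100):
    """Toy transaction-cost model (all numbers hypothetical).

    Analysing data directly costs its size in attention units.
    Splitting a set into chunks of `chunk` units first costs
    `split_cost` attention per cut, before any analysis happens.
    Returns (total attention required, fits within budget?).
    """
    if size <= chunk:
        return size, size <= budget
    n_chunks = -(-size // chunk)            # ceiling division
    total = size + (n_chunks - 1) * split_cost
    return total, total <= budget

# A "big" set: splitting is costly, but the whole still fits the budget.
print(attention_to_analyse(1_000, budget=2_000, split_cost=50))
# A "vast" set: the cost of partitioning alone pushes it past the budget.
print(attention_to_analyse(10_000, budget=2_000, split_cost=50))
```

On these made-up numbers the first set remains analysable; the second is vast in exactly the sense above – breaking it into analysable subsets is itself too costly in attention.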

We can now formulate an alarmist hypothesis:

(I) We now live in the goldilocks zone of data: our big data sets can be converted into insights and we can innovate and learn from them. We have just enough data to be able to do this. Not too far into the future, however, we will reach the second threshold and more and more data sets will become vast – unanalyzable and useless, and we will not be able to break them into smaller, useful subsets at a reasonable cost.

This is a simplification, and wrong in important ways, but it allows us to think about the probability that we are living in this Goldilocks zone now, and that all the extrapolation that builds on the opposite hypothesis may simply be wrong:

(II) More data will increasingly offer us better and better insight into, and control over, all kinds of phenomena.

I would argue that (II) is the dominant hypothesis today – more data provides better control and insights – and that it is important to challenge and examine that. So while (I) is wrong it provides us with a possible way of examining (II).

When I say that (I) is wrong I think it is weak in the following ways, at least:

+ We may develop techniques and technologies to keep data sets big rather than let them grow vast.
+ Different uses require different specificity and vast data sets may be just big for some purposes.
+ We may learn filtering techniques/develop filtering technologies that allow us to keep up with data growth (a Red Queen hypothesis).

There is much else that could be challenged in the hypothesis, like the word “useless” – will these vast data sets really be useless, or just less useful? But the hypothesis provides us with a useful counter-narrative to the dominant one, and deserves at least to be discussed, even if only to be disproven.

But we should really ask ourselves what is more reasonable – that more data automatically means more value, or that the growth of data will reach a point where it collapses into noise?

Posted in Attention, data

Generative analogies II: A list of interesting analogies

As I explore the idea of thinking through analogies, I wonder if there is a basic set that one could develop the habit of always using when looking at something, in order to force a number of different perspectives. If you get five analogies – which would you use? The model question we will ask is:

(I) How is this like a …

The five models I would pick are probably the following (subject to revision).

  1. …a game. I find games extraordinarily powerful devices. As part of an offsite I once asked my team to design a game that describes what we do. The only requirements were that the game had to have a way to keep score, a number of players, and a definition of winning. Three groups constructed three entirely conceptually different games.
  2. …a city. The city is a fundamental concept, as witnessed by the use of this analogy in, for example, Plato and Augustine.
  3. …an evolved ecology. The ecology – especially an evolved one – forces us to think about how things interrelate. The game is agonistic – or can be – but here we focus on relatedness.
  4. …music. Musical pieces are interesting analogies, and quite hard to apply. Maybe poetry belongs to this as well?
  5. …computer software. Exploring a problem as a set of algorithms and data structures is usually quite helpful.

One could choose any number of other possible analogies too, I think. It would be interesting to ask people which ones they tend to default to. It would tell us a lot about their style of thinking.

Posted in Generative analogies

Attention I: Simon and attention – a re-reading

This autumn I have spent some time thinking about Simon’s excellent 1969 paper on designing organizations in information rich environments. It is a wonderful piece of writing and it contains so much more than it seems to have been remembered for. The key insight that this paper brings is simple to express – that with a wealth of information comes a poverty of attention – but how this dilemma plays out and how Simon addresses the ways in which we can think about this is a lot more complex than I think we have given him credit for. This short essay contains a re-reading of Simon’s paper with some reflections that I used for a series of lectures and talks.

Attention is increasingly becoming a key concept for informatics. It has always figured as a core idea, and as witnessed by Simon’s treatment it is a concept that is tied to at least two key trends in the information society: the fast growth of data and the evolution of artificial intelligence. The first is obvious to us, the second was obvious to Simon. That artificial intelligence – the necessity for AI – flows from the growth of information is an insight that is rather interesting, and one that I think can be complemented by noting that with the growth of information we also see a growth of complexity.

So, spending some time re-reading Simon is definitely worthwhile. Let’s walk through his paper in excerpts and discuss a few key quotes. We should start with the key proposition in his argument, and carefully note the wording:

“a wealth of information creates a poverty of attention and a necessity to allocate that attention efficiently…”

I think it is important to note that Simon is making two assertions here. The first is that we are in fact seeing a wealth of information growing around us (I will refrain from arguing that this is the case – I believe it to be true, and will leave the argument for a later piece of writing), and the second is that this growth is causally connected with the poverty of attention that we all seem to experience on an almost personal level. But what he then notes is far more complex: that this creates a necessity to allocate attention efficiently.

What does this mean? What is efficient allocation of attention? It seems reasonable to assume that we can only determine this if we relate our allocation of attention to certain objectives, but we could go further than that. We could argue that since we use attention even to understand what objectives we formulate – we need to consume attention to form objectives – the allocation of attention has to flow from something more fundamental. It needs to reflect some basic values.

If this is true, it means that our allocation of attention expresses our basic values. We could, in fact, read Simon as saying that our allocation of attention – individually and collectively – is a choice of fundamental moral importance. Is this over-reading Simon? I actually do not think it is – especially because of the way that he closes his argument (and we will return to this).

Having stated that we are in a poverty of attention, Simon realizes that he has to define attention. Information, he assumes, is already defined (this is a weakness in the argument), and he feels the need to let us think operationally about attention by quantifying it. Here we see one of the great strengths of Simon’s thinking overall, I think: he translates his often abstract arguments into concrete, operationally available parameters that we can work with in different ways. So, then, what is attention? Simon offers a deceptively easy definition:

“we can measure how much scarce resource is consumed by a message by noting how much time the recipient spends on it.”

Attention, then, is time. But is this true? Intuitively we probably agree with the idea that attention is something that exists in time — but we also know that we can spend time in a rather inattentive way. So just staring at a piece of paper for a given period of time is not devoting attention to it. Attention seems to be a peculiar activity, something that we do. It is not just something that happens – or passes – as time seems to be. So why does Simon ignore this basic intuition in his definition of attention?

There are, I think, two possible answers to this question.

The first is that he is introducing a definition that is as complex as is required by the rest of his argument – if we are looking at the design of organizations we have no realistic way of measuring “attention” but we can measure time. In fact, time as proxy for attention may offer the most efficient way of understanding attention without entering into deep, psychological analysis of attention as a concept. Here we would argue that Simon is simplifying the concept in order to be able to use it effectively.

The second possible answer is that Simon disagrees with our intuition – that he believes attention really is only time, but that attention can be honed or trained or developed in different ways. That someone stares at a message for three hours while someone else comprehends it in two minutes does not mean that the first recipient could have made do with two minutes – it just means that the two recipients have attention of different quality. Such a reductive view of attention would assume that attention is perception, and nothing more. If we assume that attention and perception are synonymous we could state that a wealth of information creates a poverty of perception, and I think that shows why this is an untenable interpretation. There is no poverty of perception – perception is constant, but attention is not.

Now, this is interesting, because if we assume scarce resources we seem to have two scarce resources here: perception and attention, and of the two, attention is the scarcer. Perception is the upper boundary of attention. Reading Simon as saying this (and assuming that his rather hasty definition is just an operational move) we can make more sense of his suggested solutions – and I think that he would agree with this interpretation, not least because it is necessary to solve the problem he has posed. If we have a poverty of perception there is nothing we can do.

When approaching the solution, Simon makes a few very simple observations. Remember that he is studying organizations in information rich environments. So what he is describing is simply an organizational response to the abundance of information. Still, we can learn a lot from him in thinking about individual responses as well. In fact, we may even entertain the hypothesis that there is no difference between an organization and an individual at all — an individual can be thoughtfully modeled as an organization in just the same way a corporation can. This fractal nature of organization is another aspect of Simon’s thinking that we will not have time to dig deeper into here, but it remains an interesting line of investigation.

In looking at the solution, then, Simon says this:

“an information processing subsystem will reduce net demand on the rest of the organization only if it absorbs more information, previously received by others, than it produces”

At first blush this is deeply trivial and very disappointing. Of course we need an asymmetrical relationship between reception and production if we are to deal with information wealth, but Simon expands on this in a way that immediately makes it more interesting:

“if it listens and thinks more than it speaks”

And then he notes that how this is done is “largely independent of specific hardware, automated or human”.

This is where we see the link between artificial intelligence – or machine learning – and information abundance. Because what we need to build is something that transforms information:

“it can transform (“filter”) information into an output that demands fewer hours of attention than the input information”

This subsystem – automated or human – is a filter. This is a key insight — the response to information wealth is filtering, and filtering is something that requires, as the information abundance grows, more and more intelligence. The ultimate filter is able to reduce vast amounts of information to insights. It “listens and thinks more than it speaks”. So as we are thrown into a world of ever-increasing information, the need for artificial intelligence is an inevitable consequence of that information avalanche.
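Simon’s criterion can be written down as a one-line balance sheet. A minimal sketch, using message length as a stand-in for the attention a message consumes, in the spirit of his time-based measure – the digest function and the sample messages are entirely hypothetical:

```python
def net_demand_reduction(messages, summarize):
    """Sketch of Simon's criterion: a filter reduces net demand on the
    rest of the organization only if the attention-cost of its output
    is lower than the attention-cost of the input it absorbs.
    Attention cost is proxied here by text length (reading time)."""
    absorbed = sum(len(m) for m in messages)   # attention the filter absorbs
    produced = len(summarize(messages))        # attention its output demands
    return absorbed - produced                 # positive => the filter helps

# A hypothetical summarizer: keep only the first clause of each message.
digest = lambda msgs: "; ".join(m.split(",")[0] for m in msgs)

reports = ["sales up 4%, details follow in appendix A",
           "server load nominal, full metrics attached"]
print(net_demand_reduction(reports, digest))   # positive: it "listens more than it speaks"
```

A filter whose output demanded as much attention as its input would, on this accounting, reduce nothing – which is exactly Simon’s point about absorbing more than it produces.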

It is worthwhile thinking about this for a moment. What I am saying here is that we can read Simon as making the argument that information-rich environments – information abundance – lead to the necessity of building new systems that listen, think and then speak. The data explosion necessitates the development of listening, thinking and speaking technologies.

He writes:

“Our capacity to analyse data will progress at an adequate pace only if we are willing to invest in […] artificial intelligence.”

“In a knowledge-rich world, progress does not lie in the direction of reading information faster, writing it faster, and storing more of it.”

“Progress lies in the direction of extracting and exploiting the patterns of the world […] so that far less information needs to be read, written or stored.”

Extrapolating, we could say that “Big Data” (oh, that horrid term) or data analytics is not merely a tool – it is an evolutionary response strategy without which we could not navigate the complex, information-rich environments we have created. And if we fail at it, the information wealth we enjoy will devolve into complexity and generate increasing costs for us as a civilization.

Fine: so we need ML or AI to deal with information avalanches. So what? Well, imagine a world where the majority of the information you consume has been filtered heavily to give you a chance to consume it within your biological boundaries. In such a world the information systems will be designed following this peculiar reversal that Simon notes:

“It is conventional to begin the design of an IPS by considering the information it will supply. In an information-rich world, that is doing things backwards. The crucial question is how much information it will allow us to withhold from the attention of other parts of the system.”

Simon seems to say that we will live in a society – an organization – where information withholding will be more important than information access. The shape, design and architecture of information withholding – filtering – will be more interesting than the design of information access.

There is a risk in all of this, of course. We could imagine Simon outlining an apocalyptic risk quite different from all the scenarios in which we end up in a war against the machines: a world in which the AIs simply create a massive disinformation society, where we are given anodyne information and data opiates to keep us calm, and where any knowledge we can claim will be filtered and dependent – by necessity, because of the enormity of the data available – on the AIs that provide it for us. Not so much an information society defined by access to information as an ignorance society defined by what has been withheld. (I have lectured elsewhere about the ethics of ignorance, and we may want to return to that theme later.)

Simon, however, is optimistic. And his point is that the response to information abundance needs to be a deeper and fuller understanding of the human mind, and that this needs to be the dominating science project of our time. He contrasts our view of the human mind with that of – in 1969 – the moon:

“The exploration of the Moon is a great adventure, after the Moon there are other objects still further out in space. But Man’s inner space, his mind, has been less well known than the space of the planets. It is time we establish a National policy to explore that space vigorously, and that we establish goals, time-tables and budgets.”

To Simon AI was then at least as important a project as the moon landing. And his purpose was simple: it was to augment man:

“Will you think me whimsical or impractical if I propose, as one of those goals […] an order of magnitude increase in the speed with which a human being can learn a difficult school subject?”

There is a humanism in this that is quite important. Simon was never interested in technology for its own sake, or because it would help solve some minor problem. He felt that it was the way that we would be able to augment ourselves. And perhaps that also provides the answer to the question about trust – if we integrate this technology in ourselves, as attention augmentation technologies, we will have become the filters we need to make sense of the world.

Posted in Attention, Lectures and essays | Leave a comment

Generative analogies I: Software

One question that recurs quite often in my conversations with students and others is how technology affects business models, or how innovation in different sectors of the economy is affected by technical change. While an interesting question, it often ends up being answered with generalities such as “it is important to adapt to change” or “new business paradigms are emerging” (or other, similarly unhelpful commentary). Another way to approach the question is to seek out models or analogies that can help in understanding a particular business. Let’s think of this as the practice of employing generative analogies.

A generative analogy, in the way I think of the term, is an analogy that allows us to explore a particular problem through the analogy’s lens. If the full form of an analogy is:

(i) A is to B as C is to D

our generative analogy is the abbreviated form “X as Y”: an analogy we use to explore X by assuming that X is as Y. There is nothing new here, but it is important to remember that this is what we are doing. When we explore war as a game we are emphatically not claiming that war is a game. Most of us realize this, yet we still forget that there may be very important differences between X and Y, and so the last step of a generative analogy methodology should always be to list the ways in which X is not as Y. War costs human lives. Games do not.

Generative analogies (some prefer to call them models, but I think a model requires more description and detail than an analogy – the idea that we should be able to physically build something in order to call it a model is a good one) are interesting in a lot of different ways. Here are a few conjectures about them that I believe hold true. They could probably best be described as a subclass of Dennett’s tools for thinking.

(A) There exists a fundamental set of perhaps 10 to 100 generative analogies that are extremely useful for general problem solving. These can be fruitfully applied to a broad range of problems, and the results will work as a battery of diagnostic tests for any problem space as well. I think these include examples like: …as a game, …as music, …as an organism and …as a city. Any phenomenon we study can be examined as game, music, organism and city – in different ways.
(B) Powerful generative analogies are cultural, and a wealth of them can be found in art and literature. One important way of acquiring generative analogies is to read – and read a lot.
(C) Generative analogies do not provide strictly replicable results – two different people using them will come to different insights and conclusions about a problem – but they provide rough consensus results. Two individuals studying war as a game will discover roughly the same similarities and differences.
(D) Generative analogies are more useful if you use more than one.

There is much more to say about the general idea of generative analogies (and we will explore it further in a series of posts going forward). Before we move on to a particular case I think it is important to point out that this is not a novel observation of mine, but builds on the work of, most importantly, Douglas Hofstadter and his research group FARG. Hofstadter argues that analogy is in fact fundamental to any kind of thinking, and that our ability to think – our cognitive abilities overall – is heavily determined by our ability to handle and work with analogies. I fully subscribe to this view.

Now, I want to make a slightly more specific argument here. I want to argue that with the rise of computer science a new fundamental, enormously powerful generative analogy has emerged: that of software.

Technology holds a special role in our use of analogy. We have, self-consciously, always used the most complicated technology of our time to describe ourselves, or our minds. The water mill, the machine, the computer — the analogical use of technology says a lot about how we have perceived ourselves over time. The clockwork of the universe is another example that brings home how we try to capture complexity in our use of analogies based on technology.

Thus every new technology automatically lends itself to analogical use. But I would argue that software is slightly different. Software is not a technology but an abstract concept. The idea of the algorithm, and of the data structures that algorithms use, is not dependent on any specific technology but can in fact be implemented in a number of different ways. Step-wise processing and data structure storage is something we could imagine happening altogether without computers.

Take the first search engine, the Répertoire Bibliographique Universel. Created by the underestimated and sadly almost forgotten genius Paul Otlet, this search engine allowed people to ask questions and get answers from an index card system that grew from 400 000 to 15 million cards. The whole search algorithm was implemented in handwritten letters, index cards, boxes and drawers, and the algorithms were executed by librarians who received the letters and answered them. No computers. This was – undoubtedly – a search engine, but it was implemented entirely in hardware, both human and analogue.

In order to answer the question we started with – about service sectors, business models and more – we need to add one consideration. We need to see that all software is the solution to a well-delineated problem. An online bookstore sells books. It solves the problem of how to sell books effectively, and maximizes for that. Implemented in computer systems it can solve that problem much more effectively than a brick-and-mortar store (which is why, I think, Marc Andreessen made the excellent observation that “software is eating the world” – an observation about the efficiency of computer-implemented software vis-à-vis software implemented in other ways). Now, that does not mean that all bookstores need to die and that online will take over everything; but where the problem solved is simply one of selling books, the online selling model will probably win. The new generation of indie bookstores can also be seen as software, but software solving another problem. The problem these stores solve is “how can I interact with a literary community, and feel part of a certain culture around books, writers, art and music?” – and such problems are much harder to solve in digital code than in analogue code.

So we must ask of every single business model we study: what is the problem it is solving, and what does the software look like that solves that problem? If that software is more efficiently implemented in digital code, it will be. If it needs a hybrid code implementation, it will get that. If it is analogue only (nursing comes to mind), it will remain implemented that way.
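This way of interrogating a business model can be sketched as a small data model. Everything below – the category names, the example businesses and their classifications – is my own illustration of the analogy, following the post’s examples of the online bookstore, the indie bookstore and nursing:

```python
from dataclasses import dataclass
from enum import Enum

class Code(Enum):
    DIGITAL = "digital"    # most efficiently solved in computer-implemented code
    HYBRID = "hybrid"      # needs both digital and analogue components
    ANALOGUE = "analogue"  # best solved by human and physical processes

@dataclass
class Business:
    name: str
    problem: str           # the well-delineated problem this business solves
    implementation: Code   # the kind of code that solves it best

# Illustrative examples, following the cases discussed in the text:
examples = [
    Business("online bookstore", "sell books efficiently", Code.DIGITAL),
    Business("indie bookstore", "host a literary community", Code.HYBRID),
    Business("nursing", "care for patients", Code.ANALOGUE),
]

# The analytical move: group problems by the code that solves them best.
by_code = {}
for b in examples:
    by_code.setdefault(b.implementation, []).append(b.name)
```

The point of the sketch is only the two questions it forces us to answer for every business: what is the problem, and which implementation category does its solution fall into?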

Now, the questions we ask about the future of work, service innovation and business models depend on the answer to another, perhaps more interesting problem: how many problems exist in each category? How many problems can solely be solved by digital code implementation, how many by hybrid code implementation and how many by analogue code implementation? And of course: what does the hybrid set of solutions look like in terms of different hybrid arrangements?

Business as software is not a question of how we write a program, but of how we can conceive of a business as a set of algorithms and data structures and ask, for each and every one, whether it should be implemented digitally or not. And this generative analogy provides us, as Daniel Dennett has noted, with a supremely efficient tool for thinking.

Posted in Generative analogies. | 5 Comments