Is there a xenobiology of artificial intelligence?

Draft – do not cite.

We may or may not be getting closer to the point where we can achieve artificial general intelligence. If we do, we will face a number of problems, including the so-called “control problem”, which in one of its many versions asks how we can control something that could be orders of magnitude more intelligent than us. This challenge, as Bostrom and others have pointed out, is not easy to solve, and we may end up having to think hard about the design of optimization functions and similar problems. But there is another way to approach this challenge – and to approach thinking about how a world with AGI in it will evolve – and that is to chart and explore the possible xenobiologies of artificial intelligence.

Often, when we discuss artificial intelligence, we discuss it in a disembodied way. We focus on the notion that it will be pure thought, a rationality completely without any anchoring bodily reality. This seems wrong, and potentially misleading, and it stops us from asking a lot of questions that have to do with the biological limitations of artificial general intelligence.

*

First, a conceptual note. When we speak of the “biology of x” we speak of x as being alive. That can mean a lot of different things, but looked at in a coarse-grained way, life is something that evolves, has a metabolism and is constrained by the frames of biology (it is probably wrong to speak of the laws of biology, so by analogy with the laws of physics I would suggest that we speak of the frames of biology – limits that biological existence imposes on systems). Will AGI be alive? I think it is fairly obvious that it will – it will have a metabolism and it will certainly be able to reproduce (we may think we can stop this by design, but here I am with the screenwriters of Jurassic Park – life finds a way – even into that which is not yet living), and it will exist in an ecosystem of sorts.

If we achieve AGI we will not only have created human-level intelligence, we will have created a form of life.

This is what I read Wittgenstein as saying when he discusses thinking machines. His idea – that when a machine is said to think we will no longer regard it as a machine, but will have to adopt the attitude we have towards a soul – is based on the realization that thinking and intelligence are inextricably connected with the ideas of humanity and life. Wittgenstein’s use of the religiously loaded “soul” should not confuse us. I think we could say that we will have an “attitude towards life” instead.

We can formulate a hypothesis here for discussion:

(i) Any system that achieves human-equivalent artificial general intelligence will also, for all intents and purposes, be a living system.

I am going to assume that this is true for this speculative essay, but I realize that it can be debated. Furthermore, I am going to assume that (ii) follows directly from (i).

(ii) Any living system exists in a series of biological frames, such as metabolism, size, lifespan, evolution, mutation and similar biological concepts. 

Again, I can see where this might be challenged, but I want to adopt it as a premise as well, since I think it leads to an interesting set of thoughts and an interesting mission: to chart and understand the xenobiology of artificial intelligence.

*

Xenobiology is a field of study that attempts to understand what other possible Darwinian trees might look like, or what other, non-Darwinian forms of life could look like. What we do when we ask about the xenobiology of artificial intelligence is open up a field of questioning that treats an AGI as a biological system, and that will allow us to ask what I think are important questions about a world with AGI in it. There are many such questions, and below I will list only a few preliminary ones, with a few thoughts on what the answer space might look like. The idea is not to do a thorough examination, but just to hint at a few interesting avenues of research.

*

Will AGI exist under Darwinian evolution? If we assume AGI is alive, it makes sense to ask if it will exist under selection pressure and mutate in different ways, or if it will be exempt from evolutionary pressure. This matters in a couple of different ways, and could actually have design implications as well.

It is today generally recognized that the technological systems we create have passed the point where they can be easily explained and audited. A key work stream in the field of machine learning is to figure out how to deal with this complexity and find other ways of controlling and reviewing systems that we can no longer explain at what Dennett would call the design level. In fact, it is useful to think about Dennett’s idea of different stances here, and to ask whether a new stance is needed in the Dennett framework to explain what machine learning systems are doing in a way that gives us both economy of explanation and maximal power of prediction – but that is a separate question.

The complexity conundrum – explored by, for example, Arbesman – leads to a situation where we no longer deal with black boxes, but in fact are better served by applying biological metaphors and psychological models when we describe the systems in question. (This in itself hints at the hypothesis we are adopting here: it is not just a metaphor. When we are better off describing something as biological, we should consider the chance that it may actually be biological in some basic sense.) The black boxes are actually black bugs, and there is no way we can open them and find out what is happening inside without killing them.

The design implications, then, lead to the question of whether we should build evolution into these complex systems. Should we make sure that an AGI mutates and evolves so that it can continue to develop? Or, to phrase it differently: how would you design life and evolution if you could build them from scratch?

This leads to a series of familiar questions about evolution’s efficiency, life’s wasteful search through Darwinian space and other questions that belong in the philosophy of biology, and that it would take too long to explore here (but may be worthwhile to come back to in future essays). Suffice it to say that the design of AGI – if we accept that it comes with designing life – not only contains a control problem, but also a choice of evolution for that system.

It will be tempting to replace the blind watchmaker with an eagle-eyed brain surgeon here, but whether that is the right thing to do is not clear. The idea that we may design life that could design its own evolution is intriguing, and it also raises questions about how you build an evolution and whether it should be Darwinian or not. The varieties of evolution are discussed in the philosophical literature, and one piece of work here would be to figure out what the design space of evolution looks like. What can we design and change, and what is given as a larger biological frame?

To return to our initial question: it seems hard to imagine an AGI that is totally exempt from all evolutionary pressures, and so the question will be how its evolutionary system is designed, or how it evolves on its own. The evolution of evolution, with AGI added, is also a related and interesting question.

Imagine a world thousands of years into the future where AGI has taken over evolution and runs it more efficiently, having created a post-Darwinian tree that can evolve within individuals to meet any small changes in the environment, optimizing second by second rather than generation by generation. Would that be more robust or less robust than our raw, natural selection?

*

Assuming AGI is alive also allows us to ask more mundane questions – like how large is an artificial intelligence? Science fiction offers us two main alternatives: a disembodied system spanning the world (trying to kill us) or a robot (trying to kill us). More interesting examples exist in, for example, the Culture series by Banks. He surmises that AGIs may be the size of ships – and that these ships may host civilizations of billions within them.

Banks’ answer to the question of the size of AGI is important for two reasons. One is that he understands that an AGI will be limited by a purely metabolic condition – it needs energy in order to act and interact with its environment. That energy needs to be produced in some way, and the simple physics of energy loss imposes certain size limitations.

Now, an AGI may draw on different energy sources and be polycentrically metabolic (i.e. draw on more than one metabolism), but even so there will be limitations in the design and efficiency of a metabolism.

And if we assume a world in which there is some kind of competition for resources or evolutionary selection pressures, the AGIs will need to figure out or adopt an optimal metabolic system.

That does not limit the sizes significantly. I expect you could imagine Dyson-sphere-sized AGIs that turn the solar system into a metabolic system, but you could also imagine much, much smaller systems that achieve enormous efficiency through very controlled metabolisms.
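To get a feel for how wide that metabolic range might be, here is a quick back-of-the-envelope comparison. The figures are illustrative assumptions only: the Sun’s approximate luminosity stands in for a full Dyson-sphere metabolism, and a large data center’s power draw stands in for a “much smaller” system.

```python
# Back-of-the-envelope comparison of two assumed metabolic scales.

solar_luminosity = 3.8e26    # watts, approximate output of the Sun
data_center_power = 1.0e8    # watts (~100 MW), an assumed "small" metabolism

print(f"Dyson-sphere metabolism: {solar_luminosity:.1e} W")
print(f"Small-system metabolism: {data_center_power:.1e} W")
print(f"Ratio: roughly {solar_luminosity / data_center_power:.0e}")
```

The point is not the exact numbers, but that the metabolic frame spans many orders of magnitude and still constrains design at both ends.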

But Banks’ other observation is more interesting, and that is that the ships are hosts to billions of humans. In Banks’ world AGIs are symbiotic with mankind, and have developed a partnership in which there is a certain human ability that AGIs cannot easily replicate. This may be wishful thinking on Banks’ part, but it is a possibility. In fact, a simple argument from cognitive economics may suggest that AGIs will link themselves symbiotically to humans.

It suffices to assume that the cognitive cost of different tasks varies across humans and AGIs in such a way that there is a set of important and valuable cognitive tasks for which the cost to a human is an order of magnitude lower than the cost to an AGI, while the value to the AGI is still very high. Further, imagine that the AGI is able to create an environment that reduces complexity to acceptable levels for humans, and that this in itself encourages humans to partner up. If that is the case, the size of an AGI will be determined both by an energy metabolism and by what we can call a cognitive metabolism. Both matter.
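To make the cognitive-economy condition concrete, here is a minimal toy sketch in Python. The function, the threshold and the example numbers are all assumptions for illustration, not anything claimed in the essay: the AGI hands a task to a human partner when the human is roughly an order of magnitude cheaper and the task is still worth its human cost.

```python
# Toy model (assumed numbers): when would an AGI delegate a task to a human?

def should_delegate(value_to_agi, agi_cost, human_cost, cost_ratio_threshold=10.0):
    """Delegate if the human is at least an order of magnitude cheaper
    and the task is still worth doing at the human's cost."""
    if human_cost <= 0:
        return False
    cheaper = agi_cost / human_cost >= cost_ratio_threshold
    worthwhile = value_to_agi > human_cost
    return cheaper and worthwhile

# Example: a judgment task that is cheap for a human, expensive for the AGI.
print(should_delegate(value_to_agi=100.0, agi_cost=80.0, human_cost=5.0))  # True
```

On assumptions like these, symbiosis is simply the cheaper equilibrium, which is the essay’s point about a cognitive metabolism.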

Initially, that size may be determined more by humans than by AGIs, and before we reach the poetic and perhaps likely vision Banks offers, in which the universe is populated by civilization-sized AGI-human ships, we would see, for example, the emergence of city-sized AIs.

The city is a key human organizational form, and may even be an extension of our eusociality. The city is to us as the hive is to the bee. It is not far-fetched to imagine that the most fruitful size for early AGI to adopt would be the city.

All of this is speculation, naturally – but the key is that the question of size exists in a biological frame.

*

Another question that we will face will be the life span of an AGI. Why, you may ask, should we assume anything but that an AGI would be immortal? Well, partly that question has to do with what kind of evolutionary context we imagine for an AGI, but partly it also has to do with the question of optimal life spans from an individual perspective.

What happens if death becomes optional in a biological system? This question is not merely theoretical: it challenges the notion that a system has to die in order to be biological. Is death really a biological necessity across all possible biologies? If it is, and we agree that AGIs are biological in some sense, do we not then also argue that they have a life span and will die?

Death can come in different ways – we could imagine deaths that depend on the energy sources available to an AGI. A Dyson-sphere-sized AI could die if its star became unstable and exploded as a supernova (making the trite point that material destruction also matters to an AGI). But we could also imagine that we will discover new types of death: perhaps there is death from complexity, for example, where a system becomes so complex that it turns unpredictable or chaotic, collapses, loses its intelligence and becomes merely a set of disorganized parts again.

There may well be complexity boundaries for intelligence that limit the possible size, age and power of an intelligence.

How old will an AGI become? That question is also interesting, and it will force us to rethink a few basics about life and death.

*

Finally, the last question I wanted to at least hint at is the question of what sense of time an AGI will have. Our sense of time is a mysterious thing and we do not know enough about it.

Roughly, we believe that our nervous system sends signals that are consolidated into a now in our brain, and that the pulse with which this happens is far from real time. Nothing we experience is experienced in real time. This lag – we can call it our cognitive lag – depends on the size of our nervous system. It has been shown, for example, that tall people have slightly larger cognitive lags than shorter people, because of the length of their nerve pathways. This is interesting in itself, but it also means that we can generalize the question of cognitive lag to our AGIs.
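As a rough illustration of how body size translates into lag, here is a small back-of-the-envelope calculation. The conduction velocity and the path lengths are assumed ballpark figures, not measured values.

```python
# A rough, assumed-number illustration of how path length adds to cognitive lag.

conduction_velocity = 60.0   # m/s, a ballpark figure for myelinated nerve fibres

for height in (1.60, 1.90):  # metres; assume the signal path scales with height
    lag_ms = height / conduction_velocity * 1000
    print(f"height {height:.2f} m -> roughly {lag_ms:.0f} ms of conduction delay")
```

The absolute numbers matter less than the scaling: the bigger the system, the longer it takes to assemble a now.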

The size of an AGI will determine the cognitive lag that it suffers, and hence the size of the chunk of time that becomes its now.

Going back to our idea of the Dyson-sphere-sized AI: its absolute smallest cognitive lag will be the time it takes an electromagnetic pulse to travel across the Dyson sphere and consolidate into a now.

For our solar system that would mean that, if the entire physical system were involved in the constitution of the now and the AGI consolidated close to the sun, it would take roughly 5.3 hours for signals to travel from Pluto to the center of the solar system.

A now that consolidated with a pulse of 5.3 hours would be on the order of 10,000–100,000 times slower than ours. We could imagine cognition that takes place on even slower time scales and over longer distances. A galaxy-wide AGI could even have a now that excluded life on Earth, because that life could arise and disappear between nows.
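The arithmetic behind those factors can be sketched quickly. The Pluto distance and the human consolidation window used here are assumptions (a few hundred milliseconds to a couple of seconds is a commonly cited range for the latter), chosen to reproduce the figures above.

```python
# Rough arithmetic behind the figures above; distances and windows are assumed.

AU = 1.496e11              # metres per astronomical unit
C = 2.998e8                # speed of light in m/s

pluto_distance_au = 38.0   # assumed Sun-Pluto distance (it varies between ~30 and ~49 AU)
lag_seconds = pluto_distance_au * AU / C
print(f"One-way light lag: {lag_seconds / 3600:.1f} hours")   # ~5.3 hours

# Assumed human "now" consolidation windows, in seconds.
for human_now in (0.2, 0.5, 2.0):
    ratio = lag_seconds / human_now
    print(f"If our now is {human_now} s, the AGI's now is ~{ratio:,.0f} times slower")
```

With those assumptions the ratio lands between roughly 10,000 and 100,000, which is where the range in the text comes from.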

And this is assuming that an AGI has the same kind of sense of time that we do. Could we perhaps imagine a life form that had a poly-chronological consciousness? A being that had several parallel nows or perhaps no now at all?

How would such a being interact with us?

*

The study of possible AGI biologies and their consequences seems a worthwhile field of study, and it raises interesting questions about the division between life and matter, as well as between technology and biology.

Maybe we are moving to a version of Clarke’s law that could state: any sufficiently advanced technology is indistinguishable from biology.

 

Nicklas Lundblad, Stockholm, August 2017
