Kate's Comment

Thoughts on British ICT, energy & environment, cloud computing and security from Memset's MD

So which apocalypse should we worry about, and what should we do about it?

The last few months have been interesting for “preppers”. Preppers are generally excessively bright people who are consequently rather paranoid: they worry about events which might bring about the end of civilised society and try to make preparations so that they are better suited to survive the breakdown of civilisation. I consider myself among them, though my efforts are not particularly extreme; I don’t have a bunker or anything like that, but I do own an ex-MOD Land Rover Defender with the Tithonus modifications (see right – note that the weapons are legal airsoft/BB guns!), a lot of tinned food, a small cache of diesel, a small generator, lots of wood for heating and an extremely secure home.

But back to the point. Not only did some ancient civilisation’s spiral calendar run out two weeks ago, causing some doomsayers to get a bit excitable, but people more widely have started to consider seriously what might bring about our end. Last month there was an interesting article in New Scientist on the new Centre for the Study of Existential Risk (CSER). It seems that the focus is on nuclear war, doomsday viruses, climate change & the singularity (machines) as the modern-day horsemen of the apocalypse.

Since it appears that musing about such things is no longer considered a bit loony, and given that the Mayans were clearly wrong, I thought I would share some of my own thinking on which apocalypse we should actually be worrying about.

Is the clock ticking?

This is something I have been thinking about for a long time – roughly 22 years. When I was 12 or 13 I became very interested in astronomy and developed a hobbyist interest in astrophysics. At that tender age one of the things that occurred to me is that, if we assume that humanity as a species will continue to expand in numbers exponentially over time, and if we assume a somewhat deterministic universe, then it is extremely odd and improbable that I should find myself alive now, when humanity occupies only one tiny planet in one small corner of the cosmos.

Consider this: I give you a bag and tell you that it has a number of blue balls in it and one red ball and instruct you to start picking balls from the bag without looking. The third ball is the red one. I then ask you whether you think there are 10 or 1,000 balls in the bag. You would likely suspect the former. Now, this isn’t a terribly robust hypothesis but it got me thinking since it could be used to suggest that there will likely not be many more humans.
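
To make the intuition concrete, here is a minimal sketch (my own illustration, not part of the original argument) comparing how likely a third-draw red ball is under the two bag sizes:

```python
# Minimal sketch of the bag-of-balls intuition: with a single red ball
# among N, the chance it shows up on any one fixed draw - including the
# third - is simply 1/N.

def p_red_on_given_draw(total_balls: int) -> float:
    """Probability the lone red ball appears at any one fixed draw position."""
    return 1.0 / total_balls

p_small = p_red_on_given_draw(10)      # bag of 10 balls   -> 0.100
p_large = p_red_on_given_draw(1_000)   # bag of 1,000 balls -> 0.001

print(f"P(red on 3rd draw | 10 balls)    = {p_small:.3f}")
print(f"P(red on 3rd draw | 1,000 balls) = {p_large:.3f}")
print(f"Likelihood ratio favouring the small bag: {p_small / p_large:.0f}x")
```

Seeing the red ball so early is a hundred times more likely if the bag is small, which is exactly the reasoning that makes an early birth rank look suspicious if humanity were destined to number in the trillions.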

At a similar age I also decided that one lifetime was definitely not going to be enough to explore all the interesting things in this little world, let alone the universe, and thus began my interest in life extension. That is why my Masters degree was in biomedical science, specialising in neurology, but more about that later. It also led me to conclude that there might be some threats to my planned immortality; my reasoning at the time was questionable, but I have, in the meantime, come to think the conclusion was probably right.

The four horsemen

Back to the CSER’s four horsemen. They are looking at things which humanity might do or create to bring about its own demise, which seems sensible, as those are the ones we have a chance of avoiding. Let’s run through those four threats.

Nuclear war. Yes, I can see this as a threat, but I think it unlikely that it would result in our total extinction. We are a highly adaptive species and, though a war might decimate us, there are plenty of remote areas which would be largely untouched; a nuclear war would focus on areas of strategic importance to the warring nations. Even in a nuclear-winter scenario it seems unlikely that everyone would die: as the population diminished, consumption of the remaining finite food and fuel would eventually plateau at a level which allowed a few of us to survive the winter.

Doomsday virus. There are some good models and even fun games on this, and the conclusion is that it is actually extremely hard to design a realistic pathogen which would entirely eliminate us. This is mainly because most countries have good systems for shutting down their borders in the event that a neighbour has a pandemic outbreak, and we have become rather good at developing immunisations swiftly. Having been hurt by such outbreaks in the past, we are quite well prepared. Perhaps the most credible threat would be a pathogen with a very long gestation period that also affected its hosts in such a way as to make them “aggressively infectious” (ie. rage virus / zombies), but with my background in biomedical science I think that very unlikely!
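
To illustrate the underlying dynamics, here is a toy SIR (susceptible/infected/recovered) sketch of my own – not one of the published models referred to above – showing how sharply cutting contact rates (as border closures and quarantines do) leaves the great majority of people uninfected. All parameter values are illustrative assumptions:

```python
# Toy SIR (susceptible / infected / recovered) model illustrating why a
# pathogen struggles to reach everyone once contact rates are cut hard.
# All parameters are illustrative assumptions, not fitted to a real disease.

def run_sir(population=7.0e9, beta=0.4, gamma=0.1, days=730,
            lockdown_day=60, lockdown_factor=0.2):
    """Daily-step SIR simulation with a crude 'lockdown' after lockdown_day."""
    s, i, r = population - 1.0, 1.0, 0.0
    for day in range(days):
        b = beta * (lockdown_factor if day >= lockdown_day else 1.0)
        new_infections = b * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return s, i, r

s, i, r = run_sir()
print(f"Fraction never infected after two years: {s / 7.0e9:.1%}")
```

Under these made-up numbers the overwhelming majority are never infected, which is the intuition behind the “extremely hard to eliminate us” conclusion.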

Climate change. The worst possible outcome of climate change is probably an extraordinarily rapid transition out of our current ice age. We are actually in an interglacial, since the ice caps have receded, but the normal cycle would be for us to return to a “proper” ice age in a few tens of thousands of years, with the glaciers advancing about as far as Birmingham. The glaciers are currently receding at an alarming rate, so the idea that we might, within a couple of lifetimes, revert the planet to the state the dinosaurs enjoyed (no ice caps, rather warm, and with the temperate zones shifted a long way north) is a remote possibility, but it is hard to see how this might be a terminal event. Indeed, I can’t see it killing more than a few tens of percent of our population through famine.

Singularity event (rise of the machines / artificial intelligence). Now on this one I have to agree with them, but actually I think it is inevitable and should be embraced, not feared. More on that later.

Threats from above

But what about extraterrestrial intelligence? Yes, I am serious; surely that must rank highly among the things human activity could do to end our own existence? We recklessly announce our presence to a universe which appears to be capable of supporting life, and yet the skies are mysteriously quiet.

Again, I first started thinking about this in my early teens. I was bamboozled by the night sky; where the hell was everybody? It seemed (and still seems) to me that if life could spontaneously spring up on this planet then surely among the 100,000,000,000 stars in our galaxy there would be other such occurrences, and failing that, surely the observable universe, consisting of something on the order of 80,000,000,000 galaxies and about 6 x 10^22 stars, would be teeming with life? Further, there have recently been tantalising hints that there might be organic matter on Mars, and the recent penetration of Lake Vida in Antarctica revealed wholly new forms of life which could in theory survive under the ice on Europa, one of Jupiter’s moons (ref). If our closest neighbours could support life, or might once have done so, then it really does seem very odd that there isn’t more of it out there.

As an aside I did go through a solipsistic phase and even now have not discounted the possibility that reality is some form of simulation, but in the absence of better information it seems prudent to accept the reality with which we are presented!

Back to these malevolent aliens. I came across the anthropic cosmological principle in my late teens – the idea that there would only be a narrow window in which intelligent civilisations could recognise each other as such – but discounted it. Again, if there were many civilisations, surely some would be hegemonistic, spreading out among the stars and leaving some sign of their passage? Further, if a benevolent society evolved, then presumably there would remain more primitive elements still recognisable to us? Instead I concluded that it is probably a good survival trait for intelligent life to be quiet about its presence, and/or that noisy intelligent life gets silenced by more aggressive species.

This might seem a bit paranoid, but consider this: there are limited accessible resources in the universe. I conclude this from the evidence that a) the expansion rate of our universe is accelerating and b) general relativity suggests that spacetime can expand faster than the speed of light. Therefore, if you can only travel at or below the speed of light, there is a finite amount of space which can be explored, and thus a finite amount of resources which can be exploited. This is also supported by one of the modern hopefuls for a grand theory of everything; string-theory holography suggests that the universe is limited to the stuff within the cosmological horizon.

Given this limited resource, if I were the dominant species in the neighbourhood I would sure as hell not want to risk some young upstart becoming a threat and a competitor for resources. The logical conclusion would be to eliminate them while they are vulnerable. Drawing from our local experience too: what happens to lesser species (eg. flora and fauna that we don’t find useful or bend to our will through selective breeding), a less advanced evolutionary cousin (eg. Neanderthals) or more primitive civilisations (eg. Native Americans) that we encounter? Generally they get eliminated or severely curtailed by the dominant species/civilisation (us).

So, back to CSER’s mandate: what are we doing to encourage this untimely end at the hands of genocidal extraterrestrials? Some might argue, per the anthropic cosmological principle, that we are not broadcasting in a manner that would be noticed, but if we look at our own technological advancement we still use old modes of communication (eg. amplitude-modulated radio) as well as modern ones (eg. digital spread spectrum or directed microwaves).

Further, the more we advance, the “louder” we get! Picture planet Earth in the electromagnetic spectrum: we are a giant Catherine wheel of microwave beams lancing out into the sky from our equator (all the geostationary-orbit satellite uplinks – most of the radiation misses the satellites), spinning handily in the same plane as the rest of the galaxy, as a lighthouse might shine out across the sea.

So what to do about it?

Unfortunately that horse has probably already bolted. We have been announcing our presence to the neighbourhood for decades, and therefore I think the answer is to be prepared. We probably have a lot of time before anyone unpleasant turns up. First, we are quite early in the universe’s existence; the Earth is about a third of the age of the universe, and it is likely that the first generation of stars would not have been able to foster “goldilocks” planets (not too hot and not too cold for water-based life).

Second, the galaxy is about 100,000 light years across, and even if we assume that 1 in 100,000 stars has a “goldilocks” planet and 1 in 100,000 of that million or so candidates supports life which has evolved into intelligence, that would be only around ten civilisations in our galaxy. Based on a random distribution, the nearest would most likely be tens of thousands of light years away. One might argue that a hegemonistic civilisation would be more likely to have spread into our locale, but even then we’re likely to have hundreds if not thousands of years in the very worst-case scenario.
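
A back-of-envelope version of that estimate, using the same guessed-at fractions (which are assumptions, not measurements), looks like this:

```python
import math

# Back-of-envelope estimate of how many intelligent civilisations the guessed
# fractions above imply, and how far away the nearest one would typically be.
# All inputs are assumptions from the text, not measured values.

stars_in_galaxy = 1e11           # ~100 billion stars in the Milky Way
frac_goldilocks = 1e-5           # 1 in 100,000 stars with a "goldilocks" planet
frac_intelligent = 1e-5          # 1 in 100,000 of those evolving intelligence
galaxy_diameter_ly = 100_000     # light years

civilisations = stars_in_galaxy * frac_goldilocks * frac_intelligent
print(f"Expected civilisations in the galaxy: {civilisations:.0f}")

# Treat the galactic disc as a circle and spread the civilisations randomly;
# each then "owns" an equal share of the area, and the typical spacing is
# roughly the radius of that share.
disc_area = math.pi * (galaxy_diameter_ly / 2) ** 2
area_per_civ = disc_area / civilisations
typical_separation_ly = math.sqrt(area_per_civ / math.pi)
print(f"Typical separation: ~{typical_separation_ly:,.0f} light years")
```

With those numbers even a radio signal would take many millennia to reach the nearest neighbour, which is the breathing room the argument relies on.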

However, we have another problem; thanks to medical science we are no longer evolving as a species and remain, frankly, rather “squishy”. In fact, one could argue that thanks to the aforementioned medical wonders, combined with socio-economic forces, we are actually devolving as a species (highly intelligent women tend to have fewer children, often because of their careers). It should be noted that the recent increase in average measured intelligence can be attributed to a combination of improved diet and education – it will plateau, and soon.

Escaping from wetware-reliance

For me with my goal of living long enough to explore the entire universe, this lack of evolution presents a bit of a problem – I don’t think we will advance swiftly enough to be able to defend ourselves, and if anything immortality would likely slow us down. Thankfully, my preferred solution for immortality also presents a handy answer. My Bachelor’s thesis at university was an exploration of the most effective methods of life extension.

My conclusion was pretty simple: while it will probably be possible to genetically engineer an immortal child in my lifetime, that’s not going to help me; instead, my best hope would be the progressive replacement of my mind’s substrate (body and brain) with maintainable artificial systems. I’m not necessarily advocating wholesale replacement – though it may be possible to microtome (very thinly slice) a brain, scan it under an electron microscope and then run an emulation in software, that would arguably have the unfortunate side-effect of the subject having died and a copy living on – but rather a gradual replacement of failing bits.

Consider someone who has a stroke; it is likely that within my lifetime we will have the technology to replace the dead brain tissue with an implant which takes over its functions. Repeat this many times and your mind is running mostly in silicon. Thanks to the brain’s massive redundancy it should then be possible to disconnect the “wetware” and, hey presto, your consciousness is in something that can be maintained indefinitely.

Now, I do fear that this might not happen in my lifetime, but that is why I chose to specialise in neuroscience for my Masters, since neither the technology necessary for the human-machine interfacing nor the neurological understanding needed to emulate a brain exists yet.

The other important piece of the puzzle is efficient computing, and that is partly why I’m doing my PhD researching computing energy efficiency. The goal is challenging, I admit. There are 10^11 neurons in the human brain and about 10^10 of those are cortical pyramidal cells (the important ones). It has been estimated that you need 1,000 instructions per second (IPS) to emulate one neuron, but I think they are a bit cleverer than that, so let’s say 10^6 IPS per neuron. A modern Xeon can do about 100,000 MIPS (10^11 instructions per second). This suggests that it is possible to emulate a human brain in real time using 100,000 modern Xeon processors – a very achievable number (Google has at least that much compute).
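
The arithmetic behind that estimate, as a quick sketch (the instructions-per-neuron figure is my deliberately generous guess, as noted above):

```python
# Rough sizing of a real-time whole-brain emulation, using the figures above.
# The instructions-per-neuron number is an assumption, not a measurement.

cortical_neurons = 1e10        # ~10 billion cortical pyramidal cells
ips_per_neuron = 1e6           # assumed instructions/second to emulate one neuron
xeon_ips = 1e11                # ~100,000 MIPS for a modern multi-core Xeon

total_ips = cortical_neurons * ips_per_neuron    # 1e16 instructions per second
xeons_needed = total_ips / xeon_ips              # ~100,000 processors

print(f"Total compute required: {total_ips:.0e} instructions per second")
print(f"Modern Xeons needed for real time: {xeons_needed:,.0f}")
```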

Of course, given my proposed “staged replacement” that presents a bit of a challenge, but arguably not for long given Moore’s law. The human brain is really quite remarkable; it is doing all that work with only 20 watts, whereas my 100,000 Xeons would consume at least 6 megawatts! However, Moore’s law can also be roughly applied to the energy efficiency of computation, with it doubling every 18 months. On that basis it should take us only about 30 years to be able to do the equivalent of my 100,000 Xeons’ work using less than 20 watts.
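
Putting numbers on that (assuming the 18-month efficiency doubling holds, which is the big assumption):

```python
import math

# How many efficiency doublings are needed to squeeze ~6 MW of Xeons into
# the brain's ~20 W power budget, and how long that takes at one doubling
# every 18 months (an assumed rate, per the text).

current_power_w = 6e6    # ~6 MW for 100,000 Xeons
target_power_w = 20.0    # the human brain's power budget
doubling_period_years = 1.5

doublings_needed = math.log2(current_power_w / target_power_w)
years_needed = doublings_needed * doubling_period_years

print(f"Efficiency doublings needed: {doublings_needed:.1f}")   # ~18
print(f"Years at one doubling per 18 months: {years_needed:.0f}")  # ~27
```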

Some of you might point out that we will approach the limit of Moore’s law in the traditional sense (the exponential increase in transistors crammed onto integrated circuits) towards the end of this decade, due to electron tunnelling as the “wires” in a circuit are brought very close together. However, it should be noted that transistor density is merely the fifth paradigm of Moore’s law, and it is likely that it will continue well beyond this decade in some other form, be that quantum, optical or DNA-based computing (more here).

There is of course a danger that we still might not develop the necessary technology in my lifetime, so my backup plan is still the whole microtoming-my-brain thing. Not ideal, but probably subjectively no more of a death than having a general anaesthetic. Also, I suspect that experimenting with massive compute resources in the meantime will be an important step towards developing the necessary technology. That’s partly why I became an IT infrastructure entrepreneur (the other reasons being that it’s fun and I’m good at it); phase 1 of the master plan is to amass wealth and compute resource, and that is proceeding quite well so far. 🙂 In a decade or two I’ll be ready to move on to phases 2 and 3: how to emulate a brain in software and how to interface brain tissue with electronics – after all, if you need something doing it is better to be in charge of it yourself!

Deus ex machina

So how does this help? Well, if, hypothetically, we were able to “upload” a consciousness, then it would not only render the subject immortal and much less “squishy” (ie. vulnerable) but would also allow them to enhance their own cognitive abilities; one could re-write/re-wire one’s own brain, adding enhancements plumbed directly in – a literal mind expansion. This would likely be an exponential process and is generally termed a “technological singularity”: the point where an artificial intelligence’s development accelerates it beyond human comprehension.

I must admit that I’ve been a bit obsessed with this for most of my life and have given it a lot of thought. For a time I liked the idea of completely eschewing my humanity (especially the emotions bit – they are terribly distracting), but more recently I have decided that a purely intellectual existence would probably result in self-termination, since you would likely conclude either that there is a limited amount of interesting stuff to explore and experience, or that the mysteries of the universe are fractal in nature but ultimately pointless (I’d explore this philosophical epiphany more, but I’ve rambled on too long already! 😉 ). As an aside, my personal theory is the latter (fractal, effectively infinite possibilities), especially as someone who operates at the bleeding edge of technology creating new things, and who is also a scientist researching the unexpected effects of what her field created only a few years ago!

Further, I think that our best bet for creating an artificial intelligence (AI) will be to base it on the intelligence that already exists in nature; ie. our brains. As mentioned above, the human brain is astonishingly complex and as yet neuroscience has barely scratched the surface. Yes, on a micro scale I can describe to you how an individual neuron operates and interacts with its neighbours, and yes, on a macro scale I can explain the psychological effects of pharmacologically encouraging a subset of 100 million or so neurons to behave a bit differently. But how those 10 billion neurons operate in symphony to produce the emergent property that is self-awareness is, at this stage, simply beyond our ken.

So, on the one hand I don’t think we can expect AI to be “built” from the ground up in our lifetimes. On the other hand, I think that the AI that we will create will be based on people like me. Since a purely intellectual existence whose only reward is learning seems a bit pointless to me I decided that I’d want some company when I get around to ascending my mortal form. Additionally, as part of that I think we would want to retain some of our animalistic humanity – specifically emotionality.

I don’t therefore think that a “SkyNet” type singularity is actually particularly likely. Also, even if one did end up with something of a divide (the fleshies and the immortals) I think that the immortals’ basic human nature would be retained – indeed I think it would be necessary to give the immortals’ lives meaning. I don’t therefore believe that the technological singularity would present a significant threat to humanity. Instead, I expect it to dramatically accelerate our collective understanding of the universe and also to equip us with the tools to avoid whatever fate befalls our galactic neighbours.

So, in summary, I think the thing we should worry about is why the night sky is so quiet, and what we should do about it is evolve, fast.
