The year is 2119
Text: Dominic Munton, Bachelor in English
Illustration: Ingvild Andersen.
The average global temperature is 3 degrees higher than it was at the dawn of the previous century. The ice caps have melted and barren sand dunes now mark the site of the Amazon rainforest, yet human civilisation survives, forever evolving to meet the harsh demands of this brave new world. There are no more multinational corporations, governments or nation-states. There is no more crime and no more war. The planet-wide creation and allocation of resources is controlled entirely by a single, superhuman intelligence: AI.
It reads like the start of a sci-fi novel, but with recent developments in the field of artificial intelligence (AI), the scenario above is not as far-fetched as you might think. Simpler forms of AI already permeate human technology and are indispensable in processing the vast amounts of data we generate daily. Meanwhile, the ever-increasing incidence of extreme weather events stands as testament to the inadequacy of human efforts to curtail anthropogenic climate change. AI is already used extensively within the climate sciences, and the coming decades will likely see it take on an increasingly significant role. However, optimism about the possible uses of AI is tainted by fears about its application in military affairs. The creation of AI programmed to recognise and kill human beings has come to represent a Rubicon that many would rather not cross.
Research into AI began in 1956, when early investigators claimed they would be able to replicate human-level intelligence within a few short decades. When these predictions failed to materialise, interest in AI dwindled, beginning a series of lulls that researchers later termed the ‘AI winters’. It was during the first AI winter, in 1977, that Exxon’s long-concealed climate research first indicated the relationship between the combustion of fossil fuels, rising levels of atmospheric CO2 and climate change.
Since the turn of the millennium, AI researchers have successfully mimicked structures identified in the brain by neurologists and created the first ‘neural nets’: digital brains capable of rudimentary learning. From these neural nets arose the latest development in the field, adaptive machine learning, which brought AI back in from the cold and set it centre stage in an increasingly digitised society.
Adaptive machine learning allows programmers to ‘train’ neural networks to recognise specific patterns and to pick out corresponding examples from within enormous banks of data. The potential applications of these networks are wide-ranging; you most likely use AI every day without realising it. Google’s search engine and its Maps application both use AI, to bring you search results and to help you plan journeys respectively.
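For the technically curious, here is a minimal sketch of what ‘training’ a network to recognise a pattern can look like in practice. It uses the freely available scikit-learn library and its bundled handwritten-digit dataset; the example is purely illustrative and is not drawn from any of the systems mentioned in this article.

```python
# A toy illustration of 'training' a neural network to recognise a pattern:
# telling handwritten digits apart, using scikit-learn's small demo dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # ~1,800 labelled 8x8 pixel images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A small 'neural net': one hidden layer of 64 units, trained by example.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)  # the 'learning' step

print(f"Recognised {model.score(X_test, y_test):.0%} of unseen digits correctly")
```

The same basic recipe, scaled up enormously in data and computing power, underlies the pattern-recognition systems described above.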
In the environmental sciences, AI is already used to create complex models of the Earth’s climate and to make predictions about how it will change in the future. It also has more localised, climate-specific applications, such as helping farmers in India to increase groundnut yields and enabling Norwegian engineers to integrate more intermittent renewable energy sources into the electrical grid.
However, there is a significant gap between an AI predicting how the climate is changing and an AI telling us how to fix it. Researchers in the field differentiate between ‘narrow AI’ and ‘general AI’. A narrow AI might be able to identify a single picture of a cat out of a billion photos of dogs; a general AI would be able to question why you wanted to do such a thing in the first place. Narrow AI is already widely distributed and operational, whilst general AI remains, as yet, a pipe dream. Opinions vary widely as to when, or if, general AI will become a reality. Ray Kurzweil, technologist, author and Director of Engineering at Google, predicted in his seminal text The Age of Spiritual Machines that we would have human-level AI by 2029. Others, such as Professor Selmer Bringsjord of Rensselaer Polytechnic Institute, NY, argue that “the human mind will forever be superior to AI”. Regardless, one thing that few in the field dispute is that AI will continue to develop rapidly over the coming decades.
The two principal forces driving the development of AI are economic and military. AI presents a unique opportunity for corporations to increase profits by using AI-enabled systems to produce superior goods and services, and AI or robotic workers are already cheaper to employ than humans in a growing number of sectors. This combination of rising profits and falling expenses is catnip to the business sector: according to the Stanford University AI100 report, venture capital investment in AI startups increased six-fold between 2000 and 2015.
“We are evolutionarily incapable of considering the needs of 7.7 billion of our fellows and yet, if we do not, then we all risk extinction.”
More alarmingly, recent developments in AI have triggered the largest global arms race since the end of the Cold War. AI offers science-fiction possibilities to military planners. The conventional goal is a military robot with either partial or full autonomy. Such a robot would have greater survivability than a human soldier, being able to operate, for example, amid high radiation or poisonous gas. Ideally, it would also be able to use initiative and creativity when given orders, just as a human soldier would. By using robots on the battlefield, militaries can reduce the exposure of their service personnel to combat. Fewer soldiers dying means less domestic resistance to foreign military intervention, or so the thinking goes. With all the largest global powers working to develop autonomous weapons of various kinds, it is easy to understand why nobody wants to be left out.
Anxieties around the creation of killer robots made headlines last year when 2,400 scientists, including Elon Musk, head of SpaceX and Tesla, called for a total ban on the development of autonomous weaponry. The late Stephen Hawking suggested that “AI could spell the end of the human race”, as robots trained to kill only certain kinds of human could then expand their operating parameters indefinitely. Anyone who has seen The Matrix can tell you that this is not an avenue we wish to go down. Indeed, Elon Musk recently suggested that the only way we can avoid such a ‘human versus AI’ nightmare is if we ourselves become post-human cyborgs in a “merger of biological intelligence and machine intelligence”.
Assuming we manage to avoid creating indefatigable killer robots, we are still unlikely to counter catastrophic climate change without machine intervention of some kind. To begin with, despite extensive discussion of CO2 and its relationship to the climate over the past 40 years, global CO2 emissions are still rising year after year. We may be good at discussing climate change, but the evidence clearly shows that the actions we are taking to mitigate it are insufficient. Nor can we hope that nature will simply adapt to the new world we are creating. The rate of climate change is now so rapid that much of the higher life on the planet is unable to adapt to the changing conditions. The WWF 2018 Living Planet Report revealed that “population sizes of wildlife decreased by 60% globally between 1970 and 2014.” And if nature can’t physically adapt quickly enough, neither can we. Our hominid ancestors evolved over the course of six million years to function in small social groups of around 150 individuals. We are evolutionarily incapable of considering the needs of 7.7 billion of our fellows and yet, if we do not, then we all risk extinction.
Likewise, human institutions are ill-equipped to face such unprecedented changes. The sovereignty of the modern nation state traces its roots back to the Peace of Westphalia, the European treaty that ended the Thirty Years’ War in 1648. It was signed in a world that knew nothing of the internal combustion engine, the internet or the transnational corporation, and yet it still forms the basis of all international politics. In a similar vein, capitalism is ill-suited to remedying problems that it has, in large part, created. Capitalism functions on the assumption of infinite growth and an ability to externalise costs indefinitely. Unfortunately, climate change forces us to recognise the fundamentally finite nature of our circumstances, within which infinite growth is pure fantasy; costs are never truly externalised, payment is only deferred. In short, the problems we face are too large, too fast-moving and too complex for either our biology or our culture to cope with. Unless we can bring some radically new ideas to the table, the future of humanity remains more uncertain than ever.
But if we did have an AI sufficiently large and intelligent to act on a global scale, what then? It is not hard to imagine that a powerful AI tasked with solving climate change might do so simply and effectively: by killing all humans. Given our own history of how we treat beings we deem our inferiors, we are right to be wary of creating beings who may be our betters. Perhaps at this point we can do little more than hope that our AI children will ingest a little Marvel along the way, and with it the words “with great power comes great responsibility”.
Either way, just to be on the safe side: I, for one, welcome our new robot overlords.