Turn off your phones: Scientists predict humanity will be wiped out by 2050
Artificial Intelligence has advanced by leaps and bounds in the past two decades. At its current rate of progress, it could soon advance far enough to pose a considerable threat to humanity.
SPOILER ALERT: If you have read Dan Brown's latest book 'Origin,' which follows Harvard iconology and symbology professor Robert Langdon as he attempts to decrypt futurist Edmond Kirsch's bombshell revelation after Kirsch's death, you have some idea of how humanity's demise may present itself in the coming decades: Artificial Intelligence. Or, to put it in more precise terms, Artificial Super Intelligence, or ASI.
Doomsday purveyors have come up with innumerable theories on how the human race will erase itself from the history books: bioterrorism, meteor strikes, alien super-civilizations, global warming, nuclear winter, the next ice age. But none has struck a chord, nor commanded as broad a consensus, as ASI.
Moore's law states that the number of transistors in a dense integrated circuit doubles approximately every two years. A consequence of this law is that computers are growing exponentially more powerful year on year, and AI's capabilities have improved dramatically since the turn of the century. From performing rudimentary tasks, machines are now trusted enough to be employed in some of humanity's most important fields: medicine and science.
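To give a rough sense of the compounding that Moore's law implies (assuming the canonical two-year doubling period, and treating the growth as perfectly smooth, which real chips are not), the arithmetic can be sketched in a few lines of Python:

```python
def transistor_count(initial: int, years: float, doubling_period: float = 2.0) -> int:
    """Project a transistor count forward, assuming it doubles every `doubling_period` years."""
    return round(initial * 2 ** (years / doubling_period))

# Illustrative example: a chip with ~2,300 transistors (Intel's 4004 of 1971)
# projected 20 years forward is 10 doublings, i.e. a 1,024-fold increase.
print(transistor_count(2_300, 20))  # -> 2355200
```

This is only a back-of-the-envelope illustration of exponential growth, not a model of actual semiconductor history.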
From being programmed to perform one task exceedingly well - take IBM's Deep Blue beating reigning world chess champion Garry Kasparov in 1997 - present-day AI is far more multi-faceted. Science's current golden age has undoubtedly been aided by AI's rapid development and will continue to be; but, as some of the world's brightest minds point out, that development will also present unprecedented pitfalls.
While present-day AI is not capable of free thought or self-reflection, the ultimate aim is to reach the point where that becomes a possibility, and if scientists are to be believed, that point is not too far away. Some AI systems have already scored highly on the Turing Test, which gauges whether a machine's responses are indistinguishable from a human's, and at its current pace, AI is expected to reach human levels of intelligence as early as 2029, with the most optimistic estimates suggesting we will then achieve ASI by 2050.
As with any topic fraught with ethical ambiguity, there are champions on both sides of the argument: one side holds that ASI will be our end, the other that it will be our salvation and our greatest triumph. Alongside them are those who do not know what to expect, as is only fitting for so complicated a topic.
Jeff Nesbit, the former director of legislative and public affairs at the National Science Foundation and the author of more than 20 books, says that ASI will either be the end of humanity or see us become immortal.
Tony Stark-lite and billionaire entrepreneur Elon Musk seems to share that sentiment. He has likened humanity's obsession with sentient AI to 'summoning the demon' and, speaking at the AeroAstro Centennial Symposium at the Massachusetts Institute of Technology, said: "If I had to guess at what our biggest existential threat is, it's probably that [artificial intelligence]. So we need to be very careful."
"With artificial intelligence, we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like - yeah, he's sure he can control the demon. Doesn't work out."
In 2015, Musk and the late Stephen Hawking signed an open letter detailing how AI could lead to the development of autonomous weapons and revolutionize warfare for the worse. It read: "Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control."
There is a very real fear that once it is conceived and brought into being, ASI will spell our end. Those who argue 'We created it, we can control it' are not taking into account how much smarter such machines will be than the average human. The vast chasm in intelligence will be akin to that between an ant and a human: incomparable and incomprehensible.
But it doesn't necessarily have to be a Terminator-esque situation, as visionary computer scientist, inventor, and futurist Ray Kurzweil points out. Kurzweil, who has in the past predicted technological developments with eerie accuracy, espouses a more positive future for humanity, one aided by ASI rather than impeded by it.
Kurzweil says that AI will enhance our mental capabilities, and points to declining rates of violence, war, and murder as evidence for his optimism. He also suggests that AI will help find cures for diseases, develop alternative sources of energy, and help care for the disabled and elderly.
No one can say for sure what the development of ASI will spell for the human race, but by way of reflection, we leave you with this poignant quote from Eliezer Yudkowsky: "By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it."