Thanks to billions of years of evolution, the human race has become indomitable in countless ways. Homo sapiens have walked this planet for some 200,000 years, achieving the unique position of altering it as we please. We’re ahead of every other species by a huge margin, and our command of physics and technology ensures we maintain that lead; our dominance may even be roadblocking evolution for other species. But longevity is no guarantee of sustainability. There’s a solid possibility we won’t be around to dominate the planet in the long run: human extinction is a distinct possibility, and multiple forces could bring it about.
Let’s delve into extinction. Consider two cases – one where 100% of our population has been exterminated, and another where slightly less, about 99% of existing humans, are out of the picture. Many people would quickly point out that the latter case isn’t extinction at all – a scenario where 99 percent of Earth’s existing population is erased might not really eradicate human civilisation. The survivors could repopulate and, while we may have lost centuries of progress, we could still get back on track. But that isn’t entirely true.
Our chances of survival depend more on how the extinction is caused than on how many it kills. A sufficiently large asteroid strike, for example, could in theory wipe out every human being on the planet except perhaps those on research stations in Antarctica. But how much of a reprieve would that be, when the people there are stuck without supplies, and getting back to other continents is fatal, courtesy of extreme temperatures and a lack of transport?
If we list the probable causes of human extinction, we can rank them by the level of threat they carry. Let’s proceed in ascending order, with the least likely scenarios first.
Natural Disasters
Natural disasters are not going to kill us off. Sure, they cause enormous destruction and loss of life, despite all our preventive and predictive technology. But almost all natural disasters are localised, and we’re spread too widely across the planet for any one disaster to be more than a remote threat to humanity as a whole.
Volcanic eruptions and earthquakes can be daunting. A volcanic super-eruption (VEI 8) happens on average every 50,000 years; the most recent was the Oruanui eruption in New Zealand, around 25,000 years ago. However, over our species’ history we have proved adaptive enough that even the largest eruptions in Earth’s recent past would be unlikely to wipe us all out.
Cosmological Disasters
Solar Flares
Phenomena like solar flares, however intimidating, are never large enough to destroy our entire atmosphere, and thus our species. A supernova exploding in the planet’s proximity could strip the Earth of its ozone layer, letting in UV radiation that would kill most of our population. But parts of Earth would remain habitable, preventing outright human extinction.
Climate Change
Climate change won’t happen quickly enough to kill more than a couple of billion people over the next two centuries, even in the worst case; likewise, a natural pandemic of the worst kind in Earth’s history would kill, at most, a couple of billion people. While climate change remains a serious threat, it is unlikely to wipe out humanity. Other species, however, may well be pushed to extinction.
Gamma Ray Bursts
Similarly, a gamma ray burst aimed directly at Earth would vanquish most of us, sparing only those who happened to be underground at the time. But even they wouldn’t last long: such a burst would alter the climate enough to make the planet uninhabitable for a long time.
Magnetic Field Reversals
The Earth’s magnetic field weakens every few hundred thousand years as it reverses itself. Whether these events correlate with mass extinctions is up for debate: some argue that the weakened field allows atmospheric oxygen to escape, killing most of the planet’s species, while others strongly oppose this hypothesis.
Global Nuclear War
The idea that global nuclear war could kill most or all of the world’s population has been critically examined. In a series of investigations by the world’s leading climatologists, the plausibility of the idea was found to range from negligible to significant. A number of possible reasons for beliefs about nuclear extinction have been put forward, including exaggeration to justify inaction, exaggeration to justify concern, exaggeration to stimulate action, fear of death, a white Western orientation, the pattern of day-to-day life, and reformist political analysis.
Several studies have evaluated the long-term environmental consequences of a nuclear war. These used baseline scenarios fought with barely 1% of the explosive power in the US and/or Russian launch-ready nuclear arsenals, and concluded that even a “small” nuclear war would catastrophically disrupt the global climate and massively damage Earth’s protective ozone layer. These and more recent studies predict that global agriculture would be hit so hard that a global famine would follow, starving up to 2 billion people to death. If future nuclear weapons even more powerful than today’s are ever used, they could well be enough to wipe out the life-support systems our planet provides us.
Asteroid Impacts
The Earth has been hit by asteroids and comets for almost its entire existence; even our water reserves are hypothesised to have come largely from comets. Heavy bombardment may have continued until as recently as 3 billion years ago, making it difficult for life to gain a foothold at all. An impact by a large asteroid or comet is another plausible route to the human race being wiped out completely: it is estimated that a celestial body greater than 25 km in diameter could extinguish the human species entirely. Fortunately, there are far more small rocks than big ones. So while Earth is constantly being ‘targeted’ from space, most of what arrives is dust or sand-grain-sized meteors that appear as shooting stars. School-bus-sized asteroids strike the surface only once every few thousand years, medium-sized asteroids (around 300 metres in diameter) roughly once every 50,000 years, and extinction-level impactors only once in many millions of years.
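Those frequencies track impact energy, which grows with the cube of the impactor’s diameter. As a rough, back-of-the-envelope sketch (assuming a stony asteroid of density 3000 kg/m³ striking at a typical 20 km/s; both are generic textbook values, not properties of any real object), the kinetic energy of impactors of different sizes can be estimated like this:

```python
import math

def impact_energy_megatons(diameter_m, density=3000.0, velocity=20_000.0):
    """Rough kinetic energy of a spherical impactor, in megatons of TNT.

    Assumes a stony asteroid (~3000 kg/m^3) hitting at ~20 km/s.
    """
    radius = diameter_m / 2
    mass = density * (4 / 3) * math.pi * radius ** 3  # kg
    energy_joules = 0.5 * mass * velocity ** 2        # kinetic energy, J
    return energy_joules / 4.184e15                   # 1 Mt TNT = 4.184e15 J

# bus-sized, 300 m, dinosaur-killer, 25 km extinction-class
for d in (10, 300, 10_000, 25_000):
    print(f"{d:>6} m  ->  {impact_energy_megatons(d):.3g} Mt TNT")
```

On these assumptions a 10 km rock releases tens of millions of megatons, millions of times the world’s entire nuclear arsenal, which is why a single large impactor counts as an extinction-class event while the far more common small rocks burn up harmlessly.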
A 10–15 km asteroid impact in Mexico seems to be what killed the dinosaurs. The Spaceguard Survey is an effort to locate and track as many near-Earth objects as possible. Each time an asteroid is identified and found not to be on a collision course with Earth, the calculated odds of an impact drop a little. If a rock is in a near-Earth orbit, we should have a good margin of advance warning; a comet, however, might give us no time to react or take precautions at all. The quickest extinction imaginable.
Death by ASI
The most hyped technological leap of recent times is artificial intelligence. We’re witnessing exponential growth in computing power, and some predict this will culminate in the birth of Artificial Super Intelligence within a couple of decades.
Related: Where is Artificial Intelligence headed?
The idea of superintelligence might be fascinating to most of us, but here’s what most of us don’t really grasp: you can broadly rate intelligence on two parameters – speed and quality. Take computers, for example – they’re incredibly fast, and therefore score on the speed parameter, but they’re stupid. With no self-awareness and no self-learning algorithms, they sit low on the quality scale.
Human beings, by contrast, happen to be incredibly slow – we can’t compute nearly as fast as our most basic machines – but we’re endowed with incredibly sophisticated algorithms that give rise to consciousness. So we’re high quality.
The goal the computer industry is currently racing towards is a high-quality, high-speed artificially intelligent computer.
The fear here is that once birthed, an artificial intelligence could learn exponentially, at an uncontrollable speed – and then turn on us.
Right now, we’re living in an age where Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI) are becoming increasingly common. There are computers defeating humans at chess, personal assistants that give us information before we need it, and even self-driving cars.
Artificial Superintelligence (ASI) is special – an ASI system can learn and grow on its own. Ray Kurzweil expects ASI to arrive between 2035 and 2060. It isn’t necessarily a pleasant prospect, though. Some fear, perhaps rightly, that ASI could be exploited by irresponsible humans – say a group of terrorists or a rogue government – to disrupt human civilisation.
If not that, superintelligent machines might ‘go rogue’ themselves – tracing a path of their own and causing uncontrollable damage, up to and including the extinction of the human race. On the other hand, with controlled growth of artificial intelligence, we might reach a point where machines push humans to keep up with their thinking ability; channelled appropriately, this could make humans more intellectually capable. This may be the most important race in human history, and whether we are heading towards a blissful era or straight to the gallows still hangs in the balance.
Related: How Far Are We From The Singularity?
Eminent Australian scientist Professor Frank Fenner, who helped wipe out smallpox, predicted that humans will probably be extinct within 100 years because of overpopulation, environmental destruction and climate change. It is important to note his mention of the multiple factors that may lead to human extinction.