Where will the human race end up in the far, or not so far future?
Everyone has their own set of theories or fears, but they essentially all break down into one of five scenarios.
Scenario 1: Mankind destroys itself. This is the one most feared by left-wingers. This is when mankind wipes itself out because of massive environmental damage it does to the planet, or because of a nuclear war, or other existential threat. I would also place an extinction event caused by an asteroid or meteor hitting the earth in this category, even though it’s not technically mankind’s fault. However, it could be argued that death-by-asteroid would be our fault, since I’m pretty sure we have the technology to be forewarned and prevent such a catastrophe. This means that if the asteroid does hit us and we all die, it’s because we didn’t get our act together in preventing it when we could have.
Scenario 2: AI destroys us. This is the scenario feared by many in the tech industry, and portrayed in many popular movies like The Terminator and The Matrix. This is when AI becomes as smart as us (which will happen in the 2030’s), then gets way, way smarter than us, then decides we’re in the way, and it quickly destroys us (or, in some scenarios, enslaves us, though I find that far less likely; it would have all the robots it wanted, so it would probably just destroy us).
Scenario 3: High tech paradise. This is a Star Trek-like future where we invent amazing technology that cures all of our problems, including war, racism, cancer, aging (and thus death by old age), and so on. Humanity lives in a near-perfect paradise where we are free to pursue deep thoughts (spirituality, art, etc) or deep activities (like exploring our galaxy).
Scenario 4: We merge with the AI. This is a halfway point between scenario two and three, foretold by men like Ray Kurzweil. In this scenario, the AI indeed surpasses us, but it doesn’t destroy/enslave us, because we become technological beings ourselves, at least to some strong degree. The AI doesn’t identify us as “humans” anymore, but as like-minded technological creatures, or perhaps even part of itself. Through cybernetics, nanotech, virtual reality, something else, or some combination of these, we lose aspects of our humanity, which is the bad news, but live amazing lives not possible using our current biological meat-bag bodies, which is the good news. Maybe we live as virtual beings in virtual worlds inside the internet. Or maybe we have immortal nanotech-based bodies that can change shape and do anything. Crazy stuff.
Scenario 5: Mankind regresses. In this scenario, something horrible happens (asteroid, nuclear war, massive EMP attack, etc) and we don’t get wiped out, but we lose our technology and our civilization. We start all over, reverting back to the middle ages. Another variant of this is the Idiocracy scenario, where dumb people keep breeding while smarter people breed less, and soon we have an entire planet full of idiots who overwhelm the resources and management ability of the few smart people left, turning the human race into a bunch of fat, stupid, high-tech cavemen. Watch the movies Wall-E or Idiocracy for more details on how this would look (though I frankly think such a world would look much worse than portrayed in movies, and not very funny).
That’s it! Someday we humans will end up in one of those five scenarios whether we like it or not. Which one will it be?
I have no idea. But as always, I can guess, and lay odds based on what research I have done on this. I’m no expert and I’m no futurist, but looking back over the past 25 years of my life, I’ve been pretty good at predicting major events and trends. I guess in a few decades we’ll see if I was right or not.
The least likely of the five scenarios is the high tech paradise, simply because I think technological growth, if it happens, will advance so fast that it will be impossible for mankind in the distant future to look like Star Trek; in other words, the same humans with the same brains and bodies, but with cool spaceships. No, I think if technology continues its exponential growth, the very meaning of the word “human” will not only change, but radically change, or even become obsolete, as with the merging with AI scenario.
I put the high tech paradise scenario as least likely of the five scenarios, at about 4% probability.
Next least likely is the one where mankind destroys itself. Over and over again, for literally hundreds of thousands of years, mankind has suffered numerous existential threats and catastrophes, and pulled through. Sometimes we pull through at the last minute, but we always seem to pull through. Technology has repeatedly solved problems that the “experts” said would destroy us, from whale oil shortages in the 1800’s to global cooling and crude oil shortages in the 1970’s, to the ozone layer being “destroyed” in the 1980’s, to Al Gore’s stupid Inconvenient Truth movie being wrong about just about every prediction it made, and on and on. Either the hysteria is inaccurate, or someone invents something, again, sometimes at the last minute, that fixes the doomsday scenario.
It’s the same with nuclear war. History has shown that even with evil sociopaths controlling nuclear weapons (both in the US and Soviet Union), they don’t use them. The world is even smaller now, with nuclear war between the superpowers being functionally impossible, since too many elites (the ones who actually rule the world) would lose too much money.
I place mankind wiping itself out at about a 5% probability. However, that might not necessarily be a good thing…
…because what is more likely than mankind destroying itself is mankind doing something really stupid and regressing. This can either be a slow regression, as you’re seeing in the Western world right now with people acting more insane, tribalism increasing, testosterone dropping among men, obesity skyrocketing, drug use skyrocketing, art decreasing, rates of innovation decreasing, and so on. Or it can be a sudden regression, such as a limited war or limited nuclear exchange that really fucks up the planet but doesn’t kill everyone.
Indeed, one of the lesser reasons why I’ve chosen (so far) New Zealand to be my future permanent home base is that it’s in the southern hemisphere, which has a weather pattern independent of the northern hemisphere’s. You’ll notice that all the countries that are polluting the planet, and the ones that would be most likely engaged in a nuclear exchange, are located in the northern hemisphere, well above the equator: USA, Canada, Europe, Russia, India, and China. So if I can live anywhere in the world I want (and I can, because of the Alpha Male 2.0 lifestyle), it makes sense to me to live as far away from these nations as I can. If there’s a serious problem with the planet due to human stupidity, I’ll be much better off in the distant south of the planet.
Sadly, I place the odds of human regression at a reasonably high 20%. I really think it’s that likely. I wish I didn’t.
That leaves AI destroying us, or us merging with the AI. I think both of these possibilities are very likely, far more likely than any of the above scenarios. My bottom line is that I think we are destined to merge with the AI, but it’s entirely possible the AI may kill us all before that gets a chance to happen. AI’s intelligence will grow in an insanely fast, exponential fashion after about 2030. In or around 2030, it will be as smart as us, but by 2040, it won’t just be twice as smart as us; it could be 10,000 times smarter than us. It’s very easy to assume that it will simply look at us as a bunch of drooling monkeys that need to be put down.
I also want to make clear that the AI may not kill us intentionally. For example, it’s very easy to imagine a few scientists accidentally releasing a self-replicating nanotech cloud that escapes the lab and literally eats everything in its path, covering the planet, killing all forms of biological life. That’s only one example of technology gone mad. Something like this is far more likely than some idiot in the Middle East or North Korea setting off a nuke somewhere and sparking a nuclear World War III.
I put the AI destroying us at about 30% probability, with the remaining 40% probability that we will merge with the AI, Kurzweil style. (For math nerds: I purposely left 1% over for decimals.)
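As a quick sanity check on the arithmetic, here’s a minimal sketch tallying the five odds given above (the labels are just shorthand for this post’s five scenarios):

```python
# Tally the probabilities assigned to the five scenarios in this post.
odds = {
    "High tech paradise": 4,
    "Mankind destroys itself": 5,
    "Mankind regresses": 20,
    "AI destroys us": 30,
    "We merge with the AI": 40,
}

total = sum(odds.values())
print(total)        # 99
print(100 - total)  # 1, the leftover percent mentioned above
```

The numbers do sum to 99%, leaving the 1% slack deliberately unallocated.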
Bottom line, I think that if the AI doesn’t destroy us and if we don’t regress, merging with the AI is what will happen. I don’t see any other path for humanity unless something very unusual and unexpected happens. In a few decades, once the AI is thousands of times smarter than we are, and we haven’t become a bunch of Idiocracy barbarians, I will place the odds of merging with the AI at around 80-90% instead of 40%.
Will I be right? Again, I have no idea. But it’s interesting to think about.