Ars Technica had a podcast on The Singularity. The podcasters discussing The Singularity (capital T, capital S) got it pretty much all wrong, as usual. I'm not surprised.
Remember these two things:
1. Ray Kurzweil’s efforts are not The Singularity. To give his project a name, I’ll call it a singularity: a self-centered, uninteresting, and considerably watered-down version of The Singularity. Since it’s of little importance to humanity in the long run, I ignore it.
2. The podcasters made a common mistake. They thought The Singularity was something that humanity could consider, think about, pass judgement on, and decide whether or not to do. Then they applied their own preferences to The Singularity (well, to their confused notion of The Singularity) and judged whether it was desirable, likely to happen, and so on. In other words, they anthropomorphized it.
But that is far from what will happen.
To straighten this out, let’s first describe The Singularity. The real one, not some muddled and weakened version. The Singularity is not a single moment, or event, or invention. It is the process by which a species such as Homo sapiens creates a new and distinct species, far more powerful in every sense than the original. The new species may be entirely biological, entirely non-biological, or a mix. The process will have a beginning; a chaotic, disorderly, and often harmful middle; and eventually something approaching an equilibrium, a calmer period with a far, far advanced species. Because the process unfolds through successive generations, each creating one more powerful than itself, improvement is exponential: from our perspective, a singularity, resulting in something so far beyond us that we cannot imagine the outcome.
So let’s examine how The Singularity will begin. That is all we can do: once it gets going, by its very nature we cannot predict where it will go.
The beginning starts with at least one group of people–but almost surely several groups scattered around the world and starting within months or a year or two of each other–creating something smarter than themselves. This smarter thing can be either all biological (genetically engineered), or all machine (robots using quantum computers or memristors for their brains), or a combination. I think it will be genetic engineering because I think small, relatively cheaply funded groups will be able to do it, but no matter.
I call us Generation 0, and this new thing Generation 1. The exact nature of this Generation 1 person, machine, or cyborg doesn’t matter.
Generation 1, somewhere, will be a success. That is, it will live and thrive long enough to not only reproduce, but to recognize improvements that can be made in itself, and make them, creating Gen 2. Likewise Gen 2 improves itself, creating Gen 3, and so forth, creating The Singularity.
That’s the short version.
We can make intelligent comments only about the beginning: Gen 1, and maybe a little about Gen 2. It’s my opinion that within 20-30 years from now (2012), genetic engineering will easily be far enough along to make some tries at increasing intelligence and insight. Actually, the state of the art is almost there now, but we are held back by “ethics”. So what I really mean is that within 20-30 years, some super-rich individuals, or governments unburdened by Western-style “ethics”, will fund applied research to engineer a smarter human. And it won’t be just one group that we can all vote on–should they, or shouldn’t they? No, it will be several groups, perhaps many groups, around the world. A few will be shut down by their governments; others won’t. Some will fail and quit; others will succeed and continue. But it will not be controlled. It will be driven by curiosity, by greed, by desire for immortality, by lust for power–by all the excuses mankind has always made for bold new ventures.
Some Gen 1 people or machines will be created. Depending on their nature, they will be somewhat smarter than Gen 0 (us) or a hell of a lot smarter. Some may self-destruct into schizophrenia or other mental illnesses associated with intelligence; others will not. The point to remember is that our opinions about this now are utterly irrelevant. Attempts to make Gen 1 will take place, and some may succeed.
After Gen 1, only vague “predictions” are possible. If succeeding generations take hold, each more intelligent, insightful, and capable than the last, then at some point the billions of Gen 0 people left behind will be to Gen X as we Gen 0 are to chimpanzees…or maybe rats, or maybe ants. That is, the remaining billions of Gen 0 will be at best an amusing curiosity, probably an annoyance, and at worst something to be removed.
Oh, that can’t happen? Really? Just like genocide did not happen to the Native Americans when Europeans discovered America? Like the Nazis didn’t kill millions of Jews for a better Germany? Just like the poor even now are not exploited by those of us in post-industrial societies to maintain our middle-class life styles?
Gen 1, 2, and so on will not be lovingly selected for their empathy or compassion. They will be created for their raw abilities. As throughout human history the stronger have exploited the weaker or removed them, so it will be during The Singularity, until some Gen X becomes so powerful over nature and the universe itself that it will have no need to exploit or remove Gen 0. Gen 0 will simply be left alone. But before that Gen X arrives, it will be bad times for Gen 0, just as for hundreds of years it was bad times for Native Americans, until those of European descent achieved complete mastery and were at last secure enough to feel compassion.
I do not support this, nor oppose it. That would be like supporting or opposing the sun or the rain. Attempts at The Singularity are inevitable, though the final outcome is not. For the billions of civilizations on other planets that probably came before us, The Singularity is most likely a necessary period to pass through before a civilization can take its place in the galaxy and the universe. It is neither good nor evil. It is just there.