In a nutshell — the technological singularity is the hypothesis that a highly upgradable, self-improving agent, such as a computer running AI software, will eventually enter a runaway reaction of self-improvement cycles, each cycle producing a still more intelligent generation of agents and culminating in an intelligence explosion. The Singularity concept stands on exponential growth.
An Artificial General Intelligence should have no internal conflicts. It should be able to navigate the world of data without worrying about competition, or even selection. The AGI would boost itself through self-reflection and constant redesign. At this pace, one agent would become all-powerful and almost immortal, free to transform the world to match its vision of value.
The absurdity of this thought was the nucleus of a remarkable story, almost certainly legendary, about the invention of chess. The mathematician-inventor asked for a seemingly modest reward: a single grain of wheat on the first square of the board, with each subsequent square holding double the previous one, until all 64 squares were filled. The granaries of the kingdom could not fulfill this wish. Exponential growth eventually hits a limit.
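The doubling in the chess legend can be checked with a few lines of arithmetic. This is just a back-of-the-envelope sketch; the function name is my own.

```python
def grains_on_board(squares: int = 64) -> int:
    """Total grains when square 1 holds one grain and each square doubles the last.

    The sum 1 + 2 + 4 + ... + 2**(squares - 1) is a geometric series
    equal to 2**squares - 1.
    """
    return 2**squares - 1

total = grains_on_board()
print(f"{total:,}")  # 18,446,744,073,709,551,615 grains
```

At roughly 2^64 − 1 ≈ 1.8 × 10^19 grains, the reward exceeds many centuries of worldwide wheat production, which is the story's whole point.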
Here we are on the doorstep of 2020, just 25 years from Ray Kurzweil's predicted date for the Singularity. We should start thinking about replicating the Elon Musk organism, as we may not even be on Mars by then. The demise of cancer by then is also less than certain.

> The only certainty regarding the world in a quarter century is uncertainty.
Turning empty space into CPUs or memory units is going to be a hard trick to pull out of our sleeve. Stars are far too tumultuous and chaotic for this purpose.
Quantum computers hold promise for solving problems intractable for traditional computation, but they do not hold the promise of storing more data.
If one chooses to buy into the Singularity, then the homeless folk on the streets will swiftly become part of the singularity machine, whatever that may be, on this mysterious magical date. What do you think?
On life extension
Let’s consider the next part — life extension capabilities. A singularity is unnecessary for this progress.
Senolytic drugs are currently in trials aimed at increasing the average lifespan. A disease such as Alzheimer's may succumb to science within five years. A multitude of genetic anomalies is close to becoming a thing of the past — at least for people of means in first-world countries. It will not be long before the maximum human lifespan jumps from approximately 100 years to 150 years. This advance will take place, in my humble opinion, long before 2045 and without any merging of human and computer consciousness.
At a macro level — Nature hates infinity. Maybe we cannot live forever because there is no forever. It is merely an abstract concept without any fact-based reality behind it. Everything winds down and ends. Planets will vanish into their suns. Stars will die. Galaxies will merge. Other galaxies will disappear beyond our horizon.
We can briefly increase our life spans. That is the closest we will ever come to immortality. The immortality gene, or set of genes, given to you today can be hit by a meteor tomorrow — leaving a corpse to clear up, and buh-bye.
Let's assume that the likelihood of you dying tomorrow is one in a million. (In reality it is probably much greater.) Long before a million years had passed, the probability of you still being alive would have dropped below 50%; after a billion years, your chances of survival would be essentially zero.
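The argument above can be made concrete with a short calculation. This is a sketch under the stated assumption of an independent one-in-a-million risk of death per day; the function name is mine.

```python
import math

def survival_probability(daily_death_risk: float, years: float) -> float:
    """Probability of surviving `years`, assuming an independent
    `daily_death_risk` chance of dying on each day."""
    days = years * 365.25
    # Work in log space: (1 - p)**days underflows for huge day counts.
    return math.exp(days * math.log1p(-daily_death_risk))

# Odds of still being alive drop below 50% after roughly
# ln(2) / 1e-6 ≈ 693,147 days, i.e. under two thousand years.
half_life_years = math.log(2) / 1e-6 / 365.25
print(round(half_life_years))            # ~1898 years
print(survival_probability(1e-6, 1e6))   # vanishingly small after a million years
```

The point survives any reasonable choice of daily risk: with a fixed nonzero chance of death per day, survival probability decays exponentially, so immortality is off the table even for an otherwise ageless being.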
Transferring your consciousness to a machine will create what I call a second self that would immediately deviate from the real you and swiftly become someone different — not you.
A technological Singularity is remarkably unnecessary for life extension and the numerous other benefits awaiting those wishing for long and healthful lives. The continuous progress of medical science will handle that nicely on its own. The end of this century might well see life expectancies measured in centuries rather than decades.