Friday, February 3, 2012

The Big Bad Wolf: The Singularity and Humanity

This post was meant to be up many moons ago, but life got in the way (as it often does).  I promise to keep a much more regular schedule from here on out.  Anyway, here is my take on what is likely one of the most visceral problems the Singularity may pose.

Much of this blog will be focused on technologies that will help to bring about what is called the Singularity.  This is the point in time when we finally create a computer system more powerful than the human brain, or become able to enhance the human brain technologically, past which point the intelligence of the human race will increase exponentially.  There are a host of possible dangers here, from destruction at the hands of our Robot Overlords to the Gray Goo Scenario.  However, there is another problem we discuss much less often in specifics, even though it runs as a subtext through almost every one of these conversations.

Many have called the Singularity the “end of humanity as we know it,” and the invention of a computer more powerful (more intelligent) than a human the last invention that mankind need ever make.  The fear is that we will either lose out in an evolutionary battle against the superior AI we create, or that by merging with our technology we will become less “human.”

The idea of the Singularity (though the term was coined much later) first arose in 1965, when I.J. Good wrote of an “intelligence explosion,” suggesting that if machines could ever surpass humans in intelligence, they could then improve their own intelligence in ways unforeseeable by their now-outmatched human counterparts.  In 1993, Vernor Vinge wrote, in what may be the most famous piece about the Singularity, that “Within 30 years we will have the technological means to create superhuman intelligence.  Shortly after, the human era will be ended.”  Vinge also posits four possible ways this superhuman intelligence might arise:

1) A computer sufficiently advanced as to be “awake”: a singular AI.
2) A network of computers that “wakes up” as a superhumanly intelligent entity.
3) Computer/human interfaces that become so intimate that their users may be considered superhumanly intelligent.
4) Biological science that advances to the point of increasing human intelligence directly.

The first three depend on the advancement of computer technology, based in large part upon Moore’s Law, which (extrapolated from its original form, an observation about transistor counts on integrated circuits) posits that the power of computers increases exponentially, doubling every 18 months or so.  Ray Kurzweil, another important figure in the study of the Singularity, has traced the history of information systems from DNA to computers and shown that this exponential growth holds fairly consistently across nearly every paradigm.  Essentially, every generation of computer benefits from the previous generation’s power, and so can reach the next generation in a shorter amount of time.
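To make that compounding concrete, here is a quick back-of-the-envelope sketch in Python.  The 18-month doubling period is the extrapolated figure from above; the baseline of 1.0 is an arbitrary illustrative unit, not a measured number:

```python
# Back-of-the-envelope compounding under the extrapolated Moore's Law.
# The 18-month doubling period is the figure cited above; the baseline
# of 1.0 "units" of compute is an arbitrary illustrative choice.

DOUBLING_PERIOD_YEARS = 1.5

def compute_power(years_elapsed, baseline=1.0):
    """Relative computing power after years_elapsed years of doubling."""
    return baseline * 2 ** (years_elapsed / DOUBLING_PERIOD_YEARS)

for years in (3, 15, 30):
    print(f"After {years:2d} years: {compute_power(years):>12,.0f}x baseline")
# After  3 years:            4x baseline
# After 15 years:        1,024x baseline
# After 30 years:    1,048,576x baseline
```

Twenty doublings in thirty years yields a million-fold improvement, which is why small changes in the doubling period move these projections around so dramatically.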

“But!” you may exclaim, “We certainly must be reaching the limits of our current technology.  They can only make silicon so thin, and current chips are already measured at the nano-scale; surely there is a limit to how powerful our computers can get!”

The answer is: sort of.  Yes, we are approaching the limits of our current chip designs, but there is already a wealth of research into new technologies that would let us build chips in three dimensions instead of the flat plane today’s chips occupy.  Improvements in nanotechnology, along with carbon nanotubes and graphene, are progressing rapidly and will likely be able to take over where silicon leaves off.  And we’re even beginning to poke at the edges of quantum computing, which exploits the wackiness of entanglement and superposition to attack certain problems far faster than any classical machine could.  The point is that, even if the exponent slows, the advance of technology will not.  Barring an extinction event, we should have computers more powerful than the human mind by 2045-ish.  We may also have neural prosthetics that enhance human intelligence.
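For a rough sense of where a figure like 2045 comes from, here is a hedged sanity check.  Both inputs are illustrative assumptions, not measurements: roughly 10^11 operations per second for a 2012-era desktop, and roughly 10^16 for the human brain (a commonly cited Kurzweil-style estimate); plug in different numbers and the date shifts by years in either direction:

```python
import math

# Rough sanity check on the ~2045 projection. Both inputs are
# illustrative assumptions, not measurements:
#   ~1e11 ops/sec for a 2012-era desktop
#   ~1e16 ops/sec for the human brain (a commonly cited estimate)
DESKTOP_2012_OPS = 1e11
BRAIN_OPS_ESTIMATE = 1e16
DOUBLING_PERIOD_YEARS = 1.5  # extrapolated Moore's Law, as above

doublings_needed = math.log2(BRAIN_OPS_ESTIMATE / DESKTOP_2012_OPS)
years_needed = doublings_needed * DOUBLING_PERIOD_YEARS

print(f"Doublings needed: {doublings_needed:.1f}")              # ~16.6
print(f"Brain-scale desktop around: {2012 + years_needed:.0f}")  # ~2037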

But with the discussion of artificially enhancing a portion of ourselves comes the inevitable worry: will making artificial “improvements” introduce an artificiality into a person’s being/personality/soul?  Will the additional computing power of the new brains push morality aside in favor of cold, robotic logic?  What use will emotion, community, family, and human connection have in a world where we are all supercomputers?  Where will heroism and altruism fit in a world where probabilities and best-case scenarios can be calculated by anyone in an instant?  What will inspire us when we can do anything we want?

Thing is… I don’t know how to answer that problem.  There is research into regions of the brain containing neurons thought to be the source of emotional intelligence; if we can (or want to) enhance those areas, we might improve our ability to connect with and care about other people, but honestly I don’t know that that sufficiently answers the problem.  There is also the Kantian answer: that through enhanced intelligence and logic a greater morality will emerge, as we can better calculate the greatest possible good to come out of our moral decisions.  This doesn’t really comfort those afraid that we’ll all become unfeeling robots, however.  Quite the opposite, I would expect.

My only real advice in this regard is similar to the advice I would give people who worry about rising governmental or corporate control: active, diligent attention.  Question the motives and effects of technology, especially tech that promises to improve or enhance human abilities.  There is a very real danger of enhancement technology creeping into ubiquity without much attention paid to the repercussions.  We can only have a moral Singularity if we pay attention to the world and how we change it.