Science:
Future of Intelligence

Many futurists believe that in the next few years we’ll have the ability to enhance human intelligence. In the near term this most probably means intelligence-enhancing drugs; longer term it may mean genetic engineering of some sort or human/computer augmentation. Alternatively, it could mean that we finally develop an artificially intelligent computer. When any of these things happen, these futurists believe, the world as we know it will end. That may not be a bad thing, but it may well be an incomprehensible thing to those of us living without the intelligence enhancement.

 

The argument these futurists put forth goes as follows…

Step 1, make humans smarter.

Step 2a, watch progress in all other areas of human endeavor speed up.

Step 2b, develop the next, better intelligence enhancement faster than before.

Step 3, repeat.

 

Say, for example, that active research into intelligence enhancement began in 1980, and assume that we get an intelligence pill in 2030; that’s 50 years of research. All the researchers working on intelligence enhancement take this pill, and in 25 years a genetic method for intelligence enhancement is created. Again, all the researchers take that, and in 12 years the first computer AI is created. Then in 6 years that computer creates an AI of its own that’s even better than it is, and in 3 years that AI creates an even better one and… Matrix, Terminator, Battlestar Galactica or even Colossus: The Forbin Project, take your pick.
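
The arithmetic behind that runaway scenario is just a geometric series. Here is a rough sketch in Python (my own illustration, using the years from the example above and assuming each cycle takes half as long as the one before) showing how the breakthroughs pile up: the intervals 25 + 12.5 + 6.25 + … sum to 50, so everything after the pill lands before roughly 2080.

    # Rough sketch of the runaway-timeline arithmetic described above,
    # assuming (as in the example) that each enhancement cycle takes
    # about half as long as the one before it.

    start_year = 2030      # first intelligence pill, per the example
    interval = 50.0        # years the first breakthrough took (1980-2030)
    year = float(start_year)

    for cycle in range(1, 11):
        interval /= 2.0    # each new cycle is twice as fast
        year += interval
        print(f"cycle {cycle:2d}: breakthrough around {year:7.2f}")

    # The intervals form a geometric series (25 + 12.5 + 6.25 + ...)
    # that sums to 50, so every later breakthrough lands before ~2080.
    print("limit of the series:", start_year + 50)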

 

These futurists call this The Singularity. The flaw in the Singularity argument is the assumption that intelligence can increase without bounds. I don’t believe it will happen quite like that. I believe it will go in steps, rather than as a single smooth exponential climb to infinite intelligence. Let’s break it down.

 

On the human augmentation side, there must be an upper limit to the intelligence that’s possible within the human brain structure. It would be very surprising if the current brain architecture could support “limitless” intelligence. We may be able to make minds built on the current brain architecture 2 or 3 or maybe even 10 times smarter than we are today, but limitlessly so? No… at some point increased intelligence will probably require a massive redesign of the brain. And when that time comes, that smoothly increasing “singularity” curve will grind to a halt while new architectures are developed and tested, until the right one is finally found. At that point we would have a “second Singularity” as things take off again.

 

On the computer AI side, a similar argument applies… beyond a certain intelligence level the underlying architecture will probably max out, and further increases in intelligence will require a massive redesign of the system. From there the argument is exactly the same as the one above. Here we even have an example of such an architecture change on the horizon: quantum computers are widely expected to be vastly superior to current computer architectures.

 

Computers can also be made more intelligent within a given architecture merely by becoming faster. Or does merely being faster really lead to increased intelligence? If I thought faster, I could play games like chess and go at a higher level of skill than I do now; imagine a chess program on a faster computer compared to the same one on a slower computer. But I doubt that a person who got Bs in calculus and physics, if suddenly blessed with the ability to think faster, or equivalently, given an extended lifespan of 200 years, would ever come up with general or even special relativity. I believe that sort of leap requires a higher level of intelligence, one that’s more akin to an architecture change, and no amount of extra speed will help. So even in the computer case we’ll soon max out on exponentially growing improvements, because extra speed will only take us so far, putting us right back at the need for an architecture improvement.
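
To put a rough number on “extra speed will only take us so far,” here is a small illustrative sketch (my own numbers, not drawn from any real chess engine): in a chess-like game tree with a branching factor of about 35, each doubling of raw speed buys only a fraction of an extra ply of search depth.

    # Illustrative sketch: how much deeper a fixed game-tree search gets
    # when the machine merely runs faster. Assumes a chess-like branching
    # factor of ~35 and that depth is limited by positions examined per move.

    import math

    branching_factor = 35
    base_nodes = 1e9      # positions searched per move on the "slow" machine

    for speedup in [1, 2, 10, 100, 1000]:
        nodes = base_nodes * speedup
        depth = math.log(nodes, branching_factor)   # reachable search depth
        print(f"speedup {speedup:5d}x -> depth ~{depth:.1f} plies")

    # Each doubling of speed adds only log_35(2), about 0.2 extra plies:
    # raw speed gives diminishing returns within a fixed architecture.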

 

These architecture improvements will act as speed bumps on the smooth Singularity ramp-up that most futurists envision. It’s my belief that we will never see a single, infinitely tall Singularity climb. Instead we will see a rapid ramp-up that hits a wall of no growth until a new architecture is devised, ramps up again, hits another wall, and so on. Not one BIG Singularity, but lots of littler ones.
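
As a toy illustration of that picture (a model of my own with made-up parameters, not a prediction), here is a short Python sketch in which capability grows exponentially until it hits the ceiling of the current architecture, stalls while a new architecture is worked out, and then ramps up again under a higher ceiling.

    # Toy model of "lots of littler singularities": exponential growth up
    # to each architecture's ceiling, a stall during the redesign, then a
    # new ramp-up under a higher ceiling. All parameters are made up.

    def stepwise_growth(years=100, rate=0.15, ceiling=10.0,
                        ceiling_jump=10.0, redesign_years=8):
        capability, stall, history = 1.0, 0, []
        for year in range(years):
            if stall > 0:                    # waiting on an architecture redesign
                stall -= 1
                if stall == 0:
                    ceiling *= ceiling_jump  # new architecture, higher ceiling
            else:
                capability = min(capability * (1 + rate), ceiling)
                if capability >= ceiling:    # current architecture maxed out
                    stall = redesign_years
            history.append(capability)
        return history

    for year, cap in enumerate(stepwise_growth()):
        if year % 10 == 0:
            print(f"year {year:3d}: capability {cap:10.1f}")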

 

—Mike Davis
May 2006

 
