FEATURES    Issue 2.12 - December 1996

The Case for Human Computers

By Max More

He reckons the average human's causal reasoning equals that of a chimpanzee. He wants us to shove computer implants in our brains to improve them. Paul Churchland: please explain yourself.



Even as we approach a new millennium, we are arguably still burdened with medieval beliefs about mind and consciousness. Paul Churchland, a foremost scientific philosopher of mind, advocates a new neuroscientific understanding of our inner life. He has long championed the development of neural networks as a route to artificial intelligence, and the importance of combining research from computer science, neurobiology, cognitive science and philosophy of mind to better understand consciousness. Churchland, professor of philosophy and a member of the cognitive science faculty at the University of California at San Diego, lays out his vision in his book The Engine of Reason, the Seat of the Soul. This is no armchair Platonist.

Wired: Will we ever implant computers - synthetic neurons - in our brains, to take over damaged areas or to augment thinking capabilities?

Churchland: Certainly! Of course. Absolutely. And the sooner the better, given the cruelty of many deficits in the brain. I don't see any difference between putting in an artificial cognitive prosthesis and giving someone a stainless steel hip, or a prosthetic hand. It is a functional device that steps in and takes over where nature was cruel enough to leave off. [But] imagine someone having new knees put in: would we put in superknees, so that he can win the hundred yard dash? I suppose we could. Will we put in superbrain implants that let you think superintelligently? We might do it for specific purposes. Certainly we don't hesitate to make someone walk better than they ever have before because they had a genetic defect from birth. If we can make someone's brain function better than it has before, I don't view that with horror - I view that with enthusiasm.

When will it happen?

It's hard to say. The limiting factor isn't going to be developing the implants themselves. It's going to be getting the existing brain to reach out and grow onto the prosthetic. After all, the brain doesn't come with a bunch of neat little plugs like you have at the back of your computer. Brain hardware is profoundly proprietary. Making it compatible with hardware from a different company is going to be extremely difficult.

How close are we to machines with human-level intelligence?

For isolated capacities, we already have neural networks that exceed human performance at certain tasks. As for a big neural network that knits together all of the things we can do - especially things like writing symphonies or holding wide-ranging conversations - I don't think we'll see that within 50 years. I'm not even sure it will be something we will aim at. I think we will be aiming for more specific kinds of cognitive abilities. After all, it's very easy and cheap to make a brain with the capacity of a human. It's called sexual reproduction.

Where are the obstacles to such machines?

The biggest problem will be building neural nets and getting them the training data. The speed at which neurons can move information in the human brain is about 100 metres per second - tops. The speed of transmission in a copper wire is a million times faster. If you've got a machine that can think a million times faster, then it can learn a million times faster - if you can feed it the information fast enough.

Maybe with high-speed, fast-forward videotapes you could train these things in artificial worlds in six minutes. Once you've got one trained neural net, you can make copies with no difficulty at all. You would be creating identical personalities with identical skills.
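The arithmetic behind that "six minutes" figure is easy to check. A rough sketch - the million-to-one ratio is Churchland's, but the years of human learning assumed here are illustrative, not from the interview:

```python
# Back-of-the-envelope check of Churchland's "six minutes" claim.
SPEEDUP = 1_000_000        # Churchland's "million times faster" ratio

# Assumption: roughly a childhood's worth of learning experience.
human_learning_years = 11
minutes_per_year = 365.25 * 24 * 60

# A machine learning a million times faster would compress that experience:
machine_minutes = human_learning_years * minutes_per_year / SPEEDUP
print(f"{machine_minutes:.1f} minutes")  # roughly six minutes
```

With around a decade of simulated experience fed in at full speed, the compressed training time does come out close to the six minutes he mentions.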

But putting together a system of 100 billion neurons that is wired up roughly the same way we are would be very hard to do, and I'm not sure there's much payoff. The payoffs will be better spent elsewhere, in specialised nets to do things like fly aircraft for us, or monitor meat-packing plants, or diagnose hospital patients.

Which will tell us more about how the brain functions: computer science or neurobiology?

It's a false dichotomy. Empirical neuroscience and computational modelling are equally important. They stand to one another as theoretical physics stands to experimental physics. We will learn most from a healthy, ongoing interaction between the two. That is what I think is so exciting about the 1980s and 1990s. We've finally got some computational models that are suggesting experimental questions. We go to empirical neuroscience and we get answers that send us scurrying back to the models to modify them. So you go back and forth, and you ratchet yourself up the ladder of understanding far more efficiently than is possible if you're just doing experimental groping or just doing free-wheeling theory.

In your books, you give several powerful arguments that the mind is not independent of the brain. Why do most people still believe the mind survives after death?

People don't learn enough science. They are methodologically impaired: their causal reasoning is about the same as that of a chimpanzee or a fox. But that can be repaired.

What human science has managed to achieve over a period of 2,000 years is a system of checks and balances whereby these conceptual impulses that we have naturally are subjected to a unique systematic scrutiny. They are forced to go through a filter that knocks most of them out. That filter can come to be the possession of any individual who learns the scientific history of the human race, or at least enough of it.

I would like us to better understand learning in neurophysiological and neuropharmacological terms. Even if we have the knowledge to change a particular child's learning capacity by only 2 or 3%, that's like interest on an investment - it will compound as the years go by.

Max More is president of the Extropy Institute and editor of Extropy magazine.