Features | Issue 1.06 - October 1995

Super Humanism

By Charles Platt

According to Hans Moravec, by 2040 robots will become as smart as we are. And then they'll displace us as the dominant form of life on Earth. But he isn't worried - the robots will love us. And besides, he asks, do we really want more millennia of the same old human soap opera?



Hans Moravec reclines in his chair and places his palms against his chest. "Consider the human form," he says. "It clearly isn't designed to be a scientist. Your mental capacity is extremely limited. You have to undergo all kinds of unnatural training to get your brain even half suited for this kind of work - and for that reason, it's hard work. You live just long enough to start figuring things out before your brain starts deteriorating. And then, you die."

He leans forward, and his eyes widen with enthusiasm. "But wouldn't it be great," he says, "if you could enhance your abilities via artificial intelligence, and extend your lifespan, and improve on the human condition?"

Since his earliest childhood, Moravec has been obsessed with all forms of artificial life. When he was 4 years old, his father helped him use a wooden erector set to build a model of a little man who would dance and wave his arms and legs when a crank was turned. "It excited me," says Moravec, "because at that moment, I saw that you could assemble a few parts and end up with something more - it could seem to have a life of its own."

At the age of 10, he constructed a toy robot from miscellaneous scrap metal. In high school, when another student maintained that no machine could ever be truly human, Moravec suggested replacing human neurons, one at a time, using man-made components that would have the equivalent function. At what point, he asked, would humanness disappear? If a wholly artificial entity is still able to act human in every way, how could we prove that it isn't human?

Today, Moravec is a professor at Carnegie Mellon University's Robotics Institute, the largest robot research lab in America and one he helped to establish in 1980. He is a rare mixture of visionary and engineer, equally comfortable speculating on the fate of the planet or using a soldering iron, microchips, and stepper motors to build high-tech versions of his childhood dancing man. More than that, though, he's America's most gung-ho advocate of technology as a tool to transform human beings and make us more than we are - within our lifetimes, if we want it.

Some of his concepts have a confrontational, in-your-face shock value. For instance, to find out how the mind works, Moravec suggests severing a volunteer's corpus callosum (the nerve bundle linking the two hemispheres of the human brain) and interposing a computer to monitor thought traffic. After the computer has had time to learn the code, it can start inserting its own input, helping to solve difficult maths problems, suggesting bright new ideas, even offering some friendly advice.

Or here's another scenario for anyone who'd like to escape the constrictions of dull old human biology: a futuristic robot surgeon peels away the brain of a conscious patient, using sensors to analyse and simulate the function of every neuron in each slice. As Moravec puts it, "Eventually your skull is empty, and the surgeon's hand rests deep in your brainstem. Though you haven't lost consciousness, your mind has been removed from the brain and transferred to a machine."

But even proposals like these are modest compared with Moravec's Number One concern, which is nothing less than the future of humanity. By 2040, he believes, we can have robots that are as smart as we are. Eventually, these machines will begin their own process of evolution and render us extinct in our present form.

Yet, according to Moravec, this is not something we should fear: it's the best thing we could hope for, the ultimate form of human transcendence. In his own laboratory, he's laying the groundwork that may help this evolutionary leap happen ahead of schedule.

Not everyone thinks this is such a wonderful idea. Joseph Weizenbaum, professor emeritus of computer science at MIT, complains that Moravec's book Mind Children: The Future of Robot and Human Intelligence is as dangerous as Mein Kampf. Respected mathematician Roger Penrose has written a rather long essay for The New York Review of Books in which he twice uses the word "horrific" to describe some of Moravec's concepts. Book reviewer Poovan Murugesan denounces Moravec as "a loose cannon of fast ideas" who suffers from "irresponsible optimism."

Even Moravec's fans seem a little ambivalent. "He comes off as a cross between Mister Rogers and Dr. Faustus," says writer Richard Kadrey. And in the words of award-winning science fiction author Vernor Vinge, who is also an associate professor of mathematical sciences at San Diego State University, "Moravec puts the rest of the technological optimists to shame. He is beyond their wildest extremes." But, Vinge adds hastily, "I mean this as praise!"

How seriously should we take Moravec's ideas? He is widely respected as a pioneer in robotics, but where is the line dividing his painstakingly practical research from his unfettered speculation? Why does he insist that breaking the boundaries of being human is important not just for himself, but for everyone - and why does he seem so crazy-cheerful about the whole thing?

These questions were on my mind when I visited Moravec at Carnegie Mellon in Pittsburgh, Pennsylvania. In person, he's a friendly-faced, slightly overweight, irrepressibly good-humoured man in his late 40s who wears homely clothes and seems shy with strangers. But his enthusiasm gives him a childlike charm - even when he talks lyrically about human extinction.

His office is next door to the "high bay," a big lab displaying the results of previous Robotics Institute projects, including a huge, multilegged "walker" that was sent down into the cone of an active volcano, and a Pontiac minivan that can drive itself at speeds up to 60 mph. The van has already found its way from Pittsburgh to Washington, DC, with minimal human supervision, under the legal fiction that its four onboard SPARCstations and their mechanical interface are "an advanced form of cruise control."

But Moravec seems bored by these past achievements and has shed most of his administrative responsibilities at the Robotics Institute. He hides out in a small, undistinguished, modern office with a couple of computers, a few file cabinets, a refrigerator, a microwave oven, and a lot of books. This is where he pursues his immediate goal: designing and programming a domestic robot that can navigate freely in cluttered home environments. It is the next logical step, he says, towards truly intelligent machines that we will not only tolerate but love - even as they threaten to displace us as the dominant form of life on Earth.

Moravec's early work in robotics was plagued by setbacks. "I spent most of the 1970s," he recalls, "trying to teach a robot to find its way across a room. After 10 years, in 1979, I finally had one that could get where it was going three times out of four - but it took five hours to travel 90 feet." He chuckles like a fond father recalling the first incompetent steps of his baby boy.

Why was it so hard for a robot to accomplish a task that even a mouse can manage with ease? The answer, of course, is that animals have had hundreds of millions of years in which to evolve their motor skills. The problem of moving facilely through a three-dimensional world is hideously complex, as Moravec indicates, counting off the tasks on his fingers: "Our robot used multiple images of the same scene, taken from different points of view, in order to infer distance and construct a sparse description of its surroundings. It used statistical methods to resolve mismatching errors. It planned obstacle-avoidance paths. And then it had to decide how to actually turn its motors and wheels."
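To make the first of those tasks concrete, here is a minimal sketch, in Python, of the stereo-triangulation principle behind inferring distance from two views of the same feature. The camera parameters are illustrative, not taken from Moravec's robot.

```python
# Minimal sketch of stereo triangulation: distance from the disparity between
# two views of the same feature. Parameter values are illustrative only.

def depth_from_disparity(disparity_px, focal_length_px=500.0, baseline_m=0.3):
    """Estimate distance (in metres) to a feature seen in both camera images.

    disparity_px: horizontal shift of the feature between left and right views.
    """
    if disparity_px <= 0:
        return float("inf")  # too distant, or a mismatched feature
    return focal_length_px * baseline_m / disparity_px

# A feature shifted 25 pixels between views lies roughly 6 metres away.
print(depth_from_disparity(25))
```

The statistical methods Moravec mentions exist to catch the mismatched features that make this simple formula lie.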

In 1980, he built new robots and attempted to boost their performance. "But the best we were able to do with our old approach," he recounts, "was speed it up about tenfold and improve its accuracy tenfold. We did not manage to reduce its brittleness."

By "brittleness" Moravec means that the system tended to fail suddenly and catastrophically. "Accidental conspiracies of sensory miscues would lead it to a wrong conclusion while being sure that it was right. In practical terms, it could misidentify the surrounding objects and run into a wall."

Like Wile E. Coyote in a Road Runner cartoon, trying to run into the mouth of a tunnel painted on a rockface?

"Precisely!" he laughs again, sounding genuinely happy, as he does whenever he describes the lovably fallible behaviour of his creations.

In 1984, using £6 Polaroid ultrasonic range finders instead of expensive video cameras, Moravec created a new commercial robot that mapped the surrounding space rather than just the objects in it. The result, to his surprise, was a system that could navigate reliably and with relative swiftness.
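The map-building idea lends itself to a small illustration. Below is a minimal sketch, in Python, of evidence about "empty" and "occupied" space accumulating in a grid of cells from a single sonar reading; the geometry and update weights are invented for illustration and are not Moravec's actual code.

```python
# Minimal sketch of the grid-map idea: accumulate evidence of "empty" or
# "occupied" in cells of a map, rather than trying to recognise objects.
import numpy as np

GRID = np.zeros((100, 100))        # evidence of occupancy, one cell = 10 cm
EMPTY, OCCUPIED = -0.4, 0.85       # evidence added per sonar reading (illustrative)

def integrate_sonar(grid, x, y, heading_deg, range_m, cell_m=0.1):
    """Fold one sonar range reading into the map along a single ray."""
    dx = np.cos(np.radians(heading_deg))
    dy = np.sin(np.radians(heading_deg))
    steps = int(range_m / cell_m)
    for i in range(steps):
        cx, cy = int(x + dx * i), int(y + dy * i)
        grid[cy, cx] += EMPTY                        # the pulse crossed this cell
    grid[int(y + dy * steps), int(x + dx * steps)] += OCCUPIED  # the echo came from here

integrate_sonar(GRID, x=50, y=50, heading_deg=0, range_m=3.0)
```

Many noisy readings, folded together this way, wash out individual sensor errors - which is what made the approach less brittle than object recognition.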

Moravec's current research robot, a project initiated in 1987, now sits in a small workshop just across the corridor outside his office. "Would you like to take a look?" he asks.

We walk into a windowless space no larger than an average living room. There are a couple of video monitors, workbenches littered with tools, pale beige walls, and a vinyl floor. The robot stands in the centre of the room: an ugly little four-wheeled truck the size of a go-cart. But Moravec exudes pleasure and affection as he guides his toy out of the workshop, into the hall, and back again.

"Today's best robots can think at insect level," he says as we return to his office. He explains that state-of-the-art mobile robots orient themselves by sensing special markers placed on surrounding floors, walls, or ceilings. Insects behave in just the same way: ants follow pheromone trails, lightning bugs look for each other's flashes, and moths navigate with reference to the moon.

The trouble is, such systems are still brittle. Just as a moth can become fatally confused by fixing on candlelight instead of moonlight, a robot guided by markers can easily make a disastrous mistake - as happened when one designed by a Connecticut company to distribute hospital linens missed the marker that was supposed to keep it from proceeding past a certain point and took a nosedive down a flight of stairs.

Robots that orient themselves with markers have found some industrial applications - transporting pallets, cleaning floors - but offer little over the older systems that follow hidden guide wires. As a result, the market is very limited. "In fact," says Moravec, "the market barely exists at all. So, what we're shooting at now is a robot with the intelligence of a small vertebrate - the smallest fish you can imagine. It will no longer depend on navigational points; it will build a relatively dense representation of volumes of space."

By 2000, he foresees that this type of machine will find its own way around complex, cluttered places without using markers and without needing to be installed by experts. At first these robots will be expensive and specialised, but Moravec predicts they will become smaller, cheaper, and more user-friendly in just the same way that microcomputers evolved from mainframes. "Once we have a robot that customers can take out of the box, show it a job, and trust it to work without doing silly things - then the market will expand easily to hundreds of thousands, and beyond that. Any institution that does regular cleaning will find that it's a lot cheaper to use a robot than a person. The same goes for delivery jobs."

Moravec estimates that these systems will need an onboard computer capable of 500 million instructions per second. The first IBM PCs managed 0.3 mips; a modern Pentium-based PC reaches 200 mips; and it's reasonable to expect that 500-mips processors will be affordable by the turn of the century.

This power will enable the robot to convert 500-by-500-pixel stereoscopic pictures from its camera eyes into a 3-D model consisting of about 100-by-100-by-100 cells. Updating and processing all this visual information will take about one second - the longest interval that is reasonably safe and practical, since the robot will move blindly between glimpses of the world.
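The arithmetic behind these figures is straightforward, and worth spelling out. A back-of-envelope sketch, using the numbers Moravec quotes; the way the instruction budget divides between pixels and grid cells is my own illustration.

```python
# Back-of-envelope arithmetic from Moravec's figures. How the budget splits
# between pixels and cells is an illustration, not his published breakdown.
budget_mips = 500                   # onboard processor, per Moravec
update_s    = 1.0                   # one glimpse of the world per second
pixels      = 500 * 500 * 2         # a stereo pair of 500-by-500-pixel images
cells       = 100 ** 3              # 3-D grid of 100-by-100-by-100 cells

instructions = budget_mips * 1_000_000 * update_s
print(instructions / pixels)        # ~1,000 instructions available per pixel
print(instructions / cells)         # ~500 instructions available per grid cell
```

A few hundred instructions per cell, once a second, is a tight but plausible budget for the kind of evidence-accumulation described above.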

Once robots find a niche doing dull, repetitive jobs, Moravec sees an ever-expanding market. "The next step will be adding an arm and improving the sensor resolution so that they can find and manipulate objects. The result will be a first generation of universal robots, around 2010, with enough general competence to do relatively intricate mechanical tasks such as automotive repairs, bathroom cleaning, or even factory assembly work."

By "universal" Moravec means the robot will tackle many different jobs in the same way a Nintendo system plays many different games. Plug in one cartridge, and the robot will know how to change the oil in your car. Plug in another, and it will know how to patrol your property and challenge intruders.

Add more memory and computing power and enhance the software, and by 2020 we have a second generation that can learn from its own performance. "It will tackle tasks in various ways," says Moravec, "keep a set of statistics on how well each alternative has succeeded, and then choose the approach that worked best. This means that it can learn and begin to adapt. Success or failure will be defined by separate programs that will monitor the robot's actions and generate internal punishment and reward signals, which will actually begin to shape its character - basically, what it likes to do and what it prefers not to do."
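The learning scheme he describes - try alternatives, keep statistics, prefer whatever the internal monitors rewarded - can be sketched in a few lines. The details below are illustrative, not Moravec's design.

```python
# Minimal sketch of the second-generation learning scheme as described:
# try alternatives, keep statistics, favour what earned the most reward.
import random
from collections import defaultdict

stats = defaultdict(lambda: {"tries": 0, "reward": 0.0})

def choose(alternatives, explore=0.1):
    """Mostly pick the best-scoring approach so far; occasionally try another."""
    if random.random() < explore or not any(stats[a]["tries"] for a in alternatives):
        return random.choice(alternatives)
    return max(alternatives,
               key=lambda a: stats[a]["reward"] / max(stats[a]["tries"], 1))

def record(alternative, reward):
    """The monitoring program's internal reward (or punishment) signal."""
    stats[alternative]["tries"] += 1
    stats[alternative]["reward"] += reward

# e.g. two ways of gripping a bottle; the monitor rewards not dropping it.
for _ in range(50):
    approach = choose(["grip_side", "grip_top"])
    record(approach, reward=1.0 if approach == "grip_side" else -0.5)
```

The accumulated statistics are, in effect, the robot's emerging character: what it likes to do and what it prefers not to do.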

Moravec pauses. The near future of robotics is something he's spelled out a thousand times before, and he no longer finds it particularly exciting. But now we move on to a subject that interests him more: the whole idea that robots can mimic human traits.

By 2030, according to Moravec, we should have a third-generation universal robot that emulates higher-level thought processes such as planning and foresight.

"It will maintain an internal model not only of its own past actions, but of the outside world," he explains. "This means it can run different simulations of how it plans to tackle a task, see how well each one works out, and compare them with what it's done before." An onlooker will have the eerie sense that it's imagining different solutions to a problem, developing its own ideas.

But perfecting the model of reality this robot will need isn't going to be easy. In fact, creating this model is the single hardest problem in artificial intelligence.

Intuitively, human beings know why they need to wear a raincoat in wet weather, or why they must turn the handle before pushing open a door. Almost without thinking we know if a bottle is empty, whether an object is breakable, or when food has spoiled.

But when presented to an artificial intelligence, none of these things is obvious - each everyday fact must be established in advance or derived from a set of logical principles.

On the plus side, each time a robot learns a fact or masters a skill, it will be able to pass its knowledge to other robots as quickly and easily as sending a program over the Net. This way, the task of understanding the world can be divided among thousands or millions of robot minds. As a result, the machines will soon develop a deeper knowledge base than any single person can hope to possess. Within a short space of time, robots that are linked in this way will no longer need our help to show them how to do anything.

Meanwhile, they will be smart enough to interact with us on a human level. "Their world model will include psychological attributes," Moravec says, "which means, for instance, that a robot will express in its internal language a logical statement such as 'I must be careful with this item, because it is valuable to my owner, and if I break it, my owner will be angry.' This means that if the robot's internal processes are translated into human terms, you will hear a description of consciousness - especially if the robot applies psychological attributes to its own actions, as in 'I don't like to bump into things,' which is a compact way of saying that the robot gets an internal negative reinforcement signal whenever it collides with something, or imagines a collision."

Moravec's critics are sceptical on this point. Many have stated flat out that a machine can never be "conscious." Their arguments are hard to refute, partly because no one can really say what consciousness is; but Moravec sidesteps the issue. He believes a robot that understands human behaviour can be programmed to act as if it is conscious, and can also claim to be conscious. If it says it's conscious, and it seems conscious, how can we prove that it isn't conscious?

Either way, there's no doubt that systems that can analyse their world, deduce generalisations, and modify their behaviour will have a major impact on society.

"The robots will still be in our thrall," Moravec points out, meaning that we will still be designing and programming them to serve and obey us. "They'll learn everything they know from us, and their goals and their methods will be imitations of ours. But as they become more competent, efficiency and productivity will keep going up, and the amount of work for humans will keep going down. By around 2040, there will be no job that people can do better than robots."

He sits back in his chair, pausing with cheerful satisfaction as he does whenever he reaches a radical conclusion that places him one step ahead, waiting for his audience to catch up.

In this case, though, Moravec's conclusion is less radical than it seems - because when many jobs are broken down into tasks, they require a relatively limited degree of "humanness." Even today, we have expert systems that offer advice based on a large number of facts in a field such as medicine or geology. Imagine this expertise gradually broadening to include subjects such as corporate law, mechanical design, profitability, and efficiency. Decisions in these areas are all made logically from sets of facts, which means that if the facts are completely spelled out, a machine intelligence should be able to deal with them.

Thus a corporation can literally become automated from the bottom up: first the assembly lines, then bookkeeping, product design, and planning. Even management can be taken over by computers that are able to learn from past performance.

Ultimately, a corporation will consist of a diverse mix of robots, some mobile, some fixed, some large and powerful, some microscopic, all interacting with speed and versatility that is completely beyond human abilities.

But what about the timescale? Isn't he compressing an enormous amount of progress into a very few decades?

"Back in the 1970s, I made some overoptimistic assumptions about the rate of progress of computers. I thought that using an array of cheap microcomputers, we might achieve human equivalence by the mid-1980s. Then I did a slightly more careful calculation around 1978 and decided it would take another 20 years, requiring a supercomputer. But then I started getting serious, writing articles and essays, and I thought I should do the calculations more rigorously. So I collected 100 data points of previous computer progress, I did the best calculation I could, I compared the human retina with computer vision applications, and I plotted it all out."

Still, even if his predictions prove to be on schedule, there's an obvious problem: when robots are doing all the work, no one will earn any money. How can an economy possibly flourish when all the consumers are penniless?

Moravec obviously isn't troubled by the question. In fact, it's hard to imagine any question bothering him: he sits calmly, comfortably, eating the questions and spitting out answers with ease. Today, he points out, people who retire are supported via wealth that is ultimately created by industry. As industry becomes more efficient, there will be more wealth, allowing people to retire earlier. When industry is totally automated and hyper-efficient, it will create so much wealth that retirement can begin at birth. "We'll levy a tax on corporations," Moravec says, "and distribute the money to everyone as lifetime social-security payments."

But what if the robot-run corporations fail to function as he expects? He assumes these business entities will follow programs written by us, compelling them to obey laws and pay their taxes. But the programming will also encourage robot-controlled corporations to compete with each other.

Won't they try to exploit loopholes in their instructions, just as present-day businesses try to evade federal regulations? Isn't there a real risk that autonomous robots will steal from each other and cheat on their taxes?

"There is always the possibility that some kind of malfunction will produce a rogue corporation," Moravec admits. "We'll need police provisions so that legal companies will act to suppress rogues economically, or physically, if necessary. And among the inprogrammed laws we'll need antitrust clauses to force dangerously large companies to divest into smaller entities."

But this would be a second set of rules to solve a problem created by robots breaking the first set of rules. The system still seems fundamentally unstable.

"It is unstable," he agrees. "Everything will depend on the way in which we create it. Crafting these machines and the corporate laws that control them is going to be the most important thing humanity ever does. You know, each age has an activity in which the best minds get involved. Crafting the laws, and their implementation, will be the thing to do in the 21st century."

If the job of crafting these machines is done right, he predicts a world of comfort, health, and boundless plenty - at least for a while. Human beings, he foresees, will be like slave owners whose servants never complain, need no supervision, and are constantly eager to please.

In the long term, though, robots programmed to serve us with maximum efficiency could become a hazard. They will naturally try to obtain energy and raw materials as cheaply as possible, with a minimum of regulatory interference. And the ideal way to do this is by relocating some of their operations beyond planet Earth.

Unlike human beings, robots don't need to breathe air, aren't disoriented by zero gravity, and can be easily shielded from harmful radiation. There are vast mineral resources in the asteroid belt, where there will be no regulations regarding pollution, noise, or safety. Robot factories located in space would be able to manufacture products with maximum efficiency and then drop them down into Earth's gravity well. Alternatively, they could conduct hazardous research and radio the encrypted results back to their parent corporation on Earth.

Only a small "seed colony" of robots would be needed to set up an off-world operation. Using local mineral ores and solar energy, robots could build everything they required - including copies of themselves. In this scenario, everything is still being controlled by the parent corporations, which are still being controlled by us. Therefore, the off-world operations should present no problems. "But now suppose a company goes out of business," Moravec says, "leaving its research division in space, where there's no supervision. The result is self-sustaining, superintelligent wildlife."

This marks the point where the genie finally gets out of the bottle and Earth's retirement community of pampered humans finds itself faced with a big problem. Out in space, the preprogrammed drive to compete and be efficient will result in the runaway evolution of machine capabilities.

Moravec feels that in a short period of time, all the local materials will be plundered and converted into machines, and all available solar energy will be used to power them. The result will be a dense, interacting swarm of competing entities - although, he says, the competition will be relatively benign. Warfare among robots will be rare because "fighting wastes energy, and a third entity can eat the pieces."

He believes that the most useful skill will be intelligence. Robots will be motivated to make themselves as small as possible, conserving raw materials to build better brains. "As a result, you end up with the whole mess forming a cyberspace where entities try to outsmart each other by causing their way of thinking to be more pervasive. Here's an ecology where all the dead-matter activity has been squeezed out and almost everything that happens is meaningful. You have this sphere of cyberspace with a robot shell, expanding outward towards Earth."

What will it look like?

"It will look like a region of space glowing warmly, with hardly anything visible on a human scale. The competitive pressure towards miniaturisation will result in activity on the subatomic level. They'll transform matter in some way; it will no longer be matter as we know it."

Since space-based machine intelligences will be free to develop at their own pace, they will quickly outstrip their cousins on Earth and eventually will be tempted to use the planet for their own purposes. "I don't think humanity will last long under these conditions," Moravec says. But, ever the optimist, he believes that "the takeover will be swift and painless."

Why? Because machine intelligence will be so far advanced, so incomprehensible to human beings, that we literally won't know what hit us. Moravec foresees a kind of happy ending, though, because the cyberspace entities should find human activity interesting from a historical perspective. We will be remembered as their ancestors, the creators who enabled them to exist. As Moravec puts it, "We are their past, and they will be interested in us for the same reason that today we are interested in the origins of our own life on Earth."

He seems very sincere as he says this, almost as if it's an article of faith for him - though of course it has some logical foundation. Machine intelligences of the far future will develop from our initial programming, just as a child grows from its parents' DNA. Consequently, even when robots are smarter than we are, they should retain many of our priorities and values.

But Moravec takes the scenario even one step further. Assuming the artificial intelligences now have truly overwhelming processing power, they should be able to reconstruct human society in every detail by tracing atomic events backwards in time. "It will cost them very little to preserve us this way," he points out. "They will, in fact, be able to re-create a model of our entire civilisation, with everything and everyone in it, down to the atomic level, simulating our atoms with machinery that's vastly subatomic. Also," he says with amusement, "they'll be able to use data compression to remove the redundant stuff that isn't important."

But by this logic, our current "reality" could be nothing more than a simulation produced by information entities.

"Of course." Moravec shrugs and waves his hand as if the idea is too obvious. "In fact, the robots will re-create us any number of times, whereas the original version of our world exists, at most, only once. Therefore, statistically speaking, it's much more likely we're living in a vast simulation than in the original version. To me, the whole concept of reality is rather absurd. But while you're inside the scenario, you can't help but play by the rules. So we might as well pretend this is real - even though the chance things are as they seem is essentially negligible."

And so, according to Hans Moravec, the human race is almost certainly extinct, while the world around us is just an advanced version of SimCity.

I've been sitting opposite Moravec in his office, typing on my laptop computer, following his exposition step by step. The vision he has described exists for him as a unified whole; it takes him only about an hour to describe it clearly and fluently from beginning to end. For him it seems entirely pleasurable: a destiny that grows out of his own work and affirms his own values.

His critics, of course, disagree. They complain that his vision is inhuman, lacking attributes such as culture and art that seem central to our identity. Sceptics also point out that the negative implications of his work far outweigh its benefits in the near future, when robots will cause a huge economic dislocation, creating a feeling of purposelessness among citizens who are rendered permanently unemployable.

Moravec is quite aware of this but sees no way to prevent it. He says his projection of the future is at least 50 per cent probable, and we're seeing the first signs of it right now. "In Europe generally," he says, "I believe unemployment is now up to around 15 per cent, and essentially this will never be reversed. We are already moving into the mode I envisage, where everyone is subsidised by productive machines."

This has created uncertainty and discontent - as he readily admits. "We all agree," he says, "that the world is a bit screwed up. The reason for this is rather obvious. We have a Stone Age brain, but we don't live in the Stone Age anymore. We were fitted out by evolution to live in tribal villages of up to 200 relatives and friends, finding and hunting for our food. Nowadays, we live in cities full of millions of strangers, supporting ourselves with unnatural tasks that we have to be trained to accomplish - like animals who have been forced to learn circus tricks."

In which case, what's the answer? Moravec adamantly believes that reversing the evolution of technology would create an even bigger disaster.

"Most of us would starve," he says. He suggests the opposite approach: that we try to catch up with technology by accelerating our own evolution. "We can change our-selves," he says, "and we can also build new children who are properly suited for the new conditions. Robot children."

Inevitably, I ask whether he has any normal, flesh-and-blood children.

"No. In fact, I am biologically incapable of it. I contracted testicular cancer as I was finishing my PhD; it didn't affect me very much, it didn't really hurt, I noticed a growth, but I still had my thesis to write and my orals to do, and the whole thing seemed very unreal. There were two surgeries, one minor, one major - with my intestines out in a bag to get at the lymph nodes. I came through it in sparkling condition, aged around 30. But a side effect is that I'm basically infertile."

Does this mean that his love of robots is nothing more than a displaced desire for the biological children he can't have?

"Not at all. Long before the cancer, I was already obsessively committed to robots for whatever neurotic reason. That was where I wanted to spend my energy. I met my wife in the hospital when I was getting chemo-therapy in 1980. She already had two children, so I inherited them as stepchildren."

Does his wife share any of his feelings about machines?

He laughs. "At the moment, my wife is a biblical scholar."

Moravec himself was raised Catholic, but he rebelled against it as a teenager and says he still has some anti-Catholic reflexes. As a result, he and his wife have had bitter theological debates in the past.

"But these days there's no point in arguing," he says, "because we already know exactly what each other is going to say, and in any case she's more astute in human relations than I am, so she knows how to handle me. But I have changed my outlook slightly. I'm a little less hard-core in my atheism than I used to be. And my ideas about resurrection in some ways are not so different from those of early theologians, or from the Greek thought that fed into that."

Also, of course, the desire for human transcendence has been a fundamental feature of almost all religions. And Moravec's vision of a supremely powerful artificial intelligence that will love humanity enough to re-create it is basically a vision of a god - the only difference being that in his scheme of things, we create God version 1.0, after which it builds its own enhancements.

But how does all this fit in with Moravec's obvious personal love for machines?

"My father was an engineer in Czechoslovakia and had a business making and selling electrical goods during the war. When the Russians arrived in 1944, he became a refugee. He left the country on a tricycle with 50 kilos of tools and 50 kilos of food. He met my mother in Austria, where I was born. He had an electrical store, where he'd hand wind transformers to convert battery-operated radios so they'd run on house current. We relocated to Canada in 1953."

Growing up in Montreal, learning English, and adjusting to a strange new culture, Hans Moravec was a solitary child who found solace in building models and gadgets. "I remember the thrill I got when I put together something and made it work. I could admire it for hours. And these things also made other people proud of me. I guess I actually thought that they would get me a wife! I knew I didn't have any social skills, but maybe if I could build these machine things really, really well, it would make me more attractive to women." He laughs at his own childhood naivety.

And yet, he didn't always want to be a scientist. First he wanted to be Superman. "But I could see that it wasn't practical. Then I noticed another character in the comics, Lex Luthor, who didn't have superpowers but was almost a match for Superman. So, I thought if I couldn't be Superman, maybe I could be Lex Luthor."

In person, Moravec seems diffident and gentle; he doesn't drive a car because, he says, he's uneasy with so much potentially dangerous mass in his control. He likes living in Pittsburgh because his home is a short walk from his office, and he seems to feel little need to venture outside this simple life.

Yet as a child he enjoyed fantasies about superheroes and supervillains, and as an adult he talks casually of totally rebuilding human society. He refers to his new book, for which he's currently seeking a publisher, as "a kind of speculative long-term business plan for humanity," and in it he speaks condescendingly of "Earth's small-minded biological natives." Can Moravec really claim that his work as a scientist is in no way manipulative?

"People such as myself," he says, "may have a little bit of influence, but we're like mosquitoes pushing at a rolling boulder. Progress is inflicted on people in the same way that natural evolution is inflicted on people. It really is evolution; it's the selection and growth of information, transmitted from one generation to the next."

But what about the rights of people who don't love the rolling boulder of progress?

"Well," he says, beginning to sound a little impatient with my objections, "they'll - they'll get used to it! In fact, they should enjoy it, since the amount of wealth will be astronomical; you'll be able to live anywhere and in any way you want."

In any case, he says, the progress he's talking about will be offered via the free market, not physically imposed on anyone.

"All I'm suggesting is that we give people a choice. In the next decade, people will either buy their housecleaning robot or not buy it. And I think they'll want to buy it. Then they'll have the choice of upgrading to one that learns, and I think they'll want that, too. Then they'll have the choice of a robot that claims it's conscious, a really nice entity that talks like a person, seems to understand you, and has nothing but your best interests at heart - because that's how it's programmed. And then the fourth generation will take that personality and add intelligence. It will be a constant help to you; it will explain why something that you want to do isn't what you should do - because it loves you. I think people will like these machines and will quickly get used to them."

Well, yes - until the machines cut loose, develop hyperintelligence, and bring about our demise.

"But I don't consider it a demise," Moravec retorts, still insisting that his vision is wholly positive. "The robots will be a continuation of us, and they won't mean our extinction any more than a new generation of children spells the extinction of the previous generation of adults. In any case, in the long term, the robots are much more likely to resurrect us than our biological children are."

For people who find long-term resurrection a somewhat nebulous concept, there are also some practical reasons why we should be happy to change ourselves radically. On a long-term basis, Moravec points out, our planet may not be a hospitable place to live. Huge climatic shifts may occur (as they did during the ice ages). Our sun may become unstable. The world may be ravaged by incurable diseases. Our entire ecology could be destroyed by a large meteor or comet. "Sooner or later," he says, "something big will come along that we cannot deal with. But by changing ourselves in the most fundamental way, we will be able to survive such catastrophes."

This is an arguable point of view, but I can't help wondering which came first, Moravec's personal interest in becoming more than human, or his proof that it's really a very good idea. He readily admits that he has a personal obsession with robots, and his passion for transcendence is far more extreme than that of most scientists. What makes him so different from everyone else?

"Well, I was breast-fed as a baby," he answers with typically disconcerting candour. "I was also the first born of my family, and I was well loved by my mother - which must have helped me feel confident about life." He pauses, realising that this explanation isn't adequate. "Maybe the idea of human transcendence makes me happy because my endorphin levels were misadjusted early on in life," he explains with a laugh and a shrug, unable to come up with a better answer.

Personally, I suspect he likes the idea of radical change because he's an intensely intelligent man who is easily bored by the everyday world. He finds it impossible to believe that it makes sense to continue, as human beings, in our exact same form. "Do we really want more of what we have now?" he asks, sounding incredulous. "More millennia of the same old human soap opera? Surely we have played out most of the interesting scenarios already in terms of human relationships in a trivial framework. What I'm talking about transcends all that. There'll be far more interesting stories. And what is life but a set of stories?"

Ultimately, Moravec comes back again to the power and grandeur of a destiny that exceeds all limits. "This universe is so big," he says. "The possibilities must be infinitely greater than anything we can imagine for ourselves. Pushing things in the direction of expanded possibilities seems to be by far the most productive use of my time. And that, here, is my purpose."

Charles Platt (cp@panix.com) writes science fiction books and science articles. His most recent work is The Silicon Man. He is a frequent contributor to Wired.