In recent years, both computer users and politicians have soundly rejected centralised planning. In computing, the shared mainframe has been replaced by dozens of autonomous PCs. In politics, most centralised economies have been replaced by the distributed intelligence of the free market. But the trend - at least in computing - may soon reverse.
Look around any office and chances are you'll see idle PCs. That's clearly an inefficient use of resources. Imagine instead a network that puts those idle machines to work. If, for example, I needed to run a time-consuming calculation, the problem could be split into five or six independent pieces and farmed out to idle PCs. Those computers could churn away and collectively arrive at the answer in a fraction of the time it would take a single machine. In short, the network becomes a sort of hive mind, with all the computing power in an office working together to speed up the performance of each individual computer.
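To make the idea concrete, here's a minimal sketch in Python - with local worker processes standing in for idle office PCs, since the splitting and farming-out is the point, not the plumbing:

    # A sketch of "farming out" a divisible calculation. Each worker
    # process plays the role of an idle PC that accepts one piece.
    from concurrent.futures import ProcessPoolExecutor

    def piece(bounds):
        """One independent piece of the calculation: summing a range."""
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    def farm_out(n, workers=6):
        # Split the problem into `workers` roughly equal, independent pieces.
        step = n // workers
        pieces = [(k * step, n if k == workers - 1 else (k + 1) * step)
                  for k in range(workers)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return sum(pool.map(piece, pieces))

    if __name__ == "__main__":
        print(farm_out(10_000_000))

In a real hive, the pool would be the office network, and each piece would be dispatched to whichever machine happened to be idle.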
This concept has been discussed for years, but until now has never been practical. Even five years ago, the networks used to link PCs were glacially slow, and the PCs themselves weren't much faster. It made more sense to buy a supercomputer than to try to tie together thousands of PCs. But new networks have since emerged, and, thanks to economies of scale, PCs have evolved far faster than supercomputers. Now, just a couple of dozen PCs working together can equal the performance of an IBM mainframe. That makes hive computing seem a lot more attractive, and computer scientists at the University of California at Berkeley (now.cs.berkeley.edu/) and Princeton University (www.cs.princeton.edu/shrimp/) are building the software and hardware necessary to make it a reality. But significant social and technical hurdles remain.
The social barriers facing hive computing are the same ones that have long bedevilled socialism. Hive computing, after all, asks people to give up exclusive ownership of their property for the greater good. A user might come back from a coffee break to find his or her computer running someone else's program. That sort of infidelity can drive people nuts, and early attempts at hive computing were often sabotaged by users who periodically tapped their keyboards so their computers would always appear to be hard at work.
To address this problem, researchers have come up with a scheme known as migration. As soon as a user returns to his or her computer (signalled by a touch of the keyboard or mouse), any remote programs are shifted to a different idle machine. Unfortunately, even migration causes a temporary slowdown that annoys most computer owners. So researchers at UC Berkeley have embedded a social contract in the operating software: it promises that no user will ever be affected more than a few times a day. That should help make hive computing more socially acceptable.
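Here is a minimal sketch of that policy, assuming hypothetical helpers (user_is_active, migrate) that a real system would supply; what matters is the rule: evict a guest job the moment the host's owner returns, and never disturb any one owner more than a few times a day.

    import time

    DAILY_LIMIT = 3          # the "few times a day" promise
    disturbances = {}        # host -> times its owner has been disturbed today

    def user_is_active(host):
        """Hypothetical: has this host seen a keystroke or mouse event?"""
        raise NotImplementedError

    def migrate(job, dst):
        """Hypothetical: checkpoint the job and resume it on dst."""
        raise NotImplementedError

    def pick_idle_host(hosts, exclude):
        # Skip machines whose owners are already at the daily limit -
        # the contract is enforced before a guest job ever lands there.
        for h in hosts:
            if h != exclude and disturbances.get(h, 0) < DAILY_LIMIT:
                return h
        return None

    def supervise(jobs, hosts):
        while True:
            for job in jobs:
                if user_is_active(job.host):
                    # The returning user pays a brief migration delay; count it.
                    disturbances[job.host] = disturbances.get(job.host, 0) + 1
                    dst = pick_idle_host(hosts, exclude=job.host)
                    if dst is not None:
                        migrate(job, dst)
                        job.host = dst
            time.sleep(0.1)   # poll for keyboard and mouse activity

The big question is whether hive computing really improves performance enough to be worthwhile.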
Hive computing can speed up programs in two ways: by providing access to more memory and by allowing many processors to work together. More memory offers the simplest route to improved performance. Currently, few large programs can squeeze entirely into the limited RAM of a workstation. Instead, pieces of the program are kept on disk and read in as needed. This is a major bottleneck: it takes almost 10,000 times longer to read from disk than from memory. But with hive computing, the RAM of idle workstations can be used in place of disk. When a program needs data that is stashed in some other computer's memory, it simply sends a request across the network and receives the data in return.
Using Network RAM (NRAM) instead of disk can dramatically speed up some applications, and it's easy to do: most programs don't even have to be modified. To the program, it just looks like a lot of really slow memory. NRAM, however, doesn't really speed up programs that spend most of their time number crunching.
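It's easy to picture the mechanics. The toy sketch below shows the request-and-receive exchange: one machine donates its RAM as a page store, another fetches 4-Kbyte pages over a socket instead of reading them from disk. The wire format and page size here are illustrative assumptions - real NRAM prototypes hook into the operating system's paging path, not an application-level socket.

    import socket
    import struct

    PAGE_SIZE = 4096
    HEADER = struct.Struct("!cQ")   # 1-byte opcode + 64-bit page number

    def recv_exact(conn, n):
        """Read exactly n bytes from the socket."""
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("page server went away")
            buf += chunk
        return buf

    def serve_pages(port=9000):
        """Runs on the idle machine: hold pages in RAM, answer requests."""
        pages = {}                        # page number -> bytes, all in RAM
        srv = socket.socket()
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        while True:
            op, page_no = HEADER.unpack(recv_exact(conn, HEADER.size))
            if op == b"W":                # store a page
                pages[page_no] = recv_exact(conn, PAGE_SIZE)
            else:                         # b"R": fetch a page
                conn.sendall(pages.get(page_no, bytes(PAGE_SIZE)))

    def store_page(conn, page_no, data):
        """Runs on the busy machine: push a page out to network RAM."""
        conn.sendall(HEADER.pack(b"W", page_no) + data)

    def fetch_page(conn, page_no):
        """Runs on the busy machine: one round trip replaces a disk read."""
        conn.sendall(HEADER.pack(b"R", page_no))
        return recv_exact(conn, PAGE_SIZE)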
That's where the technique of parallel processing - breaking a problem into pieces that can be worked on simultaneously - comes into play. Writing a program so it takes advantage of multiple processors isn't easy, but the payoff can be dramatic. Theoretically, a problem that can be split into 10 equal pieces can be solved by 10 computers in one-tenth the time it would take a single computer. Theoretically. On current hive computers, it's more like one-seventh - or slower.
That's because parallel computing is a lot like working on a committee: most of the time is wasted in coordination. With parallel computing, the coordination consists of lots of messages sent over the network. And though networks have gotten really fast, even state-of-the-art asynchronous-transfer-mode (ATM) networks are too slow for efficient hive computing. It can take up to 50 microseconds for the first byte of a message to travel over an ATM network. In that time, a 150-MHz PC could have executed more than 7,000 instructions. ATM hardware will probably improve, but the protocol simply wasn't designed with latency in mind.
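The arithmetic is worth spelling out. In the sketch below, the clock rate and message latency come from the figures above; the number of coordination messages is an illustrative assumption, chosen to land near the observed one-seventh:

    # Back-of-the-envelope arithmetic behind the "one-seventh" figure.
    CLOCK_HZ = 150e6      # a 150-MHz PC
    LATENCY_S = 50e-6     # first byte of a message over ATM

    # Cycles wasted waiting for a single message:
    print(int(CLOCK_HZ * LATENCY_S))        # 7500 - the "7,000 instructions"

    def speedup(work_s, machines, messages):
        """Ideal parallel time plus coordination time, versus one machine."""
        parallel_s = work_s / machines + messages * LATENCY_S
        return work_s / parallel_s

    # One second of work split ten ways, with each machine spending
    # several hundred messages on coordination (an assumed figure):
    print(round(speedup(1.0, 10, 850), 1))  # 7.0 - one-seventh, not one-tenth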
Because of this bottleneck, hive computing will probably never completely replace supercomputers. But it will take a big bite out of the low-end supercomputer market. After all, any workplace that has lots of powerful PCs hooked together already has a hive computer for free. Well, almost free. Users have to be willing to sacrifice total machine ownership. Then Marx's old adage - "From each according to his ability, to each according to his needs" - may live again.
Steve G. Steinberg (steve@wired.com) is a section editor at Wired US.