March 1996 - Volume 12:4
By Mitch Marcus and Atsushi Akera
The ENIAC demonstrated to the world that large-scale, high-speed, electronic computation was possible, triggering a wave of new computer designs and the birth of the computer industry. Despite the ENIAC's success, one fundamental aspect of its design is only now becoming part of the design of everyday computers, after remaining dormant for nearly 50 years.
At the heart of the ENIAC was a set of 20 independent accumulators, each an electronic adding machine that could take in a number and add it to an existing total every 200 microseconds. In principle, a programmer could arrange that all 20 of these adding machines do new additions in parallel, allowing the ENIAC to perform not 5,000 but 100,000 additions a second. In this way, the ENIAC was fundamentally a parallel machine.
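The arithmetic above can be sketched in a few lines of Python. This is a minimal illustrative model, not ENIAC terminology: the lockstep `step` function and the rate calculations are assumptions made for the sake of the example.

```python
# A toy model of the ENIAC's bank of accumulators. Each 200-microsecond
# cycle, every accumulator can add one incoming number to its total.
ADD_TIME_US = 200          # one addition per accumulator per 200 microseconds
NUM_ACCUMULATORS = 20

accumulators = [0] * NUM_ACCUMULATORS

def step(inputs):
    """One cycle: every accumulator adds its input in parallel."""
    for i, value in enumerate(inputs):
        accumulators[i] += value

# Serial use (one accumulator active per cycle): 5,000 additions a second.
serial_rate = 1_000_000 // ADD_TIME_US
# Fully parallel use (all 20 active): 100,000 additions a second.
parallel_rate = serial_rate * NUM_ACCUMULATORS

print(serial_rate, parallel_rate)   # 5000 100000
```

The factor-of-20 speedup comes entirely from keeping all the accumulators busy at once, which is exactly the opportunity the ENIAC's programmers chose not to exploit.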
Almost immediately the ENIAC's programmers decided, in the words of J. Presper Eckert [chief engineer on the ENIAC project], that parallel programming introduced "a number of inconveniences and difficulties," so that "in programming a machine, it is undesirable to try to do several operations in parallel." Eckert noted that because there was no mechanism to let a third operation begin only after two parallel sets of operations had both completed, the two paths had to take exactly the same length of time. Although SEAS researchers have recently shown that a simple trick would solve this problem, it now appears that the ENIAC programmers were actually uninterested in parallel programming. Why?
Betty Holberton, one of the two programmers of the demonstration program executed on February 14, 1946, recently noted that setting up a complex parallel algorithm was simply too time-consuming, given that it took nearly a day to move heavy digit trays and connect cables to set up even a simple problem on the ENIAC. Also, since the machine's operation was unaffected by tube failures in accumulators that weren't being used, the smaller the program, the longer the machine would run before a tube failed. Parallel computing disappeared for 25 years.
In the late 1960s parallel computing burst forth once more, now called supercomputing, driven by the very high computational needs of a range of important engineering, scientific, and military problems. To simplify both hardware and software, many supercomputers use a very limited kind of parallel processing, where the same operations are performed in parallel on many different data points.
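This restricted style, one operation applied uniformly across a whole array of data, can be sketched as follows. The function name and the scaling example are illustrative assumptions; no particular supercomputer instruction is being described.

```python
# A minimal sketch of data-parallel processing: the same operation is
# performed on many different data points "at once." Here the hardware
# parallelism is only simulated by a loop over the array.

def vector_scale(data, factor):
    """Apply one operation (multiply by factor) to every element."""
    return [x * factor for x in data]

velocities = [1.0, 2.5, 4.0, 8.0]
print(vector_scale(velocities, 2.0))   # [2.0, 5.0, 8.0, 16.0]
```

Because every element undergoes the identical operation, the hardware needs only one instruction stream to drive many arithmetic units, which is what makes this limited form of parallelism comparatively simple to build and to program.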
Most surprisingly, the internal design of the Intel Pentium bears a close resemblance to the ENIAC's bank of accumulators. Internally, the Pentium converts machine instructions into operations that are handed to any free arithmetic unit, each of which operates in parallel; some of these units are interconnected so that results from one can flow directly into others.
A closer successor to the ENIAC can be found inside every CD player. Converting the stream of numbers stored on a CD back into music involves many different steps. Each step is actually a computer program executed on special-purpose computer chips in the CD player called digital signal processors (DSPs). The key operation in many of these programs, performed again and again, involves multiplying together the results of two additions. To speed up the conversion of numbers into music, each DSP contains two accumulators whose outputs are fed into a single multiplier unit, so that two adds and a multiply are all performed in parallel. Programming these algorithms on the ENIAC would have been very natural.
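The DSP datapath described above can be sketched as a single function: two additions happen side by side, and their sums feed one multiplier. The function name and the software framing are assumptions for illustration; in the actual chip all three operations complete within one hardware cycle.

```python
# One "cycle" of the DSP datapath: two accumulators each perform an
# addition in parallel, and a single multiplier combines the two sums.

def mac_cycle(a, b, c, d):
    sum1 = a + b          # first accumulator
    sum2 = c + d          # second accumulator
    return sum1 * sum2    # shared multiplier unit

# e.g. one step of a filter computation: (3 + 5) * (2 + 4) = 48
print(mac_cycle(3, 5, 2, 4))   # 48
```

Written out serially this is three operations; the point of the DSP design is that the two adds and the multiply overlap, so the whole expression costs no more time than a single operation.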
Finally, although not usually recognized as such, research in what are now called data flow machines is attempting to recreate, in modern, general form, the original flexibility of the ENIAC. A data flow machine has many different processing units connected together exactly as the programmer wishes, but now reconfigurable under high-speed computer control. The ENIAC was exactly such a data flow machine, only externally programmed. Developing an effective data flow architecture will require the development of new methods to provide high-speed switching at very low cost, but the payoff will be latter-day ENIACs that run many times faster than current computers.
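The data flow idea can be sketched as a small simulator: each processing unit fires as soon as all of its inputs have arrived, and the wiring between units is just a reconfigurable graph. The graph format and scheduler here are illustrative assumptions, not a description of any real machine.

```python
# A toy data flow scheduler. Each node names a function and the nodes
# (or external inputs) that feed it; a node fires once all its operands
# have been produced.

def run_dataflow(nodes, inputs):
    """nodes: name -> (function, [input names]); inputs: external values."""
    values = dict(inputs)
    pending = dict(nodes)
    while pending:
        for name, (fn, srcs) in list(pending.items()):
            if all(s in values for s in srcs):          # all operands arrived
                values[name] = fn(*(values[s] for s in srcs))
                del pending[name]                        # unit has fired
    return values

# Wire up (a + b) * (c + d) as a three-node graph, echoing the DSP example.
graph = {
    "sum1": (lambda x, y: x + y, ["a", "b"]),
    "sum2": (lambda x, y: x + y, ["c", "d"]),
    "prod": (lambda x, y: x * y, ["sum1", "sum2"]),
}
result = run_dataflow(graph, {"a": 3, "b": 5, "c": 2, "d": 4})
print(result["prod"])   # 48
```

Note that no step-by-step program orders the operations: the two sums fire independently, and the multiply fires only when both are ready, which is precisely the synchronization mechanism Eckert found missing from the ENIAC.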
MITCH MARCUS is Chair of the Computer and Information Science Department of the School of Engineering and Applied Science; ATSUSHI AKERA is a graduate student in the Department of History and Sociology of Science.