The Supercomputer That Performs 1,000,000,000,000,000,000 Operations Per Second

A supercomputer may be defined as a computer that performs at or near the highest operational rate currently possible for computers. Many complicated, dangerous, or expensive experiments can be conducted virtually on such machines. Nuclear tests, for example, such as measuring the parameters of a blast, can be simulated with the help of supercomputers instead of being carried out physically.

A U.S. government laboratory in Illinois is expected to receive this new generation of computer by 2021. The mammoth machine, named Aurora, will reside at Argonne National Laboratory, where it will simulate complex systems, run artificial-intelligence workloads, and support materials-science research.

For energy research, Aurora will let scientists test the design of energy-producing equipment, such as a windmill blade, virtually, iterating on a design until it is final. “You cannot put the world in a bottle in a laboratory, and see what happens if we do this, that, or the other thing with our energy policy,” says Steve Scott, the chief technical officer at Cray, Inc., one of the companies building Aurora.

When Aurora arrives, it is expected to be the most powerful domestic machine ever built. “It’s targeted to be the fastest in the United States when it’s built,” says Alan Gara, a fellow at Intel, which is also working on the new machine. The U.S. is not the only country investing in such machines; China, Switzerland, and Japan are as well. China is currently home to the third-fastest machine, and before the current leaders, the two fastest supercomputers were both Chinese. Experts caution that even if Aurora debuts at number one, it is safest to assume it will not hold that spot for long. “There’s a little bit of a race, and for good reason: these have become tools for nations to compete in some ways,” Gara says.

Aurora is designed to carry out 1,000,000,000,000,000,000 operations per second, or one quintillion operations per second. Supercomputer performance is measured in FLOPS (floating-point operations per second), with today’s best machines measured in petaflops. Summit, a sprawling machine, is said to reach a peak of 200 petaflops; Aurora is designed to be five times as powerful.
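The scales involved are easier to grasp as quick arithmetic. This sketch checks the article's figures (one quintillion = 10^18 operations per second, Summit's 200-petaflop peak):

```python
# FLOPS scales: 1 petaflop = 10**15 ops/s, 1 exaflop = 10**18 ops/s
PETAFLOP = 10**15
EXAFLOP = 10**18

summit_peak = 200 * PETAFLOP   # Summit's reported peak performance
aurora_target = 1 * EXAFLOP    # Aurora's one-quintillion-ops/s target

# Aurora's target is exactly five times Summit's peak
print(aurora_target // summit_peak)  # → 5
```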

Aurora’s quintillion-operation capability is equivalent to a billion laptops networked together, each capable of a billion operations per second. Two things distinguish a supercomputer from a billion interconnected laptops: the efficiency of the connections between its nodes, and the fact that a supercomputer’s hardware is liquid-cooled.
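The laptop analogy is just multiplication: a billion machines, each doing a billion operations per second, lands exactly on a quintillion.

```python
laptops = 10**9          # one billion laptops
ops_per_laptop = 10**9   # each doing a billion operations per second

total_ops = laptops * ops_per_laptop
print(total_ops)  # → 1000000000000000000, i.e. one quintillion ops/s
```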

If you picture a supercomputer as one huge machine trailing wires across a large empty room, the reality is tidier: the hardware lives in cabinets. Aurora is expected to require 200 cabinets, each 4 ft wide, 5 ft deep, and 7 ft tall, with space in between. The total area would be around 6,400 square feet, roughly that of a basketball court. Liquid cooling will keep each cabinet from overheating.
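The floor-space figure can be sanity-checked from the cabinet dimensions given above. A rough sketch (the exact aisle spacing is not stated in the article, so the difference is attributed to clearances):

```python
cabinets = 200
width_ft, depth_ft = 4, 5   # each cabinet's footprint, per the article

cabinet_area = cabinets * width_ft * depth_ft
print(cabinet_area)  # → 4000 sq ft for the cabinets alone

# The article's ~6,400 sq ft total implies the aisles and
# service clearances add roughly 2,400 sq ft on top of that.
total_area = 6400
print(total_area - cabinet_area)  # → 2400
```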

Copper and fiber-optic cabling, along with switches, will network the computing nodes within and between cabinets. Any two nodes will be at most three hops apart, according to Scott. Each cabinet will contain multiple switches with 64 ports apiece, and data will move between switches at 200 gigabits per second (for perspective, a modern 40-gigabyte game that can take hours to download over a home broadband connection would cross such a link in under two seconds). The researchers’ goal is not a fixed milestone in teraflops, petaflops, or exaflops, but to keep surpassing the most recent capability.
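The switch-to-switch bandwidth comparison works out as follows (note the unit conversion: file sizes are in gigabytes, link speeds in gigabits, and one byte is eight bits):

```python
link_gbps = 200              # switch-to-switch bandwidth, gigabits per second
game_size_gb = 40            # a modern 40-gigabyte game download
game_size_gigabits = game_size_gb * 8  # bytes -> bits

transfer_seconds = game_size_gigabits / link_gbps
print(transfer_seconds)  # → 1.6 seconds at full link speed
```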

Abhirup is a science and tech enthusiast. He is passionate about blogging on different science and technology topics. In his spare time, he plays computer games and the piano.