UPDATED 12:12 EDT / JULY 30 2015

NEWS

Obama signs executive order to build world’s first exascale supercomputer

The next big breakthrough in supercomputing may not come from IBM Corp., Cray Inc. or any of the other original pioneers of the field, but rather from a new interagency task force formed through an executive order this week with the goal of building the world’s first exascale cluster. That represents more than an order-of-magnitude leap over the current reigning champion.

The 33.86-petaflop maximum speed of the Tianhe-2 supercomputer at the National Supercomputer Center in Guangzhou represents the cumulative processing power of some 80,000 chips that took 1,400 Chinese engineers several years to put together. Increasing that thirtyfold may not seem like too difficult a proposition given the breakneck evolution of technology, but several major logistical hurdles stand in the way.
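For a sense of scale, the gap can be checked with quick arithmetic: one exaflop is 1,000 petaflops, so the target machine would need roughly 30 times Tianhe-2’s measured speed. A minimal sketch (the Tianhe-2 figure is from the article; the one-exaflop target is simply the definition of exascale):

```python
# Back-of-the-envelope: how far is exascale from Tianhe-2?
TIANHE2_PFLOPS = 33.86   # Tianhe-2 maximum measured speed, in petaflops
EXAFLOP_PFLOPS = 1000.0  # one exaflop expressed in petaflops

speedup_needed = EXAFLOP_PFLOPS / TIANHE2_PFLOPS
print(f"Required speedup over Tianhe-2: {speedup_needed:.1f}x")  # ~29.5x
```

That factor of roughly 30 is what the article means by "increasing that thirtyfold."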

For starters, processors aren’t accelerating as fast as they used to. Fifty years after Intel Corp. co-founder Gordon Moore made his famous prediction about the doubling of transistor density every two years, chip makers are starting to bump against the physical limits of silicon, which recently forced his company to push back its next jump down the size scale to 2017.
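Moore’s observation amounts to a simple compounding rule: density doubles every two years. A purely illustrative sketch of that projection (the function name and the normalized starting density of 1 are invented here for the example):

```python
def projected_density(base_density: float, years: float) -> float:
    """Transistor density after `years`, assuming the doubling every
    two years that Moore predicted (the trend now said to be slowing)."""
    return base_density * 2 ** (years / 2)

# Fifty years of uninterrupted doubling from a normalized start of 1
# would mean 25 doublings:
print(projected_density(1.0, 50))  # 2**25 = 33,554,432x
```

The point of the paragraph is that this curve is flattening: chip makers can no longer count on hitting each two-year doubling on schedule.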

Research on alternative computing technologies is making fast progress at companies like IBM, but a commercially viable replacement for silicon isn’t likely to emerge in time for the project’s 2025 deadline. As a result, the slowing gains in processing density will have to be offset with additional chips in the proposed supercomputer, a hugely expensive compromise that the effort will try to address.

That will be achieved not only by developing ways to make the silicon itself more efficient but also by streamlining the space, energy and management requirements that will account for the bulk of the project’s costs over the course of its lifetime. And then there’s the matter of figuring out how to put together the tens if not hundreds of thousands of processors that will be required for the cluster, which is not nearly as straightforward as simply stacking a bunch of servers in a room.

The project will be led by the U.S. Department of Energy, the Department of Defense and the National Science Foundation, the three heaviest users of supercomputing in the public sector, but the fruits of the initiative could benefit the government as a whole. The National Institutes of Health, the Department of Homeland Security and NOAA are only a few of the other agencies that also rely on supercomputers to carry out their work.

That is not to say the private sector will be excluded, however. After all, the processors for the cluster will have to be bought from somewhere, along with extra expertise to help meet the ten-year deadline. The newly formed National Strategic Computing Initiative has 90 days to submit the initial roadmap for the project.

Photo via Wikipedia
