Speedy data drives racing simulation design at Red Bull Racing
In the competitive world of Formula One racing, teams of engineers work around the clock to modify car designs in the never-ending quest for a few more seconds of speed. For Red Bull Racing Ltd., one of the world's leading Formula One teams, that quest means using the IBM Spectrum high-performance computing portfolio to run design simulations that could make the difference between finishing last and taking the checkered flag for a victory lap.
“Our workflows are incredibly complex,” said Wayne Glanfield (pictured, left), HPC manager at Red Bull Racing, who described up to 200 separate job steps for each simulation workflow. “By having an HPC facility we can actually do the design simulations in a virtual environment.”
Glanfield visited theCUBE, SiliconANGLE Media’s mobile livestreaming studio, and spoke with host Jeff Frick (@JeffFrick) at Supercomputing 2017 in Denver, Colorado. He was joined by Bernie Spang (pictured, right), vice president of software-defined infrastructure at IBM Corp., and they discussed Red Bull Racing’s use of IBM’s technology and the importance of managing the rising number of clusters in information technology environments. (* Disclosure below.)
Vast amounts of data for car design
Red Bull Racing relies on IBM’s Spectrum LSF, workload management software for high-performance design and simulation applications. Formula One rules limit Red Bull Racing to 25 teraflops of computing power for its simulations, so efficiency is at a premium.
“They push the performance of the environment and they push us,” said IBM’s Spang, in describing the partnership with Red Bull. “You need a system and an infrastructure that can chew through vast amounts of data, both in performance and compute.”
IBM’s Spectrum Computing platform is designed to help enterprises achieve faster results from data analytics and applications. The company is beginning to see customers who need software-defined management tools to handle the rising tide of Hadoop, Spark and machine learning clusters that are becoming key elements of the enterprise information technology environment.
“We’re seeing clients who don’t have this virtualization software beginning to have cluster creep,” Spang said. “You can’t afford to have silos of clusters. Spectrum Computing virtualizes that shared cluster environment so that you can run all of the different kinds of workloads and drive up the efficiency.”
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of the Supercomputing 2017 conference. (* Disclosure: TheCUBE is a paid media partner for the Supercomputing 2017 conference. Neither IBM Corp., the event sponsor, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Photo: SiliconANGLE