UPDATED 15:00 EDT / MAY 31 2017

BIG DATA

Imagine there’s no data — with virtualized DataOps

Can computing bottlenecks and storage costs be slashed in one stroke with data operations built on read/write-capable virtual data?

The database — and physical data in general — often chokes infrastructure agility with its manual procedures and costly space requirements, according to Kellyn Pot’Vin-Gorman (pictured), technical intelligence manager for the office of the chief technology officer at Delphix Corp.

“DataOps is the idea that you automate all of this — and if you virtualize that data, we found, with Delphix, that removed that last hurdle,” Pot’Vin-Gorman said during the Data Platforms event in Litchfield Park, Arizona, where she took the stage to speak on DataOps for big data.

“When you talk about a relational data or any kind of legacy data store, people are duplicating that through archaic processes,” she told Jeff Frick (@JeffFrick) and George Gilbert (@ggilbert41), co-hosts of theCUBE, SiliconANGLE Media’s mobile live streaming studio. (* Disclosure below.)

The result is siloed data that developers and others are constantly butting up against and skirting around, she explained. Delphix bursts this bottleneck by creating containers (a virtual method for running distributed applications) for virtualized data and agile deployment to multiple on-prem or cloud environments.

The virtual data is fully read and write capable and updates through snapshots that can be thought of as a “perpetual recovery state inside our Delphix engine,” she explained.
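Delphix's engine is proprietary, but the general idea behind read/write-capable virtual data — many writable views sharing one physical snapshot via copy-on-write — can be sketched in a few lines. This is an illustrative toy only; the class and method names below are hypothetical and do not represent Delphix's actual API or implementation.

```python
# Toy copy-on-write "virtual copy": reads fall through to a shared,
# immutable base snapshot; writes land in a private overlay. Many
# copies can therefore share a single physical snapshot.
class VirtualCopy:
    def __init__(self, base):
        self._base = base      # shared snapshot, never mutated
        self._overlay = {}     # this copy's private changes
        self._deleted = set()  # keys removed in this copy only

    def read(self, key):
        if key in self._deleted:
            raise KeyError(key)
        if key in self._overlay:
            return self._overlay[key]
        return self._base[key]

    def write(self, key, value):
        self._deleted.discard(key)
        self._overlay[key] = value

    def delete(self, key):
        self._overlay.pop(key, None)
        self._deleted.add(key)


# One physical snapshot, two independent read/write virtual copies,
# e.g. for separate dev and QA environments.
snapshot = {"customers": 100, "orders": 250}
dev = VirtualCopy(snapshot)
qa = VirtualCopy(snapshot)

dev.write("orders", 300)  # dev's change is invisible to qa and to the base
```

The space saving comes from the overlay: each environment stores only its own deltas, while the bulk of the data lives once in the shared snapshot.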

Big piece of big data cloud puzzle?

The implications for big data in the cloud, where storage costs are still higher than on-prem, should be clear, but in fact, Delphix is just now venturing into this territory, Pot’Vin-Gorman stated.

“We haven’t really talked to a lot of big data companies. We have been very relational over a period of time,” she said. Now customers are telling Delphix that their data stores have grown to bona fide big data proportions, and they need fitting solutions.

Many open-source big data projects are good candidates for DataOps due to their many moving pieces, Pot’Vin-Gorman stated. Containerizing them and deploying a single virtualized copy of the data that appears in multiple environments could save loads of effort, she said.

Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s independent editorial coverage of Data Platforms 2017. (* Disclosure: TheCUBE is a paid media partner for Data Platforms 2017. Neither Qubole Inc. nor other sponsors have editorial influence on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
