WANdisco & IBM Team for "IBM Big Replicate"
SiliconANGLE

Moving massive amounts of data can be overwhelming and cumbersome, and organizations often end up with multiple copies of the same datasets scattered across several platforms. To address this, WANdisco plc and IBM teamed up to create IBM Big Replicate, which allows data to be moved efficiently.

David Richards, founder and CEO of WANdisco plc, and Joel Horwitz, director of Corporate & Business Development, Analytics, at IBM, talked with John Furrier (@furrier), host of theCUBE, from the SiliconANGLE Media team, during Hadoop Summit 2016 about IBM Big Replicate.

What’s the big deal?

As technology grows more complex, users want it to become simpler to operate. The buying audience has changed along with the market, and “there’s less care around machinations,” according to Horwitz. That means legacy data has to be moved to new platforms: what was once kept on paper and ink must now go to the cloud.

IBM and WANdisco working together on Big Replicate will allow customers to move their data to the cloud and across multiple Hadoop distributions.
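
For a concrete sense of what moving a dataset between Hadoop clusters involves today, here is a minimal sketch of a one-off batch copy using the standard Hadoop FileSystem API. The cluster addresses and paths are hypothetical, and this is not how Big Replicate itself works; it simply illustrates the kind of manual copy job that continuous replication is meant to replace.

```java
// Minimal sketch: a one-off batch copy of a dataset between two HDFS clusters
// using the standard Hadoop FileSystem API. Cluster URIs and paths below are
// hypothetical; this does NOT depict Big Replicate's own replication mechanism.
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class OneOffClusterCopy {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Connect to the source (on-premises) and destination (cloud) clusters.
        FileSystem srcFs = FileSystem.get(URI.create("hdfs://onprem-namenode:8020"), conf);
        FileSystem dstFs = FileSystem.get(URI.create("hdfs://cloud-namenode:8020"), conf);

        // Copy one dataset directory from source to destination.
        // deleteSource=false keeps the original; overwrite=true replaces a stale copy.
        FileUtil.copy(srcFs, new Path("/data/sales/2016-06"),
                      dstFs, new Path("/data/sales/2016-06"),
                      false, true, conf);
    }
}
```

The drawback of this kind of job is that every run produces another full copy that can drift out of date, which is exactly the duplicate-data problem the article goes on to describe.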

The management

IBM also aims to deliver effective data management. Moving so many datasets across platforms can create duplicates as information is copied, while a hybrid deployment allows more effective analytics to be worked into the process. Big Replicate “allows for fewer copies of data across clusters,” said Horwitz.

The cloud creates an environment of elastic infrastructure that can be changed and adapted to varying applications. With the data consolidation process made easier, developers and users alike will gain a bit of their sanity back.

The peace of mind that IBM Big Replicate provides is a major step toward better data management, and working with the Hadoop platform has people seeing data “as an asset,” said Richards.

Original article posted on SiliconANGLE
