At the Strata + Hadoop 2015 conference this week, Oracle announced a significant expansion of its big data strategy, including access to the Oracle NoSQL Database via a RESTful API.
In addition, Oracle added support for Hadoop, Apache Hive, Apache HBase and Apache Flume to its Oracle GoldenGate data integration software and released a Big Data Discovery toolset that uses metadata and machine learning to make it simpler to identify patterns and trends within a big data environment.
Neil Mendelson, Oracle's group vice president of big data and analytics, says that while there is no shortage of big data technologies, Oracle is differentiating itself by building a broad portfolio of offerings that makes big data more accessible for the average enterprise. As part of that effort, Mendelson says Oracle is becoming a single source of integrated big data products and services for developers, data scientists, integrators and end users.
A big part of that strategy, says Mendelson, is providing REST API support across both its relational and NoSQL database offerings. As the number of databases that get deployed across the enterprise increases, having common methods of accessing those databases using SQL and REST APIs provides a higher level of abstraction that reduces complexity for developers.
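The abstraction Mendelson describes can be illustrated with a minimal sketch. This is not Oracle's actual NoSQL REST API; the endpoint layout, table names and keys below are hypothetical, and stand in for any store, relational or NoSQL, exposed behind one uniform REST-style interface.

```python
import json
from urllib.parse import quote

class KVRestClient:
    """Illustrative REST-style client for a key-value table.

    The URL scheme here is an assumption for the sake of example,
    not Oracle's documented endpoint format.
    """

    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")

    def url_for(self, table, key):
        # One uniform URL pattern per row: this is the common access
        # method that lets developers treat different stores alike.
        return f"{self.base_url}/tables/{quote(table)}/rows/{quote(key)}"

    def put_body(self, value):
        # Values travel as JSON regardless of the backing database.
        return json.dumps(value)

client = KVRestClient("https://db.example.com/api")
print(client.url_for("users", "u-42"))
# An HTTP library would then PUT client.put_body({...}) to that URL.
```

The point of the sketch is that the application code never changes when the backing store does; only the service behind the URL differs.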
In addition to REST API support, Oracle NoSQL Database 3.2.5 now includes a Thrift-based C API and support for a Data Definition Language (DDL). Oracle also unveiled Oracle Big Data SQL 1.1, which improves the performance of SQL queries against Hadoop by up to 40%.
Oracle, of course, is playing catch-up to other providers of NoSQL databases. But the company is betting that as organizations look to leverage their massive investments in existing relational databases, they will want to deploy a NoSQL database that shares the same APIs and integration frameworks they are already using on top of the Oracle relational database.
Collectively, Mendelson says, Oracle is trying to turn big data into more of a team sport. For example, he says organizations don't always appreciate how critical data discovery tools are to the success of a big data project. At the moment, Mendelson says, most organizations spend 80% of their time on big data infrastructure and only 20% exploring the data itself, when the opposite should be true.
For all the investments in big data infrastructure and interfaces, the only way to derive actual business value from big data applications is to make sure that end users have the tools they need to explore that data, says Mendelson.