Are Databases Becoming Just Query Engines for Big Object Stores?

Object storage is winning the battle for big data storage so convincingly that database makers are beginning to cede data storage to object storage vendors and concentrate instead on optimizing their SQL query performance, according to MinIO, which develops an S3-compatible object storage system.

Since AWS launched it in March 2006, Amazon S3 has set the standard for cloud-native object storage. Millions of developers have adopted the Simple Storage Service, which is accessed using simple REST-based APIs, to hook up nearly unlimited storage to countless Web and mobile applications.
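To make those "simple REST-based APIs" concrete, here is a minimal sketch of writing and reading an object through the S3 API with Python's boto3 client; the endpoint, bucket name, and credentials are placeholders, and the same calls work against S3-compatible stores such as MinIO by pointing endpoint_url at them.

```python
import boto3

# Hypothetical endpoint and credentials; endpoint_url lets the same S3 API
# target AWS S3 or an S3-compatible store such as MinIO.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Write an object (an HTTP PUT under the hood) ...
s3.put_object(
    Bucket="demo-bucket",
    Key="events/2024/01/data.json",
    Body=b'{"user": 42, "action": "click"}',
)

# ... and read it back (an HTTP GET).
obj = s3.get_object(Bucket="demo-bucket", Key="events/2024/01/data.json")
print(obj["Body"].read())
```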

More recently, enterprise architects have begun deploying analytical and transactional applications with more stringent latency demands atop S3 and S3-compatible object stores. Business-critical workloads have traditionally used relational databases (column-oriented databases for OLAP workloads and row-oriented ones for OLTP) running atop SAN-based block storage and NAS-based file storage to deliver the fast performance, measured in input/output operations per second (IOPS), that enterprises require.

But as the size of data has increased and object store’s IOPS performance capabilities have improved, the longtime advantage held by traditional relational databases for both OLAP and OLTP workloads has begun to erode at the hands of object stores, says Jonathan Symonds, MinIO’s chief marketing officer.

(Image: Amazon S3 is the standard protocol for accessing objects)

“They realize that there’s just a bunch of other companies, MinIO being one of them, that are doing just a far superior job than they can ever do [in storage]…around erasure coding, around throughput, around security,” Symonds says in a recent interview with Datanami.

“The database market is so competitive at this point that they all want to focus on query optimization,” he continues. “They all want to deliver high performance querying, and they all want to do it in the most parallel fashion. And so they’re basically saying, I’m going to focus on this because it’s core to my business, and I’m not going to focus on this.”

Genies and Bottles

For example, Snowflake’s decision in the summer of 2022 to introduce a new capability (currently in preview) that lets customers use Snowflake to query data sitting in their own object stores shows that the cloud data warehousing giant has grown comfortable with open object storage, Symonds says.

“For years, Snowflake effectively resold AWS S3,” Symonds says. “But they came to the conclusion that, on a go-forward basis, that wasn’t strategic for them. They needed to up their game on the product side, and not worry about the storage side.”

That move did two things for Snowflake, he says. For starters, it allowed Snowflake and its customers to get access to more data (such as data residing in MinIO) without forcing the data to be physically moved via ETL into Snowflake’s proprietary database format, which is a slow, cumbersome, and costly thing to do. It also allowed Snowflake customers to query much larger datasets, which benefits those customers’ businesses, Symonds says.


“It’s not as if this was some strategic choice. Customers were saying, ‘Hey, I want object storage to be supported,’” Symonds says. “And once that happens, the genie is a little bit out of the bottle. But at the same time, they had to be competitive on the query processing side. And if you have to choose where to put your engineering hours, you’re going to put them on query processing because that’s core to your business. You’re not going to put them into the storage component, which is not core to your business.”

Microsoft’s recent decision to utilize S3 object storage for SQL Server in the cloud is another example of a database giant moving away from storing the data in the database. It’s telling that Microsoft chose to support its competitor’s S3 API rather than its own Azure Blob Storage API (which has its roots in HDFS), says MinIO CEO and co-founder AB Periasamy.

“MS SQL Server can run on any cloud, on prem–anywhere. They have embraced S3 API and not Azure Blob Store API,” Periasamy says. “Microsoft’s big data play is actually MS SQL Server tied to object store.”

Embracing Object

The last decade of big data development is a story about how customers and vendors alike have struggled to store and process ever-growing data sets, Periasamy says.

For years, database makers handled data storage and all that it entails, such as providing scale-out capabilities and data resiliency and protection, in addition to higher-order functions such as optimizing SQL query performance. The database makers had to handle those lower-level storage requirements because the data storage primitives in the underlying SAN and NAS file systems were very limited in that regard, Periasamy says.

The open source community got the ball rolling with Hadoop. However, Hadoop and the Hadoop Distributed File System (HDFS) were limited in a couple of key areas, including the fact that they were mostly used for storing and processing unstructured data, whereas businesses mostly stored structured data. Businesses also resisted learning the new MapReduce style of parallel programming, Periasamy says, and they wanted a SQL interface to their data anyway.

“Customers in the end said ‘I want SQL on top of this data,’” Periasamy says. “And that’s when SQL players said ‘We have a better SQL engine. It’s not hard for us to support large data sets if we let the storage go.’”

Apache Hive, the first SQL engine to run atop HDFS, was bedeviled by slow ad-hoc query performance, and Hive creator Facebook replaced it with Presto (and its spin-off, Trino). Both Presto and Trino are query engines with no underlying storage engine, a model that now appears to be embraced by more established database makers, like Microsoft and Snowflake.

Eventually, the market spoke, and HDFS gave way to S3 and S3-compatible object storage as the de facto standard for big data storage and processing. Spark-backer Databricks also supports S3 and S3-compatible object stores with the Databricks File System (DBFS), an abstraction layer that maps Unix-like file system calls to cloud storage APIs.
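Databricks' actual DBFS implementation is considerably more involved, but the basic idea of an abstraction layer that translates file-system-style calls into object storage API calls can be sketched in a few lines of Python; the class and path scheme below are purely illustrative, not Databricks' code.

```python
import boto3

class ObjectStoreFS:
    """Toy illustration of a file-system-style facade over the S3 API.
    Not DBFS itself; just the mapping concept."""

    def __init__(self, bucket: str, endpoint_url: str | None = None):
        self.bucket = bucket
        self.s3 = boto3.client("s3", endpoint_url=endpoint_url)

    def read(self, path: str) -> bytes:
        # A file-style read becomes an S3 GET on the same key.
        resp = self.s3.get_object(Bucket=self.bucket, Key=path.lstrip("/"))
        return resp["Body"].read()

    def write(self, path: str, data: bytes) -> None:
        # A file-style write becomes an S3 PUT.
        self.s3.put_object(Bucket=self.bucket, Key=path.lstrip("/"), Body=data)

# Usage (assuming the bucket and credentials exist):
#   fs = ObjectStoreFS("demo-bucket")
#   fs.write("/tmp/hello.txt", b"hi")
#   print(fs.read("/tmp/hello.txt"))
```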

Even Teradata, long the gold standard for on-prem massively parallel processing (MPP) databases, in August officially embraced the “data lake” style of OLAP computing atop an S3-compatible object storage base for the first time (although it maintains that some analytics workloads will perform better running atop its optimized file system format).

Setting the (Open) Table

According to Periasamy, there’s one other element to the object store story that’s critical to making it all fit together for customers: The emergence of open table formats.

One of the benefits of storing massive amounts of data in object storage is the ability to access it using different query engines. This is the simple recognition that what works best for low-latency ad-hoc analytics is probably not what works best for training a machine learning model, for example.

However, when multiple engines access the same data sets, the potential for conflicts exists, including (but not limited to) getting the wrong answer. This, in a nutshell, is what gave rise to open table formats such as Apache Iceberg, Apache Hudi, and Databricks’ Delta Lake table format.

“This is actually the biggest change happening in the database market, that for all of them to cooperate, they have to agree on standards, and the data format that’s sitting on MinIO or any object store has to be in some open format,” Periasamy says. “That is the biggest innovation that’s going on, and we are fully embracing that.”

While the engineer in Periasamy (co-creator of the Gluster file system) is partial to Iceberg as the most “cloud native” of the three, MinIO itself supports all three open table formats. Databricks deserves credit for launching the open table format concept, which enables multiple users and applications to access the same data without corrupting it, but the idea has since been widely adopted.
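As a hedged illustration of what "any engine can read the same table" looks like in practice, the pyiceberg library can load an Iceberg table directly out of an S3-compatible store; the catalog configuration, endpoint, credentials, and table name below are assumptions made for the sketch.

```python
from pyiceberg.catalog import load_catalog

# Hypothetical REST catalog and MinIO/S3-compatible endpoint.
catalog = load_catalog(
    "demo",
    **{
        "uri": "http://rest-catalog.example.com",
        "s3.endpoint": "https://s3.example.com",
        "s3.access-key-id": "ACCESS_KEY",
        "s3.secret-access-key": "SECRET_KEY",
    },
)

# Any engine that speaks the Iceberg spec (Spark, Trino, this Python client)
# sees the same table metadata and reads consistent snapshots of the data.
table = catalog.load_table("analytics.page_views")
df = table.scan(row_filter="event_date >= '2024-01-01'").to_arrow()
print(df.num_rows)
```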

Open table formats are critical, Periasamy says. “Customers would make a copy of every data set. It was not like two to three copies. It was 15 copies, 20 copies. It was an enormous tax on the infrastructure,” he says. “To solve that problem, what if we all can work on the same data set, but whatever changes you’re making, it’s your copy. It’s like versioning on a large data set. It’s like a Git-like repo on the same source code [with] different branches of data.”
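That "Git-like" description maps onto the snapshot model used by formats such as Iceberg: every commit produces a new table snapshot that readers can pin to and time-travel against. A short sketch, continuing the hypothetical pyiceberg catalog and table from above:

```python
from pyiceberg.catalog import load_catalog

# Continuing the hypothetical catalog/table from the earlier sketch
# (catalog configuration omitted here; assumed to be set up already).
table = load_catalog("demo").load_table("analytics.page_views")

# Each commit to the table creates a snapshot; readers can pin to any of
# them, which gives version history over one shared data set.
for snap in table.snapshots():
    print(snap.snapshot_id, snap.timestamp_ms)

# Time-travel read against the oldest snapshot.
oldest = table.snapshots()[0]
df_then = table.scan(snapshot_id=oldest.snapshot_id).to_arrow()
```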

The traditional database market isn’t going to shrink any time soon. Databases are still proliferating to fill the niche needs of specific workloads, including graph data, time-series data, geo-spatial data, IoT data, unstructured data, JSON, etc. For sheer speed, object stores will likely never match the performance of an optimized in-memory database.

But at the upper reaches of the big data curve (say, from 1PB to 100PB and beyond, where making copies of data or moving it is an instant dealbreaker), data lakes and lakehouses built atop object stores have a substantial lead, and nothing appears poised to unseat them from owning the storage layer. Database makers would be wise to incorporate object stores into their plans, if they haven’t already done so.

Related Items:

Teradata Taps Cloudian for On-Prem Lakehouse

Why the Open Sourcing of Databricks Delta Lake Table Format Is a Big Deal

Solving Storage Just the Beginning for Minio CEO Periasamy
