Google Cloud just built a data lakehouse on BigQuery
BigLake, a new data lake storage engine that resembles data lakehouses built by newer data companies, will be at the center of Google Cloud’s data platform strategy.
Google Cloud plans to launch a new data lake storage engine based on its popular BigQuery data warehouse to help remove barriers preventing customers from mining the full value of their ever-increasing data.
BigLake, now available in preview, allows enterprises to unify their data warehouses and data lakes to analyze data without worrying about the underlying storage format or systems, according to Sudhir Hasbe, Google Cloud’s senior director of product management for data analytics.
“The biggest advantage is then you don’t have to duplicate your data across two different environments and create data silos,” Hasbe said in a press briefing prior to Wednesday’s Google Data Cloud Summit, where BigLake is being announced.
With BigLake, Google Cloud is extending the capabilities of its 11-year-old BigQuery to data lakes on Google Cloud Storage to enable a flexible, open lakehouse architecture, according to the cloud provider. A data lakehouse is an open data-management architecture that combines the management and optimization functions of a data warehouse, including business intelligence, machine learning and governance, with the typically more cost-effective storage of a data lake.
BigQuery is a Google Cloud-managed, serverless, multicloud data warehouse that lets customers run analytics over vast amounts of data in near real time. It processes more than 110 terabytes of customers’ data every second on average, according to Google Cloud.
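To make the serverless model concrete, here is a minimal sketch of running an ad hoc query with the google-cloud-bigquery Python client. The project ID is hypothetical and the snippet is not part of the announcement; the table queried is one of BigQuery's well-known public datasets.

```python
# Minimal sketch: querying BigQuery with the official Python client.
# The project ID is hypothetical. Requires `pip install google-cloud-bigquery`
# and application-default credentials.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

# BigQuery is serverless: the client submits SQL and Google Cloud
# allocates the compute; there is no cluster to size or manage.
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 10
"""

for row in client.query(query).result():  # blocks until the job finishes
    print(f"{row.name}: {row.total}")
```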
“We have tens of thousands of customers on it, and we invested a lot in all the governance, security and all the core capabilities, so we’re taking that innovation from BigQuery and now extending it onto all the data that sits in different formats as well as in lake environments — whether it’s on Google Cloud with Google Cloud Storage, whether it’s on AWS or whether it’s on [Microsoft] Azure,” Hasbe said.
BigLake will be at the center of Google Cloud’s data platform strategy. Image: Google Cloud
BigLake will be at the center of Google Cloud’s data platform strategy, and the cloud provider will ensure that all its tools and capabilities integrate with it, according to Hasbe.
“We are going to seamlessly integrate our data management and governance capability with Dataplex, so any data that goes into BigLake will be managed [and] governed in a consistent fashion,” he said. “All of our machine-learning and AI capabilities … will also work on BigLake, as well as all our analytics engines, whether it’s BigQuery, whether it’s Spark, whether it’s Dataflow.”
Enterprise data sets are growing from terabytes to petabytes, and the types of data are multiplying, from structured and semi-structured records to unstructured data and IoT data collected from connected devices such as sensors and wearables. That data typically is stored across different systems with different capabilities: data warehouses for structured and semi-structured data, data lakes for other types. The result is so-called data silos that can limit access and increase costs and risks, particularly when the data must be moved.
BigLake will support open-source file formats and standards, including Apache Parquet and ORC, and newer open table formats such as Apache Iceberg, as well as open-source processing engines such as Apache Spark.
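As a hedged illustration of what that unification could look like in practice, the sketch below defines a BigLake-style external table over Parquet files in Cloud Storage and queries it through BigQuery. The dataset, connection and bucket names are invented, and the DDL is modeled on BigQuery's external-table syntax rather than anything detailed in the announcement.

```python
# Hypothetical sketch: a BigLake-style external table over Parquet files
# in Cloud Storage, queried through BigQuery. Dataset, connection and
# bucket names are invented for illustration.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

# DDL modeled on BigQuery's external-table syntax; the connection
# resource is what lets BigQuery apply its access controls and
# governance to data that stays in the lake.
ddl = """
    CREATE EXTERNAL TABLE `my-project.lake.events`
    WITH CONNECTION `my-project.us.lake-connection`
    OPTIONS (
      format = 'PARQUET',
      uris = ['gs://my-bucket/events/*.parquet']
    )
"""
client.query(ddl).result()

# Once defined, the lake data is queryable like any warehouse table,
# without copying it out of Cloud Storage.
rows = client.query(
    "SELECT COUNT(*) AS n FROM `my-project.lake.events`"
).result()
print(next(iter(rows)).n)
```

The point of the sketch is the duplication Hasbe describes going away: the Parquet files remain the single copy in the lake, while warehouse-side engines query them in place.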
“When you think about limitless data, it is time that we end the artificial separation between managed warehouses and data lakes,” said Gerrit Kazmaier, Google Cloud’s vice president and general manager for database, data analytics and Looker. “Google is doing this in a unique way.”
By Donna Goodison