Are traditional data warehouses falling short in today's data landscape? Do you find it challenging to keep up with the increasing types of data from multiple sources, while facing high reporting and analytics costs? If so, it's time to consider a modern solution: the data lakehouse.

In this blog, we explore the benefits and challenges of moving to a data lakehouse and provide guidance on how to transition from a traditional data warehouse to this modern and cost-effective approach.

As organizations continue to grapple with increasing amounts of data from APIs, flat files, IoT, and relational databases, they need solutions that can store and process this data efficiently and cost-effectively. A data lakehouse offers an excellent solution that combines the advantages of a data warehouse with those of a data lake.

Moving from a traditional data warehouse architecture to a modern data lakehouse architecture can provide you with the flexibility and scalability needed in today’s rapidly evolving data landscape. This shift provides you with the agility necessary to grow and adapt to both internal and external market forces.

In this blog, we cover:

  • Why You Should Consider Moving from a Data Warehouse to a Data Lakehouse
  • Benefits of Moving from a Data Warehouse to a Data Lakehouse
  • Key Considerations When Moving from a Data Warehouse to a Data Lakehouse
  • Challenges of Moving from a Data Warehouse to a Data Lakehouse

Why You Should Consider Moving from a Data Warehouse to a Data Lakehouse

If you’re working with a large number of source systems of different types and diverse file formats, a data lakehouse can provide a more efficient and cost-effective solution for your organization. With the data lakehouse approach, incoming data is stored in its raw form, preserving the native schema, data types, and file types, so data engineers can focus solely on the transformation process that turns data into valuable information.

If you’re looking to enhance your data analytics and machine learning capabilities, a data lakehouse offers a more suitable environment for data science processes: the open-source nature of the platform provides greater freedom to use popular languages such as Python, R, and Scala in addition to SQL. Unlike traditional data warehouses—which often require duplicating and moving data—a data lakehouse streamlines analytics and machine learning by reducing the need to copy or move data.
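To make this concrete, here is a minimal PySpark sketch of that pattern: one curated table feeding both SQL analytics and Python-based feature preparation without exporting or duplicating the data. The table path and column names are hypothetical, and it assumes a Spark environment with Delta Lake configured.

```python
from pyspark.sql import SparkSession

# Assumes a Spark environment with Delta Lake support; paths and columns are illustrative.
spark = SparkSession.builder.appName("lakehouse-analytics").getOrCreate()

# Read the curated table once -- no export into a separate warehouse or BI extract.
orders = spark.read.format("delta").load("/lakehouse/curated/orders")

# Analysts can work in SQL against the same data...
orders.createOrReplaceTempView("orders")
monthly_revenue = spark.sql("""
    SELECT date_trunc('month', order_date) AS month,
           SUM(amount)                     AS revenue
    FROM orders
    GROUP BY 1
    ORDER BY 1
""")

# ...while data scientists prepare ML features in Python from the same DataFrame.
features = orders.select("customer_id", "amount", "order_date").dropna()
```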

Benefits of Moving from a Data Warehouse to a Data Lakehouse

Now that we’ve discussed why you should consider moving to a data lakehouse, let’s look at the numerous benefits this approach offers your organization:

  • Performance with Cost Management in Mind: A data lakehouse offers the flexibility to balance cost and performance for different types of workloads. Decoupling storage and compute allows you to pay only for the computing resources you use, reducing costs significantly while still providing the horsepower necessary to run intensive tasks.
  • Improved Data Governance: Data lakehouses enhance data governance by providing robust data lineage and metric management capabilities. You can easily manage your data end-to-end while ensuring compliance with industry regulations thanks to the reduction of data duplication and siloed data.
  • Data Sharing: Many data lakehouse platforms offer built-in data sharing functionality, making it simpler to share data with customers and reducing barriers to monetizing data products where appropriate.
  • No Vendor Lock-In: You can use different tools to process data in the data lakehouse, reducing the risks of vendor lock-in and allowing you to evolve your data strategy as business and market needs change.
  • Support for Various Workloads: Data lakehouses support the diverse needs of various data workloads, including data science, data engineering, and data analysis. This makes it easier to develop complex data models, machine learning models, data pipelines, and datasets for dashboards and reports, all using the same platform.
  • Flexibility for Streaming and Batch Processing: Data lakehouses provide the flexibility to support both streaming and batch processing, depending on the needs of the organization. This allows you to handle large volumes of data and deliver real-time insights, as the sketch below illustrates.
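As a rough illustration of that flexibility, the sketch below reads the same (hypothetical) table once as a scheduled batch job and once as an incremental stream. It assumes Spark with Delta Lake; the paths, checkpoint location, and column names are placeholders, not a prescribed layout.

```python
from pyspark.sql import SparkSession

# Assumes Spark with Delta Lake; all paths and column names are illustrative.
spark = SparkSession.builder.appName("batch-and-streaming").getOrCreate()

# Batch: a scheduled aggregation over the full history of the table.
daily_counts = (
    spark.read.format("delta").load("/lakehouse/raw/events")
    .groupBy("event_date")
    .count()
)

# Streaming: the same table read incrementally for near-real-time insights.
live_query = (
    spark.readStream.format("delta").load("/lakehouse/raw/events")
    .writeStream
    .format("delta")
    .option("checkpointLocation", "/lakehouse/_checkpoints/events_live")
    .outputMode("append")
    .start("/lakehouse/curated/events_live")
)
```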

Data lakehouse architectures enable all workloads from BI, data science, and ML without the replication of data and regardless of the types of data being used. Photo Credit: Databricks

Key Considerations When Moving from a Data Warehouse to a Data Lakehouse

When moving from a data warehouse to a data lakehouse, it is important to consider several key factors. In this section, we outline these considerations and explain why they are important, what specifically should be considered, and how to approach each consideration.

  • Decouple storage and compute: Decoupling storage and compute in a data lakehouse offers cost and performance advantages. By allocating compute resources based on the needs of specific workflows or data teams, development, testing, and deployment become more flexible without drawing on the resources allocated to production workloads. It also allows data in the lakehouse to grow unhindered, since storage increases do not force compute cost adjustments into next year’s budget. To fully realize the benefits of this decoupling, first consider how much compute each workload needs and monitor the resulting performance and cost. Start by identifying compute-heavy workloads, testing them with small clusters or compute resources, and scaling up from there. Also assess how much history is necessary for reporting and analytics; if only recent history is needed, the frequency of file transfers to cold storage can be adjusted accordingly.
  • Expect structured, semi-structured, and unstructured data: Even if your data is entirely relational today, anticipate more diversity of data types. A data lakehouse can work with semi-structured and unstructured data, such as JSON—natively or through open-source tools—and allows developers to use the tool or language of their choice to process the data. This adaptability makes it easier to incorporate new data sources and expand into new markets or entities.
  • Support polyglot (several different languages): The data lakehouse environment allows the entire data team—data engineers, data analysts, and data scientists—to work on one platform using a variety of tools and languages, including Python and SQL. This centralization makes it easier to integrate data and enrich data processing with machine learning and data science work. This flexibility is necessary for an organization to evolve, but it’s important to consider where your current and future data engineers’ strengths and weaknesses lie when deciding which tools and languages your data lakehouse will be built upon. To take full advantage of the polyglot capabilities, establish best practices for when to use Python or SQL. For instance, use Python for reads and writes from the data lake, and use SQL with Spark wrappers for the transformation logic between those two steps (see the sketch after this list).
  • Work with files vs. a database: In a data lakehouse, data is curated within the data lake and stored in folders representing raw, aggregated, and optimized data products, using a multi-hop architecture approach. Understanding extraction and integration from the source to the “raw” layer of the data lake, and selecting the right tool and a baseline directory strategy, can protect the data lake from becoming a data swamp. If you’re migrating to a data lakehouse, identify one or two workloads to start with and build up a curated “raw” layer to best enable the multi-hop architecture that comes after.
  • Choose your open-source file format: One important aspect of optimizing a data lakehouse is file partitioning, which can improve performance and scalability. Table formats offered by data lakehouse platforms—such as Delta Lake and Iceberg—provide optimization options, such as file compression and removal of unused files, that keep storage costs in check and performance up. Remember: you don’t know what you don’t know … at least not yet. Start by choosing a workload to migrate to the data lakehouse. Consider retention policies for time travel features, the size of data for compression and partitioning, and the end goal—be it BI and/or machine learning—as Delta and Iceberg are each better positioned to enable one or the other. Not all data lakehouse tools are the same, so it’s important to weigh the optimization offerings and the potential they bring.
  • Stop copying data everywhere: The multi-hop architecture of a data lakehouse reduces the need to fully copy data from layer to layer in the ETL process. Instead, the metadata management component of the data lakehouse—using ACID methodologies—can run ETL workflows and copy only the data necessary to produce the transformed output, which reduces the need for data deduplication. With less duplication of data, who has access, and who will, becomes more important. Since data is no longer being copied into other systems, strong data governance and access controls are needed. If your organization handles sensitive data and must comply with data protection laws, evaluating security functionality is vital. You will need to identify which business units, groups, and individuals need access to what. The data lakehouse sits on top of the data lake, meaning the security and access controls in your data lake are not automatically transferred to your lakehouse platform. They may mirror one another; however, you can now grant metadata privileges, for instance, which may make sense for ETL teams but not for ML or business teams.
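Pulling a few of these considerations together, here is a minimal multi-hop sketch: Python handles the reads and writes against the lake, while the transformation between the raw and aggregated layers is expressed in SQL through Spark. It assumes Delta Lake, and every path, table, and column name is a placeholder to adapt to your own directory strategy.

```python
from pyspark.sql import SparkSession

# Assumes Spark with Delta Lake; folder layout, table, and column names are hypothetical.
spark = SparkSession.builder.appName("multi-hop-etl").getOrCreate()

# Python for reads and writes from the lake: land source extracts in the "raw" layer as-is.
landing = spark.read.json("/landing/orders/2024-06-01/")   # schema-on-read on the incoming files
landing.write.format("delta").mode("append").save("/lakehouse/raw/orders")

# SQL (via Spark) for the transformation logic between the raw and aggregated layers.
spark.read.format("delta").load("/lakehouse/raw/orders").createOrReplaceTempView("raw_orders")
orders_daily = spark.sql("""
    SELECT customer_id,
           CAST(order_date AS DATE) AS order_date,
           SUM(amount)              AS order_total
    FROM raw_orders
    WHERE amount IS NOT NULL
    GROUP BY customer_id, CAST(order_date AS DATE)
""")

# Python again to write only the transformed output -- no full copy of the raw layer.
orders_daily.write.format("delta").mode("overwrite").save("/lakehouse/aggregated/orders_daily")
```

An optimized or presentation layer can be built from the aggregated layer in the same way, with the table format's own maintenance utilities handling file compaction and cleanup of unused files.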

The data lakehouse is the meeting of the data lake and the data warehouse—various features converge in a ‘sweet spot’, with some becoming less strict and others more so to create a richer, scalable, and flexible architecture.

Talk to an expert about your data lakehouse needs.

Challenges of Moving from a Data Warehouse to a Data Lakehouse

Moving from a data warehouse to a data lakehouse presents several challenges that you must overcome. The following are some of the major challenges and considerations to keep in mind:

  • Configuration: Migrating to a data lakehouse architecture requires more configuration than a traditional cloud data warehouse, which can be a challenge for some organizations. It’s crucial to understand how to fine-tune clusters and engines for specific workloads, and this learning curve can be steep.

    Pro Tip:
    Invest in the right talent and expertise to configure your data lakehouse architecture. Consider hiring experienced cloud architects or working with a data and analytics consulting firm that specializes in cloud infrastructure. Additionally, make sure you fully understand your workloads and data storage requirements so you can optimize cluster and engine configurations.
  • Access Control: Access control can present a challenge for organizations without a clear security process in place. It is important to define who can access the data lake and who can access the data lakehouse, as well as implement policies on cluster and engine creation and use.

    Pro Tip: 
    Develop a comprehensive security plan that clearly defines access controls and policies for cluster and engine creation and usage. This should involve a mix of technical controls and user education to ensure all stakeholders understand their roles and responsibilities when it comes to data access and security.
  • Compute/Engine Start: The time it takes to spin up a cluster or engine can be frustrating for organizations accustomed to traditional data warehouses. Although serverless compute is reducing this pain point, it’s important to set proper expectations for users to manage their frustration.

    Pro Tip:
    Plan for cluster and engine start-up times by setting realistic expectations for users and automating the spin-up process as much as possible. Make sure to consider the trade-offs between faster compute start-up times and cost.
  • Data Quality: Data quality is a critical consideration when implementing a data lakehouse architecture—especially because the architecture allows data to be stored in a variety of formats. While a data lakehouse can significantly improve data quality, it can also be a challenge, particularly if an organization struggles with data governance.

    Pro Tip:
     Establish a robust data governance framework that includes clear policies and procedures for data classification, protection, and control. Invest in data quality tools and automated testing to ensure data consistency and accuracy. It’s also important to train data engineers and data analysts on how to properly curate and manage data in a data lakehouse environment.
  • Cost: Cost management is a critical consideration when migrating to a data lakehouse. Improper configuration of compute resources can result in unnecessary costs. Understanding how costs are accrued on the chosen data lakehouse platform or tool is essential for effective cost control. Compute usage incurs cost, and a small workload can be expensive when run on a very powerful compute engine. Conversely, a large workload that runs on a small compute engine may take longer to process, resulting in higher costs.

    Pro Tip:
    Monitor and optimize compute usage to avoid unnecessary costs. This can involve using serverless compute or right-sizing compute resources based on workload demands. Additionally, consider implementing cost management tools and processes to monitor spending and identify areas for optimization (see the sketch below).
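One way to keep configuration and cost in check is to treat cluster definitions as code that gets reviewed like any other artifact. The sketch below is illustrative only: the field names loosely resemble the Databricks Clusters API, but treat the specific keys, runtime label, and node type as assumptions to verify against your own platform's documentation.

```python
# A hypothetical, right-sized cluster definition for a modest nightly ETL workload.
# Field names loosely resemble the Databricks Clusters API; verify them before use.
small_etl_cluster = {
    "cluster_name": "nightly-etl-small",                 # hypothetical name
    "spark_version": "13.3.x-scala2.12",                 # assumed runtime label
    "node_type_id": "i3.xlarge",                         # modest nodes for a modest workload
    "autoscale": {"min_workers": 1, "max_workers": 4},   # scale with demand, not ahead of it
    "autotermination_minutes": 20,                       # shut down idle compute to cap spend
}
```

Starting small and scaling up based on observed run times and spend is usually cheaper than provisioning for peak capacity from day one.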

Moving to a data lakehouse from a data warehouse poses a set of challenges around data quality, configuration, access control, and compute/engine start.

Although there are challenges, the benefits of moving to a data lakehouse—lower costs, improved scalability, enhanced data governance, and support for different workloads—make it a wise choice for organizations looking to scale their data strategy while remaining agile and cost-effective. If you have not yet considered a data lakehouse for your organization, now is the time to do so.

Jaime Tirado
Jaime is a Senior Consultant focused on the Data Management space. He is a thought leader in data lakehouse solutions, enabling the design and implementation of lakehouse platforms for our clients. He also develops lakehouse content for internal training, sales, and marketing purposes. Outside of work, Jaime enjoys cooking, gaming, and traveling with his partner.

John Bemenderfer
John is a Senior Consultant based out of our Dallas office. He has experience across the entire data stack, from data engineering to analytics, helping clients get the most value out of their data. He also helps lead the Power BI practice for Analytics8. Outside of work, John enjoys spending time with his daughter and wife, Dungeons & Dragons, and anything Star Wars related.