Delta Live Tables

Delta Live Tables Advanced Q & A

This is primarily written for folks who are trying to handle edge cases.

Q1.) The DLT pipeline was deleted, but the Delta table still exists. What to do now? What if the owner has left the org?

Step 1.) Verify via the CLI whether the pipeline has been deleted:

databricks --profile <your_env> pipelines list
databricks --profile <your_env> pipelines get --pipeline-id <deleted_pipeline_id>

Step 2.) Create a new pipeline with the existing storage path.
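For Step 2, one way to recreate the pipeline while reusing the old storage location is the Pipelines REST API. The sketch below is illustrative only: the workspace URL, token, pipeline name, notebook path, target schema, and storage path are placeholders, and the request shape should be verified against your workspace's Pipelines API documentation.

```python
# Illustrative sketch: recreate a DLT pipeline that points at the existing storage path
# via the Pipelines REST API (POST /api/2.0/pipelines). All values below are placeholders.
import requests

host = "https://<your-workspace>.cloud.databricks.com"
token = "<personal-access-token>"

settings = {
    "name": "recreated_dlt_pipeline",
    "storage": "dbfs:/pipelines/<existing_storage_path>",  # reuse the old pipeline's storage
    "target": "my_target_schema",
    "libraries": [{"notebook": {"path": "/Repos/team/project/dlt_notebook"}}],
    "continuous": False,
}

resp = requests.post(
    f"{host}/api/2.0/pipelines",
    headers={"Authorization": f"Bearer {token}"},
    json=settings,
)
resp.raise_for_status()
print(resp.json())  # the response includes the new pipeline_id
```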

How to upgrade your Spark Stream application with a new checkpoint!

With working code. Sometimes we need to make breaking changes that require creating a new checkpoint. Some example scenarios:

- You are making a code/application change that alters the logic
- A major Spark version upgrade from Spark 2.x to Spark 3.x
- The previous deployment was wrong, and you want to reprocess from a certain point

There are plenty of scenarios where you want to control precisely which data (Kafka offsets) gets processed.
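As a rough illustration of that last point (not the post's actual code), here is how a Structured Streaming job can be restarted against a fresh checkpoint while pinning the exact Kafka offsets to resume from; the topic name, offsets, brokers, and paths are made-up placeholders.

```python
# Minimal sketch: restart a Kafka stream against a NEW checkpoint directory and
# control exactly which offsets are (re)processed. All names/paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("restart_with_new_checkpoint").getOrCreate()

# Per-partition starting offsets, JSON-encoded: {"topic": {"partition": offset}}
starting_offsets = '{"orders": {"0": 42000, "1": 39500}}'

df = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", starting_offsets)  # only honored when the checkpoint is new
    .load()
)

query = (
    df.writeStream.format("delta")
    .option("checkpointLocation", "dbfs:/checkpoints/orders_v2")  # new checkpoint path
    .start("dbfs:/delta/orders")
)
```

The important detail is that startingOffsets is only read when the checkpoint location is empty; once a checkpoint exists, its stored offsets take precedence.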

How to parameterize Delta Live Tables and import reusable functions

With working code. This blog discusses passing custom parameters to a Delta Live Tables (DLT) pipeline, as well as importing functions defined in other files or locations. You can import files from the current directory or from a specified location using sys.path.append(). Update: As of December 2022, if reusable_functions.py exists in the same repository, you can import it directly with a plain import statement, which is the preferred approach.
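A minimal sketch of both ideas, assuming it runs inside a DLT pipeline (where spark and dlt are available); the path, the "source_path" parameter key, and the clean_columns helper are hypothetical.

```python
# Hypothetical DLT notebook cell: import a reusable function and read a custom
# pipeline parameter. Paths, parameter key, and the helper name are placeholders.
import sys

# Make reusable_functions.py importable from a location outside this notebook's folder
sys.path.append("/Workspace/Repos/team/project/utils")
from reusable_functions import clean_columns  # hypothetical helper

import dlt

@dlt.table
def bronze_events():
    # "source_path" is a custom key defined under the pipeline's configuration settings
    source_path = spark.conf.get("source_path")
    return clean_columns(spark.read.format("json").load(source_path))
```

If reusable_functions.py lives in the same repository as the notebook, the December 2022 update mentioned above means the sys.path.append() call can be dropped in favor of a plain import.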

How Audantic Uses Databricks Delta Live Tables to Increase Productivity for Real Estate Market Segments

To support our data-driven initiatives, we had ‘stitched’ together various services for ETL, orchestration, and ML, leveraging AWS and Airflow. We saw some success, but the setup quickly turned into an overly complex system that took nearly five times as long to develop as the new solution. Our team captured high-level metrics comparing our previous implementation with the current lakehouse solution. As the table below shows, we spent months developing our previous solution and had to write approximately three times as much code.
