Migrate workloads to Delta Lake


When you migrate workloads to Delta Lake, you should be aware of the following simplifications and differences compared with the data sources provided by Apache Spark and Apache Hive.
Delta Lake handles the following operations automatically, so you should never perform them manually:

- `REFRESH TABLE`: Delta tables always return the most up-to-date information, so there is no need to call `REFRESH TABLE` manually after changes.
- Add and remove partitions: Delta Lake automatically tracks the set of partitions present in a table and updates the list as data is added or removed. As a result, there is no need to run `ALTER TABLE [ADD|DROP] PARTITION` or `MSCK`.
- Load a single partition: As an optimization, you may sometimes directly load the partition of data you are interested in, for example `spark.read.parquet("/data/date=2017-01-01")`. This is unnecessary with Delta Lake, since it can quickly read the list of files from the transaction log to find the relevant ones. If you are interested in a single partition, specify it using a `WHERE` clause, for example `spark.read.format("delta").load("/data").where("date = '2017-01-01'")`. For large tables with many files in the partition, this can be much faster than loading a single partition from a Parquet table (whether by direct partition path or with `WHERE`), because listing the files in a directory is often slower than reading the list of files from the transaction log.
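The contrast above can be sketched as follows (a sketch, assuming an existing SparkSession named `spark` and a Delta table stored at `/data`, partitioned by `date`):

```python
# Parquet: loading one partition means pointing at its directory,
# which requires the reader to know the physical layout.
parquet_df = spark.read.parquet("/data/date=2017-01-01")

# Delta Lake: read the whole table and filter with WHERE; the relevant
# files are found via the transaction log, so no directory listing is needed.
delta_df = (
    spark.read.format("delta")
    .load("/data")
    .where("date = '2017-01-01'")
)
```
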
When you port an existing application to Delta Lake, you should avoid the following operations, which bypass the transaction log:
Manually modify data: Delta Lake uses the transaction log to atomically commit changes to the table. Because the log is the source of truth, files that are written out but not added to the transaction log are not read by Spark. Similarly, even if you manually delete a file, a pointer to the file is still present in the transaction log. Instead of manually modifying files stored in a Delta table, always use the commands that are described in this guide.
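For instance, instead of deleting Parquet files by hand, express the deletion through the transaction log (a sketch, assuming a Delta table stored at the hypothetical path `/mnt/delta/events`):

```python
# Wrong: removing a data file directly leaves a dangling pointer in the
# transaction log, and readers will fail when they try to open it.
# os.remove("/mnt/delta/events/part-00000-....snappy.parquet")

# Right: let Delta Lake commit the change atomically via the log.
spark.sql("DELETE FROM delta.`/mnt/delta/events` WHERE date < '2017-01-01'")
```
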
External readers: The data stored in Delta Lake is encoded as Parquet files. However, accessing these files using an external reader is not safe: you may see duplicates and uncommitted data, and the read may fail when someone runs `VACUUM` (which removes files no longer referenced by the Delta table).
Note: Because the files are encoded in an open format, you always have the option to move the files outside Delta Lake. At that point, you can run `VACUUM` with `RETAIN 0 HOURS` and delete the transaction log. This leaves the table's files in a consistent state that can be read by the external reader of your choice.
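That final `VACUUM` could look like the following (a sketch, using a hypothetical table name `events`; because `RETAIN 0 HOURS` is below the default retention threshold, the retention safety check must be disabled first):

```python
# Disable the retention-duration safety check, which otherwise rejects
# retention intervals shorter than the configured minimum.
spark.conf.set("spark.databricks.delta.retentionDurationCheck.enabled", "false")

# Remove every file not referenced by the latest version of the table.
spark.sql("VACUUM events RETAIN 0 HOURS")
```

Use `RETAIN 0 HOURS` only when no concurrent readers or writers are active, since it deletes files that older snapshots may still reference.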
Example
Suppose you have Parquet data stored in the directory `/data-pipeline` and want to create a table named `events`. You can either read the data into a DataFrame and save it as a Delta table, which copies the data and lets Spark manage the table, or convert the files to Delta Lake in place, which is faster but results in an unmanaged table.
Save as Delta table
- Read the data into a DataFrame and save it to a new directory in `delta` format:

```python
data = spark.read.parquet("/data-pipeline")
data.write.format("delta").save("/mnt/delta/data-pipeline/")
```
- Create a Delta table `events` that refers to the files in the Delta Lake directory:

```python
spark.sql("CREATE TABLE events USING DELTA LOCATION '/mnt/delta/data-pipeline/'")
```
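Once registered, the table can be queried like any other (assuming the steps above ran in the same SparkSession):

```python
# The table is backed by the Delta files at /mnt/delta/data-pipeline/.
spark.sql("SELECT * FROM events LIMIT 10").show()
```
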
Convert to Delta table
You have two options for converting a Parquet table to a Delta table:
- Convert the files to Delta Lake format and then create a Delta table:

```sql
-- SQL
CONVERT TO DELTA parquet.`/data-pipeline/`;
CREATE TABLE events USING DELTA LOCATION '/data-pipeline/';
```
- Create a Parquet table and then convert it to a Delta table:

```sql
-- SQL
CREATE TABLE events USING PARQUET OPTIONS (path '/data-pipeline/');
CONVERT TO DELTA events;
```
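The same in-place conversion is also available from Python through the `DeltaTable` API in the delta-spark package (a sketch, assuming an existing SparkSession `spark`):

```python
from delta.tables import DeltaTable

# Convert the Parquet files at the given path in place; the
# "parquet.`<path>`" identifier form avoids registering a metastore
# table first.
DeltaTable.convertToDelta(spark, "parquet.`/data-pipeline/`")
```
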
For details, see Convert a Parquet table to a Delta table.
Written by
Ian Santillan
Data Architect ACE - Analytics | Leading Data Consultant for North America 2022 | Global Power Platform Bootcamp 2023 Speaker | Toronto CDAO Inner Circle 2023