Describe the problem
I'm currently populating a Hive metastore with Delta tables created by Databricks. These tables have table properties, including Databricks-specific ones such as delta.autoOptimize.autoCompact, which can be ignored via the spark.databricks.delta.allowArbitraryProperties.enabled configuration since Delta 2.0.0. So far so good!
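For context, this is roughly how I enable that configuration (a minimal PySpark sketch; everything except the two config keys is illustrative boilerplate):

```python
from pyspark.sql import SparkSession

# Build a session with the Delta extensions and allow
# arbitrary (e.g. Databricks-specific) delta.* table properties.
spark = (
    SparkSession.builder
    .appName("hive-metastore-sync")  # illustrative name
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.databricks.delta.allowArbitraryProperties.enabled", "true")
    .enableHiveSupport()
    .getOrCreate()
)
```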
Unfortunately, when trying to create the table in the metastore, I get the following error:
Exception in thread "main" org.apache.spark.sql.AnalysisException: The specified properties do not match the existing properties at s3a://redacted/redacted/redacted/table/0000.
== Specified ==
delta.autooptimize.autocompact=true
redacted.public=false
== Existing ==
delta.autoOptimize.autoCompact=true
redacted.public=false
This happens because the Delta code normalizes the specified properties as if they were case-insensitive, but reads the existing properties into a plain Map and compares the two in a case-sensitive fashion.
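A minimal sketch of the mismatch (plain Python, not Delta's actual code): if one side lower-cases the property keys and the other side keeps them as stored, a case-sensitive map comparison fails even though the properties are logically identical.

```python
def normalize(props):
    """Lower-case keys, the way a case-insensitive reader might."""
    return {k.lower(): v for k, v in props.items()}

# Properties as stored by Databricks (mixed-case key).
existing = {
    "delta.autoOptimize.autoCompact": "true",
    "redacted.public": "false",
}

# Properties after case-insensitive normalization on the "specified" side.
specified = normalize(existing)

# Case-sensitive comparison, as the error message suggests: mismatch.
case_sensitive_match = specified == existing

# Comparing both sides under the same normalization: they agree.
case_insensitive_match = normalize(specified) == normalize(existing)
```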
Steps to reproduce
Create a Databricks Delta table with table properties whose names contain upper-case letters
Try to create/read that table with open-source Delta
Observed results
Failing due to case sensitivity on the table properties.
Expected results
Not failing :)
Further details
Full traceback: