
Using spark_config to pass Spark configuration parameters from GX to a Spark instance

Anytime you need to use additional flags with Spark, you can pass those configurations from your GX workflow.
Written by Hannah Stokes | November 09, 2022

GX offers multiple ways to pass configuration options to Spark. (📸: a gold sparkler against a black background, by Jez Timms via Unsplash)

Today we’ll be covering how to pass configuration options to Spark from your Great Expectations (GX) workflow! This is useful anytime you need to use additional flags with Spark.

All of these options are things you could do by connecting directly to your Spark instance, providing the flags there, and then yielding the resultant data_frame.

But by setting these options from within GX itself, we can make our days a little easier. If you're tuning something, for example, you can declare a Spark option in line while you do it.

The example we’ll reference for this post is here.

NOTE: You can reference tests and/or docs to find other stellar examples!

How do you do it?

You can pass Spark configuration options from GX both at runtime in your workflow and through your configuration. Pick the option that better fits your preference and/or environment!

At runtime

Let’s review the aforementioned Python example of providing a Spark configuration to the underlying Spark instance:

    ...
    spark_config: Dict[str, str] = {
        ...
        "spark.sql.catalogImplementation": "hive",
        "spark.executor.memory": "768m",
        ...
    }

As you can see, providing the flag to Spark is easy: just add the name of the flag as a key to the spark_config dictionary, with your desired value as the other half of the pair.
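Because spark_config is just a plain dictionary, overriding a flag for a single run is a one-liner. Here's a minimal sketch of that pattern; the with_overrides helper is hypothetical (not part of the GX API), and the keys and values come from the example above:

```python
# Hypothetical helper (not a GX API): start from a base spark_config
# and override individual flags in line, e.g. while tuning.
from typing import Dict

base_spark_config: Dict[str, str] = {
    "spark.sql.catalogImplementation": "hive",
    "spark.executor.memory": "768m",
}

def with_overrides(
    config: Dict[str, str], overrides: Dict[str, str]
) -> Dict[str, str]:
    """Return a copy of `config` with `overrides` applied; the base
    dict is left untouched so reruns stay reproducible."""
    merged = dict(config)
    merged.update(overrides)
    return merged

# Bump executor memory for this run only; the base config is unchanged.
tuned = with_overrides(base_spark_config, {"spark.executor.memory": "2g"})
```

The same merged dictionary can then be handed to GX wherever spark_config is accepted.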

Since you’re working with Spark, don’t forget to read up on the deets of batch_spec_passthrough!

In your configuration

You can also provide the configuration options for Spark in your datasource configuration directly. We’ll use data_context_parameterized_expectation_suite.add_datasource() as our example:

    data_context_parameterized_expectation_suite.add_datasource(
        dataset_name,
        class_name="SparkDFDatasource",
        spark_config=spark_config,
        force_reuse_spark_context=False,
        module_name="great_expectations.datasource",
        batch_kwargs_generators={},
    )
    datasource_config = data_context_parameterized_expectation_suite.get_datasource(
        dataset_name
    ).config

In short, in this scenario you provide the earlier configuration options for Spark to your datasource configuration directly. The specifics of the configuration are stored in great_expectations/great_expectations.yml.
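For illustration, the persisted datasource entry in great_expectations.yml might look roughly like this. The datasource name is hypothetical, the keys and values come from the examples above, and the exact layout depends on your GX version:

```yaml
# Illustrative sketch only — exact structure varies by GX version.
datasources:
  my_spark_datasource:  # hypothetical datasource name
    class_name: SparkDFDatasource
    module_name: great_expectations.datasource
    force_reuse_spark_context: false
    spark_config:
      spark.sql.catalogImplementation: hive
      spark.executor.memory: 768m
    batch_kwargs_generators: {}
```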

Tips

It always helps to validate the configuration! We can easily do that by following this example:

    ...
    source: SparkDFDatasource = SparkDFDatasource(spark_config=spark_config)
    spark_session: SparkSession = source.spark
    ...

After which a quick check to confirm the connection is just one spark_session.sparkContext._jsc.sc().isStopped() away.

Before you go

You can view an example with many more options declared for Spark here:

    expected_spark_config: Dict[str, str] = {
        "spark.app.name": "default_great_expectations_spark_application",
        "spark.default.parallelism": "4",
        "spark.driver.memory": "6g",
        "spark.executor.id": "driver",
        "spark.executor.memory": "6g",
        "spark.master": "local[*]",
        "spark.rdd.compress": "True",
        "spark.serializer.objectStreamReset": "100",
        "spark.sql.catalogImplementation": "hive",
        "spark.sql.shuffle.partitions": "2",
        "spark.submit.deployMode": "client",
        "spark.ui.showConsoleProgress": "False",
    }

Get the full list of possible options, provided by the lovely folks at the Apache Software Foundation, here.


Great Expectations is part of an increasingly flexible and powerful modern data ecosystem. This is just one example of the ways in which Great Expectations is able to give you greater control of your data quality processes within that ecosystem.

We’re committed to supporting and growing the community around Great Expectations. It’s not enough to build a great platform; we want to build a great community as well. Join our public Slack channel here, find us on GitHub, sign up for one of our weekly cloud workshops, or head to https://greatexpectations.io/ to learn more.

We are hiring! Please check out our job board here.
