GCP + Pandas (dagster-gcp-pandas)

BigQuery

This library provides an integration with the BigQuery database and Pandas data processing library.

dagster_gcp_pandas.bigquery_pandas_io_manager IOManagerDefinition

Config Schema:
dataset (dagster.StringSource, optional):

Name of the BigQuery dataset to use. If not provided, the last prefix before the asset name will be used.

project (dagster.StringSource):

The GCP project to use.

location (Union[dagster.StringSource, None], optional):

The GCP location.

Default Value: None

gcp_credentials (Union[dagster.StringSource, None], optional):

GCP authentication credentials. If provided, a temporary file will be created with the credentials and GOOGLE_APPLICATION_CREDENTIALS will be set to point to the temporary file. To avoid issues with newlines in the key, you must base64 encode the key. You can retrieve the base64-encoded key with this shell command: cat $GOOGLE_APPLICATION_CREDENTIALS | base64

Default Value: None

An IO manager definition that reads inputs from and writes pandas DataFrames to BigQuery.

Returns:

IOManagerDefinition

Examples

import pandas as pd

from dagster_gcp_pandas import bigquery_pandas_io_manager
from dagster import asset, Definitions

@asset(
    key_prefix=["my_dataset"]  # will be used as the dataset in BigQuery
)
def my_table() -> pd.DataFrame:  # the name of the asset will be the table name
    ...

defs = Definitions(
    assets=[my_table],
    resources={
        "io_manager": bigquery_pandas_io_manager.configured({
            "project" : {"env": "GCP_PROJECT"}
        })
    }
)
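
When a downstream asset takes my_table as an input, the I/O manager will load the corresponding BigQuery table back as a pandas DataFrame. A minimal sketch, where the downstream asset name my_table_downstream and its body are hypothetical:

import pandas as pd

from dagster import asset, AssetIn

@asset(
    ins={"my_table": AssetIn(key=["my_dataset", "my_table"])}
)
def my_table_downstream(my_table: pd.DataFrame) -> pd.DataFrame:
    # my_table is read from the BigQuery table my_dataset.my_table
    ...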

You can tell Dagster in which dataset to create tables by setting the “dataset” configuration value. If you do not provide a dataset as configuration to the I/O manager, Dagster will determine a dataset based on the assets and ops using the I/O manager. For assets, the dataset will be determined from the asset key, as shown in the above example: the last prefix before the asset name is used as the dataset. For example, if the asset “my_table” had the key prefix [“gcp”, “bigquery”, “my_dataset”], the dataset “my_dataset” would be used. For ops, the dataset can be specified by including a “schema” entry in output metadata, as shown below. If a dataset is not provided via configuration, the asset key, or output metadata, the dataset “public” will be used.

import pandas as pd

from dagster import op, Out

@op(
    out={"my_table": Out(metadata={"schema": "my_dataset"})}
)
def make_my_table() -> pd.DataFrame:
    # the returned value will be stored at my_dataset.my_table
    ...
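
Alternatively, as noted above, you can set the dataset directly on the I/O manager. A minimal sketch, assuming the same GCP_PROJECT environment variable as in the earlier example:

import pandas as pd

from dagster_gcp_pandas import bigquery_pandas_io_manager
from dagster import asset, Definitions

@asset
def my_table() -> pd.DataFrame:
    # stored at my_dataset.my_table because "dataset" is set in config
    ...

defs = Definitions(
    assets=[my_table],
    resources={
        "io_manager": bigquery_pandas_io_manager.configured({
            "project": {"env": "GCP_PROJECT"},
            "dataset": "my_dataset"
        })
    }
)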

To use only specific columns of a table as input to a downstream op or asset, add the metadata “columns” to the In or AssetIn.

import pandas as pd

from dagster import asset, AssetIn

@asset(
    ins={"my_table": AssetIn("my_table", metadata={"columns": ["a"]})}
)
def my_table_a(my_table: pd.DataFrame) -> pd.DataFrame:
    # my_table will just contain the data from column "a"
    ...
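
The same “columns” metadata can be set on an In for ops. A minimal sketch, where the op name process_my_table is hypothetical:

import pandas as pd

from dagster import op, In

@op(
    ins={"my_table": In(metadata={"columns": ["a"]})}
)
def process_my_table(my_table: pd.DataFrame) -> pd.DataFrame:
    # my_table will contain only the data from column "a"
    ...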

If you cannot upload a file to your Dagster deployment, or otherwise cannot authenticate with GCP via a standard method (see https://cloud.google.com/docs/authentication/provide-credentials-adc), you can provide a service account key as the “gcp_credentials” configuration. Dagster will store this key in a temporary file and set GOOGLE_APPLICATION_CREDENTIALS to point to the file. After the run completes, the file will be deleted, and GOOGLE_APPLICATION_CREDENTIALS will be unset. The key must be base64 encoded to avoid issues with newlines in the key. You can retrieve the base64-encoded key with this shell command: cat $GOOGLE_APPLICATION_CREDENTIALS | base64
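
For example, the base64-encoded key can be passed to the I/O manager through an environment variable. A minimal sketch, where GCP_CREDS is a placeholder environment variable name and my_table is the asset from the example above:

from dagster_gcp_pandas import bigquery_pandas_io_manager
from dagster import Definitions

defs = Definitions(
    assets=[my_table],
    resources={
        "io_manager": bigquery_pandas_io_manager.configured({
            "project": {"env": "GCP_PROJECT"},
            "gcp_credentials": {"env": "GCP_CREDS"}  # base64-encoded service account key
        })
    }
)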

class dagster_gcp_pandas.BigQueryPandasTypeHandler(*args, **kwds)

Plugin for the BigQuery I/O Manager that can store and load Pandas DataFrames as BigQuery tables.

Examples

import pandas as pd

from dagster_gcp import build_bigquery_io_manager
from dagster_gcp_pandas import BigQueryPandasTypeHandler
from dagster import asset, Definitions

@asset(
    key_prefix=["my_dataset"]  # will be used as the dataset in BigQuery
)
def my_table() -> pd.DataFrame:
    ...

bigquery_io_manager = build_bigquery_io_manager([BigQueryPandasTypeHandler()])

defs = Definitions(
    assets=[my_table],
    resources={
        "io_manager": bigquery_io_manager.configured({
            "project" : {"env": "GCP_PROJECT"}
        })
    }
)
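
To try the asset locally, it can be materialized in a script using dagster’s materialize function. A minimal sketch, reusing the bigquery_io_manager built above and assuming GCP_PROJECT is set in the environment:

from dagster import materialize

result = materialize(
    [my_table],
    resources={
        "io_manager": bigquery_io_manager.configured({
            "project": {"env": "GCP_PROJECT"}
        })
    },
)
assert result.success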