
ETL Processing on Google Cloud Using Dataflow and BigQuery

Overview

In this lab you build several data pipelines that ingest data from a publicly
available dataset into BigQuery, using these Google Cloud services:

- Cloud Storage
- Dataflow
- BigQuery

You will create your own data pipeline, including the design considerations as
well as implementation details, to ensure that your prototype meets the
requirements. Be sure to open the Python files and read the comments when
instructed to.

It takes a few moments to provision and connect to the environment. When you
are connected, you are already authenticated, and the project is set to
your PROJECT_ID.

gcloud is the command-line tool for Google Cloud. It comes pre-installed on
Cloud Shell and supports tab completion.

You can list the active account name with this command:

gcloud auth list


(Output)

Credentialed accounts:
- <myaccount>@<mydomain>.com (active)
You can list the project ID with this command:

gcloud config list project


(Output)

[core]
project = <project_ID>
(Example output)

[core]
project = qwiklabs-gcp-44776a13dea667a6
For full documentation of gcloud see the gcloud command-line tool overview.

Download the starter code


Open a session in Cloud Shell and run the following command to get the Dataflow
Python examples from GCP's professional services GitHub repository:

gsutil -m cp -R gs://spls/gsp290/dataflow-python-examples .

Now set a variable equal to your project ID, replacing <YOUR-PROJECT-ID> with
your lab Project ID:

export PROJECT=<YOUR-PROJECT-ID>
gcloud config set project $PROJECT

Create Cloud Storage Bucket


Use the make bucket command to create a new regional bucket in the us-central1 region within your project:

gsutil mb -c regional -l us-central1 gs://$PROJECT

Test Completed Task

Click Check my progress to verify the task you performed.

Create a Cloud Storage Bucket


Check my progress

Copy Files to Your Bucket


Use the gsutil command to copy files to the GCS bucket you just created:

gsutil cp gs://spls/gsp290/data_files/usa_names.csv gs://$PROJECT/data_files/


gsutil cp gs://spls/gsp290/data_files/head_usa_names.csv gs://$PROJECT/data_files/

Test Completed Task

Click Check my progress to verify the task you performed.


Copy Files to Your Bucket
Check my progress

Create the BigQuery Dataset


Create a dataset in BigQuery called lake. This is where all of your tables will be loaded:

bq mk lake

Test Completed Task

Click Check my progress to verify the task you performed.

Create the BigQuery Dataset (name: lake)


Check my progress

Build a Dataflow Pipeline


In this section you will create an append-only Dataflow pipeline that will ingest data into
the BigQuery table. You can use the built-in Code Editor, which allows you to
view and edit the code in the GCP console.
Open Code Editor

Navigate to the source code by clicking on the Code Editor icon in Cloud Shell.

Data Ingestion
You will now build a Dataflow pipeline with a TextIO source and a BigQueryIO
destination to ingest data into BigQuery. More specifically, it will:

- Ingest the files from Cloud Storage.
- Filter out the header row in the files.
- Convert the lines read to dictionary objects.
- Output the rows to BigQuery.

A minimal sketch of this pipeline shape appears after the list.
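For orientation before you open the lab's data_ingestion.py, here is a hedged sketch of the same pipeline shape. The column names, schema, and bucket path below are assumptions for illustration, not copied from the lab file.

# Minimal sketch of a TextIO -> BigQueryIO ingestion pipeline (illustrative only;
# column names, schema, and the bucket path are assumptions).
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def line_to_dict(line):
    # Convert one CSV line into a dictionary keyed by (assumed) column names.
    state, gender, year, name, number, created_date = line.split(',')
    return {'state': state, 'gender': gender, 'year': year,
            'name': name, 'number': number, 'created_date': created_date}


def run():
    # --project, --runner, --region, --temp_location, etc. come from the command line.
    options = PipelineOptions()
    with beam.Pipeline(options=options) as p:
        (p
         | 'Read CSV' >> beam.io.ReadFromText(
               'gs://YOUR-BUCKET/data_files/head_usa_names.csv',  # replace with your bucket
               skip_header_lines=1)                               # filters out the header row
         | 'To dict' >> beam.Map(line_to_dict)                    # convert lines to dictionaries
         | 'Write to BigQuery' >> beam.io.WriteToBigQuery(
               'lake.usa_names',
               schema='state:STRING,gender:STRING,year:STRING,'
                      'name:STRING,number:STRING,created_date:STRING',
               create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
               write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))


if __name__ == '__main__':
    run()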

Review pipeline python code


In the Code Editor, navigate to dataflow-python-examples > dataflow_python_examples and open the data_ingestion.py file.
Read through the comments in the file, which explain what the code is doing.
This code will populate the data in BigQuery.
Run the Apache Beam Pipeline
Return to your Cloud Shell session for this step. You will now do a bit of setup
for the required Python libraries.

Run the following to set up the Python environment:

cd dataflow-python-examples/
# Here we set up the Python environment.
# pip is a tool, similar to Maven in the Java world
sudo pip install virtualenv

# Dataflow requires Python 3.7
virtualenv -p python3 venv

source venv/bin/activate
pip install apache-beam[gcp]
You will run the Dataflow pipeline in the cloud.

The following will spin up the workers required, and shut them down when
complete:

python dataflow_python_examples/data_ingestion.py \
  --project=$PROJECT --region=us-central1 \
  --runner=DataflowRunner \
  --staging_location=gs://$PROJECT/test \
  --temp_location gs://$PROJECT/test \
  --input gs://$PROJECT/data_files/head_usa_names.csv \
  --save_main_session

Return to the Cloud Console and open the Navigation menu > Dataflow to view
the status of your job.

Click on the name of your job to watch its progress. Once your Job
Status is Succeeded, navigate to BigQuery (Navigation menu > BigQuery)
to see that your data has been populated.

Click on your project name to see the usa_names table under the lake dataset.

Click on the table, then navigate to the Preview tab to see examples of
the usa_names data.

Test Completed Task

Click Check my progress to verify the task you performed.

Build a Data Ingestion Dataflow Pipeline


Check my progress
Data Transformation
You will now build a Dataflow pipeline with a TextIO source and a BigQueryIO
destination to ingest data into BigQuery. More specifically, you will:

- Ingest the files from Cloud Storage.
- Convert the lines read to dictionary objects.
- Transform the year data into a format BigQuery understands as a date.
- Output the rows to BigQuery.

A minimal sketch of the year transform appears after the list.
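For orientation, here is a hedged sketch of the year-to-date step only. The real data_transformation.py reads its schema from a JSON file and may differ in detail, so the field names and the 'YYYY-01-01' convention below are assumptions.

# Illustrative sketch of turning a bare year into a BigQuery-friendly DATE string.
# Field names and the chosen 'YYYY-01-01' convention are assumptions, not taken
# verbatim from data_transformation.py.
import csv


def csv_to_dict_with_date(line):
    # Parse one CSV line and rewrite the bare year as a DATE-parseable string.
    state, gender, year, name, number, created_date = next(csv.reader([line]))
    return {'state': state, 'gender': gender,
            'year': '{}-01-01'.format(year),   # e.g. '1910' becomes '1910-01-01'
            'name': name, 'number': number, 'created_date': created_date}

# Plugged into a pipeline exactly like the ingestion sketch:
#   ... | 'Transform' >> beam.Map(csv_to_dict_with_date) | 'Write' >> beam.io.WriteToBigQuery(...)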

Review pipeline python code

Navigate to data_transformation.py and open it in Code Editor. Read through
the comments in the file, which explain what the code is doing.

Run the Apache Beam Pipeline


You will run the Dataflow pipeline in the cloud. This will spin up the workers
required, and shut them down when complete.

Run the following commands to do so:

python dataflow_python_examples/data_transformation.py \
  --project=$PROJECT --region=us-central1 \
  --runner=DataflowRunner \
  --staging_location=gs://$PROJECT/test \
  --temp_location gs://$PROJECT/test \
  --input gs://$PROJECT/data_files/head_usa_names.csv \
  --save_main_session

Navigate to Navigation menu > Dataflow and click on the name of this job to view
the status.
When your Job Status is Succeeded in the Dataflow Job status screen,
navigate to BigQuery to check to see that your data has been populated.

You should see the usa_names_transformed table under the lake dataset.

Click on the table and navigate to the Preview tab to see examples of
the usa_names_transformed data.

Note: If you don't see the usa_names_transformed table, try refreshing the page or view the
tables using the classic BigQuery UI.

Test Completed Task

Click Check my progress to verify the task you performed.

Build a Data Transformation Dataflow Pipeline


Check my progress

Data Enrichment
You will now build a Dataflow pipeline with a TextIO source and a BigQueryIO
destination to ingest data into BigQuery. More specifically, you will:

- Ingest the files from GCS.
- Filter out the header row in the files.
- Convert the lines read to dictionary objects.
- Output the rows to BigQuery.

Review pipeline python code
Navigate to data_enrichment.py and open it in Code Editor. Check out the
comments which explain what the code is doing. This code will populate the data
in BigQuery.

Line 83 currently looks like:

values = [x.decode('utf8') for x in csv_row]


Edit it so it looks like the following:

values = [x for x in csv_row]
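For context, this edit reflects general Python 3 behavior rather than anything specific to Dataflow: the csv module already yields str values, so calling .decode('utf8') on them raises AttributeError. A quick illustration with made-up sample values:

# Why the decode call is dropped: on Python 3, csv rows are already str.
import csv

csv_row = next(csv.reader(['KS,F,1923,Dorothy,654,11/28/2016']))  # made-up sample line

values = [x for x in csv_row]            # works: each x is already a str
# values = [x.decode('utf8') for x in csv_row]
#   -> AttributeError: 'str' object has no attribute 'decode'
print(values)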

Run the Apache Beam Pipeline


Here you'll run the Dataflow pipeline in the cloud. Run the following to spin up the
workers required, and shut them down when complete:

python dataflow_python_examples/data_enrichment.py \
  --project=$PROJECT --region=us-central1 \
  --runner=DataflowRunner \
  --staging_location=gs://$PROJECT/test \
  --temp_location gs://$PROJECT/test \
  --input gs://$PROJECT/data_files/head_usa_names.csv \
  --save_main_session

Navigate to Navigation menu > Dataflow to view the status of your job.

Once your Job Status is Succeeded in the Dataflow Job status screen, navigate to
BigQuery to check that your data has been populated.

You should see the usa_names_enriched table under the lake dataset.

Click on the table and navigate to the Preview tab to see examples of
the usa_names_enriched data.
Note: If you don't see the usa_names_enriched table, try refreshing the page or view the
tables using the classic BigQuery UI.

Test Completed Task

Click Check my progress to verify the task you performed.

Build a Data Enrichment Dataflow Pipeline


Check my progress

Data lake to Mart


Now build a Dataflow pipeline that reads data from 2 BigQuery data sources and
then joins them. Specifically, you:

- Ingest files from 2 BigQuery sources.
- Join the 2 data sources.
- Filter out the header row in the files.
- Convert the lines read to dictionary objects.
- Output the rows to BigQuery.

A minimal sketch of the side-input join pattern appears after the list.
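Before opening data_lake_to_mart.py, here is a hedged sketch of the general side-input join pattern its output table is named after. The table and field names (orders, account_details, acct_number) are assumptions for illustration; the lab's script defines its own schema and SQL.

# Illustrative side-input join between two BigQuery sources (not the lab's exact code;
# table names, field names, and the join key are assumptions).
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.pvalue import AsDict


def join_order(order, accounts):
    # Merge the matching account row (if any) into the order dictionary.
    enriched = dict(order)
    enriched.update(accounts.get(order['acct_number'], {}))
    return enriched


def run():
    # --project, --runner, --temp_location, etc. come from the command line.
    options = PipelineOptions()
    with beam.Pipeline(options=options) as p:
        accounts = (p
            | 'Read accounts' >> beam.io.ReadFromBigQuery(
                  query='SELECT * FROM `lake.account_details`', use_standard_sql=True)
            | 'Key by account' >> beam.Map(lambda row: (row['acct_number'], row)))

        (p
         | 'Read orders' >> beam.io.ReadFromBigQuery(
               query='SELECT * FROM `lake.orders`', use_standard_sql=True)
         | 'Join via side input' >> beam.Map(join_order, accounts=AsDict(accounts))
         | 'Write to BigQuery' >> beam.io.WriteToBigQuery(
               'lake.orders_denormalized_sideinput',
               # Assumes the destination table already exists; the lab's script
               # supplies the full schema and creates it if needed.
               create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
               write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))


if __name__ == '__main__':
    run()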

Review pipeline python code


Navigate to data_lake_to_mart.py and open it in Code Editor. Read through
the comments in the file which explain what the code is doing. This code will
populate the data in BigQuery.

Run the Apache Beam Pipeline


Now you'll run the Dataflow pipeline in the cloud. Run the following to spin up the
workers required, and shut them down when complete:

python dataflow_python_examples/data_lake_to_mart.py \
  --worker_disk_type="compute.googleapis.com/projects//zones//diskTypes/pd-ssd" \
  --max_num_workers=4 \
  --project=$PROJECT --region=us-central1 \
  --runner=DataflowRunner \
  --staging_location=gs://$PROJECT/test \
  --temp_location gs://$PROJECT/test \
  --save_main_session
Navigate to Navigation menu > Dataflow and click on the name of this new job
to view the status.

Once your Job Status is Succeeded in the Dataflow Job status screen, navigate to
BigQuery to check that your data has been populated.

You should see the orders_denormalized_sideinput table under the lake dataset.

Click on the table and navigate to the Preview section to see examples of
the orders_denormalized_sideinput data.

Note: If you don't see the orders_denormalized_sideinput table, try refreshing the page
or view the tables using the classic BigQuery UI.

Test Completed Task

Click Check my progress to verify the task you performed.

Build a Data lake to Mart Dataflow Pipeline


Check my progress

Test your Understanding


Below are multiple choice questions to reinforce your understanding of this
lab's concepts. Answer them to the best of your abilities.

ETL stands for ____.

Electronic Transferable Ledger

Extract, Transform and Load

