Create object tables
This document describes how to access unstructured data in BigQuery by creating an object table.
To create an object table, you must complete the following tasks:
- Create a connection to read object information from Cloud Storage.
- Grant permission to read Cloud Storage information to the service account associated with the connection.
- Create the object table and associate it with the connection by using the CREATE EXTERNAL TABLE statement.
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- Enable the BigQuery and BigQuery Connection APIs.
- Ensure that your BigQuery administrator has created a connection and set up access to Cloud Storage.
Required roles
To work with object tables, your users need the following IAM permissions based on their role in your organization. For more information on user roles, see Security model. For more information about giving permissions, see Viewing the grantable roles on resources.
Data lake administrator
To get the permissions that you need to connect to Cloud Storage, ask your administrator to grant you the BigQuery Connection Admin (roles/bigquery.connectionAdmin) role on the project.

To get the permissions that you need to create and manage Cloud Storage buckets, ask your administrator to grant you the Storage Admin (roles/storage.admin) role on the project.

These predefined roles contain the permissions required to connect to Cloud Storage and to create and manage Cloud Storage buckets. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
bigquery.connections.create
bigquery.connections.get
bigquery.connections.list
bigquery.connections.update
bigquery.connections.use
bigquery.connections.delete
storage.buckets.*
storage.objects.*
Data warehouse administrator
To get the permissions that you need to create object tables, ask your administrator to grant you the following roles on the project:
- BigQuery Data Editor (roles/bigquery.dataEditor) role
- BigQuery Connection Admin (roles/bigquery.connectionAdmin) role
These predefined roles contain the permissions required to create object tables. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
bigquery.tables.create
bigquery.tables.update
bigquery.connections.delegate
Data analyst
To get the permissions that you need to query object tables, ask your administrator to grant you the following roles on the project:
- BigQuery Data Viewer (roles/bigquery.dataViewer) role
- BigQuery Connection User (roles/bigquery.connectionUser) role
These predefined roles contain the permissions required to query object tables. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
bigquery.jobs.create
bigquery.tables.get
bigquery.tables.getData
bigquery.readsessions.create
You might also be able to get these permissions with custom roles or other predefined roles.
Create object tables
Before you create an object table, you must have an existing dataset to contain it. For more information, see Creating datasets.
To create an object table:
SQL
Use the CREATE EXTERNAL TABLE statement.
In the Google Cloud console, go to the BigQuery page.
In the query editor, enter the following statement:
CREATE EXTERNAL TABLE `PROJECT_ID.DATASET_ID.TABLE_NAME`
WITH CONNECTION `PROJECT_ID.REGION.CONNECTION_ID`
OPTIONS(
  object_metadata = 'SIMPLE',
  uris = ['BUCKET_PATH'[,...]],
  max_staleness = STALENESS_INTERVAL,
  metadata_cache_mode = 'CACHE_MODE');
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset to contain the object table.
- TABLE_NAME: the name of the object table.
- REGION: the region or multi-region that contains the connection.
- CONNECTION_ID: the ID of the cloud resource connection to use with this object table. The connection determines which service account is used to read data from Cloud Storage.
  When you view the connection details in the Google Cloud console, the connection ID is the value in the last section of the fully qualified connection ID that is shown in Connection ID, for example projects/myproject/locations/connection_location/connections/myconnection.
- BUCKET_PATH: the path to the Cloud Storage bucket that contains the objects represented by the object table, in the format ['gs://bucket_name/[folder_name/]*'].
  You can use one asterisk (*) wildcard character in each path to limit the objects included in the object table. For example, if the bucket contains several types of unstructured data, you could create the object table over only PDF objects by specifying ['gs://bucket_name/*.pdf']. For more information, see Wildcard support for Cloud Storage URIs.
  You can specify multiple buckets for the uris option by providing multiple paths, for example ['gs://mybucket1/*', 'gs://mybucket2/folder5/*'].
  For more information about using Cloud Storage URIs in BigQuery, see Cloud Storage resource path.
- STALENESS_INTERVAL: specifies whether cached metadata is used by operations against the object table, and how fresh the cached metadata must be in order for the operation to use it. For more information on metadata caching considerations, see Metadata caching for performance.
  To disable metadata caching, specify 0. This is the default.
  To enable metadata caching, specify an interval literal value between 30 minutes and 7 days. For example, specify INTERVAL 4 HOUR for a 4 hour staleness interval. With this value, operations against the table use cached metadata if it has been refreshed within the past 4 hours. If the cached metadata is older than that, the operation retrieves metadata from Cloud Storage instead.
- CACHE_MODE: specifies whether the metadata cache is refreshed automatically or manually. For more information on metadata caching considerations, see Metadata caching for performance.
  Set to AUTOMATIC for the metadata cache to be refreshed at a system-defined interval, usually somewhere between 30 and 60 minutes.
  Set to MANUAL if you want to refresh the metadata cache on a schedule you determine. In this case, you can call the BQ.REFRESH_EXTERNAL_METADATA_CACHE system procedure to refresh the cache.
  You must set CACHE_MODE if STALENESS_INTERVAL is set to a value greater than 0.
Click Run.
For more information about how to run queries, see Run an interactive query.
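The single-asterisk wildcard described for BUCKET_PATH can be previewed locally before you create the table. The following sketch uses Python's fnmatch, whose * also matches across / separators, to show which of a set of hypothetical object names a pattern such as *.pdf would select; the object names are made up for illustration and are not part of any real bucket.

```python
from fnmatch import fnmatch

# Hypothetical object names, for illustration only.
objects = [
    "reports/q1.pdf",
    "reports/q2.pdf",
    "images/logo.png",
    "notes.txt",
]

def matching_objects(names, pattern):
    """Return the names that a single-asterisk pattern selects."""
    return [name for name in names if fnmatch(name, pattern)]

# The suffix pattern from ['gs://bucket_name/*.pdf'] keeps only PDF objects.
print(matching_objects(objects, "*.pdf"))  # ['reports/q1.pdf', 'reports/q2.pdf']

# A bare '*' (as in ['gs://bucket_name/*']) selects every object.
print(matching_objects(objects, "*"))
```

This is an approximation of the URI wildcard semantics; see Wildcard support for Cloud Storage URIs for the authoritative rules.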
Examples
The following example creates an object table with a metadata cache staleness interval of 1 day:
CREATE EXTERNAL TABLE `my_dataset.object_table`
WITH CONNECTION `us.my-connection`
OPTIONS(
  object_metadata = 'SIMPLE',
  uris = ['gs://mybucket/*'],
  max_staleness = INTERVAL 1 DAY,
  metadata_cache_mode = 'AUTOMATIC');
The following example creates an object table over the objects in three Cloud Storage buckets:
CREATE EXTERNAL TABLE `my_dataset.object_table`
WITH CONNECTION `us.my-connection`
OPTIONS(
  object_metadata = 'SIMPLE',
  uris = ['gs://bucket1/*', 'gs://bucket2/folder1/*', 'gs://bucket3/*']);
The following example creates an object table over just the PDF objects in a Cloud Storage bucket:
CREATE EXTERNAL TABLE `my_dataset.object_table`
WITH CONNECTION `us.my-connection`
OPTIONS(
  object_metadata = 'SIMPLE',
  uris = ['gs://bucket1/*.pdf']);
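If you generate this DDL from code, the statement can be assembled from its parts. The following stdlib-only Python sketch builds the CREATE EXTERNAL TABLE text shown above; the function name and parameters are illustrative, not an official API, and actually executing the statement would require a BigQuery client and credentials, which are not shown.

```python
# Minimal sketch: assembles the DDL string only. Running it against BigQuery
# would require a client library (for example google-cloud-bigquery) and
# valid credentials.

def object_table_ddl(table, connection, uris,
                     staleness_interval=None, cache_mode=None):
    """Build a CREATE EXTERNAL TABLE statement for an object table."""
    options = [
        "object_metadata = 'SIMPLE'",
        "uris = [{}]".format(", ".join("'{}'".format(u) for u in uris)),
    ]
    if staleness_interval:
        options.append("max_staleness = {}".format(staleness_interval))
    if cache_mode:
        options.append("metadata_cache_mode = '{}'".format(cache_mode))
    return ("CREATE EXTERNAL TABLE `{}`\n"
            "WITH CONNECTION `{}`\n"
            "OPTIONS(\n  {});").format(table, connection, ",\n  ".join(options))

ddl = object_table_ddl("my_dataset.object_table", "us.my-connection",
                       ["gs://mybucket/*"],
                       staleness_interval="INTERVAL 1 DAY",
                       cache_mode="AUTOMATIC")
print(ddl)
```

The output matches the first example above, so the same helper can be reused for the multi-bucket and PDF-only variants by changing the uris argument.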
bq
Use the bq mk command.
bq mk --table \
  --external_table_definition=BUCKET_PATH@REGION.CONNECTION_ID \
  --object_metadata=SIMPLE \
  --max_staleness=STALENESS_INTERVAL \
  --metadata_cache_mode=CACHE_MODE \
  PROJECT_ID:DATASET_ID.TABLE_NAME
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset to contain the object table.
- TABLE_NAME: the name of the object table.
- REGION: the region or multi-region that contains the connection.
- CONNECTION_ID: the ID of the cloud resource connection to use with this external table. The connection determines which service account is used to read data from Cloud Storage.
  When you view the connection details in the Google Cloud console, the connection ID is the value in the last section of the fully qualified connection ID that is shown in Connection ID, for example projects/myproject/locations/connection_location/connections/myconnection.
- BUCKET_PATH: the path to the Cloud Storage bucket that contains the objects represented by the object table, in the format gs://bucket_name/[folder_name/]*.
  You can use one asterisk (*) wildcard character in each path to limit the objects included in the object table. For example, if the bucket contains several types of unstructured data, you could create the object table over only PDF objects by specifying gs://bucket_name/*.pdf. For more information, see Wildcard support for Cloud Storage URIs.
  You can specify multiple buckets for the uris option by providing multiple paths, for example gs://mybucket1/*,gs://mybucket2/folder5/*.
  For more information about using Cloud Storage URIs in BigQuery, see Cloud Storage resource path.
- STALENESS_INTERVAL: specifies whether cached metadata is used by operations against the object table, and how fresh the cached metadata must be in order for the operation to use it. For more information on metadata caching considerations, see Metadata caching for performance.
  To disable metadata caching, specify 0. This is the default.
  To enable metadata caching, specify an interval value between 30 minutes and 7 days, using the Y-M D H:M:S format described in the INTERVAL data type documentation. For example, specify 0-0 0 4:0:0 for a 4 hour staleness interval. With this value, operations against the table use cached metadata if it has been refreshed within the past 4 hours. If the cached metadata is older than that, the operation retrieves metadata from Cloud Storage instead.
- CACHE_MODE: specifies whether the metadata cache is refreshed automatically or manually. For more information on metadata caching considerations, see Metadata caching for performance.
  Set to AUTOMATIC for the metadata cache to be refreshed at a system-defined interval, usually somewhere between 30 and 60 minutes.
  Set to MANUAL if you want to refresh the metadata cache on a schedule you determine. In this case, you can call the BQ.REFRESH_EXTERNAL_METADATA_CACHE system procedure to refresh the cache.
  You must set CACHE_MODE if STALENESS_INTERVAL is set to a value greater than 0.
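The Y-M D H:M:S value for --max_staleness can be derived from a duration programmatically. The following is a minimal Python sketch, assuming the interval stays under one month (which covers the allowed 30-minute-to-7-day range), so the year and month fields are always 0-0:

```python
from datetime import timedelta

def to_bq_interval(delta):
    """Render a timedelta in the Y-M D H:M:S form used by --max_staleness.

    Sketch only: assumes intervals shorter than one month, which covers
    the 30-minute-to-7-day range allowed for metadata caching.
    """
    hours, remainder = divmod(delta.seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    return "0-0 {} {}:{}:{}".format(delta.days, hours, minutes, seconds)

print(to_bq_interval(timedelta(hours=4)))  # 0-0 0 4:0:0
print(to_bq_interval(timedelta(days=1)))   # 0-0 1 0:0:0
```

Both outputs match the interval literals used in this section and in the examples that follow.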
Examples
The following example creates an object table with a metadata cache staleness interval of 1 day:
bq mk --table \
  --external_table_definition=gs://mybucket/*@us.my-connection \
  --object_metadata=SIMPLE \
  --max_staleness='0-0 1 0:0:0' \
  --metadata_cache_mode=AUTOMATIC \
  my_dataset.object_table
The following example creates an object table over the objects in three Cloud Storage buckets:
bq mk --table \
  --external_table_definition=gs://bucket1/*,gs://bucket2/folder1/*,gs://bucket3/*@us.my-connection \
  --object_metadata=SIMPLE \
  my_dataset.object_table
The following example creates an object table over just the PDF objects in a Cloud Storage bucket:
bq mk --table \
  --external_table_definition=gs://bucket1/*.pdf@us.my-connection \
  --object_metadata=SIMPLE \
  my_dataset.object_table
API
Call the tables.insert method. Include an ExternalDataConfiguration object with the objectMetadata field set to SIMPLE in the Table resource that you pass in.

The following example shows how to call this method by using curl:
ACCESS_TOKEN=$(gcloud auth print-access-token) curl \
-H "Authorization: Bearer ${ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
-X POST \
-d '{"tableReference": {"projectId": "my_project", "datasetId": "my_dataset", "tableId": "object_table_name"}, "externalDataConfiguration": {"objectMetadata": "SIMPLE", "sourceUris": ["gs://mybucket/*"]}}' \
https://www.googleapis.com/bigquery/v2/projects/my_project/datasets/my_dataset/tables
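The JSON payload in the curl call can also be built programmatically before it is sent. The following stdlib-only sketch constructs and prints the same request body; actually POSTing it would need the Authorization header with an OAuth access token, as in the curl example, and is not shown here.

```python
import json

# Request body for tables.insert, mirroring the curl example above.
# The project, dataset, and table names are placeholders.
body = {
    "tableReference": {
        "projectId": "my_project",
        "datasetId": "my_dataset",
        "tableId": "object_table_name",
    },
    "externalDataConfiguration": {
        "objectMetadata": "SIMPLE",
        "sourceUris": ["gs://mybucket/*"],
    },
}

print(json.dumps(body, indent=2))
```

Building the body as a dict and serializing it with json.dumps avoids the quoting mistakes that are easy to make when editing the inline JSON string in a shell command.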
Terraform
This example creates an object table with metadata caching enabled with manual refresh.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
The key fields to specify for an object table are google_bigquery_table.external_data_configuration.object_metadata, google_bigquery_table.external_data_configuration.metadata_cache_mode, and google_bigquery_table.max_staleness. For more information on each resource, see the Terraform BigQuery documentation.
To apply your Terraform configuration in a Google Cloud project, complete the steps in the following sections.
Prepare Cloud Shell
- Launch Cloud Shell.
- Set the default Google Cloud project where you want to apply your Terraform configurations. You only need to run this command once per project, and you can run it in any directory.
  export GOOGLE_CLOUD_PROJECT=PROJECT_ID
  Environment variables are overridden if you set explicit values in the Terraform configuration file.
Prepare the directory
Each Terraform configuration file must have its own directory (also called a root module).
- In Cloud Shell, create a directory and a new file within that directory. The filename must have the .tf extension, for example main.tf. In this tutorial, the file is referred to as main.tf.
  mkdir DIRECTORY && cd DIRECTORY && touch main.tf
- If you are following a tutorial, you can copy the sample code in each section or step. Copy the sample code into the newly created main.tf. Optionally, copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.
- Review and modify the sample parameters to apply to your environment.
- Save your changes.
- Initialize Terraform. You only need to do this once per directory.
  terraform init
  Optionally, to use the latest Google provider version, include the -upgrade option:
  terraform init -upgrade
Apply the changes
- Review the configuration and verify that the resources that Terraform is going to create or update match your expectations:
  terraform plan
  Make corrections to the configuration as necessary.
- Apply the Terraform configuration by running the following command and entering yes at the prompt:
  terraform apply
  Wait until Terraform displays the "Apply complete!" message.
- Open your Google Cloud project to view the results. In the Google Cloud console, navigate to your resources in the UI to make sure that Terraform has created or updated them.
Query object tables
You can query an object table like any other BigQuery table. For example:
SELECT * FROM mydataset.myobjecttable;
Querying an object table returns metadata for the underlying objects. For more information, see Object table schema.
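Because an object table query returns one metadata row per object, the results are straightforward to post-process. In the following sketch, the rows are hypothetical Python dicts shaped like the uri, content_type, and size columns described in Object table schema (the values are invented for illustration), and the snippet totals object sizes per content type:

```python
from collections import defaultdict

# Hypothetical rows, shaped like object-table query results.
rows = [
    {"uri": "gs://mybucket/a.pdf", "content_type": "application/pdf", "size": 1200},
    {"uri": "gs://mybucket/b.pdf", "content_type": "application/pdf", "size": 800},
    {"uri": "gs://mybucket/c.png", "content_type": "image/png", "size": 500},
]

# Sum object sizes per content type.
total_size = defaultdict(int)
for row in rows:
    total_size[row["content_type"]] += row["size"]

print(dict(total_size))  # {'application/pdf': 2000, 'image/png': 500}
```

The same aggregation can of course be pushed into the query itself with GROUP BY; this sketch only illustrates the shape of the metadata that comes back.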
What's next
- Learn how to run inference on image object tables.
- Learn how to analyze object tables by using remote functions.