BigQuery
Automatically ingest usage events from your data warehouse.
To set up an automatic import of your usage data from BigQuery into Sequence, you will need:
- A dedicated source dataset in BigQuery to host the usage data to send to Sequence
- A Cloud Storage bucket to temporarily stage data during imports
- A service account with:
  - Read access to the source dataset
  - Write access to the staging bucket
  - A linked Sequence-provided principal
Step 1: Create a service account
- In the GCP console, in the same project as your BigQuery instance, navigate to the IAM & Admin menu and click into the Service Accounts tab
![](https://mintlify.s3-us-west-1.amazonaws.com/sequence/images/integrations/bigquery_01.png)
- Click Create service account at the top of the menu and give it a name
![](https://mintlify.s3-us-west-1.amazonaws.com/sequence/images/integrations/bigquery_02.png)
- In the second step, grant the service account the role BigQuery User
![](https://mintlify.s3-us-west-1.amazonaws.com/sequence/images/integrations/bigquery_03.png)
- Make a note of the service account email, e.g. `import-service-account@<your_project_id>.iam.gserviceaccount.com`
- Once your Service Account is created, click on Manage permissions under the Actions menu
![](https://mintlify.s3-us-west-1.amazonaws.com/sequence/images/integrations/bigquery_04.png)
- Click on Grant Access
![](https://mintlify.s3-us-west-1.amazonaws.com/sequence/images/integrations/bigquery_05.png)
- Add the principal `datasync-bobldvms@prql-prod.iam.gserviceaccount.com` and assign it the Service Account Token Creator role
![](https://mintlify.s3-us-west-1.amazonaws.com/sequence/images/integrations/bigquery_06.png)
This setup lets Sequence import your data securely without the need to share a private key, while keeping you in control of the access permissions to the resources you will configure in the next steps.
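If you prefer the command line, the steps above can be sketched with `gcloud`. The project ID `your-project-id` and the service account name `import-service-account` are placeholders; substitute your own values.

```shell
# Create the service account in the same project as your BigQuery instance
gcloud iam service-accounts create import-service-account \
  --project=your-project-id \
  --display-name="Sequence import"

# Grant it the BigQuery User role on the project
gcloud projects add-iam-policy-binding your-project-id \
  --member="serviceAccount:import-service-account@your-project-id.iam.gserviceaccount.com" \
  --role="roles/bigquery.user"

# Allow the Sequence-provided principal to impersonate this service account
gcloud iam service-accounts add-iam-policy-binding \
  import-service-account@your-project-id.iam.gserviceaccount.com \
  --project=your-project-id \
  --member="serviceAccount:datasync-bobldvms@prql-prod.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountTokenCreator"
```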
Step 2: Create a dataset
- Log into the Google Cloud Console and navigate to BigQuery. Click on your project and select Create dataset
![](https://mintlify.s3-us-west-1.amazonaws.com/sequence/images/integrations/bigquery_07.png)
- Choose an ID and a location for your dataset, then click Create dataset. Make a note of the location (region)
![](https://mintlify.s3-us-west-1.amazonaws.com/sequence/images/integrations/bigquery_08.png)
- Create and populate the table to be imported into Sequence
- Share the dataset with the service account created in Step 1, e.g. `import-service-account@<your_project_id>.iam.gserviceaccount.com`. To do so, navigate to Permissions -> Sharing
![](https://mintlify.s3-us-west-1.amazonaws.com/sequence/images/integrations/bigquery_09.png)
- Once in the Permissions menu click Add Principal
![](https://mintlify.s3-us-west-1.amazonaws.com/sequence/images/integrations/bigquery_10.png)
- Add the service account created in Step 1 as principal with the roles of BigQuery User and BigQuery Data Viewer. Click Save
![](https://mintlify.s3-us-west-1.amazonaws.com/sequence/images/integrations/bigquery_11.png)
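Equivalently, the dataset can be created and shared from the command line with `bq` (the dataset ID `sequence_export`, location `EU`, and project ID are placeholders; adjust to your environment):

```shell
# Create the source dataset; note the location, it must match the staging bucket
bq mk --dataset --location=EU your-project-id:sequence_export

# Grant the service account read access on the dataset via BigQuery SQL
bq query --use_legacy_sql=false '
  GRANT `roles/bigquery.dataViewer`
  ON SCHEMA `your-project-id`.sequence_export
  TO "serviceAccount:import-service-account@your-project-id.iam.gserviceaccount.com"'
```

The project-level BigQuery User role was already granted in Step 1, so only dataset-level read access needs to be added here.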
Step 3: Create a Cloud Storage bucket
Transferring data from BigQuery requires a temporary staging area within Google Cloud Storage to stage compressed data before copying to a destination.
- In the GCP console, in the same project as your BigQuery instance, navigate to the Cloud Storage menu and click on Buckets
![](https://mintlify.s3-us-west-1.amazonaws.com/sequence/images/integrations/bigquery_12.png)
- Create a new bucket
![](https://mintlify.s3-us-west-1.amazonaws.com/sequence/images/integrations/bigquery_13.png)
- Choose a name for the bucket. Click Continue. Select a location for the staging bucket. Make a note of both the name and the location (region)
Note: the location you choose for your staging bucket must match the location of your source dataset in BigQuery
- Click Continue and complete the options that follow according to your preferences, then click Create
- On the Bucket details page that appears, click the Permissions tab and click Grant access
![](https://mintlify.s3-us-west-1.amazonaws.com/sequence/images/integrations/bigquery_14.png)
- In the New principals drop-down, add the service account created in Step 1, select the Storage Admin role, and click Save
![](https://mintlify.s3-us-west-1.amazonaws.com/sequence/images/integrations/bigquery_15.png)
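The bucket setup above can also be sketched with `gcloud storage` (the bucket name `your-staging-bucket` and location `EU` are placeholders; the location must match your dataset's):

```shell
# Create the staging bucket in the same location as the source dataset
gcloud storage buckets create gs://your-staging-bucket \
  --project=your-project-id \
  --location=EU

# Grant the service account Storage Admin on this bucket only
gcloud storage buckets add-iam-policy-binding gs://your-staging-bucket \
  --member="serviceAccount:import-service-account@your-project-id.iam.gserviceaccount.com" \
  --role="roles/storage.admin"
```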
Step 4: Sharing your details with Sequence
The last step is to share the following details with the Sequence team:
- The fully qualified BigQuery source table ID: `<project_id>.<dataset_id>.<table_name>`
- The full table schema of the BigQuery source table
- The GCS bucket name created in Step 3 and its region
- The Service Account email created in Step 1, e.g. `import-service-account@<your_project_id>.iam.gserviceaccount.com`
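To collect the schema to share, you can export it with `bq` (the dataset and table names below are placeholders):

```shell
# Print the full schema of the source table as JSON to share with Sequence
bq show --schema --format=prettyjson \
  your-project-id:sequence_export.usage_events
```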