BigQuery
To set up an automatic import of your usage data from your BigQuery instance into Sequence, you will need:
- A dedicated source dataset in BigQuery to host the usage data to send to Sequence
- A Cloud Storage bucket to temporarily stage data during imports
- A service account with:
- Read access to the source dataset
- Write access to the staging bucket
- A linked Sequence-provided principal
Step 1: Create a service account
- In the GCP console, in the same project as your BigQuery instance, navigate to the IAM & Admin menu and click into the Service Accounts tab

- Click Create service account at the top of the menu and give it a name

- In the second step, grant the service account the role BigQuery User. Make a note of the resulting service account email, e.g.
import-service-account@<your_project_id>.iam.gserviceaccount.com
as you will need it in the following steps
- Once your Service Account is created, click on Manage permissions under the Actions menu

- Click on Grant Access

- Add the following principal
datasync-bobldvms@prql-prod.iam.gserviceaccount.com
and assign it the Service Account Token Creator role

This setup lets Sequence import your data securely without the need to share a private key, while keeping you in control of the access permissions to the resources you will configure in the next steps.
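If you prefer to script this last grant, the sketch below shows an equivalent call against the IAM v1 API using the google-api-python-client library. It is only a sketch: it assumes Application Default Credentials, and the service account name is a placeholder for whatever you chose above.

```python
# Minimal sketch: grant the Sequence-provided principal the Service Account
# Token Creator role on the import service account via the IAM v1 API.
# Assumes Application Default Credentials; names are placeholders.
import google.auth
from googleapiclient import discovery

credentials, project_id = google.auth.default()
iam = discovery.build("iam", "v1", credentials=credentials)

sa_email = f"import-service-account@{project_id}.iam.gserviceaccount.com"
resource = f"projects/{project_id}/serviceAccounts/{sa_email}"

# Read the service account's current IAM policy...
policy = iam.projects().serviceAccounts().getIamPolicy(resource=resource).execute()

# ...append the Sequence-provided principal as Token Creator...
policy.setdefault("bindings", []).append({
    "role": "roles/iam.serviceAccountTokenCreator",
    "members": ["serviceAccount:datasync-bobldvms@prql-prod.iam.gserviceaccount.com"],
})

# ...and write the updated policy back.
iam.projects().serviceAccounts().setIamPolicy(
    resource=resource, body={"policy": policy}
).execute()
```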
Step 2: Create a dataset
- Log into the Google Cloud Console and navigate to BigQuery. Click on your project and select Create dataset

- Choose an ID and a location for your dataset, then click Create dataset. Make a note of the location (region)

- Create and populate the table to be imported into Sequence

- Share the dataset with the service account created in Step 1, e.g.
import-service-account@<your_project_id>.iam.gserviceaccount.com
To do so, navigate to Permissions -> Sharing

- Once in the Permissions menu click Add Principal

- Add the service account created in Step 1 as a principal with the roles BigQuery User and BigQuery Data Viewer. Click Save
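Equivalently, the dataset can be created and shared with the BigQuery Python client. The sketch below makes assumptions: the dataset ID and location are placeholders, and the basic READER access entry stands in for the dataset-level BigQuery Data Viewer grant made in the console.

```python
# Minimal sketch: create the source dataset and grant the Step 1 service
# account read access using google-cloud-bigquery. Names are placeholders.
from google.cloud import bigquery
from google.cloud.bigquery import AccessEntry, Dataset

client = bigquery.Client()

dataset = Dataset(f"{client.project}.sequence_usage_export")
dataset.location = "EU"  # must match the staging bucket location chosen in Step 3
dataset = client.create_dataset(dataset, exists_ok=True)

# Add a dataset-level access entry for the import service account.
sa_email = f"import-service-account@{client.project}.iam.gserviceaccount.com"
entries = list(dataset.access_entries)
entries.append(AccessEntry(role="READER", entity_type="userByEmail", entity_id=sa_email))
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```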

Step 3: Create a Cloud Storage bucket
Transferring data from BigQuery requires a temporary staging area in Google Cloud Storage, where compressed data is staged before being copied to its destination.
- In the GCP console, in the same project as your BigQuery instance, navigate to the Cloud Storage menu and click on Buckets

- Create a new bucket

- Choose a name for the bucket. Click Continue. Select a location for the staging bucket. Make a note of both the name and the location (region)
The location you choose for your staging bucket must match the location of your source dataset in BigQuery
- Click Continue and complete the options that follow according to your preferences, then click Create

- On the Bucket details page that appears, click the Permissions tab and click Grant access

- In the New principals drop-down, add the service account created in Step 1, select the Storage Admin role, and click Save
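The same bucket setup can be scripted with the Cloud Storage Python client. In this sketch the bucket name and location are placeholders, and the Storage Admin role is granted through the bucket's IAM policy.

```python
# Minimal sketch: create the staging bucket and grant the Step 1 service
# account Storage Admin on it using google-cloud-storage. Names are placeholders.
from google.cloud import storage

client = storage.Client()

# The bucket location must match the location of the BigQuery dataset from Step 2.
bucket = client.create_bucket("sequence-staging-bucket-example", location="EU")

sa_email = f"import-service-account@{client.project}.iam.gserviceaccount.com"
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.admin",
    "members": {f"serviceAccount:{sa_email}"},
})
bucket.set_iam_policy(policy)
```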

Step 4: Share your details with Sequence
The last step is to share the following details with the Sequence team:
- The fully qualified BigQuery source table ID:
<project_id>.<dataset_id>.<table_name>
- The full table schema of the BigQuery source table
- The GCS bucket name created in Step 3 and its region
- The Service Account email created in Step 1, e.g.
import-service-account@<your_project_id>.iam.gserviceaccount.com
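To gather the table ID and schema to send across, a short script like the one below can help; the table reference is a placeholder for your own source table.

```python
# Minimal sketch: print the fully qualified table ID and the table schema
# so they can be shared with the Sequence team. Table name is a placeholder.
from google.cloud import bigquery

client = bigquery.Client()
table = client.get_table(f"{client.project}.sequence_usage_export.usage_events")

print(f"Table ID: {table.project}.{table.dataset_id}.{table.table_id}")
for field in table.schema:
    print(f"{field.name}: {field.field_type} ({field.mode})")
```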