To set up an automatic import of your usage data from your BigQuery instance into Sequence, you will need:

  • A dedicated source dataset in BigQuery to host the usage data to send to Sequence
  • A Cloud Storage bucket to temporarily stage data during imports
  • A service account with:
    • Read access to the source dataset
    • Write access to the staging bucket
    • A linked Sequence-provided principal

Step 1: Create a service account

  1. In the GCP console, in the same project as your BigQuery instance, navigate to the IAM & Admin menu and click into the Service Accounts tab
  1. Click Create service account at the top of the menu and give it a name.
  1. In the second step, grant the service account the role BigQuery User
Make a note of the automatically generated service account ID, for example: import-service-account@<your_project_id>.iam.gserviceaccount.com
  1. Once your Service Account is created, click on Manage permissions under the Actions menu
  1. Click on Grant Access
  1. Add the following principal datasync-bobldvms@prql-prod.iam.gserviceaccount.com and assign it the Service Account Token Creator role

This setup lets Sequence import your data securely without the need to share a private key, while keeping you in control of the access permissions to the resources you will configure in the next steps.
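
If you prefer to script this step rather than use the console, the sketch below shows roughly equivalent calls with the google-api-python-client library. The project ID, account ID, and display name are placeholders to replace with your own values; the project-level BigQuery User grant from the creation wizard is not shown. This is a minimal sketch assuming Application Default Credentials with permission to manage service accounts.

```python
# Minimal sketch of Step 1 (pip install google-api-python-client).
from googleapiclient import discovery

project_id = "your_project_id"             # placeholder
account_id = "import-service-account"      # placeholder account name

iam = discovery.build("iam", "v1")

# 1. Create the service account.
sa = iam.projects().serviceAccounts().create(
    name=f"projects/{project_id}",
    body={
        "accountId": account_id,
        "serviceAccount": {"displayName": "Sequence import service account"},
    },
).execute()
print("Service account ID:", sa["email"])  # note this down for the next steps

# 2. Grant the Sequence-provided principal the Service Account Token Creator role
#    on the new service account (the console's Manage permissions -> Grant Access flow).
resource = f"projects/{project_id}/serviceAccounts/{sa['email']}"
policy = iam.projects().serviceAccounts().getIamPolicy(resource=resource).execute()
policy.setdefault("bindings", []).append({
    "role": "roles/iam.serviceAccountTokenCreator",
    "members": ["serviceAccount:datasync-bobldvms@prql-prod.iam.gserviceaccount.com"],
})
iam.projects().serviceAccounts().setIamPolicy(
    resource=resource, body={"policy": policy}
).execute()
```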

Step 2: Create a dataset

While this step is optional, we recommend that you use a separate dataset for the data you share with Sequence.
  1. Log into the Google Cloud Console and navigate to BigQuery. Click on your project and select Create dataset
  1. Choose an ID and a Location for your dataset, then click Create dataset. Make a note of the location (region)
  1. Create and populate the table to be imported into Sequence

  1. Share the dataset with the service account created in Step 1, e.g. import-service-account@<your_project_id>.iam.gserviceaccount.com. To do so, navigate to Sharing -> Permissions

  1. Once in the Permissions menu, click Add Principal
  1. Add the service account created in Step 1 as a principal with the roles BigQuery User and BigQuery Data Viewer. Click Save
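
The same dataset setup can be scripted with the google-cloud-bigquery client. In the minimal sketch below, the dataset ID, location, and service account email are placeholders; the "READER" access entry is the dataset-level equivalent of the BigQuery Data Viewer role granted in the console (the project-level BigQuery User role is not shown).

```python
# Minimal sketch of Step 2 (pip install google-cloud-bigquery).
from google.cloud import bigquery

project_id = "your_project_id"                        # placeholder
dataset_id = f"{project_id}.sequence_usage"           # placeholder dataset ID
service_account = f"import-service-account@{project_id}.iam.gserviceaccount.com"

client = bigquery.Client(project=project_id)

# Create the dataset; note the location, it must match the Step 3 staging bucket.
dataset = bigquery.Dataset(dataset_id)
dataset.location = "EU"                               # placeholder region
dataset = client.create_dataset(dataset, exists_ok=True)

# Share the dataset with the service account from Step 1.
# "READER" is the dataset-level access equivalent of BigQuery Data Viewer.
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER", entity_type="userByEmail", entity_id=service_account
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```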

Step 3: Create a Cloud Storage bucket

Transferring data out of BigQuery requires a temporary staging area in Google Cloud Storage, where compressed data is written before it is copied to its destination.
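
For context, the kind of export that uses this staging area looks roughly like the sketch below; it runs on Sequence's side, so you do not need to set it up yourself. The table ID and bucket name are placeholders.

```python
# Illustrative only: a BigQuery extract job that stages a table as compressed
# files in Cloud Storage. The table ID and bucket name are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
job = client.extract_table(
    "your_project_id.sequence_usage.usage_events",      # placeholder source table
    "gs://sequence-import-staging/exports/usage-*.csv.gz",  # placeholder staging bucket
    job_config=bigquery.ExtractJobConfig(compression="GZIP"),
)
job.result()  # waits for the export to finish
```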

  1. In the GCP console, in the same project as your BigQuery instance, navigate to the Cloud Storage menu and click on Buckets
  1. Create a new bucket
  1. Choose a name for the bucket. Click Continue. Select a location for the staging bucket. Make a note of both the name and the location (region)

The location you choose for your staging bucket must match the location of your source dataset in BigQuery (Step 2).

  1. Click Continue and complete the options that follow according to your preferences, then click Create

  1. On the Bucket details page that appears, click the Permissions tab and click Grant access

  1. In the New principals drop-down, add the Service Account created in Step 1, select the Storage Admin role, and click Save
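
A scripted version of this step, using the google-cloud-storage client, could look like the minimal sketch below. The bucket name, region, and service account email are placeholders; the region must match the dataset location from Step 2.

```python
# Minimal sketch of Step 3 (pip install google-cloud-storage).
from google.cloud import storage

project_id = "your_project_id"                        # placeholder
bucket_name = "sequence-import-staging"               # placeholder bucket name
service_account = f"import-service-account@{project_id}.iam.gserviceaccount.com"

client = storage.Client(project=project_id)

# Create the staging bucket in the same region as the Step 2 dataset.
bucket = client.create_bucket(bucket_name, location="EU")   # placeholder region

# Grant the Step 1 service account Storage Admin on this bucket only.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {
        "role": "roles/storage.admin",
        "members": {f"serviceAccount:{service_account}"},
    }
)
bucket.set_iam_policy(policy)
```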

Step 4: Share your details with Sequence

The last step is to share the following details with the Sequence team:

  • The fully qualified BigQuery source table ID: <project_id>.<dataset_id>.<table_name>
  • The full table schema of the BigQuery source table
  • The GCS bucket name created in Step 3 and its region
  • The Service Account email created in Step 1, e.g. import-service-account@<your_project_id>.iam.gserviceaccount.com
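
If it helps with gathering these details, a quick way to print the fully qualified table ID and the table schema is sketched below; the table ID is a placeholder to replace with your own.

```python
# Print the fully qualified table ID and the schema of the source table.
from google.cloud import bigquery

client = bigquery.Client()
table = client.get_table("your_project_id.sequence_usage.usage_events")  # placeholder

print(f"Source table: {table.project}.{table.dataset_id}.{table.table_id}")
for field in table.schema:
    print(f"{field.name}  {field.field_type}  {field.mode}")
```
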
That’s it! You are now ready to automatically import usage data into Sequence 🎉