Flag applied connectors export assignment data. The following connectors are available:
- BigQuery
- Redshift
- Databricks and S3
- Pub/Sub
- Kinesis
Export assignment data to a table in BigQuery.

Required GCP Roles
- BigQuery data owner for the destination table.
Configuration
- Project - The GCP project that contains the destination table.
- Service account - A GCP service account that has write access to the destination table and that you’ve configured so that the Confidence service account can impersonate it (see the sketch after this list).
- Dataset - The dataset in which to create the destination table.
- Table - The table to write the data to. The connector creates the table automatically. If the table already exists, the connector verifies that the schema matches what it expects and attempts to create any missing columns.
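As a quick way to validate this configuration, the sketch below (not part of the connector) impersonates the configured service account with the google-cloud-bigquery client and checks that the destination dataset is reachable. The project, dataset and service account names are placeholders, not values from your setup.

```python
# Pre-flight check, assuming google-cloud-bigquery and google-auth are installed.
# All identifiers below (project, dataset, service account) are placeholders.
from google.auth import default, impersonated_credentials
from google.cloud import bigquery

PROJECT = "my-gcp-project"      # GCP project from the connector configuration
DATASET = "confidence_exports"  # dataset that will hold the destination table
SERVICE_ACCOUNT = "confidence-export@my-gcp-project.iam.gserviceaccount.com"

# Impersonate the service account the connector is configured with.
source_creds, _ = default()
creds = impersonated_credentials.Credentials(
    source_credentials=source_creds,
    target_principal=SERVICE_ACCOUNT,
    target_scopes=["https://www.googleapis.com/auth/bigquery"],
)

client = bigquery.Client(project=PROJECT, credentials=creds)

# The dataset must already exist; the connector creates the table on first export.
dataset = client.get_dataset(f"{PROJECT}.{DATASET}")
print("Dataset reachable:", dataset.full_dataset_id)
```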
Export assignment data to a table in Redshift by first writing the data as Parquet files to an S3 bucket and then importing these files into the configured table.

Required AWS Permissions/Policies for Role
- s3:GetObject
- s3:GetObjectAcl
- s3:PutObject
- s3:PutObjectAcl
- AmazonRedshiftDataFullAccess policy
- AmazonRedshiftAllCommandsFullAccess policy
The role also needs to have permissions to create tables and insert rows into those tables in the configured database.

Configuration
- Table - The name of the Redshift table to use for writing assignment data. The connector creates the table automatically when the first import is done.
- Cluster - The name of the Redshift cluster to use.
- Database - The name of the Redshift database to use.
- Schema - The name of the Redshift schema where the connector creates the preceding table.
- Redshift region - The AWS region of the cluster. Because of AWS limitations, the region of the cluster needs to match the region of the S3 bucket.
- Redshift Role ARN - The role to use for the Redshift COPY jobs. This role needs permission to create tables and copy data into tables in the configured schema, and to load files from the S3 bucket (the COPY step is illustrated in the sketch after this list).
- Bucket - The S3 bucket to write the Parquet files to.
- Bucket Role ARN - The role Confidence assumes when writing files to the S3 bucket.
- Bucket Region - The AWS region of the bucket. The bucket needs to be in the same region as the Redshift cluster.
- Batch settings - These settings control the size and max age of the Parquet files written to S3.
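The sketch below is only an illustration of the access the two roles need, not the connector itself. Assuming boto3, it assumes the bucket role to write a test object to S3 and then issues a COPY through the Redshift Data API using the Redshift role. Every name, ARN, database user and S3 prefix is a placeholder.

```python
# Mimics the connector's two-step flow (Parquet files to S3, then COPY into Redshift)
# to confirm the configured roles and names line up. All values are placeholders.
import boto3

BUCKET = "confidence-redshift-staging"
BUCKET_ROLE_ARN = "arn:aws:iam::123456789012:role/confidence-s3-writer"
REDSHIFT_ROLE_ARN = "arn:aws:iam::123456789012:role/confidence-redshift-copy"
CLUSTER, DATABASE, SCHEMA, TABLE = "analytics", "prod", "confidence", "assignments"

# 1) Assume the bucket role and write a test object, as the connector does for Parquet files.
sts = boto3.client("sts")
creds = sts.assume_role(RoleArn=BUCKET_ROLE_ARN, RoleSessionName="confidence-check")["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.put_object(Bucket=BUCKET, Key="connectivity-check/ok.txt", Body=b"ok")

# 2) Issue a COPY through the Redshift Data API using the Redshift role.
rsd = boto3.client("redshift-data")
rsd.execute_statement(
    ClusterIdentifier=CLUSTER,
    Database=DATABASE,
    DbUser="awsuser",  # placeholder database user
    Sql=f"COPY {SCHEMA}.{TABLE} FROM 's3://{BUCKET}/exports/' "
        f"IAM_ROLE '{REDSHIFT_ROLE_ARN}' FORMAT AS PARQUET;",
)
```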
Export assignment data to a table in Databricks by first writing the data as Parquet files to an S3 bucket and then importing these files into the specified table.

Required AWS Permissions for Role
- s3:GetObject
- s3:GetObjectAcl
- s3:PutObject
- s3:PutObjectAcl
Configuration
- Table - The name of the Databricks table to use for writing assignment data. The connector creates the table automatically when the first import is done.
- Schema - The name of the Databricks schema/catalog where the connector creates the preceding table.
- Databricks HTTP path - The HTTP path to use for the Databricks JDBC connection, available in the connection details for the cluster.
- Databricks Access Token - An access token that has write access to the configured table.
- Role ARN - The ARN of the AWS role that has read/write access to the S3 bucket. You need to configure the role with a trust relationship so that the Confidence service account can assume it.
- Bucket - The S3 bucket to write the Parquet files to.
- Databricks host - The hostname of the Databricks instance, for example xx.x.gcp.databricks.com (see the connectivity sketch after this list).
- Batch settings - These settings control the size of the Parquet files written to S3.
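To sanity-check the Databricks host, HTTP path and access token, a small connectivity sketch using the databricks-sql-connector package might look like the following. The host, HTTP path, token and schema/table names are placeholders from an assumed configuration.

```python
# Connectivity check, assuming the databricks-sql-connector package is installed.
from databricks import sql

HOST = "xx.x.gcp.databricks.com"          # Databricks host (placeholder)
HTTP_PATH = "/sql/1.0/warehouses/abc123"  # Databricks HTTP path (placeholder)
TOKEN = "dapiXXXXXXXX"                    # Databricks access token (placeholder)
SCHEMA = "confidence"                     # schema from the connector configuration

with sql.connect(server_hostname=HOST, http_path=HTTP_PATH, access_token=TOKEN) as conn:
    with conn.cursor() as cursor:
        # The connector creates the table on the first import, so it may not exist yet;
        # listing tables in the schema is enough to prove the credentials work.
        cursor.execute(f"SHOW TABLES IN {SCHEMA}")
        for row in cursor.fetchall():
            print(row)
```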
Forward assignment data to a Pub/Sub topic.

Required GCP Roles
- Pub/Sub Publisher on the configured topic.
Configuration
- Event Type - The event type to export to the topic.
- Project - The GCP project that the topic exists in.
- Service account - A GCP service account that has publish permissions to the configured topic and that you’ve configured so that the Confidence service account can impersonate this account.
- Topic - The name of the topic.
- Output format - Whether the connector should write events in JSON or Protobuf binary format. Regardless of the format chosen, the connector wraps each event in a CloudEvents envelope (see the consumer sketch after this list).
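A downstream consumer might look like the sketch below, which assumes a subscription attached to the configured topic, JSON output format, and the google-cloud-pubsub client. The project and subscription names are placeholders, and the exact CloudEvents field layout is whatever the connector actually emits.

```python
# Minimal consumer sketch; project and subscription names are placeholders.
import json
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

PROJECT = "my-gcp-project"                    # GCP project from the connector configuration
SUBSCRIPTION = "confidence-assignments-sub"   # a subscription attached to the topic

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    # With JSON output format, the payload is a CloudEvents envelope.
    envelope = json.loads(message.data)
    print(envelope.get("type"), envelope.get("data"))
    message.ack()

streaming_pull = subscriber.subscribe(sub_path, callback=callback)
with subscriber:
    try:
        streaming_pull.result(timeout=30)  # listen for 30 seconds, then stop
    except TimeoutError:
        streaming_pull.cancel()
        streaming_pull.result()            # block until shutdown completes
```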
Forward assignment data to a Kinesis stream.

Required AWS Permissions
- kinesis:PutRecord
- kinesis:PutRecords
- kinesis:PutRecordBatch
- kinesis:DescribeStream
Configuration
- Event Type - The event type to export to the stream.
- Role ARN - The ARN for the AWS role that has read and write access to the Kinesis stream.
- Region - The AWS region the Kinesis stream exists in.
- Stream - The Kinesis stream name.
- Output format - Whether the connector should write events in JSON or Protobuf binary format. Regardless of the format chosen, the connector wraps each event in a CloudEvents envelope (see the consumer sketch below).
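A downstream consumer of the stream might look like the sketch below, which uses boto3 to read a few records from one shard. The stream name and region are placeholders, and the reader needs its own read permissions (kinesis:DescribeStream, kinesis:GetShardIterator, kinesis:GetRecords) in addition to the write permissions listed above for Confidence.

```python
# Minimal consumer sketch; stream name and region are placeholders.
import boto3

STREAM = "confidence-assignments"
REGION = "eu-west-1"

kinesis = boto3.client("kinesis", region_name=REGION)

# Find the first shard and start reading from its oldest record.
shard_id = kinesis.describe_stream(StreamName=STREAM)["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=STREAM, ShardId=shard_id, ShardIteratorType="TRIM_HORIZON"
)["ShardIterator"]

records = kinesis.get_records(ShardIterator=iterator, Limit=10)["Records"]
for record in records:
    # With JSON output format each record's Data field is a CloudEvents envelope.
    print(record["Data"])
```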