BigQuery Data Transfer Service: Avoid the New IAM Access Denied

Data warehouses sit at the core of every effective analytics setup. According to Statista, global data generation will triple between 2025 and 2029, and in 2025 the amount of data had already reached an overwhelming 182 zettabytes.

Organisations increasingly rely on centralised platforms such as BigQuery to process this much information and support decision‑making across marketing, finance, and product teams.

For businesses relying on Google Cloud, a critical deadline has passed. On March 17, Google enforced new permission requirements for the BigQuery Data Transfer Service (DTS).

Failing to update your IAM roles will break transfer creation and update workflows.

This guide outlines the technical steps to secure your pipelines and explores the new native connectors that simplify revenue data ingestion.

What is BigQuery Data Transfer Service?

The BigQuery Data Transfer Service (DTS) automates the movement of data from various sources into BigQuery. It eliminates the need for manual coding or custom Extract, Transform, Load (ETL) scripts for supported applications.

Common sources include Google Ads, YouTube, and Google Analytics 4 (GA4).

By scheduling these transfers, businesses maintain a unified repository for their marketing and operational data.

The service handles the underlying API calls, data formatting, and backfilling. This frees up your experts to spend their time finding insights instead of fixing broken data pipelines.

The Evolution of Native Payment Data Ingestion

The service recently expanded its utility by adding PayPal and Stripe as transfer sources in preview.

Previously, merging transaction data with marketing performance required third-party pipeline tools or bespoke API integrations. These custom solutions often introduced latency and increased the cost of ownership.

The inclusion of these payment processors allows revenue, refund, and transaction data to flow directly into BigQuery alongside your GA4 events and ad spend.

This native integration enables a more accurate view of Return on Ad Spend (ROAS). 
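At its simplest, blended ROAS is revenue divided by ad spend per campaign. A minimal sketch of that calculation once both datasets land in the warehouse (the record shapes and campaign names here are hypothetical, not a DTS schema):

```python
# Minimal sketch: compute blended ROAS per campaign once payment and
# ad-spend data live side by side. Field names are hypothetical.

def blended_roas(transactions, ad_spend):
    """transactions: [{'campaign': str, 'revenue': float}, ...]
    ad_spend: {campaign: spend}. Returns {campaign: ROAS}."""
    revenue = {}
    for tx in transactions:
        revenue[tx["campaign"]] = revenue.get(tx["campaign"], 0.0) + tx["revenue"]
    return {c: revenue.get(c, 0.0) / spend
            for c, spend in ad_spend.items() if spend > 0}
```

In practice you would express the same join as a scheduled query over the transfer destination tables; the logic is identical.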

Additionally, the GA4 Data Transfer connector now supports custom report configurations. You can now define specific dimensions and metrics for your transfers, ensuring your warehouse only stores the data relevant to your business objectives.

Navigating BigQuery Data Transfer Service’s New IAM Permission Requirements

The most urgent update involves a change in the security layer. Beginning March 17, any user creating or updating a transfer configuration must possess two additional IAM permissions:

  1. bigquery.datasets.setIamPolicy
  2. bigquery.datasets.getIamPolicy

Google is implementing this change to ensure that the service can properly manage the underlying dataset permissions required for the transfer to execute.

For organisations using tightly scoped or custom IAM roles, this update is a high priority. Without these permissions, the BigQuery interface will likely return an “Access Denied” or “Forbidden” error during the setup phase.

These errors frequently lack specific detail, making them difficult to diagnose without prior knowledge of this policy shift.
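The two new permissions are easy to audit programmatically. A minimal sketch of the check, where the granted-permission list would come from your own IAM tooling (the permission strings are the real ones from the announcement):

```python
# Permissions required since March 17 for creating or updating
# DTS transfer configurations.
REQUIRED = {
    "bigquery.datasets.setIamPolicy",
    "bigquery.datasets.getIamPolicy",
}

def missing_dts_permissions(granted):
    """Return the subset of REQUIRED not present in `granted`."""
    return REQUIRED - set(granted)
```

An empty result means the principal can safely create and update transfer configurations under the new policy.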

Step-by-Step Guide to Applying IAM Permissions

Updating your permissions requires access to the Google Cloud Console with IAM Admin privileges.

Here’s how to start implementing the new IAM permissions:

1. Identify Affected Users and Service Accounts

Review your project to see who manages data transfers. This includes both human users and service accounts used by automation tools.

Use Cloud Asset Inventory or Policy Analyzer to find roles that currently lack the setIamPolicy and getIamPolicy permissions.
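Once you have exported the project's IAM policy (for example with gcloud projects get-iam-policy), the scan itself is straightforward. A local sketch, assuming you also have a map from each role to its permissions taken from your own role definitions:

```python
def principals_missing_permissions(bindings, role_permissions, required):
    """bindings: [{'role': str, 'members': [str]}], as in an exported IAM policy.
    role_permissions: {role: set of permissions} from your role definitions.
    Returns the set of members whose combined roles do not grant `required`."""
    granted = {}  # member -> permissions accumulated across all their roles
    for binding in bindings:
        perms = role_permissions.get(binding["role"], set())
        for member in binding["members"]:
            granted.setdefault(member, set()).update(perms)
    return {m for m, perms in granted.items() if not set(required) <= perms}
```

Accumulating permissions per member matters here: a principal may hold the two permissions across several narrow roles rather than in a single one.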

2. Update Custom Roles or Use Predefined Roles

If you use custom roles, add the two new permissions to the existing role definition. If you prefer using Google’s predefined roles, ensure the user has a role that includes these permissions, such as BigQuery Admin (roles/bigquery.admin).

Note that BigQuery Data Editor doesn’t inherently include IAM policy management permissions.
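If you maintain custom roles as files, the update can be as small as appending the two permissions to the role definition you pass to gcloud iam roles update with the --file flag. A sketch of such a file, where the title, description, and existing permissions are placeholders for your own role:

```yaml
# custom-dts-role.yaml (placeholder title and permission list).
# Append the two permissions required since March 17.
title: "DTS Transfer Manager"
description: "Manages BigQuery Data Transfer Service configurations"
stage: "GA"
includedPermissions:
  - bigquery.transfers.get
  - bigquery.transfers.update
  - bigquery.datasets.setIamPolicy
  - bigquery.datasets.getIamPolicy
```

Keeping role definitions in version control also gives you an audit trail for exactly when this policy change was rolled out.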

3. Apply the Permissions at the Correct Level

You should apply these permissions at the project level or, at a minimum, for the specific dataset acting as the destination for the transfer.

Applying permissions at the dataset level follows the principle of least privilege, which reduces your overall security risk.

Troubleshooting Common IAM Configuration Errors

Even with the correct permissions, several factors can disrupt your data pipelines.

  • Credential Expiration. Transfers often fail because the user who created the transfer has left the organisation or their credentials have expired. Always use a dedicated service account for critical production transfers.
  • Dataset Region Mismatch. The destination BigQuery dataset must exist in a region supported by the source. For example, some third-party transfers require the dataset to be in the US or EU multi-regions.
  • Table Schema Conflicts. If you manually modify the schema of a table managed by a transfer, the next scheduled run may fail. Avoid adding required columns to auto-generated tables.
  • API Quota Limits. Large backfills can sometimes trigger API rate limits on the source side. Monitor your transfer run logs for specific error codes related to rate limiting.
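For the rate-limit case in particular, spacing out backfill requests is usually enough. A minimal retry sketch with exponential backoff, where the fetch callable and its rate-limit exception are stand-ins for whatever source client you use:

```python
import time

class RateLimitError(Exception):
    """Stand-in for a source API's rate-limit error."""

def with_backoff(fetch, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call `fetch()`, retrying on RateLimitError with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fetch()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Injecting the sleep function keeps the helper testable; in production you would leave the default.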

5 Best Practices for Long-Term Data Pipeline Stability

Building a warehouse is a marathon, not a sprint. Follow these principles to ensure your BigQuery environment remains stable as your data volume grows:

1. Use Service Accounts for All Production Transfers

Relying on individual user accounts creates a single point of failure. Service accounts provide a stable, non-human identity that remains active regardless of staff turnover.

Ensure your service account has the necessary BigQuery Data Editor and Data Transfer User roles.

2. Monitor with Cloud Logging and Alerts

Don’t wait for a report to fail before you check your data health. This is important as Gartner reported that poor data quality in an organisation can lead to at least $12.9 million a year in losses.

Set up Cloud Monitoring alerts for BigQuery Data Transfer errors. These alerts can notify your team via email or Slack the moment a transfer fails, allowing you to re-authorise or fix the configuration before it impacts your business intelligence (BI).
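The alerting decision itself is simple to express. A local sketch, where the state names mirror the DTS run states (SUCCEEDED, FAILED, CANCELLED, PENDING, RUNNING) and the run names are hypothetical:

```python
# Sketch: decide which transfer runs warrant an alert.
# Terminal failure states for a DTS transfer run.
ALERT_STATES = {"FAILED", "CANCELLED"}

def runs_to_alert(runs):
    """runs: [{'name': str, 'state': str}]. Returns run names needing attention."""
    return [r["name"] for r in runs if r["state"] in ALERT_STATES]
```

In practice the run list would come from the transfer run logs or the Data Transfer API, and the output would feed your email or Slack notifier.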

3. Document Your Transformation Logic

BigQuery Data Transfer Service lands data in a raw format. You’ll likely build views or scheduled queries on top of this raw data.

Maintain a data dictionary that documents the transformation logic. This is essential for auditing and for onboarding new analysts.

4. Align Ingestion with Analytics Strategy

Data pipelines should support business questions rather than exist as isolated technical systems.

When organisations align ingestion architecture with reporting requirements, analysts can produce faster and more reliable insights.

5. Maintain Schema Awareness

Data sources frequently evolve. Payment platforms and advertising tools may introduce new fields or modify existing structures.

Regular schema reviews help prevent downstream analytics pipelines from breaking when source data changes.
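A lightweight way to make those reviews routine is to diff the live table schema against the column list your downstream queries expect. A minimal sketch, assuming the expected schema lives in your own configuration:

```python
def schema_drift(expected, actual):
    """Compare expected vs actual column names.
    Returns (added, removed): columns new in the source, and columns
    your queries expect that have disappeared."""
    expected_set, actual_set = set(expected), set(actual)
    return sorted(actual_set - expected_set), sorted(expected_set - actual_set)
```

Run this after each transfer (the actual column list can be read from the destination table's metadata) and alert on any non-empty result before a scheduled query breaks.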

Stay on Top of Data Transfer Configurations

The introduction of PayPal and Stripe connectors, combined with stricter IAM policies, shows Google balancing accessibility and security in BigQuery. Staying ahead of these changes prevents costly data gaps and ensures that your decision-making processes remain data-led.

Tell No Lies supports businesses and agencies in managing the technical foundations of their data environments. We focus on data architecture, pipeline reliability, and ensuring that integrations across platforms remain accurate, secure, and compliant.

Changes like IAM updates can disrupt data flow without warning. Without proper oversight, these issues can impact reporting, forecasting, and decision-making.

Speak with us at Tell No Lies about strengthening your data infrastructure and keeping your analytics running without interruption.