
BigQuery MCP Server Cursor IDE Setup 2026: Query Your Data Warehouse with AI

Connect Google BigQuery to Cursor IDE using MCP. Run SQL, explore datasets, and analyze billions of rows from Composer — full setup guide with IAM config.

By Web MCP Guide · April 6, 2026 · 6 min read



BigQuery holds the kind of data that shapes engineering decisions — event logs, product analytics, financial records, and more. Connecting it to Cursor via MCP means you can ask natural language questions against your data warehouse and get SQL results without leaving your editor. This guide covers the full setup.

What This Integration Enables

With BigQuery MCP running in Cursor, you can:

  • Query any dataset in your GCP project using natural language

  • Get schema information for tables and views

  • Run exploratory SQL directly from Composer

  • Validate data transformations before committing to code

  • Check row counts, column distributions, and data freshness

• Cross-reference live BigQuery data while writing dbt models or pipelines

Prerequisites


  • Cursor IDE 0.40+

  • Node.js 18+ or Python 3.10+

  • Google Cloud account with at least one BigQuery dataset

  • gcloud CLI installed and authenticated

• A service account or ADC credentials with BigQuery access

Step 1: Set Up Google Cloud Authentication

    The BigQuery MCP server uses Application Default Credentials (ADC). The easiest path:

    gcloud auth application-default login

    This opens a browser and stores credentials at ~/.config/gcloud/application_default_credentials.json. The MCP server picks these up automatically.
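To confirm ADC is working before touching Cursor, you can ask for a token from the terminal:

# Should print an access token; an error means you need to log in again
gcloud auth application-default print-access-token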

    Alternatively, create a service account:

    1. In Google Cloud Console, go to IAM & Admin → Service Accounts.
    2. Click Create Service Account → name it "cursor-mcp".
    3. Grant the roles BigQuery Data Viewer plus BigQuery Job User (read-only querying), or BigQuery User + BigQuery Data Editor (queries and writes). Data Viewer alone can read tables but cannot start query jobs.
    4. Click Done, then open the account → Keys → Add Key → JSON.
    5. Download the JSON key file and store it securely.
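If you prefer the terminal, the same steps can be scripted with gcloud (a sketch; replace your-gcp-project-id with your real project ID):

# Create the service account
gcloud iam service-accounts create cursor-mcp \
  --project=your-gcp-project-id \
  --display-name="cursor-mcp"

# Grant read-only data access plus permission to run query jobs
gcloud projects add-iam-policy-binding your-gcp-project-id \
  --member="serviceAccount:cursor-mcp@your-gcp-project-id.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataViewer"

gcloud projects add-iam-policy-binding your-gcp-project-id \
  --member="serviceAccount:cursor-mcp@your-gcp-project-id.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"

# Create and download a JSON key
gcloud iam service-accounts keys create ~/keys/cursor-mcp-key.json \
  --iam-account=cursor-mcp@your-gcp-project-id.iam.gserviceaccount.com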

    Step 2: Install the BigQuery MCP Server

    The most widely used option is the community @dataengineeringwithalex/mcp-bigquery server:

    npm install -g @dataengineeringwithalex/mcp-bigquery

    Or run it with npx (no global install):

    npx @dataengineeringwithalex/mcp-bigquery --help

    Alternatively, there's a Python-based option:

    pip install mcp-server-bigquery

    Step 3: Configure Cursor

    Edit ~/.cursor/mcp.json (macOS/Linux) or %USERPROFILE%\.cursor\mcp.json (Windows):

    Using ADC (recommended):

{
  "mcpServers": {
    "bigquery": {
      "command": "npx",
      "args": ["-y", "@dataengineeringwithalex/mcp-bigquery"],
      "env": {
        "GOOGLE_CLOUD_PROJECT": "your-gcp-project-id"
      }
    }
  }
}

    Using a service account key:

{
  "mcpServers": {
    "bigquery": {
      "command": "npx",
      "args": ["-y", "@dataengineeringwithalex/mcp-bigquery"],
      "env": {
        "GOOGLE_CLOUD_PROJECT": "your-gcp-project-id",
        "GOOGLE_APPLICATION_CREDENTIALS": "/absolute/path/to/service-account-key.json"
      }
    }
  }
}

    Replace your-gcp-project-id with your actual GCP project ID (visible in the Cloud Console header).
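If you are not sure which project ID is active locally, gcloud can tell you:

gcloud config get-value project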

    Step 4: Restart and Verify

    Quit Cursor completely and reopen it. Go to Settings → Features → MCP and confirm the "bigquery" server appears with a green status.

    In Composer, test with:

    List the datasets available in my BigQuery project

    If you get a list of datasets back, you're connected.
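If nothing comes back, check whether the problem is Cursor or the credentials. The same listing works from a terminal, independently of the editor:

# If this succeeds but Cursor fails, suspect mcp.json rather than GCP
bq ls --project_id=your-gcp-project-id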

    Real-World Use Cases

    Exploratory Data Analysis

    You're working on a feature that reads from a BigQuery table but you're not sure of the schema:

    Describe the schema for the events.user_sessions table in BigQuery

    Show me a sample of 10 rows from events.user_sessions 
    where created_at is in the last 7 days
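Under the hood, these prompts translate to ordinary Standard SQL. A rough bq-CLI equivalent of the second prompt (a sketch, assuming created_at is a TIMESTAMP column):

bq query --use_legacy_sql=false '
  SELECT *
  FROM `events.user_sessions`
  WHERE created_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  LIMIT 10'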

    Debugging a Data Pipeline

    Your dbt model is producing unexpected output. Ask Cursor:

    Query BigQuery: SELECT COUNT(*) AS total, status, COUNT(DISTINCT user_id) AS unique_users
    FROM analytics.orders
    WHERE created_at >= '2026-04-01'
    GROUP BY status
    ORDER BY total DESC

    Validating a Migration

    Before deploying a schema change:

    Check how many rows in bigquery table analytics.events 
    have a null value in the session_id column
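The generated check is usually a single aggregate query. A sketch of what it might look like via the bq CLI (COUNTIF is standard BigQuery SQL):

bq query --use_legacy_sql=false '
  SELECT
    COUNTIF(session_id IS NULL) AS null_session_ids,
    COUNT(*) AS total_rows
  FROM `analytics.events`'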

    Writing Queries Based on Business Questions

    Write a BigQuery SQL query that shows me daily active users 
    for the past 30 days from the analytics.events table,
    using the user_id field and event_date timestamp

    Cursor writes the SQL, and you can then ask it to execute the query through the MCP server.
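For reference, the SQL Cursor typically produces for this prompt is close to the following (a sketch, assuming event_date is a TIMESTAMP; adjust if it is a DATE):

bq query --use_legacy_sql=false '
  SELECT
    DATE(event_date) AS day,
    COUNT(DISTINCT user_id) AS daily_active_users
  FROM `analytics.events`
  WHERE event_date >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  GROUP BY day
  ORDER BY day'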

    Troubleshooting

    Authentication errors


  • Run gcloud auth application-default print-access-token — if it errors, re-run gcloud auth application-default login.

  • For service account: verify the JSON file path is absolute, not relative.

• Confirm GOOGLE_CLOUD_PROJECT matches your project ID exactly as shown in the GCP console (project IDs are always lowercase).

"Access Denied" on specific tables


  • Check IAM permissions on the dataset level (not just project level). BigQuery permissions cascade from project → dataset → table.

  • For views, the service account also needs read access to the underlying source tables.

• Confirm the role is BigQuery Data Viewer at minimum (plus BigQuery Job User to run queries).

Query timeout


• BigQuery MCP servers typically use the Jobs API, which has a default timeout. Try adding "BIGQUERY_TIMEOUT": "60" to your env block (the exact variable name may vary by server; check its README).

• Very large queries may hit memory limits in the MCP server — add a LIMIT clause to your prompt.

Server starts but no tools appear


  • Some BigQuery MCP versions require GCLOUD_PROJECT instead of GOOGLE_CLOUD_PROJECT. Try both.

  • Run the server manually: npx @dataengineeringwithalex/mcp-bigquery and check for startup errors.
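When you run it manually, pass the same environment Cursor would (substitute your real project ID):

# The server should start and wait for MCP messages on stdin;
# an immediate exit usually points to an auth or project-ID problem
GOOGLE_CLOUD_PROJECT=your-gcp-project-id npx -y @dataengineeringwithalex/mcp-bigquery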

Results are truncated


  • The MCP protocol has message size limits. For large result sets, ask for aggregated summaries instead of raw row dumps.

• Use LIMIT 100 explicitly in your prompts for exploratory queries.

Cost Awareness

    BigQuery charges based on the bytes each query scans. When running queries through MCP:

  • Use SELECT specific_columns instead of SELECT * to reduce bytes scanned.

  • Add date partition filters — BigQuery uses partition pruning to cut costs dramatically.

  • Preview table size with SELECT COUNT(*) before running wide queries.

• Set a daily query spend cap in GCP under IAM & Admin → Quotas (the BigQuery "Query usage per day" quota).

A well-structured AI query should cost less than $0.01. Unfiltered table scans on terabyte datasets can be expensive.
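Before running anything wide, you can estimate the bytes a query would scan without executing it. The bq CLI's --dry_run flag validates the query and reports the estimate, with nothing billed:

# Validates the query and reports estimated bytes processed; nothing runs
bq query --dry_run --use_legacy_sql=false '
  SELECT user_id, event_date
  FROM `analytics.events`
  WHERE event_date >= TIMESTAMP("2026-04-01")'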

    Using Multiple GCP Projects

    If you work across multiple GCP projects, configure separate MCP servers:

{
  "mcpServers": {
    "bigquery-prod": {
      "command": "npx",
      "args": ["-y", "@dataengineeringwithalex/mcp-bigquery"],
      "env": {
        "GOOGLE_CLOUD_PROJECT": "my-prod-project"
      }
    },
    "bigquery-dev": {
      "command": "npx",
      "args": ["-y", "@dataengineeringwithalex/mcp-bigquery"],
      "env": {
        "GOOGLE_CLOUD_PROJECT": "my-dev-project"
      }
    }
  }
}

    Then reference them explicitly: "Query bigquery-dev for the orders table schema."

    Security


  • Store service account JSON keys outside your project directories.

  • Never commit mcp.json — add it to .gitignore.

  • Use read-only roles (BigQuery Data Viewer) unless you need to write or create tables.

  • Rotate service account keys regularly in the GCP console.

  • Consider using Workload Identity Federation instead of JSON keys for production environments.
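A lighter key-less option for local development is ADC with service account impersonation (a sketch; your user account needs the Service Account Token Creator role on the cursor-mcp account from Step 1):

# Mints short-lived tokens for cursor-mcp instead of storing a JSON key on disk
gcloud auth application-default login \
  --impersonate-service-account=cursor-mcp@your-gcp-project-id.iam.gserviceaccount.com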

Combining with Other MCP Servers

    BigQuery pairs naturally with:

  • GitHub MCP — reference your dbt model code while querying the resulting tables

  • Notion MCP — pull data from BigQuery and create Notion documentation automatically

• Slack MCP — summarize query results and post them to a data channel

See the full MCP server directory to expand your AI-powered data workflow.