
Docker MCP Server Setup Guide: Run MCP Servers in Containers (2026)

Learn how to run MCP servers in Docker containers for isolated, reproducible AI agent environments. Includes Dockerfile examples, compose configs, and Cursor integration.

By Web MCP Guide · March 30, 2026 · 8 min read



Running MCP servers locally works fine when you're developing solo. But as soon as you want to share your setup with a team, deploy it to a server, or run multiple MCP servers with conflicting dependencies, you hit the limits of local installs fast. Docker solves all of these problems.

This guide covers how to containerize MCP servers with Docker, connect them to Cursor IDE or Claude Desktop, and build a reproducible multi-server setup using Docker Compose.

Why Run MCP Servers in Docker?

Local MCP server installs are quick to set up but brittle at scale:

  • Dependency conflicts: Different MCP servers may require different Node.js or Python versions

  • No isolation: A misbehaving server can affect your whole development environment

  • Not shareable: Your teammates can't just clone a config and have it work

  • Hard to version: Knowing exactly which version of what is running requires manual tracking

    Docker fixes all of this. Each MCP server runs in its own container with its own dependencies, and your entire MCP stack can be defined in a single docker-compose.yml that any teammate can spin up in minutes.

    Prerequisites


  • Docker Desktop (or Docker Engine on Linux)

  • Cursor IDE or Claude Desktop

  • Basic familiarity with Docker CLI

    Understanding How Cursor Connects to MCP Servers

    By default, Cursor launches MCP servers as local processes using stdio transport — it starts the server process and communicates over stdin/stdout. Docker adds a layer of indirection here: instead of running node server.js directly, Cursor runs docker run ... to start the container, which then runs the server inside.

    For remote/containerized setups, you can also use SSE (Server-Sent Events) transport, which lets Cursor connect to an MCP server running as a network service on a specific port.

    Both approaches are covered below.
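Stdio transport is easy to inspect by hand. The sketch below shows the shape of the opening JSON-RPC message an MCP client writes to the server's stdin (the `initialize` request); the field values are illustrative, not what Cursor literally sends:

```shell
# stdio transport = newline-delimited JSON-RPC 2.0 on stdin/stdout.
# Sketch of a client's opening "initialize" request (illustrative values):
INIT='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"example-client","version":"0.0.1"}}}'

# Sanity-check that the payload is well-formed JSON:
printf '%s\n' "$INIT" | python3 -m json.tool > /dev/null && echo "valid JSON-RPC payload"
```

Piping a message like this into `docker run --rm -i <image>` is a quick way to watch a containerized server respond without involving an IDE at all.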

    Approach 1: Docker via stdio (Simplest)

    This approach has Cursor spawn a Docker container for each MCP server, using stdio transport just like a local process. The container starts on demand and stops when Cursor closes.

    Example: GitHub MCP Server in Docker

    First, create a Dockerfile:

    FROM node:20-alpine

    RUN npm install -g @modelcontextprotocol/server-github

    ENTRYPOINT ["mcp-server-github"]

    Build the image:

    docker build -t mcp-github .

    Then configure Cursor in ~/.cursor/mcp.json:

    {
      "mcpServers": {
        "github": {
          "command": "docker",
          "args": [
            "run", "--rm", "-i",
            "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
            "mcp-github"
          ],
          "env": {
            "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token-here"
          }
        }
      }
    }

    Key flags:

  • --rm: Remove the container when it exits

  • -i: Keep stdin open (required for stdio transport)

  • -e GITHUB_PERSONAL_ACCESS_TOKEN: Pass the environment variable into the container
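You can smoke-test the image from a terminal before wiring it into Cursor. This is a sketch, assuming the `mcp-github` image built above: it pipes a hand-written `initialize` request into the container and prints the first line of the reply, and skips itself if Docker isn't on the PATH.

```shell
# Send one initialize request over stdin and read the server's reply on stdout.
INIT='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.1"}}}'

if command -v docker > /dev/null 2>&1; then
  # A dummy token is enough for the handshake; real tool calls need a valid one.
  printf '%s\n' "$INIT" \
    | docker run --rm -i -e GITHUB_PERSONAL_ACCESS_TOKEN=dummy mcp-github \
    | head -1
else
  echo "docker not found; skipping smoke test"
fi
```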

    Example: Python-based MCP Server in Docker

    For Python MCP servers, the pattern is similar:

    FROM python:3.11-slim

    RUN pip install mcp-server-filesystem

    ENTRYPOINT ["python", "-m", "mcp_server_filesystem"]

    {
      "mcpServers": {
        "filesystem": {
          "command": "docker",
          "args": [
            "run", "--rm", "-i",
            "-v", "/Users/yourname/projects:/projects:ro",
            "mcp-filesystem"
          ]
        }
      }
    }

    The -v flag mounts a local directory into the container so the filesystem server can access your files.
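One portability caveat: the hardcoded /Users/yourname path only works on one machine. A small sketch of deriving the mount flag from $HOME instead, so the same config works for any teammate (mcp-filesystem is the image name assumed above):

```shell
# Build the bind-mount argument from $HOME rather than hardcoding a user path.
PROJECTS_DIR="$HOME/projects"
MOUNT_FLAG="$PROJECTS_DIR:/projects:ro"   # :ro = read-only inside the container

echo "would run: docker run --rm -i -v $MOUNT_FLAG mcp-filesystem"
```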

    Approach 2: SSE Transport (Network Mode)

    SSE transport is better for persistent servers, shared team setups, or when you want to run MCP servers on a remote machine. The server runs as a daemon and listens on a port; Cursor connects over HTTP.

    Setting Up an SSE MCP Server

    Most MCP servers support SSE mode with a flag or environment variable. Example with a custom server:

    FROM node:20-alpine

    WORKDIR /app
    COPY package.json .
    RUN npm install
    COPY . .

    ENV MCP_TRANSPORT=sse
    ENV MCP_PORT=3100

    EXPOSE 3100

    CMD ["node", "server.js"]

    Run it:

    docker run -d -p 3100:3100 --name mcp-myserver mcp-myserver

    Configure Cursor:

    {
      "mcpServers": {
        "myserver": {
          "url": "http://localhost:3100/sse"
        }
      }
    }
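Before pointing Cursor at the URL, it's worth confirming the endpoint answers. SSE responses are streams that never close on their own, so a plain curl would hang; this sketch treats a timeout after connecting as success (port 3100 matches the run command above):

```shell
URL="http://localhost:3100/sse"

curl -s -o /dev/null --max-time 2 "$URL"
rc=$?

# curl exit codes: 0 = response completed, 28 = timed out mid-stream (normal
# for SSE, and means the server is up), 7 = connection refused.
if [ "$rc" -eq 0 ] || [ "$rc" -eq 28 ]; then
  echo "reachable: $URL"
else
  echo "not reachable (curl exit $rc): $URL"
fi
```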

    For remote servers (e.g., running on a VPS or in your team's infrastructure):

    {
      "mcpServers": {
        "myserver": {
          "url": "https://mcp.yourcompany.com/sse",
          "headers": {
            "Authorization": "Bearer your-api-key"
          }
        }
      }
    }

    Running Multiple MCP Servers with Docker Compose

    This is where Docker becomes genuinely powerful. Define your entire MCP stack in one file:

    docker-compose.yml


    services:
      mcp-github:
        build:
          context: ./servers/github
        environment:
          - GITHUB_PERSONAL_ACCESS_TOKEN=${GITHUB_TOKEN}
        ports:
          - "3101:3100"
        restart: unless-stopped

      mcp-postgres:
        image: mcp-postgres:latest
        build:
          context: ./servers/postgres
        environment:
          - DATABASE_URL=${DATABASE_URL}
        ports:
          - "3102:3100"
        restart: unless-stopped

      mcp-slack:
        build:
          context: ./servers/slack
        environment:
          - SLACK_BOT_TOKEN=${SLACK_BOT_TOKEN}
        ports:
          - "3103:3100"
        restart: unless-stopped

    (The top-level version key is omitted here; Compose v2 ignores it and warns that it is obsolete.)

    Store secrets in a .env file (and add it to .gitignore):

    GITHUB_TOKEN=ghp_xxxx
    DATABASE_URL=postgresql://user:pass@host:5432/db
    SLACK_BOT_TOKEN=xoxb-xxxx
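A pattern that keeps secrets out of git while still documenting what teammates must provide: commit an empty `.env.example` and ignore the real `.env`. The file names are convention, not a Compose requirement.

```shell
# Committed template listing the required variables, with no values:
cat > .env.example <<'EOF'
GITHUB_TOKEN=
DATABASE_URL=
SLACK_BOT_TOKEN=
EOF

# Make sure the real .env can never be committed:
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore

# New teammates then start with: cp .env.example .env  (and fill in values)
```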

    Start everything:

    docker compose up -d

    Then configure all servers in Cursor:

    {
      "mcpServers": {
        "github": { "url": "http://localhost:3101/sse" },
        "postgres": { "url": "http://localhost:3102/sse" },
        "slack": { "url": "http://localhost:3103/sse" }
      }
    }

    Your entire team can share this docker-compose.yml, fill in their own .env, and have the identical MCP setup running in minutes.

    Official Docker MCP Catalog

    Docker has launched an official MCP catalog at hub.docker.com that packages popular MCP servers as ready-to-run images. Rather than building your own Dockerfile, you can pull directly:

    docker pull docker/mcp-github-official
    docker pull docker/mcp-postgres-official

    Check the Docker Hub catalog for the latest available images. The official images are maintained by Docker and updated regularly with security patches.

    Using the official images in Cursor:

    {
      "mcpServers": {
        "github": {
          "command": "docker",
          "args": ["run", "--rm", "-i", "-e", "GITHUB_PERSONAL_ACCESS_TOKEN", "docker/mcp-github-official"],
          "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token" }
        }
      }
    }

    Dockerfile Best Practices for MCP Servers

    Use minimal base images

    Good: small image, faster startup


    FROM node:20-alpine

    Avoid unless you need full OS tools


    FROM ubuntu:22.04

    Pin your versions

    Good: reproducible builds


    FROM node:20.11.0-alpine3.19
    RUN npm install -g @modelcontextprotocol/server-github@1.2.3

    Risky: version drift


    FROM node:latest
    RUN npm install -g @modelcontextprotocol/server-github

    Keep secrets out of images

    Never bake secrets into your Dockerfile or image layers. Always pass them via environment variables at runtime:

    Good


    docker run -e API_KEY=$MY_KEY mcp-server

    Never do this


    ENV API_KEY=hardcoded_secret_here

    Add health checks

    HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
      CMD curl -f http://localhost:3100/health || exit 1

    Note: alpine-based images don't ship curl, so either install it (RUN apk add --no-cache curl) or use BusyBox wget (CMD wget -qO- http://localhost:3100/health || exit 1). This also assumes your server exposes a /health endpoint.
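With a HEALTHCHECK in place, Docker tracks the container's health state, which scripts can poll. A sketch (the container name matches the `docker run --name mcp-myserver` example above; it prints `unknown` if Docker or the container is absent):

```shell
NAME=mcp-myserver
status=unknown

if command -v docker > /dev/null 2>&1; then
  # .State.Health.Status is "starting", "healthy", or "unhealthy".
  for _ in 1 2 3; do
    status=$(docker inspect -f '{{.State.Health.Status}}' "$NAME" 2>/dev/null) || status=unknown
    [ "$status" = "healthy" ] && break
    sleep 1
  done
fi

echo "health: $status"
```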

    Running MCP Servers on a Remote VM

    For teams, you can host the entire MCP stack on a shared server:

    1. Deploy your docker-compose.yml to a VPS (DigitalOcean, Linode, EC2, etc.)
    2. Expose the services over HTTPS using a reverse proxy (Caddy or nginx)
    3. Add authentication headers in Cursor's MCP config

    Example Caddy config:

    mcp.yourcompany.com {
        basicauth {
            team $2a$hash...
        }

        handle_path /github/* {
            reverse_proxy localhost:3101
        }

        handle_path /postgres/* {
            reverse_proxy localhost:3102
        }
    }

    handle_path strips the matched prefix before proxying, so each backend server still sees the /sse path it expects at its root.

    This gives your whole team access to the same MCP servers without each person needing local installs or credentials for every service.

    Troubleshooting

    Container starts but Cursor shows disconnected


  • Check that -i flag is present for stdio transport

  • For SSE, verify the port is actually exposed with docker ps

  • Check container logs: docker logs mcp-github

    Permission errors on mounted volumes

    Add :ro (read-only) for paths that only need read access. For write access, ensure the container user has permission to the mounted path.

    Server works locally but fails in Docker

    The most common cause is network differences. Inside a container, localhost refers to the container itself, not your host machine. Use host.docker.internal to reach host services from inside a container (macOS/Windows), or --network=host on Linux.
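For example, a Postgres MCP server that must reach a database on the host could be launched as sketched below (`mcp-postgres` is the image name assumed earlier; `--add-host=host.docker.internal:host-gateway` makes the name resolve on plain Linux, while Docker Desktop on macOS/Windows provides it automatically):

```shell
# Inside a container, "localhost" is the container itself, so point the
# connection string at host.docker.internal instead.
DB_URL="postgresql://user:pass@host.docker.internal:5432/db"

echo "would run: docker run --rm -i --add-host=host.docker.internal:host-gateway -e DATABASE_URL=$DB_URL mcp-postgres"
```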

    High memory usage with many containers

    Each running container uses memory. For many servers, stdio mode (start/stop on demand) is more efficient than running all servers as persistent daemons.

    Frequently Asked Questions

    Can I use Docker MCP servers with Claude Desktop?
    Yes — the same command/args pattern works in Claude Desktop's claude_desktop_config.json. The configuration syntax is nearly identical.

    Does Docker add latency to MCP responses?
    Minimal for SSE transport (persistent containers). For stdio transport, container startup adds 1–3 seconds when Cursor first launches the server; after that, message latency is comparable to a local process.

    Can I use Podman instead of Docker?
    Yes — Podman is compatible with Docker's CLI syntax. Replace docker with podman in your commands and config.

    How do I update an MCP server running in Docker?
    Pull the new image or rebuild, then restart the container. With Compose: docker compose pull && docker compose up -d.

    Is this production-safe for team use?
    With proper secrets management (Docker secrets or a vault), network isolation, and HTTPS on your reverse proxy, yes — this is how teams productionize MCP infrastructure.

    Docker gives you the repeatability, isolation, and scalability that local MCP installs can't match. Once you've defined your stack in Compose, spinning it up on a new machine takes minutes — and your whole team stays in sync.