CodexSpot

Docker-Based MCP Servers: When and Why

March 15, 2026 · 5 min read

TL;DR

  • Docker-based MCP servers provide better isolation and security compared to globally installed npm or pip packages
  • Database MCP servers are a natural fit for Docker because the database and its MCP server can run in the same Compose network
  • The tradeoff is startup time and complexity — use npm/pip for simple servers, Docker for servers that need isolation or depend on system services

Most MCP server setup guides show you how to install a package globally with npx or pip and run it directly. This works fine for getting started, but for teams and production-adjacent use, Docker offers meaningful advantages. This post covers when Docker makes sense, how to set it up, and the patterns that work best.

The Default: npx and pip

The standard way to run an MCP server looks like this:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/user/project"]
    }
  }
}
```

npx -y downloads and runs the package, caching it after the first run. For most servers, this is fine. The package runs as your current user, with access to your environment, and it's simple to configure.

The downsides become apparent when:

  • The server needs system-level dependencies (a Postgres client library, a Python package with native extensions)
  • You want the server isolated from your system environment
  • The server connects to other services (databases, APIs) that run in Docker
  • You need reproducible behavior across developer machines

What Docker Adds

Process Isolation

A Docker container has its own filesystem, network namespace, and process tree. An MCP server running in Docker can't access files outside its explicitly mounted volumes, can't reach network hosts outside its Docker network, and can run as a non-root user if you configure the image that way.

This is particularly valuable for the shell server or any MCP server that executes code. A containerized shell server can only damage what's inside its container, not your host system.
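As a sketch of what that lockdown can look like, here is a client config that launches a code-executing server with no network access, a read-only root filesystem, and a writable tmpfs for scratch space. The image name here is a hypothetical placeholder, not a real published image:

```json
{
  "mcpServers": {
    "shell": {
      "command": "docker",
      "args": [
        "run", "--rm", "-i",
        "--network", "none",
        "--read-only",
        "--tmpfs", "/tmp",
        "my-shell-mcp-image"
      ]
    }
  }
}
```

With --network none the server can't exfiltrate anything, and --read-only means even a compromised process can only write to /tmp, which disappears when the container exits.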

Dependency Isolation

Some MCP servers have native dependencies that conflict with other tools on your system. Python-based servers may need specific library versions. A Docker container packages everything the server needs, so it works the same on every developer's machine and in CI.
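For instance, a Python-based server that needs the Postgres client library can bake it into the image instead of requiring it on every host. This is a sketch; the pip package name is a placeholder for whichever server you actually run:

```dockerfile
FROM python:3.12-slim

# System-level dependency that would otherwise have to exist on every machine
RUN apt-get update && apt-get install -y --no-install-recommends libpq5 \
    && rm -rf /var/lib/apt/lists/*

# Placeholder package name — substitute the server you use
RUN pip install --no-cache-dir my-postgres-mcp-server

CMD ["my-postgres-mcp-server"]
```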

Networking With Other Containers

This is the most practical reason to Dockerize an MCP server: if your database is already running in Docker, a Docker-based MCP server can join the same network and reach the database by container name. No port mapping, no host networking, no cross-network credentials.

Running a Postgres MCP Server in Docker

Here's the pattern for running a Postgres MCP server that connects to a Postgres container:

docker-compose.yml

```yaml
version: "3.9"

services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d myapp"]
      interval: 5s
      timeout: 5s
      retries: 5

  mcp-postgres:
    image: node:20-alpine
    command: >
      sh -c "npx -y @modelcontextprotocol/server-postgres
             postgresql://app:${POSTGRES_PASSWORD}@postgres:5432/myapp"
    depends_on:
      postgres:
        condition: service_healthy
    stdin_open: true
    tty: true

volumes:
  postgres_data:
```

The mcp-postgres container:

  • Uses the official Node.js Alpine image (small footprint)
  • Connects to the postgres service by container name — no exposed ports required
  • Starts only after the Postgres container passes its healthcheck
  • Keeps stdin open for stdio transport

Connecting Your MCP Client to the Docker Server

For stdio-based servers running in Docker, you pipe stdio through docker exec or docker run. Two caveats: Compose generates the container name, so check docker compose ps for the exact name on your machine; and ${VAR} expansion in config values depends on the MCP client, since some clients substitute environment variables in args and others pass them through literally. The MCP client config looks like:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "docker",
      "args": [
        "exec",
        "-i",
        "myapp-mcp-postgres-1",
        "npx",
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://app:${POSTGRES_PASSWORD}@postgres:5432/myapp"
      ],
      "env": {
        "POSTGRES_PASSWORD": "${POSTGRES_PASSWORD}"
      }
    }
  }
}
```

Or use docker run for a fresh container per session:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "--network", "myapp_default",
        "-e", "POSTGRES_URL=postgresql://app:${POSTGRES_PASSWORD}@postgres:5432/myapp",
        "node:20-alpine",
        "sh", "-c", "npx -y @modelcontextprotocol/server-postgres \"$POSTGRES_URL\""
      ]
    }
  }
}
```

Wrapping the server invocation in sh -c lets the container's own shell expand $POSTGRES_URL from the -e environment variable, rather than relying on the MCP client to substitute it on the host, where the variable may not be set.

The --network myapp_default flag puts the container in the same Docker network as your Compose stack, so it can reach the postgres container by name.

Writing a Dockerfile for a Custom MCP Server

If you're building a custom MCP server, Docker gives you a clean way to package and distribute it:

```dockerfile
FROM node:20-alpine

# Create non-root user
RUN addgroup --system mcp && adduser --system --ingroup mcp mcpuser

WORKDIR /app

# Copy package files first for better layer caching
COPY package*.json ./
RUN npm ci --omit=dev

# Copy source
COPY src/ ./src/

# Switch to non-root user
USER mcpuser

# MCP servers communicate over stdio — no port needed
CMD ["node", "src/index.js"]
```

Build and run:

```bash
docker build -t my-custom-mcp-server .
docker run --rm -i my-custom-mcp-server
```

The -i flag (interactive) keeps stdin open, which is required for stdio transport.
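A .dockerignore next to the Dockerfile keeps local artifacts out of the build context, which speeds up builds and avoids leaking secrets into image layers. A minimal one might look like:

```
node_modules
npm-debug.log
.git
.env
```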

Volume Mounts for Filesystem Servers

If you want to run the filesystem MCP server in Docker (for better isolation), you need to mount the directories you want to expose:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "-v", "/home/user/projects/myapp:/workspace:ro",
        "node:20-alpine",
        "npx", "-y", "@modelcontextprotocol/server-filesystem",
        "/workspace"
      ]
    }
  }
}
```

The :ro mount flag makes the directory read-only inside the container. Even if the MCP server or the AI tries to write, the container can't — the kernel blocks it.

For read-write access to specific directories:

```bash
docker run --rm -i \
  -v /home/user/projects/myapp:/workspace:rw \
  -v /home/user/projects/myapp/.git:/workspace/.git:ro \
  node:20-alpine npx -y @modelcontextprotocol/server-filesystem /workspace
```

This gives write access to the project but keeps .git read-only — the AI can read git history but can't corrupt the repository.

Docker Compose Patterns for Development Teams

For teams, Docker Compose makes it easy to standardize the MCP server setup:

```yaml
version: "3.9"

services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: ${DB_NAME:-myapp}
      POSTGRES_USER: ${DB_USER:-app}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"  # Expose for local tools and GUIs
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-app} -d ${DB_NAME:-myapp}"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

  mcp-postgres:
    image: node:20-alpine
    command: sh -c "npx -y @modelcontextprotocol/server-postgres postgresql://${DB_USER:-app}:${DB_PASSWORD}@postgres:5432/${DB_NAME:-myapp}"
    depends_on:
      postgres:
        condition: service_healthy
    stdin_open: true

  mcp-filesystem:
    image: node:20-alpine
    command: npx -y @modelcontextprotocol/server-filesystem /workspace
    volumes:
      - ${PROJECT_ROOT:-./}:/workspace:rw
    stdin_open: true

volumes:
  postgres_data:
  redis_data:
```

Note the healthcheck on the postgres service: depends_on with condition: service_healthy fails at startup if the dependency has no healthcheck defined.

With a shared .env file (gitignored):

```bash
DB_PASSWORD=dev-password-here
PROJECT_ROOT=/home/user/projects/myapp
```

Developers run docker compose up -d and reference container names in their MCP configs. Everyone gets the same database state (or restores from a shared dump), and the MCP servers are consistent across the team.

When NOT to Use Docker

Docker adds startup latency and operational complexity. It's overkill for:

Simple, stateless servers: The filesystem server for a personal project, the GitHub server for a solo developer — these don't need Docker. The npx approach is simpler and fast enough.

Servers with no external dependencies: If the server is pure JavaScript or Python with no native extensions and no connection to other services, Docker doesn't add much.

Heavily iterated custom servers: When you're actively developing an MCP server, the Docker build-run cycle slows you down. Develop locally, Dockerize when it's stable.

Low-resource environments: Docker containers have overhead. On a developer laptop that's already at the memory limit, adding Docker for every MCP server is counterproductive.

The Decision Matrix

| Situation | Recommended Approach |
|-----------|----------------------|
| Quick personal setup | npx / pip |
| Database MCP server, DB already in Docker | Docker, same Compose network |
| Shared team environment | Docker Compose for consistency |
| Custom server in production | Docker with a proper Dockerfile |
| Filesystem server for isolation | Docker with read-only mounts |
| Rapid development iteration | Local npm/pip, Dockerize later |
| Air-gapped / offline environment | Docker with pre-pulled images |

Getting Started

The easiest starting point is adding an MCP service to an existing Docker Compose file. If you already run your database in Docker, add the corresponding MCP server to the same docker-compose.yml, put it in the same network, and reference the database container by service name.

For teams starting from scratch, the pattern is:

  1. Define your services in docker-compose.yml
  2. Add MCP server services that depend on your databases
  3. Document the container names developers need to reference in their MCP configs
  4. Add .env.example with the required variables documented (without values)
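Step 4 might look like this, with every required key documented but no values committed:

```
# .env.example: copy to .env and fill in your own values
DB_PASSWORD=      # required: local Postgres password
PROJECT_ROOT=     # absolute path to your project checkout
DB_NAME=          # optional, defaults to myapp
DB_USER=          # optional, defaults to app
```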

Docker doesn't make MCP servers better at their jobs, but it makes them more secure, more consistent, and easier to manage in team environments where infrastructure is already containerized.
