Using MCP for Database Access: Patterns and Pitfalls
March 15, 2026 · 7 min read
TL;DR
- MCP database servers let AI assistants query your databases directly, but read-only mode should be the default for production
- Connection string security requires environment variables — never hard-code credentials in MCP config files
- Query scoping and result size limits are essential to prevent runaway queries and context window overflow
MCP database servers are one of the most practically useful MCP server categories. They let AI assistants directly query your database schemas, inspect data, and in some configurations run write operations. Done right, they dramatically accelerate development workflows. Done wrong, they create real security and stability risks.
This post covers the core patterns and pitfalls for database MCP server usage.
What MCP Database Servers Do
A database MCP server acts as a bridge between your AI assistant and a database. The server connects using a standard database driver and exposes MCP tools the AI can call:
- `query` — Run a SQL query and return results
- `list_tables` — List all tables in the database
- `describe_table` — Get column definitions and types for a table
- `list_schemas` — List available schemas or databases
The AI can call these tools in response to your natural language questions. Ask "what's in the orders table?" and the assistant will call `describe_table`, inspect the columns, then call `query` with an appropriate SELECT.
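To make the bridge concrete, here's a toy sketch of a `query`/`describe_table` tool pair, with SQLite standing in for the real database. The helper names mirror the MCP tool names, but this is illustrative, not the actual server code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL, status TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 9.99, 'shipped')")

# What a describe_table tool might return: column names and declared types
def describe_table(table: str):
    return [(row[1], row[2]) for row in
            conn.execute(f"PRAGMA table_info({table})")]

# What a query tool might return: rows from a SELECT
def query(sql: str):
    return conn.execute(sql).fetchall()

print(describe_table("orders"))  # column/type pairs the AI inspects
print(query("SELECT id, status FROM orders LIMIT 10"))
```

The assistant chains these calls itself: schema first, then a targeted SELECT.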
Setting Up a Basic Postgres MCP Server
Here's a minimal configuration for the @modelcontextprotocol/server-postgres package in a Cursor or Cline config:
```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/mydb"
      ]
    }
  }
}
```

The connection string is passed as a positional argument. This works for local development, but you should not embed credentials this way. The config file gets committed to version control, and plaintext passwords in args are visible in process listings.
Credential Management
Use Environment Variables
The better pattern is to pass credentials via environment variables:
```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "POSTGRES_URL": "${POSTGRES_MCP_URL}"
      }
    }
  }
}
```

Then set `POSTGRES_MCP_URL` in your shell environment or a `.env` file that's gitignored. The MCP server reads the environment variable at startup.
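At startup the server resolves the variable from its environment, roughly like this (an illustrative sketch, not the package's actual code; the fallback URL is a dummy so the snippet runs standalone):

```python
import os

# Read the connection string from the environment, as the server would
url = os.environ.get(
    "POSTGRES_URL",
    "postgresql://mcp_readonly:secret@localhost:5432/mydb",  # dummy fallback
)
# Never log the full URL -- it contains the password.
# Keep only the host/database portion for diagnostics.
safe = url.split("@")[-1]
print(f"connecting to {safe}")
```

Logging only the post-`@` portion keeps credentials out of any diagnostic output.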
For Cursor specifically, you can use project-level .cursor/mcp.json with environment variable references. These are not committed to git when .cursor/ is in your .gitignore.
Use Separate MCP Credentials
Create a dedicated database user for MCP access. This user should:
- Have only the permissions the AI actually needs (usually SELECT on specific schemas)
- Have a different password than your application users
- Be easy to rotate without affecting application deployments
```sql
-- Create a read-only MCP user
CREATE USER mcp_readonly WITH PASSWORD 'generated-strong-password';
GRANT CONNECT ON DATABASE mydb TO mcp_readonly;
GRANT USAGE ON SCHEMA public TO mcp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_readonly;

-- Ensure future tables are also readable
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT TO mcp_readonly;
```

This limits the blast radius if credentials are ever exposed.
Read-Only vs. Read-Write Patterns
Default to Read-Only
For most development use cases — exploring schemas, debugging queries, understanding data — read-only is sufficient. Configure your database user with SELECT-only permissions and you can't accidentally run a DELETE or UPDATE.
The postgres and mysql MCP servers both support connecting as a read-only user. The server will still expose query tools, but any write attempt will fail at the database level with a permission error.
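You can reproduce that failure mode locally with SQLite's read-only URI mode: the write is rejected by the database itself, not by any MCP-layer check:

```python
import os
import sqlite3
import tempfile

# Create a throwaway database with one row
path = os.path.join(tempfile.mkdtemp(), "demo.db")
with sqlite3.connect(path) as conn:
    conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'a@example.com')")
conn.close()

# Reopen in read-only mode: SELECT works, writes are rejected
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
count = ro.execute("SELECT COUNT(*) FROM users").fetchone()[0]
try:
    ro.execute("DELETE FROM users")
    write_failed = False
except sqlite3.OperationalError:
    write_failed = True
```

The same principle applies to a Postgres user with SELECT-only grants: the query tool stays available, but the permission check happens where it can't be bypassed.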
When to Allow Write Access
Write access makes sense when:
- You're working on a local development database with disposable test data
- You want the AI to help you write and test migration scripts interactively
- You're using SQLite for a local file that you can easily restore
Even in these cases, consider a staging or development database rather than production. AI-generated queries can have edge cases that delete more rows than intended.
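Here's one concrete edge case of that kind, using SQLite for illustration: a LIKE pattern meant to clean up test accounts also matches an unrelated address:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?)",
    [("qa+test@example.com",),
     ("contest.winner@example.com",),  # not a test account!
     ("alice@example.com",)],
)
# Intent: delete test accounts. The pattern also matches 'contest'.
deleted = conn.execute("DELETE FROM users WHERE email LIKE '%test%'").rowcount
remaining = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(deleted, remaining)  # deletes 2 rows, one more than intended
```

On a disposable dev database this is a shrug; on anything shared, it's an incident.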
Explicit Read-Write Configuration
Some MCP servers accept an explicit read-only flag:

```json
{
  "mcpServers": {
    "sqlite": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-sqlite",
        "--db-path", "/path/to/dev.db",
        "--read-only"
      ]
    }
  }
}
```

Check your specific MCP server's documentation — the flag names vary.
Query Scoping
Without guardrails, an AI can generate queries that scan entire tables, create expensive joins, or return millions of rows. The context window will overflow and your database will be unhappy.
Add Row Limits
Most SQL MCP servers don't add implicit LIMIT clauses. Instruct your AI assistant to always add limits, or use a system prompt that enforces this:
```
When querying databases, always add LIMIT 100 unless the user explicitly asks for more rows. Never run queries without a WHERE clause on tables with more than 1000 rows.
```

In Cursor's rules (`.cursorrules`) or Cline's instructions, you can make this a standing rule for all database interactions.
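A guard in the tool layer can enforce the limit even when the model forgets. This is a minimal sketch — the `enforce_limit` helper is hypothetical, not part of any published MCP server, and a real version would need a proper SQL parser rather than a regex:

```python
import re

MAX_ROWS = 100

def enforce_limit(sql: str, max_rows: int = MAX_ROWS) -> str:
    """Append a LIMIT clause to SELECT statements that lack one."""
    stripped = sql.strip().rstrip(";")
    # Only guard plain SELECTs; leave other statements untouched
    if not stripped.lower().startswith("select"):
        return stripped
    if re.search(r"\blimit\s+\d+\b", stripped, re.IGNORECASE):
        return stripped
    return f"{stripped} LIMIT {max_rows}"

print(enforce_limit("SELECT * FROM events"))
# → SELECT * FROM events LIMIT 100
```

Belt and suspenders: keep the prompt rule for the model and the guard in the server.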
Schema-Level Scoping
If your database has multiple schemas, restrict the MCP user to only the schemas relevant to your work:
```sql
-- Only grant access to the app schema, not internal or audit schemas
GRANT USAGE ON SCHEMA app TO mcp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA app TO mcp_readonly;

-- Explicitly deny access to sensitive schemas
REVOKE ALL ON SCHEMA audit FROM mcp_readonly;
```

This prevents the AI from accidentally querying audit logs, PII tables, or internal admin data you didn't intend to expose.
Handling Large Result Sets
Even with LIMIT clauses, result sets can be large. A table with 100 rows and 50 columns of JSON can still be too much to fit usefully in context.
Result Truncation
Good MCP database servers truncate large results and indicate that truncation occurred. Look for this in your server's output:
```
Query returned 100 rows (result truncated, 47832 rows total).
```

The AI can then refine its query to target specific rows or columns.
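The server-side logic behind a message like that is simple; here's a sketch (the `truncate_result` helper is hypothetical — a production server would cap rows at fetch time rather than materialize the full result first):

```python
def truncate_result(rows, max_rows=100):
    """Cap the rows sent to the model and say when truncation happened."""
    total = len(rows)
    if total <= max_rows:
        return rows, f"Query returned {total} rows."
    notice = (f"Query returned {max_rows} rows "
              f"(result truncated, {total} rows total).")
    return rows[:max_rows], notice

rows, notice = truncate_result(list(range(47832)))
print(notice)
```

The key design point is that the notice travels with the data, so the model knows its view is partial.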
Column Selection
Encourage the AI to SELECT only the columns it needs, not SELECT *. This is especially important with tables that have JSONB or TEXT columns that can be very large:
```sql
-- Bad: returns all columns including large JSON blobs
SELECT * FROM events LIMIT 10;

-- Better: select only what you're investigating
SELECT id, event_type, created_at, user_id FROM events LIMIT 10;
```

Pagination
For cases where you need to paginate through large datasets, keyset pagination is more reliable than OFFSET:
```sql
-- Use a WHERE clause on the primary key for efficient pagination
SELECT id, email, created_at
FROM users
WHERE id > 1000
ORDER BY id
LIMIT 50;
```

Choosing Between SQL and NoSQL MCP Servers
PostgreSQL and MySQL MCP Servers
Best for: Relational data, complex joins, schema exploration, teams already on these databases.
The postgres and mysql servers support the full SQL query surface. You can do joins, aggregations, CTEs, window functions — anything your database supports.
SQLite is a good choice for local development: no server process needed, file-based, easy to reset.
MongoDB MCP Server
The mongodb MCP server exposes tools for:
- `find` — Query documents with filter syntax
- `aggregate` — Run aggregation pipelines
- `listCollections` — List collections in a database
The AI needs to generate MongoDB query syntax (JSON filter objects) rather than SQL. This works well when the AI already understands your document schema. For exploratory use, SQL databases are often easier because the AI can introspect schemas more naturally.
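As an illustration of the mapping (the `orders` collection and field names are made up; the pymongo-style call is shown only as a comment):

```python
# SQL the AI would write against Postgres:
#   SELECT id, total FROM orders WHERE status = 'shipped' LIMIT 10
# Equivalent arguments for a MongoDB find tool:
filter_doc = {"status": "shipped"}             # WHERE status = 'shipped'
projection = {"id": 1, "total": 1, "_id": 0}   # SELECT id, total
# With pymongo this would run as:
#   db.orders.find(filter_doc, projection).limit(10)
```

The filter and projection are plain JSON objects, which is exactly what the AI must emit instead of a SQL string.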
Redis MCP Server
Redis MCP servers are useful for a different purpose: inspecting cache state, understanding what keys exist, and debugging caching behavior. They're not designed for bulk data analysis.
Common Redis MCP tools:
- `get` — Get a value by key
- `keys` — List keys matching a pattern
- `hgetall` — Get all fields of a hash
- `ttl` — Check expiry of a key
Use the Redis MCP server when you need to debug why a cached value is wrong or stale, not for querying application data.
Common Pitfalls
Connection Leaks
Each MCP server process holds a database connection open for the lifetime of the AI session. If you run multiple AI clients simultaneously (e.g., Cursor and Claude Desktop both with the postgres server configured), you can exhaust your connection pool.
Configure a connection limit on your MCP user:
```sql
ALTER USER mcp_readonly CONNECTION LIMIT 5;
```

And set a short `statement_timeout` to kill runaway queries:

```sql
ALTER USER mcp_readonly SET statement_timeout = '30s';
```

Exposing Production Data
The most common serious mistake is connecting an MCP server to production. The AI may log queries, return data in its context window, or cause unexpected load. Use a production replica with a read-only user at most, and prefer staging or development databases for day-to-day AI-assisted work.
Config File Security
The MCP config files (.cursor/mcp.json, ~/.cline/config.json, etc.) should not be committed to git if they contain any sensitive values. Add them to .gitignore. Use environment variable interpolation instead of embedding credentials.
A Recommended Setup
For most development teams, this pattern works well:
- Staging database with a read-only MCP user that has SELECT on all application schemas
- Local SQLite or Postgres for feature development and migration testing, with write access allowed
- Credentials in environment variables, referenced in MCP config with `${VAR_NAME}` syntax
- Statement timeout of 30 seconds and connection limit of 3-5 on the MCP database user
- AI instructions to always use LIMIT and to confirm before any write operation
This gives you powerful database introspection in your AI assistant while keeping production data safe and your database stable.