Top 10 MCP Servers Every Developer Should Know
March 15, 2026 · 7 min read
TL;DR
- MCP servers extend AI assistants with real capabilities like file access, GitHub integration, and web browsing
- The filesystem, GitHub, and Postgres servers cover the most common developer workflows out of the box
- Servers like memory and sequential-thinking improve reasoning quality rather than adding external integrations
- Most servers install in under five minutes via npm or uvx and connect to Claude Desktop or Claude Code
The MCP ecosystem has grown quickly, and it can be hard to know which servers are worth your time. This guide covers the ten servers that show up most often in real developer workflows — what each one does, the tools it exposes, and the situations where it shines.
1. filesystem-server
Install: npx -y @modelcontextprotocol/server-filesystem
The filesystem server gives an AI assistant direct access to directories on your local machine. It is usually the first server developers add because so many tasks involve reading or writing files.
Key tools
- read_file — reads a single file's contents
- write_file — creates or overwrites a file
- list_directory — returns directory contents
- search_files — finds files matching a pattern
- move_file / delete_file — file management operations
When to use it
Any time you want an AI to read your codebase, generate files in a project, or reorganize a directory structure without copy-pasting content back and forth. Pair it with a code-editing tool like Cursor for a tight feedback loop.
Configuration note: Pass the directories you want to expose as arguments. Never expose your entire home directory — use the minimum set of paths needed.
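In Claude Desktop, servers are registered in claude_desktop_config.json. A minimal filesystem entry might look like the sketch below; the project path is a placeholder for your own directory:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/projects/my-app"
      ]
    }
  }
}
```

Each path you append to the args list becomes accessible; anything outside those directories stays off limits.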
2. github-server
Install: npx -y @modelcontextprotocol/server-github
The GitHub server wraps the GitHub REST API so an AI can read and write to repositories on your behalf. It covers the operations developers perform dozens of times a day: reading issues, opening PRs, searching code.
Key tools
- get_file_contents — fetch file content from a repo
- search_code — full-text code search across GitHub
- create_issue / update_issue — issue management
- create_pull_request — open a PR with a title and body
- list_commits / get_commit — commit history
When to use it
Triaging issues, drafting PR descriptions, searching an unfamiliar open-source codebase, or automating release note generation. Requires a GITHUB_PERSONAL_ACCESS_TOKEN environment variable with appropriate scopes.
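The token is supplied through the server's environment rather than on the command line. A sketch of the config entry, with a placeholder token value:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
      }
    }
  }
}
```

A fine-grained token scoped to only the repositories you need is safer than a classic token with broad repo access.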
3. postgres-server
Install: npx -y @modelcontextprotocol/server-postgres
The Postgres server connects to a PostgreSQL database and lets an AI run read-only queries, inspect schemas, and reason about your data model.
Key tools
- query — executes a SQL SELECT and returns results
- Schema inspection via resources: tables, columns, foreign keys, indexes
When to use it
Writing complex queries, debugging data quality issues, generating documentation from a live schema, or exploring a new codebase's data model. The server enforces read-only access by default, which makes it safe to use against a staging database.
Configuration: Pass your PostgreSQL connection string as an argument when launching the server, e.g. npx -y @modelcontextprotocol/server-postgres postgresql://localhost/mydb.
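The reference server takes the connection string as a launch argument, so the config entry carries it in args. A sketch, assuming a local staging database (the connection string is a placeholder):

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/staging"
      ]
    }
  }
}
```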
4. puppeteer-server
Install: npx -y @modelcontextprotocol/server-puppeteer
The Puppeteer server launches a headless Chromium browser that an AI can control programmatically. It enables web scraping, screenshot capture, and automated interaction with web UIs.
Key tools
- puppeteer_navigate — navigate to a URL
- puppeteer_screenshot — capture a screenshot
- puppeteer_click / puppeteer_type — interact with page elements
- puppeteer_evaluate — run arbitrary JavaScript in the browser context
- puppeteer_select — interact with dropdown menus
When to use it
Scraping websites that require JavaScript rendering, testing web UIs, extracting data from SaaS dashboards, or taking screenshots for documentation. Puppeteer runs locally, so there are no external service costs.
5. slack-server
Install: npx -y @modelcontextprotocol/server-slack
The Slack server connects to a Slack workspace and lets an AI read channel history, post messages, and manage conversations.
Key tools
- slack_list_channels — list available channels
- slack_get_channel_history — fetch recent messages from a channel
- slack_post_message — send a message
- slack_reply_to_thread — post a thread reply
- slack_get_users — list workspace members
When to use it
Drafting team updates, summarizing a long thread, posting build notifications, or searching for context buried in channel history. Requires a Slack bot token (SLACK_BOT_TOKEN) with the relevant OAuth scopes.
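A sketch of the config entry; the token and team ID below are placeholders, and the reference implementation also expects your workspace ID via SLACK_TEAM_ID:

```json
{
  "mcpServers": {
    "slack": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-slack"],
      "env": {
        "SLACK_BOT_TOKEN": "xoxb-your-bot-token",
        "SLACK_TEAM_ID": "T01234567"
      }
    }
  }
}
```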
6. fetch-server
Install: uvx mcp-server-fetch
The fetch server makes HTTP requests on the AI's behalf, retrieving web pages and converting them to Markdown for easier consumption. It is lightweight and has no external dependencies beyond the Python runtime.
Key tools
- fetch — retrieves a URL and returns the page as clean Markdown
When to use it
Pulling documentation directly from the web, reading changelogs, or letting an AI verify something by visiting a URL. When you need a quick "go look at this page" capability without setting up a full browser automation stack, fetch is the right tool.
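Because it ships as a Python package, the config entry uses uvx rather than npx:

```json
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```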
7. memory-server
Install: npx -y @modelcontextprotocol/server-memory
The memory server gives an AI persistent, structured memory across sessions using a knowledge graph stored locally. Entities, relations, and observations are stored on disk and available in future conversations.
Key tools
- create_entities — add named entities to memory
- create_relations — define relationships between entities
- add_observations — attach facts to existing entities
- search_nodes — query the knowledge graph
- read_graph — return the full graph
When to use it
Long-running projects where the AI needs to remember decisions, architecture choices, or team member preferences across sessions. Particularly useful in agentic workflows where state must persist between tool calls.
8. brave-search-server
Install: npx -y @modelcontextprotocol/server-brave-search
The Brave Search server exposes the Brave Search API, giving an AI real-time access to web search results without going through a browser.
Key tools
- brave_web_search — keyword search returning titles, URLs, and snippets
- brave_local_search — location-aware search for local businesses
When to use it
Answering questions about recent events, verifying that a library still exists, or finding documentation that the AI's training data might not cover. Requires a BRAVE_API_KEY from the Brave Search developer portal. The free tier includes 2,000 queries per month.
9. notion-server
Install: npx -y @modelcontextprotocol/server-notion
The Notion server integrates with Notion's API to read pages and databases, create new content, and update existing records.
Key tools
- notion_retrieve_page — fetch a page by ID
- notion_query_database — query a Notion database with filters
- notion_create_page — create a new page in a workspace
- notion_update_page — update page properties
- notion_search — full-text search across a workspace
When to use it
Teams that use Notion as a knowledge base or project tracker. An AI can pull spec documents during code generation, log decisions back to a project page, or summarize a database of tasks. Requires a NOTION_API_TOKEN and the integration must be added to the pages it needs to access.
10. sequential-thinking-server
Install: npx -y @modelcontextprotocol/server-sequential-thinking
Sequential thinking is different from the other nine servers here: it does not integrate with an external service. Instead, it gives an AI a structured tool for breaking complex problems into explicit reasoning steps.
Key tools
- sequentialthinking — iteratively builds a chain of reasoning steps, with the ability to branch, revise, and flag when more steps are needed
When to use it
Complex planning tasks, debugging sessions with multiple hypotheses, or any situation where you want the AI to show its work rather than jumping to a conclusion. It improves output quality on hard problems by forcing structured reasoning before an answer is committed.
Choosing the Right Servers
Most developers start with three servers: filesystem-server for local file access, github-server for repository operations, and fetch-server for pulling in documentation. From there, add servers that match your specific stack — postgres-server if you work with databases, slack-server if your team lives in Slack, notion-server if that is your knowledge base.
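Putting those three starter servers together, a claude_desktop_config.json might look like the sketch below; the directory path and token are placeholders to replace with your own:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/projects"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
      }
    },
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```

Restart Claude Desktop after editing the file so the new servers are picked up.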
The sequential-thinking-server is worth adding for any AI workflow that involves planning or debugging — it costs nothing and consistently improves reasoning quality on complex tasks.
Each server installs in under five minutes. Start with one, get comfortable with how it behaves, and add more as you find gaps in what your AI assistant can do.