Model Context Protocol: The Standard That Lets AI Agents Actually Do Things
AI assistants have gotten remarkably good at reasoning. What they still cannot do on their own is act: check your database, create a calendar event, query a live API, look up a record in your CRM, open a pull request. Their knowledge is frozen at a training cutoff, and they have no way to reach outside that boundary unless something bridges the gap. Model Context Protocol (MCP) is that bridge.

Before MCP, connecting an AI assistant to an external service meant writing a custom integration every time: a function definition, an input schema, auth handling, error logic, all of it built separately for each service and each AI model. Connect five services to three models and you have fifteen custom connectors to build and maintain. IBM describes this as the "N×M integration challenge": N models needing to connect to M external systems, each combination requiring its own implementation. It does not scale.

Anthropic introduced MCP in November 2024 as an open standard to solve exactly this. Instead of custom connectors for every combination, you build one MCP server per service. Any MCP-compatible AI client can then discover and use that server's tools without additional wiring. Build once, connect to any model. OpenAI officially adopted MCP in March 2025. Google DeepMind followed. It is now the closest thing the industry has to a universal standard for AI-tool integration, with tens of thousands of MCP servers available across a growing ecosystem.

MCP uses a client-server architecture over JSON-RPC 2.0, with three components:

- The MCP host is the AI application: Claude Desktop, Claude Code, Cursor, or any other AI-powered tool. It contains the model and is where the user interacts.
- The MCP client sits inside the host. It handles communication between the model and MCP servers: discovering available tools, translating the model's requests into MCP-formatted calls, and returning results to the model.
- The MCP server is the external service. It exposes its capabilities as named tools with defined input schemas. It could be a database, a file system, a code repository, a calendar, a project tracker, a design tool, anything that can respond to structured requests.

As Google Cloud's MCP documentation explains, when a user asks the AI to do something that requires external data or an action, the model uses the MCP client to discover which tools are available and generates a structured request to call the right one; the MCP server executes the request and returns a result, which the model uses to respond. The analogy Figma uses in their MCP documentation is apt: MCP is to AI what USB-C is to hardware. One standard connector, any compatible device.

MCP servers can expose three types of capabilities (one of each is sketched below):

- Tools are functions the model can call to take action or retrieve data: send_email, query_database, create_issue, search_files, get_calendar_events. Tools are the most commonly used capability and the most relevant for agentic workflows.
- Resources are data sources the model can read from: files, database records, API responses. Resources give the model context it can reason about.
- Prompts are reusable prompt templates the server exposes. Less common in practice, but useful for standardizing how a model approaches recurring tasks.

For most integrations, tools are what you are working with.
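To make the three capability types concrete, here is a minimal server sketch using the FastMCP helper from the official Python SDK. The server name, tool, resource URI, and return values are illustrative placeholders, not part of any real integration.

```python
# Minimal MCP server sketch using the official Python SDK's FastMCP helper.
# All names and data below are illustrative placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("issue-tracker")

@mcp.tool()
def list_open_issues(tag: str) -> list[str]:
    """Return the titles of open issues carrying the given tag."""
    # A real server would query your tracker's API here.
    return [f"Example issue tagged '{tag}'"]

@mcp.resource("issues://{issue_id}")
def read_issue(issue_id: str) -> str:
    """Expose a single issue record as readable context."""
    return f"Full record for issue {issue_id}"

@mcp.prompt()
def triage_prompt(issue_id: str) -> str:
    """A reusable template for a recurring triage task."""
    return f"Review issue {issue_id} and classify its severity."

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

The type hints double as each tool's input schema: an MCP client can discover the functions, their parameters, and their docstring descriptions without any extra registration code.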
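Under the hood, the discover-then-call flow described above is plain JSON-RPC 2.0 on the wire. A sketch of the two key messages, written as Python dicts for readability (the list_open_issues tool is the hypothetical one from the sketch above):

```python
# The client first asks the server what it offers...
discover = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# ...then invokes whichever tool the model selected.
invoke = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "list_open_issues",       # hypothetical tool from above
        "arguments": {"tag": "blocker"},  # must match the tool's input schema
    },
}
```

The server's reply to tools/call carries the result as structured content, which the client hands back to the model as context for its next step.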
The shift MCP enables is moving from writing orchestration logic to writing intent.

Without MCP, if you want an AI to pull open issues from your project tracker, cross-reference them against a Slack thread, and draft a status update, you write the code that calls each API, handles the responses, and sequences the operations. You own the control flow entirely. With MCP, you tell the model "find all open issues tagged as blockers, check if any of them were discussed in the engineering channel this week, and write a summary for the team standup." The model reads the available tool descriptions from your connected MCP servers, decides which tools to call and in what order, and composes the output, without you writing a single line of orchestration code.

The same pattern applies across any combination of systems. "Pull last week's sales data, compare it against the same period last year, and flag anything down more than 20 percent" becomes a multi-tool agent call across your analytics and spreadsheet MCP servers. "Read the design file, check if the component names match the ones in the codebase, and list any mismatches" becomes a cross-server query across Figma and your file system. Thoughtworks noted in their 2025 Technology Radar that MCP has arguably brought agentic AI into the mainstream faster than the industry expected, precisely because it lets developers connect agents to real systems without significant per-integration time and investment.

MCP is powerful enough that the security implications are worth understanding before you start connecting servers. Research by Knostic in July 2025, scanning nearly 2,000 MCP servers, found a significant number with no authentication at all: tool listings and potentially sensitive data exposed to anyone who connected. Backslash Security's June 2025 analysis identified similar patterns of over-permissioning in another 2,000 servers. A few principles matter:

- Only connect MCP servers you trust. Tool descriptions are read by the model and treated as instructions, so a malicious server can embed instructions in its tool descriptions that the model follows, a technique called tool poisoning. The MCP specification explicitly notes that tool descriptions should be treated as untrusted unless obtained from a trusted server.
- Use HTTP header authentication for remote servers. Passing your API key in headers rather than through a tool call keeps credentials out of the conversation context and reduces the risk of exposure in logs. (A client-side sketch follows this list.)
- Grant the minimum permissions your use case requires. If an MCP server only needs read access, do not connect one with write permissions.

The June 2025 update to the MCP authorization specification added OAuth 2.1 support and Resource Indicators (RFC 8707) to improve token security, but implementation varies by server. Check the authentication documentation for any MCP server before connecting it to a production environment.
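As a sketch of the header-auth principle above: here is how a client built on the official Python SDK might connect to a remote server over the streamable HTTP transport. The URL and token are placeholders, and your stack's transport helper may differ.

```python
# Hedged sketch: connect to a remote MCP server with credentials carried in
# HTTP headers, so the token never enters the model's conversation context.
# The URL and token are placeholders.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    headers = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token
    async with streamablehttp_client(
        "https://example.com/mcp", headers=headers
    ) as (read_stream, write_stream, _get_session_id):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```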
MCP is not a replacement for APIs, SDKs, or traditional integrations. It is an additional layer that makes sense in specific contexts. Use MCP when the orchestration layer is an AI model, that is, when you want the model to decide what to call and in what order based on a high-level instruction or a user's natural-language request. Agentic workflows, internal AI tools, and IDE-based developer queries are its natural home. Use a REST API or SDK when you are building a production application with deterministic code paths, where you need typed inputs and outputs, full control over error handling, and integration with your own data layer. The predictability of imperative code matters in those contexts in ways that model-driven orchestration cannot match. Most serious integrations eventually use both: an SDK for the core application logic and an MCP server for the agentic or conversational layer on top.

The full MCP specification and getting started guides are at modelcontextprotocol.io.
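If you want to try the server sketch from earlier in a real host, most hosts are pointed at local servers through a small JSON config. The shape below follows the mcpServers format Claude Desktop uses; the label, launch command, and script name are placeholders, and the exact file location and schema vary by host, so check your host's documentation.

```python
# Hedged sketch: host-side registration for the earlier server, following the
# "mcpServers" config shape used by Claude Desktop. Label, command, and script
# name are placeholders; consult your host's docs for the config file location.
import json

config = {
    "mcpServers": {
        "issue-tracker": {          # arbitrary label shown in the host
            "command": "python",    # how the host launches the server
            "args": ["server.py"],  # the FastMCP script sketched earlier
        }
    }
}

print(json.dumps(config, indent=2))  # paste into the host's config file
```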
