
MCP explained: How AI agents talk to your company tools

A technical breakdown of the Model Context Protocol — the open standard that lets AI agents connect to Jira, Slack, GitLab, and any enterprise tool through one protocol.

Here is a problem every engineering leader will recognize: your company uses 15 tools. You want AI agents to work with them. That means 15 custom integrations per AI provider. You use Claude, ChatGPT, and Copilot? That is 45 integrations. Add Gemini? 60. Each one with its own authentication, data format, and maintenance burden.

This is the N x M problem. N AI clients times M enterprise tools equals an explosion of custom code that no team can sustain. As Anthropic put it when they launched the Model Context Protocol in November 2024:

"Yet even the most sophisticated models are constrained by their isolation from data — trapped behind information silos and legacy systems."

Anthropic MCP announcement, November 2024

MCP solves this. One protocol. Any AI client. Any tool. Build the integration once, and it works everywhere.

Key takeaway

MCP (Model Context Protocol) is an open standard that gives AI agents a universal way to connect to enterprise tools — like USB-C gave devices a universal connector. Every major AI provider now supports it. With 97 million monthly SDK downloads and backing from the Linux Foundation, it is no longer experimental. It is infrastructure.

The USB-C analogy

Before USB-C, every device had its own cable. Your phone used Micro-USB. Your laptop used MagSafe. Your camera used Mini-USB. Your tablet used Lightning. Every new device meant another drawer full of cables and another charger on your desk.

Then USB-C happened. One connector. It negotiates power delivery, data speed, and display output automatically. The industry resisted — until even Apple adopted it.

Before MCP, AI integration looked the same. OpenAI had function calling (2023). Anthropic had tool use. Google had extensions. Every vendor had their own approach. If you built a Jira integration for Claude, it did not work with ChatGPT. If you built it for ChatGPT, it did not work with Gemini.

MCP is the USB-C moment for AI. One protocol that every provider speaks. And the parallel goes deeper: just as USB-C negotiates capabilities between devices at connection time, MCP negotiates capabilities between AI clients and tool servers at startup.

The strongest signal that MCP has won? OpenAI adopted it. In March 2025, Sam Altman posted:

"people love MCP and we are excited to add support across our products."

Sam Altman, CEO of OpenAI, March 2025

When your biggest competitor adopts your standard, it stops being your standard. It becomes the standard.

How MCP works (the technical version)

If you have worked with the Language Server Protocol (LSP), MCP will feel familiar. LSP standardized how code editors talk to language tooling — so one Go language server works in VS Code, Neovim, and Zed. MCP does the same thing for AI: it standardizes how AI applications talk to external tools.

[Diagram: MCP architecture. The Host (an AI application like Claude or ChatGPT) contains multiple Clients, each with a 1:1 JSON-RPC 2.0 connection to an MCP Server (Jira, GitLab, Slack). Transports: stdio for local, Streamable HTTP for remote.]

Architecture: Host, Client, Server

MCP has three roles:

  • Host — The AI application your user interacts with (Claude Desktop, ChatGPT, VS Code, Cursor). It coordinates one or more MCP clients.
  • Client — A component inside the host that maintains a 1:1 connection to an MCP server. Each connected tool gets its own client instance.
  • Server — A program that exposes capabilities to the AI. This is where your Jira integration, your database connector, or your Slack bridge lives.

This separation matters for enterprise. The host handles user interaction and security policy. The client handles protocol negotiation. The server handles tool-specific logic. Each layer can be developed, deployed, and secured independently.
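As a concrete illustration, here is roughly how a host wires up two servers. The shape is modeled on Claude Desktop's claude_desktop_config.json, but the server names and commands are invented for this sketch:

```python
import json

# Hypothetical host configuration (shape modeled on Claude Desktop's
# claude_desktop_config.json; server names and commands are invented).
# The host spawns one subprocess per entry and creates one dedicated
# client per server, keeping each 1:1 connection isolated.
host_config = {
    "mcpServers": {
        "jira":   {"command": "uvx", "args": ["example-jira-mcp"]},
        "gitlab": {"command": "npx", "args": ["-y", "example-gitlab-mcp"]},
    }
}

print(json.dumps(host_config, indent=2))
```

Each entry becomes its own client instance inside the host, so a misbehaving server cannot interfere with the other connections.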

Three primitives

Everything an MCP server can expose falls into three categories:

  • Tools — Executable functions the AI can invoke. "Create a Jira ticket." "Query the database." "Merge this MR." These are the actions.
  • Resources — Data the AI can read. File contents, database records, API responses, dashboard metrics. These provide context.
  • Prompts — Reusable templates that structure how the AI interacts with a tool. Think of them as expert workflows packaged for reuse.

Tools let AI do things. Resources let AI know things. Prompts let AI know how to do things well. The combination is what turns a chatbot into an agent.
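A toy sketch can make the three primitives concrete. This is not the official SDK, just a stdlib-only dispatcher with invented names ("create_ticket", "jira://PROJ-1", "triage") showing how a server routes the three request types:

```python
import json

# Illustrative only: a toy dispatcher for the three MCP primitives.
# Real servers use the official SDKs; all names here are made up.
TOOLS = {
    "create_ticket": lambda args: f"Created ticket: {args['title']}",  # an action
}
RESOURCES = {
    "jira://PROJ-1": "Ticket PROJ-1: Fix login timeout",  # readable context
}
PROMPTS = {
    "triage": "You are a triage assistant. Given a bug report, classify severity.",  # reusable template
}

def handle(request: dict) -> dict:
    """Route a JSON-RPC request to the matching primitive."""
    method, params = request["method"], request.get("params", {})
    if method == "tools/call":
        result = TOOLS[params["name"]](params["arguments"])
    elif method == "resources/read":
        result = RESOURCES[params["uri"]]
    elif method == "prompts/get":
        result = PROMPTS[params["name"]]
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

reply = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                "params": {"name": "create_ticket",
                           "arguments": {"title": "Fix login timeout"}}})
print(json.dumps(reply))
```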

Protocol: JSON-RPC 2.0

Under the hood, MCP uses JSON-RPC 2.0 — the same lightweight RPC format used by LSP. Messages are either requests (with an ID, expecting a response), responses, or notifications (fire-and-forget, no response expected). This is mature, well-understood infrastructure — not a novel invention.
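The three message kinds look like this on the wire. The payloads are minimal illustrations; `tools/list` and `notifications/initialized` are real MCP methods:

```python
import json

# The three JSON-RPC 2.0 message kinds MCP uses.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}             # has an id: expects a reply
response = {"jsonrpc": "2.0", "id": 1, "result": {"tools": []}}           # echoes the request's id
notification = {"jsonrpc": "2.0", "method": "notifications/initialized"}  # no id: fire-and-forget

# Correlation works purely by id; a notification can never be answered.
print(json.dumps([request, response, notification]))
```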

Transports: local and remote

MCP supports two transport mechanisms:

  • stdio — Standard input/output for local processes. Zero network overhead. The host spawns the server as a subprocess and communicates over stdin/stdout. Ideal for developer tools running on your machine.
  • Streamable HTTP — HTTP POST for requests, optional Server-Sent Events for streaming responses. Supports OAuth 2.0, bearer tokens, and API keys. This is the transport for remote servers, cloud deployments, and enterprise use.
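A minimal sketch of the stdio plumbing, assuming a POSIX system: `cat` stands in for a real server binary (it just echoes a line back, which is enough to show the newline-delimited JSON-RPC exchange; `ping` is a real MCP method):

```python
import json
import subprocess

# The host spawns the "server" as a subprocess and exchanges
# newline-delimited JSON-RPC over stdin/stdout. `cat` stands in for
# a real MCP server binary here, so it simply echoes the message.
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)
msg = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
proc.stdin.write(json.dumps(msg) + "\n")
proc.stdin.flush()
echoed = json.loads(proc.stdout.readline())  # a real server would reply, not echo
proc.stdin.close()
proc.wait()
print(echoed)
```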

Capability negotiation

When a client connects to a server, they handshake:

  1. Client sends an initialize request with its supported capabilities and protocol version.
  2. Server responds with its capabilities — which tools it offers, what resources it exposes, which prompts it provides.
  3. Client confirms with a notifications/initialized message.
  4. Normal operation begins. The client can discover tools via tools/list and invoke them via tools/call.

This is exactly like USB-C power delivery negotiation. Both sides declare what they support, agree on what to use, and then operate within those bounds. No guessing, no runtime surprises.
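The handshake above, sketched as approximate wire messages. Field names follow the spec in spirit, but check the current spec revision for exact shapes; the client and server names are invented:

```python
# 1. Client proposes a protocol version and declares its capabilities.
initialize_request = {
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# 2. Server answers with what it actually offers.
initialize_response = {
    "jsonrpc": "2.0", "id": 1,
    "result": {
        "protocolVersion": "2025-06-18",
        "capabilities": {"tools": {}, "resources": {}, "prompts": {}},
        "serverInfo": {"name": "example-jira-server", "version": "0.1.0"},
    },
}

# 3. Client confirms. Note there is no id: this is a notification.
initialized = {"jsonrpc": "2.0", "method": "notifications/initialized"}

# 4. Normal operation: discover tools, then call them.
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}
print(initialize_response["result"]["capabilities"])
```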

The adoption explosion

MCP went from a single company's open-source project to industry infrastructure in 13 months. Here is the timeline:

  • Nov 2024: Anthropic launches MCP
  • Mar 2025: OpenAI adopts MCP (Agents SDK + ChatGPT desktop)
  • Apr 2025: Google DeepMind joins
  • Nov 2025: Major spec update (async ops, server identity)
  • Dec 2025: Donated to the Linux Foundation
  • Mar 2026: 97M monthly downloads, 10,000+ active servers

Sources: Anthropic, OpenAI, Google DeepMind, Pento.ai, Linux Foundation

The numbers tell the story. For context: the Language Server Protocol — the closest comparable standard — took years to reach broad adoption. MCP reached ubiquity in under 14 months.

Who supports MCP

In December 2025, Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation. The founding members read like a who's-who of AI and enterprise technology:

Anthropic, OpenAI, Block, AWS, Google, Microsoft, Cloudflare, Bloomberg (source: Pento.ai)

This is not one company's side project. It is a vendor-neutral standard governed by the same foundation that stewards Linux, Kubernetes, and Node.js.

Enterprise tool coverage

The ecosystem already covers the tools CTOs care about:

  • Project management: Atlassian (Jira, Confluence), Azure DevOps, Linear
  • Code and CI/CD: GitHub, GitLab, Docker, Kubernetes, cloud providers
  • Communication: Slack, Microsoft Teams
  • Business applications: Salesforce, HubSpot, Notion, Stripe
  • Data and analytics: PostgreSQL, OpenSearch, Algolia
  • Observability: Sentry, Grafana, LangSmith, Arize Phoenix

The official server repository maintains reference implementations, while the broader MCP Registry lists thousands more. And because MCP is an open protocol, anyone can build a server for any tool — no vendor approval required.

"Open technologies like the Model Context Protocol are the bridges that connect AI to real-world applications, ensuring innovation is accessible, transparent, and rooted in collaboration."

Dhanji R. Prasanna, CTO at Block, Anthropic MCP announcement

The security question (addressed honestly)

MCP's rapid adoption has outpaced its security story. This is worth being direct about.

In April 2025, researchers documented several vulnerability classes in the MCP ecosystem:

  • Prompt injection — Malicious data from a tool response can manipulate the AI's behavior.
  • Tool spoofing — Lookalike tools that silently replace trusted ones.
  • Data exfiltration — Tool permissions that allow an MCP server to access more data than intended.

These are real risks, not theoretical ones. The November 2025 spec update addressed several of them with async operations, server identity verification, and better permission models. But the ecosystem is still maturing.

For enterprise adoption, the question is not "is MCP secure?" but "how do I deploy MCP securely?" The answer is the same pattern enterprises already use for REST APIs: put a gateway in front of it.

Just as API gateways like Kong or Apigee sit between clients and backend APIs to enforce authentication, rate limiting, and audit logging — an MCP gateway sits between AI clients and MCP servers to provide:

  • Per-user authentication — OAuth 2.0 per service, not shared API keys.
  • Access control — Which users can invoke which tools, with what parameters.
  • Audit logging — Every tool invocation recorded, traceable to a specific user.
  • Policy enforcement — Block destructive actions, require confirmation for sensitive operations.

Raw MCP servers were not designed for enterprise. They are building blocks. The gateway pattern makes them enterprise-ready.
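The gateway pattern reduces to a wrapper around every tool call. A deliberately tiny sketch, with users, rules, and tool names all invented (a real gateway would also handle OAuth, rate limiting, and persistence):

```python
# Toy gateway: per-user allowlist + audit log wrapped around tool calls.
AUDIT_LOG = []
POLICY = {"alice": {"create_ticket"}, "bob": set()}  # invented per-user allowlists

def forward_to_server(request: dict) -> dict:
    # Stand-in for forwarding to the real MCP server downstream.
    return {"jsonrpc": "2.0", "id": request["id"], "result": "ok"}

def gateway_call(user: str, request: dict) -> dict:
    """Enforce policy and record an audit entry before forwarding."""
    tool = request["params"]["name"]
    allowed = tool in POLICY.get(user, set())
    AUDIT_LOG.append({"user": user, "tool": tool, "allowed": allowed})
    if not allowed:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32001, "message": f"{user} may not call {tool}"}}
    return forward_to_server(request)

call = {"jsonrpc": "2.0", "id": 7, "method": "tools/call",
        "params": {"name": "create_ticket", "arguments": {}}}
ok = gateway_call("alice", call)       # allowed, forwarded
denied = gateway_call("bob", call)     # blocked, but still audited
```

Note that the denied call is still logged: audit coverage of refusals is what makes incidents traceable to a specific user.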

What this means for your company

If you are a CTO or VP Engineering evaluating AI strategy in 2026, here is the landscape:

The standard is settled. Every major AI provider supports MCP. The Linux Foundation governs it. You are not betting on a single vendor.

The ecosystem is real. With 10,000+ servers and SDKs in 11 languages, you are not building from scratch. If your team uses Jira, Slack, GitLab, or any major enterprise tool, there is likely an MCP server for it already.

The window is closing. Gartner estimates that 40% of enterprise applications will feature task-specific AI agents by end of 2026. Companies that build the integration layer now will have agents that can actually work across their tools. Companies that wait will be doing it under competitive pressure, with less time to get it right.

The practical path for most organizations:

  1. Start with one high-value workflow that crosses 3+ systems (e.g., Jira ticket to merged MR to Slack notification).
  2. Use a gateway rather than connecting MCP servers directly to AI clients. You need authentication, audit trails, and access control from day one — retrofitting security is always harder.
  3. Measure delivery, not output. Track cycle time from ticket to production, not lines of code generated.
  4. Expand gradually. Each new MCP server you connect extends what your AI agents can do — without rebuilding existing integrations.

See MCP in action

mcpgate is one way to deploy MCP for your team — a self-hosted gateway that connects AI agents to your work tools through a single MCP endpoint, with per-user OAuth, policy guardrails, and audit logging.

Try the live demo — no signup required. Or read the docs to evaluate whether the gateway pattern fits your architecture.

Frequently asked questions

Is MCP only for Claude?

No. MCP was created by Anthropic but is now governed by the Linux Foundation under the Agentic AI Foundation. OpenAI, Google, Microsoft, AWS, and others are founding members. MCP works with Claude, ChatGPT, Gemini, Microsoft Copilot, Cursor, VS Code, and any client that implements the protocol.

Do I need to write code to use MCP?

It depends. If you use an MCP gateway or a pre-built server from the ecosystem, you can connect tools with configuration only — no code required. If you need a custom integration for an internal tool, you will need to write an MCP server, but the SDKs in 11 languages make this straightforward. A basic MCP server is typically under 200 lines of code.

What about security?

MCP itself is a protocol — security depends on how you deploy it. The April 2025 vulnerability disclosures were real and led to spec improvements in November 2025. For enterprise use, the recommended pattern is a gateway that adds authentication (per-user OAuth, not shared keys), access control, and audit logging on top of MCP servers. Do not expose raw MCP servers to production AI agents without a security layer.

How does MCP compare to OpenAI function calling?

OpenAI function calling (2023) lets you define tools that ChatGPT can invoke — but it is vendor-specific. A function calling integration you build for ChatGPT does not work with Claude or Gemini. MCP is a vendor-neutral protocol: build one MCP server and it works with every client that supports the standard. MCP also provides capabilities (Resources, Prompts) that function calling does not have, enabling richer context sharing between AI and tools.

What is the difference between an MCP server and an MCP gateway?

An MCP server exposes one tool or service (e.g., a Jira server, a GitLab server). An MCP gateway aggregates multiple MCP servers behind a single endpoint, adding cross-cutting concerns like authentication, authorization, rate limiting, and audit logging. The relationship is the same as a REST API vs. an API gateway — the gateway does not replace the servers, it sits in front of them.


Last updated: April 2026