How to connect Claude to Jira, Slack, and GitLab with one MCP endpoint
One MCP endpoint for Jira, Slack, GitLab, and the rest of your daily stack. Self-hosted, per-user OAuth, policy hooks. Works with Claude, ChatGPT, Codex.
If you want to connect Claude to Jira, Slack, and GitLab without managing separate configs and API tokens for each tool — an MCP gateway is the answer. One endpoint, one authentication flow, all your tools.
MCP (Model Context Protocol) is the standard that lets AI agents like Claude, ChatGPT, and Gemini talk to external tools. An MCP gateway takes this further: instead of connecting each tool individually, your AI connects to one central endpoint that handles everything.
This guide explains how it works, why it beats individual MCP servers, and how to set it up for your team.
TL;DR
- An MCP gateway replaces individual MCP servers with a single endpoint for all tools
- Your AI authenticates once — the gateway routes to Jira, Slack, GitLab, Google Workspace, and the rest of your daily stack
- Policy hooks let you block, validate, and transform AI actions before they execute
- Fully self-hosted — your data never leaves your infrastructure
What Claude can do when it's connected to your tools
When you connect Claude to Jira, Slack, and GitLab through a single MCP gateway, it can chain actions across services in one prompt:
"Take PBE-2962, look at the linked merge request, review the diff, merge it if the pipeline is green, then move the Jira ticket to Done and post a summary in #dev-updates."
That's three tools and six actions in one prompt. Claude does all of it. No tab switching, no copy-pasting issue keys.
More examples of what becomes possible:
- Morning briefing — "What's on my calendar today, any new Jira tickets assigned to me, and unread Slack messages in #dev?"
- Incident response — "Check Sentry for new errors, find the related Grafana dashboard, and create a Jira ticket with the stack trace."
- Sprint planning — "Show me all open issues in project PBE, group by priority, and draft a sprint plan in Notion."
- Code review workflow — "List open MRs in GitLab, check pipeline status, and post a summary in Slack for each one that's ready to merge."
This works today. But not by connecting each tool one by one.
Why individual MCP servers don't scale
The obvious approach is to install a separate MCP server for each tool — Atlassian offers one for Jira, Google has one for Workspace. In practice, managing multiple servers breaks down quickly:
- Authentication sprawl. Each server needs its own credentials — API tokens for Jira, OAuth for Google, bot tokens for Slack. Five tools means five different credential stores.
- No cross-tool workflows. Individual MCP servers can't chain actions. "Merge the MR and update the Jira ticket" becomes two separate prompts with manual copy-paste in between.
- No policy control. If Claude has a Jira API token, it can delete issues. There's no layer between "AI wants to do something" and "it happens."
- Per-developer setup. Every team member has to install and configure every MCP server on their machine. One person's config breaks after an update — good luck debugging.
The vendor-provided MCP servers (Atlassian's, Google's) solve the connection problem for their own tool. But they don't solve the orchestration problem — connecting multiple tools together, with policy control, for a whole team.
MCP gateway vs individual MCP servers
| | Individual MCP servers | MCP gateway |
|---|---|---|
| Setup | One config per tool per developer | One endpoint, configured once for the team |
| Authentication | API keys/tokens per tool, stored locally | Per-user OAuth, encrypted, centrally managed |
| Policy control | None — AI has raw API access | Pre/post hooks: block, validate, transform, audit |
| Cross-tool workflows | Manual — AI can't chain across servers | Built-in — one prompt triggers actions across services |
| Team rollout | Every developer configures everything | Admin configures once, team connects via SSO |
| Data residency | Tokens on developer laptops | Self-hosted, encrypted, your infrastructure |
How to connect Claude to all your tools with one MCP gateway
An MCP gateway sits between your AI agent and all your company tools. Your AI connects to one endpoint. The gateway handles authentication, policy enforcement, and routing to the right service.
Connecting Claude Code
One command registers the gateway as an MCP server in Claude Code:
```
claude mcp add mcpgate https://your-gateway/mcp -s user -t http
```
`-s user` stores the config per user (not per project). `-t http` tells Claude to use HTTP transport, which is the standard for remote MCP servers. Replace `your-gateway` with your actual gateway URL.
After this, Claude Code has access to all services configured in the gateway — Jira, Slack, GitLab, Google Workspace, everything. No per-service setup needed.
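If you prefer editing config files directly, the `claude mcp add` command above corresponds to an MCP server entry of roughly this shape (Claude Code stores user-scoped servers in its own config file; the JSON structure mirrors a project-level `.mcp.json`):

```json
{
  "mcpServers": {
    "mcpgate": {
      "type": "http",
      "url": "https://your-gateway/mcp"
    }
  }
}
```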
Connecting Claude for Work (company-wide)
For teams using Claude Pro or Team, an admin configures the gateway once at claude.ai/admin-settings/connectors:
```
Name: mcpgate
URL:  https://your-gateway/mcp
```
Every team member gets access to all configured services immediately — no individual setup required.
Connecting ChatGPT, Codex, or Gemini
An MCP gateway uses the standard MCP protocol. Any compatible AI client can connect:
- ChatGPT — Settings → Apps → Add App → enter your MCP URL
- Codex — `codex mcp add mcpgate --url https://your-gateway/mcp`
- Gemini — `gemini mcp add --transport http mcpgate https://your-gateway/mcp`
Where does my data go? Is this self-hosted?
This is the first question teams ask — and the answer matters when your AI agent can read Slack messages and Jira tickets.
An MCP gateway like mcpgate is fully self-hosted. It runs as a Docker container on your own infrastructure. No data flows through a third party. No cloud service to trust.
| What | Stored? | Where |
|---|---|---|
| OAuth tokens (Jira, Slack, etc.) | Yes, encrypted | Your Redis, encrypted with your key |
| Messages, documents, API responses | No | Proxied in memory, never persisted |
| User identifiers | Hashed only | SHA-256 hash — no emails in storage keys |
| Logs | Ephemeral | PII scrubbed — no names, emails, or IPs |
The source code is public — you can verify every claim yourself.
Why an MCP gateway uses your own OAuth apps
When you connect Jira or Google Workspace to the gateway, you create your own OAuth app in the provider's developer console (Atlassian, Google Cloud, etc.). This is by design.
A shared "gateway OAuth app" would make the gateway vendor a data processor. Your API tokens would be issued to their app. If they got breached, every customer's tokens would be compromised.
With your own OAuth app:
- Tokens are scoped to your organization. Jira issues a token for your Jira instance, authorized by your OAuth app.
- You control the permissions. You choose the API scopes. Want read-only Slack access? Set that in your app, not in the gateway.
- You can revoke anytime. Delete the OAuth app → all tokens are immediately invalid. No vendor involved.
Setting up an OAuth app takes about 5 minutes per service. The setup wizard links directly to the right console for each provider and tells you exactly which redirect URI to paste.
How MCP tool calls flow through the gateway
Every tool call passes through a hook pipeline. This is what makes an MCP gateway different from a raw proxy:
- Authentication — the gateway verifies who's asking. Each user has their own OAuth tokens per service. Claude can only access what you can access in Jira, Slack, etc.
- Pre-hooks — before the action executes, hooks can validate, block, or transform it. Require confirmation before deleting a Jira issue. Auto-convert Markdown to Jira's ADF format. Enrich issues with templates. All configured in YAML.
- Execution — the action runs against the service API with the user's own OAuth token.
- Post-hooks — after the action, hooks cap response size, add formatting hints, or chain follow-up actions. Created a Jira issue? The post-hook can automatically notify the team in Slack.
Hooks are defined in YAML — no code required. Add or change a hook and hot-reload without restarting the gateway:
```
curl -X POST https://your-gateway/admin/reload
```
See the hooks documentation for the full list of built-in hooks.
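As an illustration of the idea, a pair of pre-hooks — one requiring confirmation before a Jira issue is deleted, one converting Markdown to ADF — could be sketched in YAML like this. All field names and structure here are invented for the sketch, not mcpgate's actual schema; see the hooks documentation for the real format:

```yaml
# Hypothetical hook definitions — keys are illustrative only.
hooks:
  - name: confirm-jira-delete
    type: pre                       # runs before the action executes
    match:
      service: jira
      action: delete_issue
    effect: require_confirmation    # block until the user confirms

  - name: markdown-to-adf
    type: pre
    match:
      service: jira
      action: [create_issue, add_comment]
    effect: transform
    transform: markdown_to_adf      # convert Markdown bodies to Jira's ADF
```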
Real example: Claude merges a GitLab MR and closes a Jira ticket
Here's a workflow our team runs daily through the MCP gateway:
"Look at the open merge requests in ai-gateway, find the one for issue #642, check if the pipeline passed, and if yes merge it. Then close the Jira ticket with a comment summarizing what was fixed."
What Claude does through the gateway, in one shot:
- Queries GitLab for open MRs → finds the right one
- Checks the CI pipeline status → green
- Merges the MR via the GitLab API
- Closes the linked Jira ticket with a formatted summary comment
Two tools, four steps, zero tab switches, zero copy-paste. Every step went through policy hooks — the merge only executed because the pipeline was green, and the Jira comment was automatically converted from Markdown to Jira's native format.
Without an MCP gateway, this same workflow would require:
- Two separate MCP servers (GitLab + Jira), each with their own config
- Manual copy-pasting of the MR URL and issue key between prompts
- No format conversion, no policy checks, no audit trail
Supported services
mcpgate includes built-in integrations for the typical work stack, each defined in YAML:
| Service | What the AI can do |
|---|---|
| Google Workspace | Gmail, Calendar, Drive, Docs, Sheets (~90 actions) |
| Slack | Search messages, read channels, post messages |
| Jira | Create/update issues, transitions, worklogs, comments, attachments |
| GitLab | Issues, merge requests, pipelines, deployments, CI/CD |
| Notion | Pages, databases, blocks, comments |
| Figma | Files, components, comments, dev resources |
| Grafana | Dashboards, application logs, metrics |
| Sentry | Error tracking, issue queries, stack traces |
| Amplitude | Charts, active users, real-time analytics |
| Metabase | BI dashboards, SQL queries, schema exploration |
Custom services can be added in YAML without writing code or restarting the gateway.
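As a rough sketch of what a custom service definition could look like — endpoint paths, auth fields, and key names below are hypothetical, not mcpgate's actual schema; see the services documentation for the real format:

```yaml
# Hypothetical custom service — all keys are illustrative only.
services:
  statuspage:
    base_url: https://api.example-status.com/v1
    auth:
      type: oauth2              # per-user OAuth, tokens stored encrypted
      scopes: [incidents:read]
    actions:
      list_incidents:
        method: GET
        path: /incidents
        description: "List open incidents"
```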
Getting started
mcpgate is free for up to 5 users, no time limit.
Install
```
git clone https://gitlab.com/mcpgate/mcpgate.git
cd mcpgate
docker compose up -d
```
Open localhost:8642 — the setup wizard handles everything: login method (Google/Microsoft SSO or your own OIDC), team settings, and service connections. No `.env` file needed.
Or try the live demo first.
Team plan for up to 50 users with audit log and custom branding — see pricing.
Frequently asked questions
What is an MCP gateway?
An MCP gateway is a single endpoint that sits between your AI agent and all your company tools. Instead of connecting each tool individually with its own MCP server, the AI connects to the gateway — which handles authentication, routing, and policy enforcement for all services at once.
Does an MCP gateway work with ChatGPT, or only Claude?
It works with any AI client that supports the MCP protocol — Claude, ChatGPT, Codex, Gemini, and others. They all connect to the same endpoint.
How is this different from the official Jira or Slack MCP servers?
Official MCP servers connect one tool with raw API access. An MCP gateway connects all tools through one endpoint, adds per-user OAuth authentication (not shared API keys), policy hooks to control what the AI can do, and works for teams — not just a single developer.
Is my data safe? Can the gateway provider see my data?
mcpgate is self-hosted. It runs on your infrastructure. No data, tokens, or API responses flow through mcpgate.de or any third party. The source code is public.
How does MCP authentication work with a gateway?
Each user authenticates with the gateway via SSO (Google, Microsoft, or any OIDC provider). Per-service OAuth tokens are stored encrypted in your Redis instance. When Claude makes a tool call, the gateway uses your token for that service — so Claude can only access what you can access.
Can I add custom tools that aren't built in?
Yes. Services are defined in YAML — endpoints, authentication method, and optional hooks. No code needed. See the services documentation.
How do I update the gateway?
```
docker compose pull && docker compose up -d
```
Zero downtime. Configuration and connected services are preserved.
Last updated: April 2026