MCP Is the New REST. Google Cloud Just Made It Enterprise-Ready.

A protocol called MCP, the Model Context Protocol, is quietly becoming the standard for how AI agents interact with the outside world. Anthropic published it as an open standard, and since then OpenAI, Google, Microsoft, and most major open-source agent frameworks have adopted it. If REST is how applications talk to APIs, MCP is how agents talk to tools.

Google Cloud launched managed, remote MCP servers in December 2025. Understanding what that means requires first understanding what problem MCP solves.

The Tool Problem for AI Agents

An AI agent that can only reason without acting is not very useful for most enterprise applications. To do anything meaningful, an agent needs to call external systems: look up a customer record, trigger a workflow, query a database, send a notification. These are tool calls, and the agent needs to know what tools exist, what each one does, what parameters it takes, and what it returns.
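The four things an agent needs to know about a tool — its name, what it does, what parameters it takes, and what it returns — map naturally onto a schema. A minimal sketch in Python, using a hypothetical `lookup_customer` tool (the name and fields are illustrative, not from any real API):

```python
# Hypothetical tool definition: the minimum an agent needs before it can act.
# "lookup_customer" and its fields are illustrative, not from a real catalog.
lookup_customer = {
    "name": "lookup_customer",
    "description": "Fetch a customer record by its ID.",
    "inputSchema": {  # JSON Schema describing the accepted parameters
        "type": "object",
        "properties": {
            "customer_id": {
                "type": "string",
                "description": "Unique customer identifier",
            },
        },
        "required": ["customer_id"],
    },
}

# Everything the agent needs is readable from the definition itself:
print(lookup_customer["name"])
print(lookup_customer["inputSchema"]["required"])
```

Because the schema is data rather than code, any agent that can read it can decide whether and how to call the tool.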

Before MCP, every agent framework handled this differently. OpenAI had its function-calling format. Anthropic had tool use. LangChain had its own abstraction. Developers building agents had to write integration code for each framework separately, and when they switched models or frameworks, they rewrote it again.

MCP standardizes this. An MCP-compatible agent can discover available tools from any MCP server, read their schemas, and call them, without the developer hardcoding every integration. It is the same standardization moment REST was for web APIs two decades ago, and it is happening fast.
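Under the hood, MCP is JSON-RPC 2.0: discovery and invocation are two standard methods, `tools/list` and `tools/call`. A sketch of the two request shapes an agent sends (the tool name and arguments here are hypothetical):

```python
import json

# Discovery: ask the server what tools it offers.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Invocation: call one of the discovered tools by name.
# "lookup_customer" and its arguments are hypothetical placeholders.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "lookup_customer",
        "arguments": {"customer_id": "C-1042"},
    },
}

print(json.dumps(list_request))
print(json.dumps(call_request))
```

The point of the standard is that these two messages look the same against any MCP server, which is exactly what lets one integration serve every compatible agent.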

What Google Cloud Actually Built

The managed MCP server offering does something practically useful: it takes existing REST APIs and surfaces them as MCP tools without requiring any changes to those APIs. The mechanism is Apigee, which handles the REST-to-MCP protocol transcoding automatically. You configure an Apigee MCP proxy pointing at your existing APIs, and they become tools that any MCP-compatible agent can discover and call.

The governance layer is what makes it enterprise-ready. Every MCP tool call routes through Apigee, which applies the same policies as any other API: OAuth 2.0 authorization, rate limiting, token quotas, and full audit logging. An agent cannot call a tool it is not authorized to call. A runaway agent looping on the same tool call hits rate limits and stops. Every call is logged to Cloud Logging. The compliance documentation that enterprise customers ask for is generated automatically.
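From the client side, "every call routes through the gateway" looks like an ordinary HTTPS request carrying an OAuth 2.0 bearer token, which the gateway validates before the tool call goes anywhere. A sketch that assembles (but does not send) such a request; the endpoint URL and token are placeholders, not real Apigee values:

```python
import json
import urllib.request

def build_mcp_request(endpoint: str, access_token: str,
                      payload: dict) -> urllib.request.Request:
    """Assemble, without sending, an authorized MCP-over-HTTP request."""
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # The gateway checks this token before the call reaches the tool.
            "Authorization": f"Bearer {access_token}",
        },
        method="POST",
    )

req = build_mcp_request(
    "https://api.example.com/mcp",   # placeholder endpoint, not a real URL
    "placeholder-access-token",       # placeholder OAuth 2.0 token
    {"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
)
print(req.get_method(), req.get_header("Authorization"))
```

Nothing about the agent changes when policies change: rate limits, quotas, and logging are applied at the gateway, on the other side of this request.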

Apigee API Hub automatically registers all MCP proxies as a searchable tool catalog, so agents can discover what tools are available at runtime rather than working from a fixed list hardcoded at development time. This matters as the tool catalog grows: a dynamic discovery model scales in ways that static configuration does not.
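What runtime discovery buys you is that the agent's code never names tools up front; it inspects whatever the catalog returns. A sketch, using a stand-in for a `tools/list`-style response with hypothetical tool names:

```python
# Stand-in for a tool catalog response; the tool names are hypothetical.
catalog_response = {
    "tools": [
        {"name": "lookup_customer",
         "description": "Fetch a customer record by ID."},
        {"name": "send_notification",
         "description": "Send a message to a user."},
    ]
}

def find_tool(tools: list, keyword: str):
    """Pick the first tool whose description mentions the keyword."""
    for tool in tools:
        if keyword.lower() in tool["description"].lower():
            return tool["name"]
    return None

print(find_tool(catalog_response["tools"], "customer"))  # → lookup_customer
```

When a new API is registered in the hub, it shows up in the next `tools/list` response and this lookup finds it with no redeploy of the agent.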

Why the Open Standard Part Matters

One of the less obvious advantages of building on MCP rather than a proprietary format is portability. AWS Bedrock uses its own tool-use format, which means tool integrations built for Bedrock do not work with Claude, Gemini, or open-source agents without rewriting them. MCP integrations built on Google Cloud's managed MCP servers work with any MCP-compatible agent: Claude, Gemini, GPT-4, LangChain, AutoGen, and whatever comes next.

For ISVs building products that need to integrate with customers’ AI workflows, this is meaningful. A customer running Anthropic agents and a customer running Google agents can both call the same MCP tools through the same Apigee-governed endpoint. The ISV writes the integration once.

The Practical Implication

If you have an existing REST API catalog and you are thinking about agentic features, the path has gotten shorter. The APIs you already have become the tool library that agents can call, with enterprise governance applied at the infrastructure layer, and no new servers to build or operate. The question is less “how do we build MCP tooling” and more “which of our existing APIs do we want agents to be able to use, and at what access tiers.”

A few things worth sitting with: Are your customers already asking whether their AI agents can call into your product programmatically? If MCP becomes the default integration protocol for enterprise agentic AI, what does your API catalog look like as a competitive asset? And if a competitor surfaces their APIs as governed MCP tools before you do, how does that change the integration conversation?
