This whitepaper explores how Model Context Protocol (MCP) transforms traditional APIs into intelligent, context-aware ecosystems. MCP complements—not replaces—APIs, enabling adaptive reasoning, tool integration, and agentic workflows. Part of the Integration Fabric series, it offers insights on MCP architecture and enterprise adoption strategies for an AI-first world.
The rapid evolution of artificial intelligence—particularly with the emergence of Large Language Models (LLMs) like GPT, Claude, and Gemini—has triggered a foundational shift in how software components interact, collaborate, and execute tasks. Traditional APIs, designed for predictable, request-response interactions, are effective for transactional tasks but lack the flexibility required for intelligent, adaptive, and collaborative functions now enabled by LLMs and autonomous agents.
In an AI-first world, the expectation from digital systems is no longer just data retrieval or CRUD operations. Instead, we are moving towards intelligent systems capable of understanding context, dynamically discovering tools, coordinating with other agents, and making autonomous decisions on behalf of users.
This whitepaper explores the evolution of API communication from its early days of machine-to-machine data transfer to the emerging paradigm of agent-to-agent collaboration powered by Model Context Protocol (MCP), Agent Communication Protocol (ACP), and A2A (Agent-to-Agent). These protocols are designed to encapsulate not just data, but intent, context, memory, and adaptive behavior, thereby enabling a much more powerful interaction model between machines.
This whitepaper is intended for technology strategists, enterprise architects, platform engineers, and innovation leaders looking to understand how to future-proof their systems for agentic computing. It provides both a conceptual foundation and a practical guide for adopting next-generation AI protocols and building intelligent, context-aware, and collaborative systems of the future.
Historically, APIs (such as REST, SOAP, or GraphQL) were sufficient for:
However, as LLMs began to orchestrate tools, interpret user goals, and manage workflows, these constraints became apparent:
This meant that even though LLMs could “think,” they were essentially boxed in by outdated communication constructs.
At Infosys, we are helping customers create an AI Integration Fabric.

These measures ensure APIs can sustain the increased load generated by autonomous agentic interactions without degradation or failure.
As artificial intelligence systems, particularly Large Language Models (LLMs), have advanced from static predictors to dynamic agents capable of understanding and interacting with complex tasks, the need for new communication protocols has become paramount. Traditional API-based communication was built for deterministic interactions: the client requests a resource, and the server responds with data. But LLM-driven systems are inherently contextual, stateful, goal-oriented, and often collaborative, requiring a paradigm of interaction far beyond static endpoints and payloads.
Rise of Agentic Communication
To empower LLMs as autonomous agents — capable of executing multi-step reasoning, invoking tools, and collaborating with peers—new forms of communication had to emerge that support:
This shift gave birth to a new category of protocols, often referred to as Agentic Communication Protocols, which differ fundamentally from RESTful interfaces.
| Dimension | Traditional APIs | Agentic Protocols (LLM/AI) |
|---|---|---|
| Interaction Model | Request–Response | Intent-Based, Multi-Turn, Contextual |
| Session Management | Stateless | Stateful, with embedded memory |
| Endpoint Discovery | Static, pre-integrated | Dynamic, On-the-Fly Discovery |
| Payload Semantics | Structured Data | Intent + Context + Instruction |
| Collaboration | One-to-One | Many-to-Many (Agent-to-Agent, Tool-Orchestrated) |
| Execution Style | Deterministic | Reasoned, Adaptive, Goal-Oriented |
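To make the contrast in the table concrete, the sketch below expresses the same user goal first as a traditional REST call and then as an intent-based agentic message. All field names in the agentic message are illustrative conventions, not part of any formal specification.

```python
import json

# Traditional API: the caller must already know the exact endpoint,
# parameters, and call sequence.
rest_request = {
    "method": "GET",
    "path": "/flights?from=SFO&to=JFK&date=2025-06-01",
}

# Agentic protocol (illustrative shape): the message carries intent,
# context, and conversational state, leaving tool selection and
# planning to the receiving agent.
agentic_message = {
    "intent": "book_travel",
    "goal": "Fly from San Francisco to New York in early June, under $400",
    "context": {"traveler": "user-123", "preferences": ["nonstop"]},
    "conversation_id": "conv-42",  # stateful, multi-turn
}

print(json.dumps(agentic_message, indent=2))
```

The REST request encodes *how* to fetch data; the agentic message encodes *what the user wants*, which is what allows multi-turn, goal-oriented execution.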
Before agent-specific protocols matured, some early efforts tried to bend existing tools:
These early frameworks were valuable stepping stones, but they did not introduce protocol‑level abstractions—meaning they could orchestrate tools inside a single application, but could not support standardized, interoperable communication between multiple autonomous agents. As a result, they were unable to scale to true multi‑agent ecosystems, which require shared schemas, message semantics, and service‑level contracts rather than library‑specific bindings.
The following protocols are now emerging as industry standards:
ACP defines the formal message semantics for structured agent interactions.
It focuses on how agents express intent, describe actions, reference tools, exchange structured messages, and maintain conversational state.
Core characteristics:
ACP is essentially the rules and structure of communication — not the topology or the collaboration model.
A2A defines the decentralized collaboration model between autonomous agents.
It emphasizes peer-level behavior rather than message structure.
Core characteristics:
While ACP defines how messages are structured, A2A defines how agents relate to each other in a networked environment. In simple terms, ACP governs the language agents speak, while A2A governs the way agents interact and collaborate using that language.
As agents become stateful and tool‑driven, the UI must stay tightly synchronized with agent actions in real time — which is why UI synchronization now requires a formal protocol. A dedicated protocol is necessary because traditional UI update mechanisms cannot reliably reflect rapid, multi‑step agent actions or tool calls, leading to inconsistency, race conditions, or loss of state.
Together, these form a layered communication stack, similar to the OSI model but optimized for semantic, goal-driven exchanges.
Interestingly, these ideas are not entirely new. Multi-Agent Systems (MAS) from academic AI research proposed frameworks like:
However, with modern LLMs, we now have the reasoning substrate to make these frameworks practical and useful at scale.
While MAS concepts inspire parts of today’s agentic ecosystem, modern protocols such as MCP, ACP, and A2A do not directly inherit or implement FIPA‑ACL. They borrow high‑level ideas (e.g., intent expression, role‑based communication), but their semantics, message structures, safety boundaries, and runtime designs are fundamentally new and tailored for LLM‑driven systems.
Agent communication protocols are now being actively integrated into open-source and enterprise ecosystems:
These implementations demonstrate that protocols are no longer optional — they are foundational to building reliable, reusable, intelligent agent systems.
As LLMs evolve to become long-running, memory-augmented, tool-rich agents:
While MCP is rapidly stabilizing, related protocols such as ACP and AP2 are still in active evolution and not yet fully standardized. Their specifications, governance structures, and interoperability patterns are expected to mature over the next several cycles. Enterprises should view them as emerging standards, not finalized ones.
We are entering a world where agents are not just endpoints, but collaborators — and this demands a new, intelligent language for communication.
Model Context Protocol (MCP) is an open standard that enables applications to provide structured context to Large Language Models (LLMs). Like USB‑C for devices, MCP standardizes how AI models connect to data sources and tools, allowing seamless integration and portability of context across applications.
A key architectural distinction is that APIs contain and execute business logic, whereas MCP exposes a model‑friendly semantic layer on top of those APIs. MCP does not replace or reinterpret the underlying logic; rather, it wraps existing capabilities in a consistent structure that LLMs can understand, invoke, and reason over.
Unlike traditional APIs, MCP handles dynamic information — conversation history, environment state, intent metadata — allowing LLMs to adapt and act intelligently in real-world scenarios. The logic still runs through traditional APIs or services; MCP simply provides an LLM‑native interface that semantically exposes those capabilities.
Example: Visual Studio Code acts as an MCP host, connecting to multiple MCP servers (e.g., Sentry, filesystem) via separate MCP clients.

Note that “MCP server” refers to the program that serves context data, regardless of where it runs. MCP servers can execute locally or remotely. For example, when Claude Desktop launches the filesystem server, the server runs locally on the same machine because it uses the STDIO transport; this is commonly referred to as a “local” MCP server. The official Sentry MCP server runs on the Sentry platform and uses the Streamable HTTP transport; this is commonly referred to as a “remote” MCP server.
The STDIO transport is intended primarily for local development and desktop integrations, where the MCP server runs on the same machine. For distributed or enterprise‑grade deployments, Streamable HTTP is the recommended transport, enabling remote execution, scalability, service reliability, and secure cloud integration.
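Regardless of transport, MCP messages follow JSON-RPC 2.0. The sketch below constructs the `initialize` request a client sends when opening a session; the same payload travels as newline-delimited JSON on STDIO or as HTTP POST bodies with Streamable HTTP. The client name, version, and protocol version string here are illustrative.

```python
import json

# Minimal JSON-RPC 2.0 "initialize" request an MCP client sends at
# session start. Field values are illustrative placeholders.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# On STDIO the client writes one JSON message per line to the server
# process; with Streamable HTTP the same payload is POSTed to the
# server's HTTP endpoint.
wire_message = json.dumps(initialize_request)
print(wire_message)
```

Because the message layer is transport-agnostic, a server built for local STDIO use can later be exposed remotely over Streamable HTTP without changing its tool, resource, or prompt definitions.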
MCP servers expose domain-specific capabilities to AI applications via standardized interfaces. Examples include:
Servers operate through three core building blocks: Tools, Resources, and Prompts.
Transport Consideration:
Most production deployments use Streamable HTTP for MCP servers to support remote access and distributed scaling. STDIO should be used only for local or single‑machine scenarios.
| Building Block | Purpose | Who Controls It | Real‑World Example |
|---|---|---|---|
| Tools | For AI actions | Model‑controlled | Search flights, send messages, create calendar events |
| Resources | For context data | Application‑controlled | Documents, calendars, emails, weather data |
| Prompts | For interaction templates | User‑controlled | “Plan a vacation”, “Summarize my meetings”, “Draft an email” |
Servers provide functionality through three building blocks:
Tools – AI Actions
Resources - Context Data
Resources provide structured access to external information that the host application can retrieve and supply to AI models as context.
Overview
Protocol Operations
Example
Resource templates enable flexible queries, e.g.:
JSON
{
  "uriTemplate": "weather://forecast/{city}/{date}",
  "name": "weather-forecast",
  "title": "Weather Forecast",
  "description": "Get weather forecast for any city and date",
  "mimeType": "application/json"
}
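A client turns the template above into a concrete resource URI by substituting parameter values. Full RFC 6570 URI templates support richer operators; this small sketch covers only the simple `{name}` substitution used in the example.

```python
# Expand a simple {name}-style resource template into a concrete URI.
def expand_uri_template(template: str, **params: str) -> str:
    uri = template
    for key, value in params.items():
        uri = uri.replace("{" + key + "}", value)
    return uri

uri = expand_uri_template(
    "weather://forecast/{city}/{date}",
    city="San-Francisco",
    date="2025-06-01",
)
print(uri)  # weather://forecast/San-Francisco/2025-06-01
```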
User Interaction Model
Prompts – Interaction Templates
Prompts provide reusable, structured templates for common tasks, enabling consistent workflows and reducing reliance on unstructured natural language input.
Overview
Protocol Operations
Example
Plan a Vacation Prompt:
JSON
{
  "name": "plan-vacation",
  "title": "Plan a vacation",
  "description": "Guide through vacation planning process",
  "arguments": [
    { "name": "destination", "type": "string", "required": true },
    { "name": "duration", "type": "number", "description": "days" },
    { "name": "budget", "type": "number" },
    { "name": "interests", "type": "array", "items": { "type": "string" } }
  ]
}
Workflow:
Error Handling Consistency: MCP servers should return structured, JSON‑schema‑aligned error formats for both tool calls and resource operations. Consistent error semantics allow LLMs to interpret failure modes, retry intelligently, request clarification from users, or adjust plans without producing hallucinated assumptions.
For enterprise deployments, adopting a predictable error schema (e.g., standardized error codes, messages, and remediation hints) is essential for reliability, auditability, and safe automation.
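As one possible shape for such a schema, the sketch below builds a structured tool error with a stable code, a retryability flag, and a remediation hint. The field names (`code`, `retryable`, `remediation`) are illustrative conventions, not mandated by the MCP specification.

```python
import json

# Construct a predictable, machine-interpretable error payload that an
# LLM agent can reason about (retry, ask the user, or change plan).
def make_tool_error(code: str, message: str, retryable: bool,
                    remediation: str) -> dict:
    return {
        "isError": True,
        "error": {
            "code": code,                # stable, machine-readable identifier
            "message": message,          # human/LLM-readable explanation
            "retryable": retryable,      # lets the agent decide whether to retry
            "remediation": remediation,  # actionable hint for agent or user
        },
    }

err = make_tool_error(
    code="RATE_LIMITED",
    message="Weather API quota exceeded",
    retryable=True,
    remediation="Retry after 60 seconds or reduce request frequency",
)
print(json.dumps(err, indent=2))
```

Because the error code and `retryable` flag are structured rather than buried in free text, an agent can branch on them deterministically instead of guessing from a prose error message.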
User Interaction Model
Multiple Servers Working Together

Understanding MCP Clients
MCP clients are created by host applications (e.g., Claude.ai, IDEs) to communicate with MCP servers.
Core Client Features
Clients enrich interactions by enabling servers to:
Sampling
Sampling lets servers request AI completions via the client, enabling agentic behaviors while maintaining security and user control.
Why it matters:
Flow:

Security:
Example: Flight Analysis Tool
A travel server tool (findBestFlight) uses sampling to evaluate 47 flight options based on user preferences. The client mediates AI analysis and ensures user consent before returning recommendations.
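The sampling request such a tool routes through the client might look like the sketch below, following the MCP `sampling/createMessage` shape. The prompt content and token limit are illustrative, and `findBestFlight` itself is the hypothetical tool from the example above.

```python
import json

# Illustrative sampling request a server sends to the client, asking the
# client's model to analyze data on the server's behalf. The client can
# show this to the user for approval before forwarding it to the LLM.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Rank these 47 flight options for a traveler "
                            "who prefers nonstop morning departures: ...",
                },
            }
        ],
        "maxTokens": 500,
    },
}

print(json.dumps(sampling_request)[:80])
```

The key property is the direction of the request: the server never talks to a model directly, so the client remains the single point of user consent and security policy.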
Elicitation
Elicitation enables servers to request specific information from users dynamically, making workflows more adaptive and interactive.
Overview
Flow

Example: Holiday Booking
Before finalizing a Barcelona trip, the server elicits:
User Interaction
No. MCP does not replace APIs; it complements them. MCP serves as a context‑orchestration interface, not an execution or runtime layer. Its role is to standardize how LLMs access tools, resources, and prompts — but it does not execute business logic, host services, or replace the underlying API infrastructure.
To illustrate, let’s walk through a simple example of MCP in action. Suppose a user asks a chatbot a question requiring external context beyond the model’s training data, such as: “What’s the weather in San Francisco today?”
The MCP client first ensures it has up‑to‑date tool definitions from connected MCP servers. It injects the get_weather tool definition into the conversation context before sending everything to the LLM. The LLM recognizes it needs external data to answer the query and invokes:
get_weather(city="San Francisco")
The MCP client routes this tool call to the appropriate server. The server performs the actual logic — which is simply calling a traditional weather API — and returns the result. MCP’s involvement ends at orchestrating the tool invocation and the context surrounding it.
The returned data is passed back to the LLM, which uses it to generate the final response for the user.

Notably, the underlying computation here is still a standard API call. This highlights the key point: MCP does not replace APIs. It provides an LLM‑friendly interface and consistent context‑management layer on top of existing APIs, without altering how those APIs execute.
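The whole loop can be simulated in a few lines: the client injects a tool definition, a stubbed LLM decision emits a tool call, and the server's handler wraps a plain function standing in for the traditional weather API. Everything here is an illustrative stand-in, not SDK code.

```python
# Tool definition the MCP client injects into the LLM's context.
GET_WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Get current weather for a city",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def weather_api(city: str) -> str:
    # Stands in for the existing REST weather API; MCP does not replace it.
    return f"Sunny, 18°C in {city}"

def handle_tool_call(name: str, arguments: dict) -> str:
    # The MCP server routes the tool call to the underlying API.
    if name == "get_weather":
        return weather_api(arguments["city"])
    raise ValueError(f"Unknown tool: {name}")

# Stubbed LLM turn: having "seen" the injected tool, it emits a call.
tool_call = {"name": "get_weather", "arguments": {"city": "San Francisco"}}
result = handle_tool_call(tool_call["name"], tool_call["arguments"])
print(result)  # Sunny, 18°C in San Francisco
```

Note that `weather_api` is where all the real execution lives; MCP's contribution is the standardized definition, routing, and context handling around it.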
A raw LLM maps inputs to outputs. An agentic LLM system adds:
When an LLM can choose tools, reflect on outcomes, and plan next steps, it becomes agentic.
Role of MCP
MCP enhances this by:
Agentic Architecture
An agent emerges from:
Example Workflow
Goal: “I want to focus on deep work today.”
Throughout this process, the MCP client manages the reasoning loop—deciding when to re-prompt the LLM, when to approve or reject actions, and how state evolves—while MCP provides the structured tools and context that make the workflow possible.
MCP servers can also act as clients to other MCP servers, enabling modularity, composition, and delegation—like microservices for agents. This approach decouples tool logic from the agent runtime, creating a composable system of MCP servers that work like Lego blocks.
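The server-as-client pattern can be sketched with plain callables standing in for downstream servers: an orchestrating server exposes one tool whose handler delegates to two others. All names here are hypothetical; a real implementation would issue MCP tool calls over a transport rather than direct function calls.

```python
def data_server_query(metric: str) -> list:
    # Stub for a downstream data-access MCP server.
    return [10, 12, 15]

def chart_server_render(values: list) -> str:
    # Stub for a downstream charting MCP server.
    return f"chart({','.join(map(str, values))})"

def generate_report(metric: str) -> dict:
    """Orchestrating tool: acts as a client to the two servers above."""
    values = data_server_query(metric)
    return {"metric": metric, "chart": chart_server_render(values)}

report = generate_report("daily_orders")
print(report["chart"])  # chart(10,12,15)
```

Because each downstream capability sits behind its own server boundary, it can be versioned, secured, and reused independently, which is what makes the Lego-block composition work.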
Example: Orchestrator Server
A dev-scaffolding server can coordinate upstream servers:
Enterprise‑Aligned Scenarios
1. Data Pipeline Automation
An enterprise may structure its data operations as a chain of MCP servers:
A central Pipeline Orchestrator MCP Server simply delegates to these downstream MCP servers. This mirrors enterprise ETL/ELT systems, but with LLM-accessible semantics and composability.
2. Chained API Orchestration for Business Workflows
Large enterprises often rely on multiple back‑office APIs (CRM, ERP, billing, HR). Instead of giving an LLM direct access to all systems, each is exposed as a dedicated MCP server:
A Workflow MCP Server sits on top, orchestrating a multi-step business process such as “Create an order, check inventory, generate an invoice.” Each server acts as both a client (to downstream MCP servers) and a provider (to the agent), producing a secure, modular chain.
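The "create an order, check inventory, generate an invoice" chain can be sketched with each back-office system modeled as a stub. A real Workflow MCP Server would make MCP tool calls to the CRM, ERP, and billing servers rather than the direct function calls shown here; all system names and return values are illustrative.

```python
def crm_create_order(customer: str, item: str) -> dict:
    return {"order_id": "ORD-1001", "customer": customer, "item": item}

def erp_check_inventory(item: str) -> bool:
    return item == "widget"  # stub: only widgets are in stock

def billing_generate_invoice(order: dict) -> dict:
    return {"invoice_id": "INV-9001", "order_id": order["order_id"]}

def run_order_workflow(customer: str, item: str) -> dict:
    """Workflow server tool: chains the three downstream systems."""
    order = crm_create_order(customer, item)
    if not erp_check_inventory(item):
        return {"status": "failed", "reason": "out of stock"}
    invoice = billing_generate_invoice(order)
    return {"status": "completed", "invoice": invoice}

print(run_order_workflow("Acme Corp", "widget")["status"])  # completed
```

Keeping each system behind its own MCP server means the agent never holds credentials for CRM, ERP, or billing directly; only the workflow layer composes them.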
Remote MCP Servers
Most MCP servers today run locally via STDIO, requiring manual installation alongside clients. While simple for testing, this limits scalability and interoperability:
Shift to Remote
Anthropic's spec updates introduce Streamable HTTP, enabling stateless remote servers and paving the way for a distributed MCP ecosystem.
This evolution supports scalable, enterprise-grade deployments where MCP servers behave like cloud microservices rather than local plugins.
Remote MCP servers also introduce network dependency and added latency, which must be evaluated against enterprise reliability requirements (e.g., RTO/RPO, failover strategies, and QoS guarantees). High‑availability architectures—load-balancing, retries, regional replicas—become critical when servers are no longer local.
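One building block of such resilience is client-side retry with exponential backoff, sketched below against a simulated transient failure. Production clients would add jitter, timeouts, and circuit breaking on top of this minimal pattern.

```python
import time

def call_with_retries(call, max_attempts: int = 3, base_delay: float = 0.01):
    """Retry a remote call with exponential backoff on connection errors."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # exhausted retries: surface the failure
            time.sleep(base_delay * (2 ** (attempt - 1)))  # 0.01s, 0.02s, ...

attempts = {"n": 0}

def flaky_remote_server():
    # Simulates a remote MCP server that fails twice, then recovers.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient network failure")
    return {"status": "ok"}

result = call_with_retries(flaky_remote_server)
print(result)  # {'status': 'ok'}
```

Retries only mask transient faults; sustained outages still require the failover and replication strategies mentioned above.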
Scenario:
A global logistics company wants its AI assistant to handle queries like: "Where is my shipment? Can you update the delivery address and check if customs clearance is complete?"
Challenges:
Solution leveraging MCP:
Business Impact:
Note: These performance ranges reflect typical improvements reported in logistics automation and API modernization benchmarks; actual impact may vary by system complexity and baseline process efficiency.
| Tool / Platform | Type | Key Features | Language / Tech |
|---|---|---|---|
| openapi-mcp-generator | CLI / SDK | Converts OpenAPI specs to MCP servers; supports OAuth2, Zod validation | TypeScript |
| api-wrapper-mcp | Wrapper | Wraps REST APIs as MCP tools using YAML config; supports Claude integration | Go |
| RapidMCP | Hosted Service | No-code conversion of REST APIs to MCP servers; remote deployment | Cloud-based |
| mcp.run | Platform | Registry + remote execution of MCP servers; dynamic updates | Cloud-based |
| VS Code API-to-MCP | IDE Extension | Scans code for REST APIs and auto-generates MCP wrappers | Node.js |
| Tyk API Gateway | Gateway | Adds MCP interface to existing APIs; centralized auth and observability | Enterprise Gateway |
| Anthropic MCP SDK | SDK | Official SDK for building MCP servers and clients | TypeScript |
Important Note on API → MCP Conversion
While these tools accelerate MCP adoption, conversion is not purely syntactic. Even when OpenAPI specs or REST endpoints are auto‑wrapped, teams must validate:
This ensures that the resulting MCP server is not only functional but also usable, safe, and interpretable by LLM-powered agents.
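The syntactic half of that conversion can be sketched as a mapping from one OpenAPI operation to an MCP-style tool definition. The operation shown is hypothetical, and the semantic review the text calls for (naming, descriptions, safety semantics) still has to happen by hand.

```python
# Hypothetical OpenAPI operation, abbreviated to the fields we map.
openapi_operation = {
    "operationId": "getShipmentStatus",
    "summary": "Retrieve current status of a shipment",
    "parameters": [
        {"name": "shipmentId", "schema": {"type": "string"}, "required": True},
    ],
}

def to_mcp_tool(op: dict) -> dict:
    """Mechanically map an OpenAPI operation to an MCP-style tool."""
    props = {p["name"]: p["schema"] for p in op["parameters"]}
    required = [p["name"] for p in op["parameters"] if p.get("required")]
    return {
        "name": op["operationId"],
        "description": op["summary"],  # must still be reviewed for LLM clarity
        "inputSchema": {
            "type": "object",
            "properties": props,
            "required": required,
        },
    }

tool = to_mcp_tool(openapi_operation)
print(tool["name"])  # getShipmentStatus
```

A terse `summary` that was fine for human API docs often needs rewriting into a fuller `description` before an LLM can reliably choose and parameterize the tool, which is exactly the validation step the list above demands.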
As organizations embrace protocols like MCP, A2A, ACP, and AP2, scaling adoption requires a structured approach that balances technical readiness, governance, and ecosystem alignment.

At Infosys, we are building an agentic stack under the Infosys Topaz Fabric, enabling customers to select and integrate models, MCP servers, or custom agents to create enterprise-specific solutions.
We recommend that enterprises undertake a comprehensive assessment of their current integration landscape and define or establish the following:
1. Establish a Strategic Roadmap
2. Build Modular Architecture
3. Leverage Open Standards and SDKs
4. Invest in Governance and Security
5. Enable Developer Ecosystem
6. Scale Through Automation
7. Foster Cross-Functional Collaboration
8. Manage Protocol and Server Lifecycles
Enterprise-scale adoption requires mature lifecycle management to ensure long-term stability and compatibility.
This ensures that MCP‑based systems evolve predictably and remain reliable at enterprise scale.
Forecast:
Within the next 2–3 years, agent protocols will become foundational for AI-driven workflows, with AP2 emerging as a critical enabler for trusted agentic commerce.
The future of APIs is evolving from static, request-response interfaces to intelligent, context-aware, and agent-driven ecosystems. Traditional APIs will remain foundational, but their role is shifting—they will serve as the underlying infrastructure while AI-enabled protocols like MCP, A2A, ACP, and AP2 introduce a new layer of intelligence and interoperability.
To stay foundational in an AI‑first world, APIs themselves must expose machine‑consumable semantics—clear metadata, introspection capabilities, deterministic behavior, and predictable structures—to make them fully AI‑ready.
These protocols do not replace APIs—they augment them, making them AI-ready for complex workflows, secure transactions, and cross-platform collaboration. For enterprises, adopting these standards means moving beyond isolated endpoints toward API ecosystems that are intelligent, interoperable, and future-proof.
The next generation of APIs will be agent-aware, protocol-driven, and trust-centric, forming the backbone of digital transformation in an era where AI agents orchestrate tasks, decisions, and commerce autonomously.
Throughout the preparation of this whitepaper, information and insights were drawn from a range of reputable sources, including research papers, articles, and resources. Some of the key references that informed the content of this whitepaper include:
These references provided the foundation upon which the discussions, insights, and recommendations in this whitepaper were based.