Tools

Trend 8: Agent-ready API enablement becomes mainstream

Agent-ready API enablement has recently evolved into a more structured and scalable approach for connecting AI agents with backend services. A key enabler is the OpenAPI Toolset in Google's Agent Development Kit (ADK), which can automatically transform a standard OpenAPI specification into callable tools (e.g., RestApiTool) that AI agents can invoke directly, removing the need for manual wrapper coding. On the GraphQL side, Apollo MCP Server has matured into a robust, production-grade solution for exposing GraphQL operations as agent-consumable tools under the MCP standard. GraphQL queries and mutations defined in the schema or in persisted-query manifests become automatically available to any MCP-capable client, giving agents structured access to data and business logic without bespoke integration. Beyond these, newer frameworks are emerging that further streamline agent-to-API workflows. For example, tools like FastAPI-MCP enable Python-based REST services to be exposed as MCP servers with minimal configuration, making them instantly usable by agents. There is also growing emphasis on dynamic tool discovery and selection, as demonstrated by frameworks such as ScaleMCP, which lets agents retrieve and register tools at runtime, reducing overhead and avoiding redundant tool-repository maintenance.
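The core idea behind these toolkits can be illustrated with a short sketch. Frameworks such as ADK's OpenAPI Toolset automate this far more thoroughly; the function and field names below are simplified stand-ins, not the real library API.

```python
# Illustrative sketch: deriving agent-callable tool descriptors from an
# OpenAPI specification. Real toolkits also generate the HTTP call logic,
# auth handling, and schema validation; this shows only the mapping step.

def build_tools_from_openapi(spec: dict) -> dict:
    """Turn each OpenAPI operation into a tool descriptor an agent can call."""
    tools = {}
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            # Prefer the spec's operationId as the tool name; fall back to
            # a name derived from the HTTP method and path.
            name = op.get("operationId") or f"{method}_{path.strip('/').replace('/', '_')}"
            tools[name] = {
                "method": method.upper(),
                "path": path,
                "description": op.get("summary", ""),
                "parameters": [p["name"] for p in op.get("parameters", [])],
            }
    return tools

# A minimal OpenAPI fragment with a single operation.
spec = {
    "paths": {
        "/orders/{id}": {
            "get": {
                "operationId": "getOrder",
                "summary": "Fetch an order by ID",
                "parameters": [{"name": "id", "in": "path"}],
            }
        }
    }
}

tools = build_tools_from_openapi(spec)
print(tools["getOrder"]["method"], tools["getOrder"]["path"])  # GET /orders/{id}
```

An agent framework would then advertise each descriptor to the model and dispatch the corresponding HTTP request when the model selects a tool, which is exactly the wrapper coding these toolkits eliminate.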

A leading multicountry quick-service restaurant operator modernized its finance operations through an agentic AI-enabled, cloud-based accounts payable platform that autonomously processes invoices end to end, with multilingual support, complex validations, and minimal human intervention. Infosys supported the transformation, which improved accuracy and efficiency while enabling predictive insights, adaptive processing, and continuous learning powered by Azure GPT-4o.

Trend 9: Observability expands to support AI-driven and agentic systems

The rise of AI-powered applications and agentic workflows has driven observability platforms to evolve far beyond traditional APM. Modern AI observability tooling now captures not only infrastructure metrics but also prompt-level traces, token usage, model invocation latency, error rates, and downstream embedding drift or safety signals. For instance, Amazon CloudWatch now offers a dedicated generative AI observability capability that delivers out-of-the-box dashboards tracking latency, token consumption, errors, and model usage, and, crucially, supports end-to-end prompt tracing across models, knowledge bases, tools, and agent workloads. Beyond CloudWatch, observability is increasingly standardized around OpenTelemetry (OTel), now extended with generative AI semantic conventions and instrumentation libraries that automatically capture telemetry for LLM-based applications, including prompts, completions, tool calls, token counts, and cost metrics. Vendors such as IBM Instana have released generative AI observability sensors that leverage OTel to instrument full AI stacks (models, agents, runtimes, infrastructure), offering integrated tracing, performance monitoring, alerting, and cost and resource analytics. These developments allow organizations to deploy generative AI and agentic systems in production with high confidence in reliability, observability, and operational governance.
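The kind of prompt-level telemetry described above can be sketched as follows. The gen_ai.* keys follow the OpenTelemetry generative-AI semantic conventions; in a real deployment they would be set as span attributes via the OTel SDK and an instrumentation library. The record_llm_call helper and its parameters are illustrative, not a library API.

```python
# Minimal sketch of capturing one LLM invocation as a telemetry record.
# A plain dict stands in here for an OpenTelemetry span.
import time

def record_llm_call(model: str, input_tokens: int, output_tokens: int,
                    latency_ms: float) -> dict:
    """Represent one model invocation as a flat telemetry record."""
    return {
        "gen_ai.operation.name": "chat",          # kind of model operation
        "gen_ai.request.model": model,            # which model was invoked
        "gen_ai.usage.input_tokens": input_tokens,
        "gen_ai.usage.output_tokens": output_tokens,
        "duration_ms": latency_ms,                # model invocation latency
        "timestamp": time.time(),
    }

record = record_llm_call("example-model", input_tokens=12,
                         output_tokens=45, latency_ms=230.0)
# Total token consumption for this call, the figure a dashboard would
# aggregate for cost tracking.
print(record["gen_ai.usage.input_tokens"] + record["gen_ai.usage.output_tokens"])  # 57
```

Emitting records like this on every model, tool, and agent call is what lets platforms such as CloudWatch or Instana assemble end-to-end traces, latency dashboards, and cost analytics.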