Insights
- As organizations enter the agentic artificial intelligence (AI) era, the traditional product-centric value delivery (PCVD) model must evolve to integrate autonomous AI agents alongside human team members.
- Organizations will need to integrate orchestration, service, and copilot agents throughout the software development life cycle (SDLC), while addressing critical challenges in security, privacy, and governance.
- Early implementations demonstrate 40% to 50% effort savings and faster time-to-market, though full-scale adoption requires careful governance, upskilling, and change management.
The product-centric value delivery (PCVD) model is an important step in digital and artificial intelligence (AI)-first journeys, guiding the transition from legacy, project-based delivery to cloud-native, microservices-driven architectures. PCVD is an organizational operating model that defines how teams are structured and how value is delivered: by uniting product-oriented delivery teams, known as PODs, around products and value streams, it fosters ownership, domain expertise, and rapid innovation.
PODs are small, autonomous, cross-functional teams typically consisting of five to 10 members, including software engineers, designers, quality assurance specialists, and product owners, all focused on a single product or value stream. These teams have the expertise to deliver complete software features without external dependencies.
PCVD focuses on both products and platforms. This distinguishes it from traditional project-based delivery: in PCVD, self-contained teams own the product, its full life cycle, and the implementation strategy and roadmap.
The software development life cycle (SDLC) is the technical process for building software. PCVD encompasses SDLC as one of its execution mechanisms, ensuring that customer problems are defined and desired outcomes are achieved through objectives and key results (OKRs). The SDLC delivers on that intent by building the features and services that create value.
PCVD can be viewed as an evolutionary step that builds on DevOps practices by making products, not projects, the core unit of value delivery, through autonomous, customer-journey-centric PODs that are OKR-driven and owned by POD members and the product owner. This structure makes it natural for agentic AI to extend the PCVD model by embedding autonomous, goal-driven agents directly into POD workflows.
The agentic AI inflection point
However, as organizations enter the agentic AI era, the traditional PCVD model must evolve to integrate autonomous AI agents alongside human team members and harness the complementary strengths of both forces. This integration represents a fundamental shift in how value is created and delivered.
How agentic AI is applied in PCVD
Agentic AI in PCVD is making its mark in phased proofs-of-concept and experiments, with full-scale implementations emerging.
Enterprises across industries are embedding orchestration, service, and copilot agents throughout SDLC phases, from backlog analysis to automated testing and compliance enforcement.
Three primary agent types serve different functions in PCVD environments, as illustrated in the sketch that follows this list:
- Orchestration agents coordinate multiagent workflows across distributed systems, enabling goal-oriented, predictable and reproducible behavior, and adaptive decision-making for complex enterprise processes.
- Service agents autonomously perform specific, well-defined tasks such as code migration, reverse engineering, or test case generation.
- Copilot agents augment human work by providing real-time assistance, code suggestions, and intelligent recommendations during development activities.
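To make these roles concrete, here is a minimal Python sketch of how the three agent types might relate in a POD workflow. All class, method, and task names are hypothetical illustrations, not the API of any specific agent framework.

```python
# Hypothetical sketch of the three agent roles in a POD workflow.
from dataclasses import dataclass, field


@dataclass
class ServiceAgent:
    """Autonomously performs one well-defined task (e.g., test generation)."""
    name: str

    def run(self, task: str) -> str:
        # A real service agent would call a model or tool here.
        return f"{self.name} completed: {task}"


@dataclass
class CopilotAgent:
    """Augments a human with suggestions; never acts on its own."""

    def suggest(self, context: str) -> str:
        return f"Suggestion based on: {context}"


@dataclass
class OrchestrationAgent:
    """Coordinates service agents toward a goal, keeping a reviewable trace."""
    agents: dict[str, ServiceAgent] = field(default_factory=dict)
    trace: list[str] = field(default_factory=list)

    def execute(self, plan: list[tuple[str, str]]) -> list[str]:
        results = []
        for agent_name, task in plan:
            result = self.agents[agent_name].run(task)
            self.trace.append(result)  # reproducible, auditable behavior
            results.append(result)
        return results


orchestrator = OrchestrationAgent(agents={
    "migrator": ServiceAgent("migrator"),
    "tester": ServiceAgent("tester"),
})
orchestrator.execute([("migrator", "convert legacy mapping"),
                      ("tester", "generate regression tests")])
CopilotAgent().suggest("refactor checkout module")  # human-facing assistance
```

The key structural point is that only the orchestration agent holds the plan and the trace; service agents stay narrow, and the copilot never executes anything.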
Because these agents operate at SDLC touchpoints such as discover, build, test, release, run, and learn, their capabilities can be applied to any SDLC model, be it waterfall, agile, or DevOps. However, their value is realized most fully in a PCVD model.
By using PODs throughout the SDLC, organizations are seeing significant results. For instance, Infosys' agentic AI foundry enables multiagent, product-led workflows for global clients, delivering up to 50% faster incident resolution and 40% effort savings in development and operations, based on deployments in 2024 and 2025. Similarly, GitHub Copilot, acting as an agentic augmentation tool in build and test stages, has demonstrated 55% faster task completion in controlled studies, with coding effort reductions of between 30% and 50%, depending on task complexity and programming language used.
With a view of the entire product life cycle, a leading consulting and professional services organization has used agentic AI to automate complex migration tasks such as converting Informatica mappings to Azure Data Factory pipelines, achieving effort savings of between 50% and 80% while reducing delivery risk.
Infosys has automated code generation for migration to a strategic data platform from multiple legacy sources for a major utility client, ensuring faster modernization with minimal manual intervention. In addition, Infosys deployed AI-driven reverse engineering to produce clear, structured descriptions of what a system should do, reducing dependency on subject matter experts and accelerating design phases for multiple clients. Throughout, a culture of innovation and experimentation was at the fore, enabling product owners and business sponsors to define their strategy and unlock funding from the top of the organization.
Major enterprise platforms are now embedding agentic AI capabilities. For example, Microsoft's Agent Framework, available since January 2025, and Amazon Bedrock's multiagent collaboration, launched in November 2024, provide orchestration frameworks that coordinate multiagent workflows across distributed systems. These deployments validate the PCVD model's promise: faster time-to-market, higher quality, and operational resilience, while preserving human oversight and accountability.
Why PCVD is ideal for agentic AI
PCVD is ideal for agentic AI because it organizes work around durable products and accountable PODs, creating a natural structure for continuous delivery and clear ownership. Within this model, agents are not treated as bolt-on tools but are deliberately governed, orchestrated, and embedded into POD workflows, allowing automation to operate coherently alongside human judgment and oversight. By integrating agents into well-defined product teams and delivery cycles, PCVD accelerates time-to-market, improves quality through tighter feedback loops, and increases operational resilience, all while preserving ethical control and accountability over increasingly autonomous systems.
Implementation challenges and considerations
While AI agents enhance visibility by continuously monitoring data and surfacing insights that humans might overlook, integrating them into PCVD creates new challenges that organizations must address proactively:
Heightened security risks
Introducing autonomous agents into product-based teams creates new entry points for cyberattacks. Agents often interact with multiple systems, application programming interfaces (APIs), and datasets, which increases the attack surface. If not fully governed, an agent could execute harmful actions based on manipulated inputs or unauthorized access. Recent analyses highlight that AI agents create identity-centric security risks, including credential theft, privilege escalation, and unauthorized data access.
Ensuring robust authentication, agent permissions, and real-time monitoring becomes critical. Organizations must implement zero-trust architectures for agent interactions, with continuous authentication and authorization checks at every system boundary.
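As one illustration of that principle, the sketch below (with hypothetical token, scope, and policy structures) gates every agent-to-system call behind fresh authentication and authorization checks, so no trust carries over between steps.

```python
# Illustrative zero-trust gate for agent-to-system calls: every call is
# re-authenticated and re-authorized at the boundary; nothing is assumed.
import time
from dataclasses import dataclass


@dataclass
class AgentIdentity:
    agent_id: str
    token_expiry: float     # hypothetical short-lived credential
    scopes: frozenset       # tasks this agent is permitted to perform


class PolicyViolation(Exception):
    pass


def zero_trust_call(identity: AgentIdentity, system: str, action: str,
                    allowlist: dict) -> None:
    # 1. Authenticate on every call: expired credentials are never reused.
    if time.time() >= identity.token_expiry:
        raise PolicyViolation(f"{identity.agent_id}: credential expired")
    # 2. Authorize at the boundary: the action must be in the agent's scopes
    #    AND in the per-system allowlist.
    if action not in identity.scopes or action not in allowlist.get(system, frozenset()):
        raise PolicyViolation(f"{identity.agent_id}: '{action}' denied on {system}")
    # 3. Emit a record for real-time monitoring before the call proceeds.
    print(f"AUDIT {time.time():.0f} {identity.agent_id} -> {system}.{action}")


identity = AgentIdentity("svc-agent-1", time.time() + 300,
                         frozenset({"generate_tests"}))
zero_trust_call(identity, "test_repo", "generate_tests",
                allowlist={"test_repo": frozenset({"generate_tests"})})
```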
Increased privacy concerns
Agentic AI systems require access to sensitive operational and customer data to make effective decisions. This raises concerns around data minimization, consent, data residency, and compliance with regulations such as Europe’s General Data Protection Regulation (GDPR). Under GDPR Article 5, which covers data minimization, and Article 25, which covers data protection by design, organizations must ensure that agents process only necessary data and implement privacy protections from the outset. The California Privacy Rights Act and Brazil’s General Data Protection Law (LGPD) also give individuals rights when AI systems process their personal data.
Without strong data-handling protocols, organizations risk inadvertent exposure, misuse of personal information, and legal consequences. Data protection impact assessments should be conducted for each agent deployment, particularly when processing personal or sensitive data.
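A minimal sketch of data minimization in practice, assuming a hypothetical incident-resolution agent: fields outside an explicit allowlist are stripped before a record ever reaches the agent.

```python
# Hypothetical data-minimization filter (GDPR Article 5): the agent sees
# only the fields it needs; personal data never enters its context.
ALLOWED_FIELDS = {"ticket_id", "error_log", "component"}  # per-agent allowlist


def minimize(record: dict, allowed: set = ALLOWED_FIELDS) -> dict:
    """Return only the fields the agent is permitted to process."""
    return {k: v for k, v in record.items() if k in allowed}


incident = {
    "ticket_id": "INC-1042",
    "error_log": "timeout in payment service",
    "component": "checkout",
    "customer_email": "jane@example.com",  # personal data: filtered out
}
assert "customer_email" not in minimize(incident)
```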
Lack of predictability in agent behavior
Autonomous agents operate based on goal-oriented logic and dynamic learning, which can lead to unexpected or opaque decision paths. Their actions might not always be easily explainable or traceable, making it harder for teams to understand why an agent made a specific recommendation or took a particular step. This unpredictability can erode trust and complicate accountability, especially in high-stakes or regulated environments.
Organizations must implement comprehensive logging, audit trails, and explainability mechanisms to ensure agent decisions can be reviewed, understood, and justified when necessary. They must also evaluate agentic AI at both system and component levels, using both black box and white box evaluation techniques to minimize ethical drift and increase reliability.
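One way to implement such an audit trail, sketched below with hypothetical field names, is to append every agent decision to an immutable log that captures the inputs the agent saw, the action it took, its stated rationale, and a confidence score that escalation logic can use.

```python
# Hypothetical structured audit trail: one append-only record per agent
# decision, so actions can later be traced, reviewed, and justified.
import json
import time
import uuid


def log_agent_decision(agent_id: str, inputs: dict, action: str,
                       rationale: str, confidence: float) -> dict:
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,
        "inputs": inputs,          # what the agent saw
        "action": action,          # what it did
        "rationale": rationale,    # its stated reasoning, for explainability
        "confidence": confidence,  # usable by escalation thresholds
    }
    with open("agent_audit.jsonl", "a") as f:  # append-only JSONL log
        f.write(json.dumps(record) + "\n")
    return record


log_agent_decision("svc-agent-1", {"ticket": "INC-1042"},
                   "restart payment service",
                   "error log matches known timeout pattern", 0.82)
```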
How to build a strong human-agent ecosystem
The POD should be considered an ecosystem where intelligent agents and humans actively collaborate, learn, and contribute within product-based teams. Agents handle repetitive tasks and provide predictive insights at scale without fatigue, while humans offer context, make judgment calls, and approve critical actions. In high-stakes areas, ungoverned autonomy can lead to serious consequences.
As companies transition to a human-agent operating model, they must build an ecosystem where both can collaborate effectively and responsibly. This requires thoughtful design across data, governance, ethics, and team enablement, far beyond simply deploying autonomous agents into workflows.
Core principles for human-agent collaboration
The following principles outline what it takes to create a resilient, high-performing human-agent ecosystem:
Ensure agents learn based on new information
Organizations should enable a data-driven continuous feedback loop so that AI agents learn and adapt to improve models and processes. This helps organizations quickly detect performance drifts, bias, or errors so corrective action can be taken early. Implement automated monitoring systems that track agent performance metrics, decision quality, and alignment with intended outcomes.
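As a sketch of what such monitoring might look like, the following drift check compares an agent's recent output-acceptance rate against a historical baseline; the metric, window size, and tolerance are assumptions, not a standard.

```python
# Illustrative drift monitor: flag an agent whose recent acceptance rate
# (how often humans accept its output) strays from its baseline.
from collections import deque


class DriftMonitor:
    def __init__(self, baseline_rate: float, window: int = 100,
                 tolerance: float = 0.10):
        self.baseline = baseline_rate
        self.recent = deque(maxlen=window)  # rolling window of accept/reject
        self.tolerance = tolerance

    def record(self, accepted: bool) -> None:
        self.recent.append(accepted)

    def drifted(self) -> bool:
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data to judge yet
        rate = sum(self.recent) / len(self.recent)
        return abs(rate - self.baseline) > self.tolerance


monitor = DriftMonitor(baseline_rate=0.85)  # e.g., 85% historical acceptance
```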
Establish clear agent-human boundaries
Organizations must establish transparent boundaries between agent and human responsibilities, ensuring agents augment rather than replace human expertise (Figure 1). They should provide role-based access control to limit agent permissions to specific tasks, preventing unauthorized actions.
Figure 1. Agent and human responsibilities in PCVD environments
Source: Infosys
Clear separation of duties helps maintain accountability, making it easier to trace decisions back to either the human or the agent. Regular reviews of agent permissions ensure safeguards stay aligned with evolving business needs and risk levels.
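The sketch below illustrates both ideas with hypothetical role and task names: explicit per-role task allowlists, plus a ledger that attributes every attempted task to its actor, preserving separation of duties.

```python
# Hypothetical role-based agent permissions with decision attribution.
ROLE_PERMISSIONS = {
    "copilot_agent": {"suggest_code", "generate_tests"},
    "service_agent": {"migrate_code", "run_tests"},
    "human_reviewer": {"approve_release", "override_agent"},
}


def execute_task(actor: str, role: str, task: str, ledger: list) -> bool:
    """Run a task only if the role permits it; record who attempted what."""
    permitted = task in ROLE_PERMISSIONS.get(role, set())
    ledger.append({"actor": actor, "role": role, "task": task,
                   "permitted": permitted})  # traceable to human or agent
    return permitted


ledger = []
execute_task("svc-agent-7", "service_agent", "approve_release", ledger)  # denied
execute_task("j.doe", "human_reviewer", "approve_release", ledger)       # allowed
```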
Embed ethical guidelines
Organizations must embed ethical guidelines for agent behavior, including fairness, transparency, and accountability. Use interpretable models and transparent logging so humans can understand agent reasoning and outputs for explainability. Establish ethical review boards that assess agent deployments for potential bias, fairness issues, and societal impact before production release.
Govern through established protocols
Strong governance models are essential for oversight of agent deployment, performance monitoring, and alignment to objectives. Organizations should:
- Use standardized platforms and ensure agents interface through modular, secure, and well-documented APIs, such as the Model Context Protocol (MCP).
- Implement comprehensive governance structures, including data stewardship, auditability, and traceability requirements.
- Maintain human-in-the-loop oversight for critical decisions, especially those impacting customers or regulatory compliance.
- Define escalation protocols for when agents encounter situations outside their decision authority, as sketched after this list.
- Conduct regular governance audits to ensure compliance with policies and regulations.
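The following sketch shows one possible escalation protocol, keeping humans in the loop for regulated or customer-impacting decisions; the thresholds and categories are assumptions for the example.

```python
# Illustrative escalation routing: actions outside an agent's decision
# authority go to a human; highly uncertain actions are blocked outright.
from enum import Enum


class Route(Enum):
    AUTO_EXECUTE = "auto_execute"
    HUMAN_REVIEW = "human_review"
    BLOCK = "block"


def escalation_route(confidence: float, customer_impacting: bool,
                     regulated: bool) -> Route:
    if regulated:
        return Route.HUMAN_REVIEW  # compliance-critical: always a human
    if customer_impacting and confidence < 0.9:
        return Route.HUMAN_REVIEW  # outside the agent's decision authority
    if confidence < 0.5:
        return Route.BLOCK         # too uncertain to act on at all
    return Route.AUTO_EXECUTE


assert escalation_route(0.95, customer_impacting=True,
                        regulated=False) is Route.AUTO_EXECUTE
assert escalation_route(0.7, customer_impacting=False,
                        regulated=True) is Route.HUMAN_REVIEW
```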
Institute a product mindset
A product mindset ensures teams are in tune with market and customer demands. Design thinking workshops can give product teams greater insight into customer personas and their needs. Enablement programs can be used to make work between product owners, agents, and engineering personnel more productive and fulfilling.
Invest in upskilling
Organizations must invest in training and change management to help teams adapt to working alongside AI agents, addressing resistance and fostering trust. Key upskilling areas include:
- Understanding agent capabilities and limitations
- Prompt engineering and agent interaction techniques
- Reviewing and validating agent outputs effectively
- Identifying when to escalate to human judgment
- Monitoring agent performance and detecting drift
- Ethical AI considerations and responsible deployment
Introducing agents incrementally, with clear communication and support, helps avoid overwhelming teams. Change management strategies should address common resistance patterns, secure leadership buy-in, and include comprehensive communication plans.
Implementation roadmap: Where to start
Organizations seeking to build human-agent PCVD frameworks should follow a phased approach that balances ambition with pragmatism. The following approach ensures that the right use cases and agent-human teams deliver reasonable ROI in an adequate timeframe.
Phase 1: Assessment and pilot selection
Assess organizational readiness: Evaluate current PCVD maturity and POD effectiveness before assessing technical infrastructure, including APIs, data platforms, and security. At this stage, it is also important to gauge cultural readiness for human-agent collaboration and identify skill gaps and training needs.
Select pilot POD and use case: It is important to choose a well-functioning POD with strong leadership that will work on a use case with clear success metrics and manageable scope. To ensure the use case can be championed at the leadership level and deliver meaningful ROI, a good rule of thumb is to pick a domain where an EBITDA impact of at least 30% is possible. Also, ensure executive sponsorship and adequate resources while defining overall success criteria and measurement frameworks.
Phase 2: Agent integration and governance
Deploy initial agents: Start with copilot agents in build and test phases to lower risk, and establish secure APIs and access controls. At this stage, implement comprehensive logging and monitoring, while training team members on agent interaction and oversight.
Establish governance framework: As mentioned, governance is key. Organizations that want to scale PCVD with agents in the next phase should define specific roles such as agent steward, human oversight board, and POD lead. Other must-dos in this phase include documenting decision rights and escalation protocols; creating audit trails and explainability requirements; and implementing security controls and privacy safeguards.
Phase 3: Scale and optimization
Expand to additional PODs and agent types: In this phase, introduce orchestration agents for workflow coordination and deploy service agents for specialized tasks. Once successful patterns emerge, scale to other PODs and refine governance based on lessons learned.
Continuous improvement: Organizations will want to use learnings to improve agent-human dynamics. To do this effectively, establish feedback loops for agent performance, and monitor business impact metrics like velocity, quality, and cost. As teams scale, iterate on agent capabilities and human-agent workflows, and share best practices across the organization.
Current state and future outlook
A full implementation that combines PCVD with orchestrated multiagent systems and structured human checkpoints remains an evolving model and is rare today.
Most enterprises are in progressive adoption, starting with agent augmentation in build/test or release phases and moving toward orchestration across PODs (Figure 2).
Figure 2. Human-agent PCVD implementation maturity model
Source: Infosys
Infosys deployments combine POD-based delivery with orchestration and service agents in partial implementations across client engagements. Large consulting organizations and regulated enterprises are experimenting with human-agent POD models, but end-to-end adoption remains a work in progress.
However, with pilot programs accelerating and hyperscaler platforms maturing, full-scale adoption of an integrated human-agent PCVD framework is no longer a distant prospect but an imminent reality, with mainstream adoption anticipated by 2027, or even 2026 for progressive organizations.
Human-agent PCVD collaboration as the future operating model
Our research into responsible enterprise AI found that PCVD is the go-to operating model for organizations looking to scale agentic AI systems responsibly. However, the introduction of agents into POD teams brings both opportunity and risk.
Building a human-agent PCVD framework requires more than deploying AI agents into existing workflows. It demands a fundamental reimagining of how teams collaborate, how value is delivered, and how governance ensures innovation and accountability.
But the effort is worthwhile. Organizations that successfully integrate autonomous agents with human expertise within PCVD structures could gain competitive advantages, including faster time-to-market, higher quality outputs, and enhanced operational resilience. This success stems in part from the culture PCVD models create: one that rewards innovation, treats failure as learning, and delivers agent-plus-human solutions in rapid feedback loops.
The path forward requires attention to security, privacy, and ethics, coupled with strong governance frameworks and comprehensive upskilling. While full-scale adoption is still emerging, the organizations beginning this journey today, starting with focused pilots, learning iteratively, and scaling thoughtfully, will be best positioned to thrive in an increasingly agentic future.