Enterprise AI: The board’s role in strategic governance

Insights

  • Boards are engaging with AI more frequently, but many lack comprehensive, enterprisewide strategies.
  • AI’s rapid integration into core business functions emphasizes the urgent need for clear oversight and transparent decision-making at the board level.
  • Half of board members rank misinformation and privacy breaches among the most serious AI-related threats to their companies.
  • Traditional governance models are struggling to keep pace as AI takes on a greater role in operational and strategic decisions.
  • There’s a notable gap in how boards hold management accountable to deliver value from AI initiatives, highlighting an area for improvement.
  • Organizations that adapt their governance frameworks for AI are positioning themselves as future leaders, while others risk falling behind.

AI oversight: Elevating boardroom governance

Artificial intelligence (AI) is expanding across the enterprise — embedded in everything from customer support and employee productivity to product development. The few areas untouched by AI likely won't remain that way for long. AI is emerging as a foundational pillar for how businesses operate and create value.

As adoption accelerates, AI governance must catch up and keep pace with this fast-moving technology. Only then can the board provide clear, explicit leadership. What were once occasional boardroom briefings are now frequent AI strategy sessions. Even when AI isn’t the first topic of discussion, it’s still firmly fixed on the agenda.

But more frequent board discussions are not enough. Directors face a fundamental shift in how they oversee strategy and operations. AI — particularly agentic AI capable of autonomous action — is taking on a growing role in day-to-day decisions, rendering many traditional oversight models obsolete.

The need to modernize governance frameworks and increase oversight is clear. Yet our research has found board responses often lag what experts would consider best practices.

To better understand these dynamics, the National Association of Corporate Directors (NACD) and Infosys Knowledge Institute surveyed 300 board members at North American companies with revenues of at least $1 billion. The directors were asked this summer about their board’s role in corporate AI strategy, oversight, and risk assessment.

The research reveals significant gaps in enterprisewide AI strategies, particularly in how boards hold executives accountable for delivering value from AI initiatives. This inconsistency reveals a governance landscape in flux — one that is struggling to keep pace with the rapid evolution of AI. While some organizations are proactively redefining their structures to lead in the transition to enterprise AI, others risk falling behind, ceding ground to better-prepared competitors.

AI embedded on the board agenda

Company relationships with AI are fundamentally changing. Experimentation is giving way to widespread use. Research has shown enterprise AI is on the verge of scaling, with half of AI use cases achieving some or all of their business objectives.

This shift underscores AI’s increasingly central role in how organizations approach growth, risk management, and competitive differentiation. As AI becomes more deeply embedded in operations and begins to reshape corporate strategy, boardrooms are dedicating an increasing share of their limited agenda to understanding its far-reaching implications — both the opportunities it unlocks and the risks it introduces.

The vast majority of boards (86%) now receive AI updates on a regular schedule or even at every meeting (Figure 1). As a result, most directors are actively learning about AI and tracking trends, whether through briefings from internal and external technologists, resources from groups like NACD, or even respected news coverage. Most directors say they already have a solid grasp of emerging technologies, including generative AI, agentic AI, and quantum computing — with half making it a point to remain current on these subjects. But half is not enough: Every board member must make staying current with emerging technologies a priority.

Figure 1. AI is a frequent topic for boards

Source: Infosys Knowledge Institute

The intense focus on AI by corporate leaders is expected to continue, despite recent hints of retrenchment. Gartner has concluded that many AI use cases have entered the dreaded “trough of disillusionment” and that half of the companies planning to replace customer service staff with AI will abandon those efforts.

Despite recent pushback, companies continue to invest heavily in AI and the rapidly evolving solutions flooding the market. IDC projects AI spending will increase 32% annually through 2029 — and account for more than one-quarter of global IT spending.

As investments increase and use cases demand sharper prioritization, corporate directors will require continuous learning to provide effective oversight. This emphasis on AI has grown to the point where it could overshadow risks that have traditionally occupied corporate boards. Directors now believe they are even more informed about emerging technologies like AI than they are about geopolitical risks and changing regulations (Figure 2). This confidence extends to their colleagues on the board, who they say are just as knowledgeable.

Figure 2. Emerging technology draws more attention than traditional risks

Source: Infosys Knowledge Institute

Gap in AI decision-making

The complexity and unpredictability that accompany AI are also reshaping how boards assess the most critical challenges to making high-quality, long-term strategic decisions. Nearly half of directors identify weak data or analytics as one of the top two barriers to effective decision-making (Figure 3). Compounding this is the overwhelming volume of information that directors receive, increasing the risk of decision paralysis.

Again, these factors are seen as more critical than traditional challenges such as risk aversion, conflicting priorities, or interpersonal dynamics with other directors.

Figure 3. Half of directors lack data and insights needed for critical decision-making

Source: Infosys Knowledge Institute

AI complicates the role of corporate board members by introducing new layers of complexity to already high-stakes decisions. Financial cycles, competitive threats, and regulatory shifts are known quantities with long histories. As a result, directors can generally rely on established frameworks, experience, and expertise to evaluate and respond to relevant risks.

By contrast, AI is evolving at a frenetic pace with unpredictable capabilities, risks, and implications. This uncertainty raises the stakes for governance, requiring directors to balance innovation and caution while navigating a technology that redefines and even reinvents itself in real time.

The rise of agentic AI further accelerates this complexity. Agentic systems can take multiple, autonomous actions — automating processes intelligently in ways that weren’t previously possible.

Regulatory compliance becomes exponentially more complex when AI systems can take thousands of actions daily without human review. Traditional compliance approaches that rely on periodic audits, approval workflows, and after-the-fact review are insufficient for systems that operate in real time across multiple jurisdictions and regulatory domains.

– Syed Quiser Ahmed, Head of Infosys Responsible AI Office

Decisions around AI deployment now demand fluency in fast-evolving technical capabilities and how they affect governance. This expands the board’s mandate into new territory where the long-term consequences of today’s choices are harder to predict — and mistakes can scale quickly.

Facing the realities of AI risk

Directors now confront a widening collection of complex risks and opportunities unique to AI’s scale and effect. While automation, advanced analytics, and predictive modeling promise significant benefits, boardroom discussions increasingly focus on the dark side of innovation: the spread of misinformation, privacy violations, and AI-driven impersonation through deepfakes (Figure 4).

These threats are not theoretical. Half of board members rank misinformation and privacy breaches among the most serious AI-related threats to their companies. Their fears are backed up by the World Economic Forum’s 2025 Global Risks Report, which ranked misinformation and disinformation fourth, just behind geoeconomic confrontation. The report cited threats to the reputation of a company’s products and services, not just the more commonly understood threats to government legitimacy and individuals. Although AI didn’t create misinformation, the technology makes it easier to create and distribute — essentially supercharging the falsehood engine.

Figure 4. Boards fear the impact of AI-powered misinformation

Source: Infosys Knowledge Institute

Security incidents, inexplicable AI outputs, hallucinated responses, and biased results further underscore that AI-related risks extend well beyond technical failures or regulatory penalties. Directors often cite reputational damage as their most serious threat — a sign of how important and fragile brand equity has become in the age of AI (Figure 5). Scandals damage stock prices, talent recruitment, and revenue in the short term.

The link between customer backlash, brand deterioration, and lost revenue is direct: Once confidence falters, brand equity and competitive strength can quickly unravel. A cascading crisis first hits share prices and then revenue, reducing enterprise value over the long term if not addressed quickly.

Figure 5. Compounding AI threats worry boards

Source: Infosys Knowledge Institute

Despite these clear, potentially existential threats, boards generally do not expect the worst from AI. Few directors (14%) are highly concerned about the reputational risks associated with AI failures, such as inaccurate outputs, bias, or offensive content (Figure 6). Most are moderately concerned but say they are monitoring the issue closely.

Even amid elevated concern about AI risk, about one-third of directors (31%) worry little about reputational harm, indicating significant confidence in existing safeguards and governance structures. However, this confidence could be misplaced: Current guardrails may not suffice as AI scales and adoption grows, presenting new, unpredictable challenges.

Figure 6. Few boards are very concerned about AI harming reputations

Source: Infosys Knowledge Institute

This broad awareness has pushed boards to strengthen governance frameworks designed to anticipate and mitigate risks before they escalate. Many now conduct regular risk assessments and AI scenario-planning exercises, engaging both internal and external experts to broaden perspectives. There is an increasing focus on transparent practices and clear accountability in how directors approach oversight, reflecting AI’s social and ethical dimensions alongside its technical challenges.

Comprehensive AI strategy

Broad awareness is not sufficient, however. It’s time for directors to get more hands-on. In more than half the cases, the board’s role remains largely supervisory (Figure 7). Directors review AI strategies presented by management, provide high-level feedback, and endorse proposals that are aligned with corporate goals. This approach is consistent with traditional board oversight: ensuring accountability without becoming immersed in execution.

A smaller cohort (13%) remains disconnected from AI oversight, entrusting management with setting the correct direction and managing risks. While this delegation frees boards to focus on other priorities, it risks leaving directors underprepared for the governance challenges that AI inevitably presents — from reputational threats to ethical dilemmas and regulatory scrutiny.

Figure 7. Half of boards take a passive approach to AI oversight

Source: Infosys Knowledge Institute

Some directors understand the need to be more active: One-third of boards take a more hands-on approach. These directors are more likely to go beyond surface-level reviews to conduct thorough assessments of their company’s AI strategy. They tend to examine use cases, probe potential risks, and challenge assumptions before initiatives move forward. These boards influence not only the trajectory of their company’s AI adoption but how effectively it creates long-term value.

Taken together, these patterns reveal a governance approach that is still in transition. Boards are experimenting with different engagement levels, stepping up their traditional oversight responsibilities for deeper scrutiny of a technology that is reshaping business strategy.

Enterprisewide AI plans

Boards need to consider the entire business over the long term, but at present their approaches are uneven. At more than half of companies, AI strategies are limited to individual departments rather than taking an enterprise perspective (Figure 8). These targeted initiatives allow companies to experiment and capture some value early, but they risk creating fragmented systems that are harder to scale or integrate across business units.

Figure 8. AI deployment strategies in boardrooms

Source: Infosys Knowledge Institute

Some boards are acting in a more holistic way: 29% have approved enterprisewide AI plans with clearly defined goals, investment priorities, and key performance indicators (KPIs). These organizations treat AI as a core business capability rather than a series of isolated pilots. By embedding AI strategy into the broader corporate agenda, these boards build accountability into execution and ensure investment decisions align with long-term value creation.

The remaining 15% leave their companies vulnerable to a wide range of AI risks, as well as to more decisive and engaged competitors. This group is still deliberating, debating a structure for long-term AI strategy but not yet committing to a formal plan. This cautious approach reflects both the unpredictability of AI’s evolution and the difficulty of setting strategy when faced with regulatory uncertainty and possible threats to reputation.

Furthermore, there is a gap between simply planning for AI-related incidents and ensuring that businesses are truly prepared to respond effectively. Although three out of four directors say AI-related incidents are now included in their organization’s crisis protocols, most of those protocols have not been rigorously tested (Figure 9). Nearly one-third of directors report that they have protocols in place and have evaluated them. However, a larger group (43%) has reviewed their company’s crisis communications protocols but not tested them. This gap leaves companies vulnerable to reputational damage if an AI-driven failure — such as a biased algorithm or misinformation incident — unfolds in real time.

Figure 9. Preparedness for AI-related crises

Source: Infosys Knowledge Institute

The divide among boards underscores a potential inflection point in AI governance. Directors who elevate AI strategy to an enterprisewide priority — and measure its impact with rigor — will be better positioned to steer their companies through the risks and opportunities of an AI-driven economy. Enterprises that confine AI to departmental silos or delay long-term planning will find themselves reacting to disruption rather than shaping it.

AI decision explainability and transparency

Not enough boards are taking direct responsibility for explainability and transparency in AI decision-making. Regulators, investors, and customers demand greater accountability in how algorithms shape outcomes and how they avoid biased, inaccurate, or discriminatory results.

Yet only about half of corporate boards have taken direct ownership of this responsibility (Figure 10). The remaining directors say their boards are kept informed of explainability practices but largely defer oversight to management. While this trust-based model reflects confidence in executive teams, it raises questions about whether boards are positioned to provide adequate checks on AI systems.

Figure 10. Half of boards oversee AI explainability and transparency

Source: Infosys Knowledge Institute

The other half of survey respondents report that their boards have established formal processes to actively oversee explainable AI decisions. These processes include structured reporting, independent audits, or direct involvement in reviewing algorithms that engage with customers. By embedding explainability into governance routines, these boards signal that they view transparency not only as a compliance requirement but as a strategic imperative tied to customer trust and long-term value.
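To make such a process concrete, the minimal Python sketch below logs a structured explainability record for each automated decision. This is an illustration under stated assumptions: the linear scoring model, feature weights, field names, and threshold are all hypothetical, not a framework drawn from the survey or from any vendor.

```python
# Minimal sketch of a structured explainability record, assuming a simple
# linear scoring model. All weights, fields, and thresholds are hypothetical.
from datetime import datetime, timezone

WEIGHTS = {"income": 0.40, "tenure_years": 0.35, "late_payments": -0.25}

def explain_decision(applicant: dict, threshold: float = 0.5) -> dict:
    """Score an applicant and record per-feature contributions so each
    automated decision can be reviewed by auditors or a risk committee."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": applicant,
        "contributions": contributions,  # which factors drove the outcome
        "score": round(score, 3),
        "decision": "approve" if score >= threshold else "refer_to_human",
    }

# Usage: every customer-facing decision produces an auditable record.
record = explain_decision({"income": 0.8, "tenure_years": 0.5, "late_payments": 1.0})
print(record["decision"], record["contributions"])
```

Structured records like these can feed the board-level reporting and independent audits that survey respondents describe.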

An ethical approach to AI that values transparency and explainability helps companies mitigate risks and improve business performance. Pharmaceutical giant Novartis developed an AI framework to embed ethics in senior decision-making. This not only burnished the company’s reputation but created business value: The Novartis platform for clinical trial feasibility and site selection demonstrated, in a US pilot program, the ability to recruit nearly three times more Black patients than competitors. This increased diversity is expected to allow Novartis to accelerate drug trials, provide more robust data, and reduce costs.

More boards need to move toward direct oversight rather than relying on management to safeguard explainability. The more active group recognizes that transparency in AI is too important to leave unchecked.

Management accountability

As part of this more direct oversight, boards need to hold the business accountable. Boards are setting measurable goals for AI, but establishing KPIs is only the first step. Nine out of 10 companies report having KPIs for their AI initiatives, addressing efficiency gains, revenue contributions, and improvements in customer engagement — areas where AI has proven it can deliver tangible value. The critical question is whether boards ensure that leadership delivers against them.

Figure 11. Management accountability for AI outcomes

Source: Infosys Knowledge Institute

On that front, the picture is less encouraging. Only one in four companies directly links AI performance metrics to leadership evaluations. In these organizations, executives are held accountable for translating AI investments into measurable business outcomes that align with shareholder value. This link strengthens incentives, ensures consistent focus on execution, and signals to investors that AI is treated as a strategic priority rather than an experimental side project.

For the majority, however, AI goals are disconnected from leadership accountability (Figure 11). Without this alignment, companies risk treating AI as a technology initiative rather than a driver of enterprise transformation and value. Boards may receive reports on AI progress, but the urgency to deliver sustainable value diminishes unless leadership performance is linked to results.

This gap highlights a critical opportunity for boards. By embedding AI outcomes into leadership performance reviews and incentive structures, directors can ensure executives not only experiment with AI but scale it in ways that create shareholder value. Without this connection, AI strategies will stall — resulting in strong promises but weak delivery.

Recommendations

Tradition has its place, but bold leaders know when to move beyond what worked in the past. As AI becomes more deeply embedded across the enterprise, forward-looking boards are rethinking conventional risk oversight — and for good reason. AI introduces risks and opportunities that surpass those of earlier technological shifts, demanding stronger governance.

The director of the future will dive much further into details — and take a more hands-on approach — than what has been expected from corporate governance practices. Boards should still not become entangled in operational execution; maintaining that balance is essential for effective governance. However, directors will struggle to provide adequate oversight without pushing boundaries in ways that make the board and leadership a little uncomfortable. Slight discomfort now might avert something much worse later.

Establish strategic governance: Boards must extend their oversight to ensure AI initiatives align with business objectives, while evolving responsibly with the technology. Directors should insist that regulatory and ethical guardrails are built directly into AI design, development, and operations — retrofitting them later is too late. With embedded compliance, organizations ensure that AI systems incorporate the following (a code sketch after this list illustrates the pattern):

  • Real-time monitoring — Systems that observe AI continuously to catch potential violations before they happen.
  • Automated compliance checks — Triggers that stop or flag actions if certain thresholds or regulatory rules are violated, routing them for human intervention.
  • Comprehensive audit trails — Processes that maintain records of each decision, action, and rationale behind them, allowing for transparency, accountability, and traceability.
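The Python sketch below suggests how these three elements might fit together in an agentic system. It is a sketch under stated assumptions: the ComplianceGate and AuditTrail classes, the action names, and the refund threshold are hypothetical, not a reference to any specific product, framework, or regulation.

```python
# Illustrative sketch of embedded compliance for an agentic AI system.
# ComplianceGate, AuditTrail, and the refund threshold are hypothetical.
import json
import time


class AuditTrail:
    """Comprehensive audit trail: keeps a record of each action, its
    parameters, the agent's rationale, and the compliance outcome."""

    def __init__(self):
        self.records = []

    def log(self, action, params, rationale, outcome):
        self.records.append({
            "timestamp": time.time(),
            "action": action,
            "params": params,
            "rationale": rationale,
            "outcome": outcome,
        })

    def export(self):
        return json.dumps(self.records, indent=2)


class ComplianceGate:
    """Automated compliance checks: every proposed action passes through
    the gate in real time; violations are held for human intervention."""

    def __init__(self, audit, refund_limit=500.0):
        self.audit = audit
        self.refund_limit = refund_limit  # hypothetical regulatory threshold

    def check(self, action, params, rationale):
        if action == "issue_refund" and params.get("amount", 0.0) > self.refund_limit:
            self.audit.log(action, params, rationale, "escalated_to_human")
            return False  # blocked pending human review
        self.audit.log(action, params, rationale, "approved")
        return True


# Usage: an AI agent proposes actions; the gate monitors each one in real time.
audit = AuditTrail()
gate = ComplianceGate(audit)
proposals = [
    ("issue_refund", {"amount": 120.0}, "duplicate charge detected"),
    ("issue_refund", {"amount": 9000.0}, "customer retention gesture"),
]
for action, params, rationale in proposals:
    allowed = gate.check(action, params, rationale)
    print(action, params["amount"], "->", "execute" if allowed else "hold for review")

print(audit.export())  # full decision history for regulators and auditors
```

The point is not the specific rules but the pattern: Checks run before an action executes, and every decision leaves a reviewable trace.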

These steps will be effective only when companies also embed ethics into their decision-making about when, where, and how to use AI — ensuring that fairness, accountability, and individual and societal impact remain central.

Champion comprehensive AI strategy: Directors must advance an enterprisewide AI strategy — not a departmental one — to reduce risk and align initiatives with business goals and the need for responsible AI. Department-led projects might deliver quick wins but often create duplication, integration problems, and governance gaps. Infosys research shows that organizations with unified AI oversight and strong executive sponsorship are more likely to realize value. By balancing growth opportunities with risks, stronger board direction and oversight will enhance strategic alignment, consistent standards, and responsible AI use at scale.

Demand measurable value: Boards should hold management accountable for delivering measurable results from AI initiatives that align with shareholder value. This requires defining clear performance metrics, linking them to leadership accountability, and regularly reviewing AI-related KPIs. Boards can then ensure AI initiatives are effectively managed and contribute to company success.

Prioritize responsible AI practices: Unlike traditional business risks, AI introduces new threats that demand board-level attention. The spread of AI-driven misinformation, now the top concern for directors, can damage trust faster and more broadly than past reputational challenges. Other risks — such as privacy violations and ethical breaches — compound these threats. Boards must adapt by conducting targeted AI risk assessments and embedding mitigation strategies into enterprise risk management. The unique risks posed by AI demand a big-picture outlook that boards are well suited to provide, helping protect the company’s reputation and reduce financial exposure.
