
Insights
- Most knowledge workers use AI, but many lack training – increasing the risk of misuse and legal issues.
- This underscores the necessity for organizations to train employees in responsible AI.
- Yet many organizations struggle to make the concept stick.
- A proactive, change-management approach – instead of a mere compliance checkbox – helps foster genuine understanding and responsible use among employees.
Artificial intelligence (AI) has evolved from a niche academic research area into a significant driver of the global economy. Organizations recognize the importance of using AI responsibly and ethically. However, they find it challenging to help employees grasp what responsible AI truly means, and even harder to reinforce it until it becomes second nature. To tackle this, they need to adopt human-centered change management practices and employee engagement, supported by clear governance frameworks and guidelines for AI use.
What is responsible AI and why it matters
Responsible AI refers to designing and implementing AI systems that are ethical, transparent, and legally compliant, and that help employees work better. It involves being mindful of privacy, bias, and security, and mitigating the risks involved in using AI. It includes helping employees understand how AI makes decisions, guiding them on its ethical use, and clarifying the level of human oversight needed when AI is in action.
Some 75% of global knowledge workers use AI in the workplace. However, another study shows that 66% of people rely on AI-generated outputs without verifying their accuracy, and 56% have made mistakes in their work as a result of using AI. The majority have received no formal AI training, and half admit to having only limited knowledge of how it works. In fact, 48% say they have entered financial, sales, and customer information related to their organizations into public AI tools. Such actions can have grave consequences for organizations, damaging their credibility and potentially exposing them to legal action. Because public platforms are vulnerable to cyberattacks, phishing, and data breaches, sensitive information could become accessible to unauthorized individuals. Some of the data entered could be confidential or subject to regulations such as the EU’s General Data Protection Regulation (GDPR), putting compliance at risk. This reinforces the need for organizations to embed responsible AI into their strategy, core business functions, workflows, and culture to facilitate proper use of AI.
Public AI platforms are vulnerable to cyberattacks, phishing, and data breaches, putting sensitive organizational information – entered by employees – at risk of unauthorized access, with potentially serious consequences.

Barriers to implementing responsible AI
Organizations are eager to stay ahead in the AI implementation race, but they struggle to enforce responsible AI. While 87% of business executives agree that adopting responsible AI principles is essential, 85% admit they are not adequately prepared to put those principles into practice. According to one study, cybersecurity, privacy, and accuracy are leaders’ biggest concerns. Organizations also find it challenging to help employees understand what responsible AI is and to make it stick, for several reasons.
Constantly changing parameters: AI systems evolve in cycles measured in weeks rather than years. A regulatory framework drafted today could become obsolete before reaching legislative debate. Traditional governance models built for stable technologies cannot match this pace. Meanwhile, AI capabilities continue their rapid growth, with each new breakthrough bringing both benefits and ethical risks.
Implementation friction: AI tools are being integrated quickly into workplaces, often faster than the organization can develop and implement responsible use policies. Although more than 55% of enterprises identify ethical AI as a priority, only 12% have implemented mature governance frameworks for AI. Inconsistent guidelines lead to confusion around acceptable AI use.
Time-lagged bias: When a bias in an AI algorithm becomes apparent only some time after deployment, it is difficult to connect that failure to specific decisions made during design or development. That in turn weakens the learning loop – the process by which organizations build on their understanding and implementation of responsible AI – which could otherwise strengthen responsible practices. This reinforces the need to incorporate responsible AI practices from the early stages of use case and product development.
Knowledge translation hurdles: Organizations struggle to explain responsible AI without technical jargon. While technical experts grasp concepts such as algorithmic fairness, product development or marketing teams might struggle to see how these apply to design decisions or campaign metrics.
Behavioral challenges: Even well-intentioned employees can struggle to apply abstract principles in concrete situations. In the face of deadlines and pressure to deliver, training sessions fade from memory as teams default to familiar shortcuts.
Minimal employee engagement regarding AI: Helping employees understand what responsible AI is – and motivating them to make it stick – becomes even more difficult, if not impossible, when organizations fail to prioritize change management, training, and education for AI initiatives. As part of its AI Business Value research, Infosys asked respondents to select the definition from Figure 1 that best reflected how employees in their organization engage with AI. Based on their responses, companies were grouped into four distinct archetypes.
Figure 1. Workforce readiness archetypes
Source: Infosys Knowledge Institute
In the research, 52% of respondents reported that their organizations engage minimally with employees on AI or provide little support in helping them understand AI’s role. Respondents also said their organizations had few or no change management or training initiatives in place to address AI (Figure 2).
Figure 2. Most organizations are explorers and watchers
Source: Infosys Knowledge Institute
To address the challenge of implementing responsible AI in the workplace, organizations must embed human-centered, change management-based processes, employee engagement, and robust governance structures and policies. Responsible AI also demands collaboration across IT, HR, legal, compliance, and business teams. Misalignment between these functions can delay or derail implementation.
Success in implementing responsible AI requires intentional and comprehensive organizational change management (OCM). OCM equips companies with concrete mechanisms and tools to turn ethical AI principles into everyday practices and behaviors. It ensures that cross-functional teams responsible for delivering responsible AI are empowered, not sidelined, and that resistance to change is anticipated, not punished. In other words, OCM gives organizations the bridge between strategy and action. It aligns people, processes, and incentives so that responsible AI becomes not an add-on, but a cultural default.
Comprehensive organizational change management aligns people, processes, and incentives so that responsible AI becomes not an add-on, but a cultural default.

Setting the course
A proactive approach to implementing responsible AI requires moving beyond compliance checkboxes to genuine stakeholder engagement and value-driven design, through change management.
For this to happen, organizations must establish a centralized AI governance task force that oversees responsible AI across the organization and keeps abreast of the latest developments in what constitutes responsible AI. This task force promotes AI acceptance through measures such as involving stakeholders across the organization and society, developing guidelines and a roadmap for ethical AI use, and reducing AI-related risk.
Commenting on how policymakers can design AI governance to promote ethical and beneficial use while actively preventing misuse, Fei-Fei Li, a professor of computer science at Stanford University and a founder of the Stanford Human-Centered Artificial Intelligence Institute, said on the podcast Firing Line, “…A pragmatic approach that focuses on applications and ensuring a guardrail for safe deliverance of this technology is a good starting point.”
Establishing a responsible AI framework to maintain consistency in AI decisions and actions is also important. Infosys has outlined 12 principles of responsible AI (Figure 3) designed to help organizations establish robust governance frameworks that foster user trust and drive widespread acceptance of AI.
Figure 3. Principles of responsible AI
Source: Infosys Knowledge Institute
Organizations should make employees aware of where and how they can access and experiment with AI tools. This empowers employees to become AI advocates and to contribute actively to shaping and evaluating the organization’s enterprise AI future. Companies should also set up channels where employees can raise ethical concerns about the misuse of AI.
Leading practice is for organizations to conduct mandatory training (Figure 4) on responsible AI and regularly upskill employees to equip them for the evolving responsible AI landscape. Infosys research found that a fully developed change management framework for AI training, combined with involving employees in decisions about AI implementation, can add up to 18 percentage points to the likelihood of AI success.
Integrating responsible AI metrics into executive performance indicators and HR goals can further strengthen accountability, extending it beyond policy statements to measurable outcomes.
Figure 4. Organizational change management-driven implementation strategy
Source: Infosys
Integrating change management and employee engagement into responsible AI in this way builds organizational buy-in while enabling early risk detection and mitigation. Organizations that successfully implement this approach will differentiate themselves through enhanced stakeholder trust, reduced regulatory risk, and sustainable responsible AI practices.