Responsible AI

Trend 13: AI ethics throughout the development lifecycle

Responsible AI concepts should be factored in from the beginning so that the business avoids AI ethics and bias issues. Explainability is one such critical concept. The design and development teams should be informed about every step in the AI lifecycle so they can answer related questions, providing all the information users need to understand how and why the system made a decision. In this way, an organization can steer clear of adverse ethical issues and maintain customer trust.

These scenarios demand efficient tools to make AI systems more transparent and interpretable, ensuring trust, fairness, transparency, reliability, and auditability. AI models should adhere to the following principles:

  • Purposeful: An AI system should be designed with empathy and follow a human-centric approach with socially responsible use cases. For example, a recommendation system should draw on user preferences and behavior to provide relevant suggestions.
  • Ethical: Models should comply with legal and social structures and be designed with cost functions that heavily penalize unethical behavior. There should be transparency in both data and models.
  • Human reviewed: Although AI models are built to operate independently without human interference, human oversight is a necessity in some cases. For example, in fraud detection or cases where law enforcement is involved, human supervision is required to review decisions made by AI models.
  • Bias detection: An unbiased dataset is an important prerequisite for reliable and nondiscriminatory predictions. AI models are being used for credit scoring by banks, resume shortlisting, and even in some judicial systems; however, some datasets have been found to carry an inherent bias based on race, age, or sex.
  • Explainable: Models should enable easy interpretation of results such as predictions, recommendations, etc. Explainable AI helps understand the decision-making process of AI systems and recognize which features of the given input are emphasized while making predictions.
  • Accountable: Models should use telemetry for auditing all human and machine actions. There should be data lineage for traceability, and all models/datasets should be version controlled.
  • Reproducible: The ML model should give consistent predictions. Many practitioners think that explainable AI (XAI) applies only at the output stage, but XAI plays a role throughout the entire AI lifecycle.
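As a sketch of the explainability principle above, the following toy example computes permutation importance: shuffle one feature's values across rows and measure how much the model's accuracy drops, revealing which inputs the model actually relies on. The loan-approval rule, feature names, and data here are illustrative assumptions, not taken from any real system.

```python
import random

# Toy model standing in for a trained classifier: approves a loan when
# income sufficiently outweighs debt. The rule, feature names, and data
# below are illustrative assumptions only.
def model_predict(row):
    income, debt, age = row
    return 1 if income - 0.5 * debt > 40 else 0

def accuracy(rows, labels):
    return sum(model_predict(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature_idx, seed=0):
    """Accuracy drop when one feature's values are shuffled across rows.
    A large drop means the model leans heavily on that feature."""
    rng = random.Random(seed)
    shuffled = [r[feature_idx] for r in rows]
    rng.shuffle(shuffled)
    permuted = [tuple(s if i == feature_idx else v for i, v in enumerate(r))
                for r, s in zip(rows, shuffled)]
    return accuracy(rows, labels) - accuracy(permuted, labels)

rows = [(80, 20, 30), (30, 40, 55), (90, 10, 42), (25, 5, 61),
        (60, 50, 38), (100, 0, 29), (20, 60, 47), (70, 30, 33)]
labels = [model_predict(r) for r in rows]  # baseline accuracy is 1.0

for idx, name in enumerate(["income", "debt", "age"]):
    print(name, round(permutation_importance(rows, labels, idx), 2))
# "age" scores 0.0 because the toy model ignores it entirely.
```

Because the model never reads the age feature, its permutation importance is exactly zero; income and debt show a nonzero accuracy drop, which is the kind of evidence an audit team can show to users asking why a decision was made.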
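The bias-detection principle can likewise be sketched with a simple dataset-level check: the disparate impact ratio (the "four-fifths rule"), which compares favorable-outcome rates between two groups. The group labels and outcome values below are illustrative assumptions, not drawn from any real dataset.

```python
# Minimal sketch of a dataset-level bias check. Outcomes: 1 = favorable
# (e.g., loan approved), 0 = unfavorable. Data is illustrative only.
def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are a conventional red flag for bias."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375

ratio = disparate_impact(group_a, group_b)
print(round(ratio, 2))  # 0.375 / 0.75 = 0.5 -> flags potential bias
```

A ratio this far below the conventional 0.8 threshold would prompt a review of the training data before the model is trusted for credit scoring or resume screening.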

Thus, consistent and continuous governance can make AI systems understandable and resilient in various situations.


To keep yourself updated on the latest technology and industry trends, subscribe to the Infosys Knowledge Institute's publications.

Infosys TechCompass