Professor Matthias Holweg develops AI ethics and compliance toolkit

Professor Matthias Holweg, Christ Church fellow and American Standard Companies Chair in Operations Management, has co-authored a topical report exploring the impact of unethical applications of artificial intelligence (AI), and how organisations can avoid them.

As the EU prepares to regulate the human and ethical implications of AI, Professor Holweg and his fellow Saïd Business School researchers have developed capAI (conformity assessment procedure for AI), which will help organisations assess their current AI systems to prevent privacy violations and data bias.

Additionally, it will support the explanation of AI-driven outcomes, and the development and running of systems that are trustworthy and compliant with the EU Artificial Intelligence Act (AIA).

Writing in the California Management Review, the researchers note the introduction of AI has not been without its ethical dilemmas.

'For example, Amazon’s Rekognition face search and identification technology has been accused of serious gender bias, while Google faced an internal backlash for helping the US government analyze drone footage using artificial intelligence. Despite the growing reputational risk caused by AI failure, most companies are strategically unprepared to respond effectively to the public controversies that accompany AI-related criticisms.'

They further reflect on the reputational damage that AI missteps can create in organisations, saying that: 'Ninety percent of criticisms toward AI have only taken place since 2018, and it may not be surprising that most organizations are not yet strategically prepared on how to respond to AI failures. We thus put forward a framework enabling organizations to diagnose the reputational risk of AI failures and to develop their response strategies more systematically.'

These issues led the team to create an ethics-based audit protocol that will help firms develop trustworthy and compliant AI systems.

Professor Holweg said: 'To develop the capAI procedure, we created the most comprehensive database of AI failures to date.

'Based on this, we produced a one-of-a-kind toolkit for organisations to develop and operate legally compliant, technically robust and ethically sound AI systems, by flagging the most common failures and detailing current best practices.

'We hope that capAI will become a standard process for all AI systems and prevent the many ethical problems they have caused.'

In addition to ensuring compliance with the proposed EU Artificial Intelligence Act (AIA), which is likely to come into force this year, capAI can help organisations working with AI systems to:

  • monitor the design, development, and implementation of AI systems
  • mitigate the risks of failures in AI-based decisions
  • prevent reputational and financial harm
  • assess the ethical, legal, and social implications of their AI systems.

Similar laws are also being prepared in the USA.