
Need to Build Trustworthy AI Systems Gains Importance as AI Progresses


As AI systems take on more responsibility, the strengths and weaknesses of current AI systems need to be recognized to help build a foundation of trust. (GETTY IMAGES)

By John P. Desmond, Editor, AI Trends

The push is on to build trusted AI systems with an eye toward instilling confidence that results will be fair, accuracy will be sufficient, and safety will be preserved.

Gary Marcus, the successful entrepreneur who sold his startup Geometric Intelligence to Uber in 2016, issued a wake-up call to the AI industry as co-author, with Ernest Davis, of “Rebooting AI” (Pantheon, 2019), an analysis of the strengths and weaknesses of current AI, where the field is going, and what we should be doing.

Marcus spoke about building trusted AI in a recent interview with The Economist. Here are some highlights:

“Trustworthy AI has to start with good engineering practices, mandated by laws and industry standards, both of which are currently largely absent. Too much of AI thus far has consisted of short-term solutions, code that gets a system to work immediately, without a critical layer of engineering guarantees that are often taken for granted in other fields. The kinds of stress tests that are standard in the development of an automobile (such as crash tests and climate challenges), for example, are rarely seen in AI. AI could learn a lot from how other engineers do business.”

AI developers “can’t even devise procedures for making guarantees that given systems work within a certain tolerance, the way an auto part or airplane manufacturer would be required to do.”

“The assumption in AI has generally been that if it works often enough to be useful, then that’s good enough, but that casual attitude is not appropriate when the stakes are high.”

IBM Team Identifies Four Pillars of Trusted AI

Support for building trust in AI systems was furthered in a recent paper by an IBM team proposing Four Pillars of Trusted AI, as described in a recent account in Towards Data Science by Jesus Rodriguez, chief scientist and managing partner at Invector Labs.

“The non-deterministic nature of artificial intelligence (AI) systems breaks the pattern of traditional software applications and introduces new dimensions to enable trust in AI agents,” Rodriguez states. Trust in software development has been built through procedures around testing, auditing, documentation, and many other aspects of the discipline of software engineering. AI agents, by contrast, execute behavior based on knowledge that evolves over time, which makes them difficult to understand.

Rodriguez suggests the Four Pillars from IBM are a viable foundation for establishing trust in AI systems. The pillars are:

  • Fairness: AI systems should use training data and models that are free of bias, to avoid unfair treatment of certain groups (a minimal illustration of such a check follows this list).
  • Robustness: AI systems should be safe and secure, and not vulnerable to tampering or to having the data they are trained on compromised.
  • Explainability: AI systems should provide decisions or suggestions that can be understood by their users and developers.
  • Lineage: AI systems should include details of their development, deployment, and maintenance so they can be audited throughout their lifecycle.
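
As a concrete illustration of the fairness pillar, the sketch below computes a simple group fairness metric (the demographic parity difference, i.e., the gap in positive-prediction rates between two groups). The function names and data are hypothetical examples, not part of IBM's proposal, and real bias checks would typically examine several metrics.

```python
# Minimal sketch of one fairness check: demographic parity difference.
# A value near 0 suggests the model issues positive predictions at similar
# rates for both groups. Data and names are hypothetical.

def positive_rate(predictions):
    """Fraction of binary predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    """Absolute gap in positive-prediction rates between two groups."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

if __name__ == "__main__":
    # Hypothetical binary predictions for two demographic groups.
    group_a = [1, 0, 1, 1, 0, 1, 1, 0]
    group_b = [0, 0, 1, 0, 0, 1, 0, 0]
    gap = demographic_parity_difference(group_a, group_b)
    print(f"Demographic parity difference: {gap:.2f}")  # 0.38 for this data
```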

To help identify whether an AI system is built consistent with the four pillars of trusted AI, IBM proposes a Supplier’s Declaration of Conformity (SDoC, or factsheet for short) to provide this information. The factsheet should answer basic questions, including the following selection (a minimal sketch of such a factsheet appears after the list):

  • Does the dataset used to train the service have a data sheet or data statement?
  • Were the dataset and model checked for biases? If yes, describe the bias policies that were checked, the bias-checking methods, and the results.
  • Was any bias mitigation performed on the dataset? If yes, describe the mitigation method.
  • Are algorithm outputs explainable/interpretable? If yes, explain how explainability is achieved (e.g., directly explainable algorithm, local explainability, explanations via examples).
  • Describe the testing methodology.
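
To illustrate how such a factsheet might be recorded alongside a model, the sketch below captures answers to a few of these questions in a simple structure that can be published and audited. The field names and example answers are hypothetical, not IBM's published schema.

```python
# Minimal sketch of a supplier's declaration of conformity (factsheet) for an AI service.
# Field names and example answers are hypothetical, not IBM's published schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class FactSheet:
    dataset_has_datasheet: bool      # Does the training dataset have a data sheet or statement?
    bias_checked: bool               # Were the dataset and model checked for biases?
    bias_checking_methods: str       # Policies and methods checked, and the results.
    bias_mitigation_performed: bool  # Was any bias mitigation performed on the dataset?
    bias_mitigation_method: str
    outputs_explainable: bool        # Are algorithm outputs explainable/interpretable?
    explainability_approach: str     # e.g., directly explainable algorithm, local explainability.
    testing_methodology: str         # Description of the testing methodology.

if __name__ == "__main__":
    # Example factsheet for a hypothetical credit-scoring service.
    sheet = FactSheet(
        dataset_has_datasheet=True,
        bias_checked=True,
        bias_checking_methods="Checked disparate impact across age and gender; results archived.",
        bias_mitigation_performed=True,
        bias_mitigation_method="Reweighing of under-represented groups in the training set.",
        outputs_explainable=True,
        explainability_approach="Local explanations generated for each decision.",
        testing_methodology="Held-out test set plus adversarial robustness tests.",
    )
    # Publish the declaration as JSON so it can be audited over the system's lifecycle.
    print(json.dumps(asdict(sheet), indent=2))
```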

Human Observers Need to Understand the AI System

Trust in an AI system is built through repeated correct performance, which makes the system highly reliable, and through the ability of human observers to understand how it works. That understanding is especially challenging with resilient, intelligent robotic systems, for example in the military, that are built to adapt and evolve toward ever-improving performance. Human observers need to be able to understand how the system is improving through experience, suggests Nathan Michael, CTO of Shield AI, writing recently in National Defense. Shield AI develops AI for national security and defense applications.

“One of the greatest challenges with artificial intelligence is that there is an overwhelming impression that magic underlies the system. But it is not magic, it’s mathematics. What is being accomplished by AI systems is exciting, but it is also simply theory and fundamentals and engineering.”

As the development of AI progresses, Michael stated, we will see the role of trust in this technology grow more and more.

Read the source articles in The Economist, Towards Data Science, and National Defense.

Source: https://www.aitrends.com/ethics-and-social-issues/need-to-build-trustworthy-ai-systems-gains-importance-as-ai-progresses/
