Working Towards Explainable AI

“The hardest thing to understand in the world is the income tax.” This quote comes from the man who came up with the theory of relativity – not exactly the easiest concept to understand. That said, had he lived a bit longer, Albert Einstein might have said “AI” instead of “income tax.”

Einstein died in 1955, a year before what is considered to be the first artificial intelligence program – Logic Theorist – was presented at the Dartmouth Summer Research Project on Artificial Intelligence. From then on, the general concept of thinking machines became a staple of popular entertainment, from Robby the Robot to HAL. But the nitty-gritty details of AI remain at least as hard to understand as income tax for most people. Today, the AI explainability problem remains a hard nut to crack, testing even the talent of experts. The crux of the issue is finding a useful answer to this: How does AI come to its conclusions and predictions?

It takes a lot of expertise to design deep neural networks and even more to get them to run efficiently – “And even when run, they’re difficult to explain,” says Sheldon Fernandez, CEO of DarwinAI. The company’s Generative Synthesis AI-assisted design platform, GenSynth, is designed to provide granular insights into a neural network’s behavior – why it decides what it decides – to help developers improve their own deep learning models.  

Opening up the “black box” of AI is critical as the technology affects more and more industries – healthcare, finance, manufacturing. “If you don’t know how something reaches its decisions, you don’t know where it will fail and how to correct the problem,” Fernandez says. He also notes that regulatory mandates are an impetus for providing some level of explanation of machine learning models’ outcomes, given that legislation like the GDPR gives people a right to an explanation of automated decision making.

Big Players Focus on AI Explainability

The explainability problem – also known as the interpretability problem – is a focus for the big guns in technology. In November, Google announced its next step in improving the interpretability of AI with Google Cloud AI Explanations, which quantifies each data factor’s contribution to the output of a machine learning model. These summaries, Google says, help enterprises understand why the model made the decisions it did – information that can be used to further improve models or share useful insights with the model’s consumers.

“Explainable AI allows you, a customer who is using AI in an enterprise context or an enterprise business process, to understand why the AI infrastructure generated a particular outcome,” said Google Cloud CEO Thomas Kurian. “So, for instance, if you’re using AI for credit scoring, you want to be able to understand, ‘Why did the model reject a particular credit application and accept another one?’ Explainable AI provides you the ability to understand that.”
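
As a concrete illustration of that kind of per-feature attribution, here is a hedged, minimal sketch using the open-source SHAP library (one of the explainability methods discussed later in this article) rather than Google Cloud’s own API; the credit-scoring model, data, and feature names are hypothetical stand-ins.

```python
# Hedged sketch of per-feature attribution for a credit-scoring model.
# The model, data, and feature names are hypothetical stand-ins.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "credit_history_years", "late_payments"]
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] - X[:, 3] > 0).astype(int)  # synthetic labels

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes the model's output (log-odds of approval) to each feature.
explainer = shap.TreeExplainer(model)
applicant = X[:1]                               # a single credit application
contributions = explainer.shap_values(applicant)[0]

for name, value in zip(features, contributions):
    print(f"{name:>22}: {value:+.3f}")          # positive values push toward approval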

In October, Facebook announced Captum, a tool for explaining decisions made by neural networks built with the PyTorch deep learning framework. “Captum provides state-of-the-art tools to understand how the importance of specific neurons and layers affect predictions made by the models,” Facebook said.
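
To show what that looks like in practice, below is a minimal sketch of Captum’s Integrated Gradients attribution; the tiny classifier and random tensor are placeholders for a trained model and a real preprocessed image.

```python
# Minimal sketch of input-level attribution with Captum's Integrated Gradients.
# The model and input below are placeholders, not a real trained classifier.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
image = torch.randn(1, 3, 32, 32)               # stand-in for a preprocessed image

ig = IntegratedGradients(model)
# Attribute the score of class 0 back to every input pixel.
attributions, delta = ig.attribute(image, target=0, return_convergence_delta=True)

print(attributions.shape)   # same shape as the input: torch.Size([1, 3, 32, 32])
print(delta)                # a small delta indicates the attributions are consistent
```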

Amazon’s SageMaker Debugger, part of its SageMaker managed service for building, running, and deploying machine learning models, interprets how a model is working, “representing an early step towards model explainability,” according to the company. Debugger was among the SageMaker tool upgrades Amazon announced last month.
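
As a rough sketch of the kind of introspection this enables, the snippet below reads back tensors that a Debugger-enabled training job saved, using the open-source smdebug client library; the S3 path is a placeholder, and which tensors appear depends on the collections configured for the job.

```python
# Hedged sketch: inspect tensors that SageMaker Debugger captured during
# training, via the open-source smdebug library. The path is a placeholder.
from smdebug.trials import create_trial

trial = create_trial("s3://my-bucket/debugger-output/my-training-job")  # hypothetical location

print(trial.tensor_names())   # names of saved tensors (losses, gradients, weights, ...)
print(trial.steps())          # training steps at which values were captured

# Retrieve one tensor's value at the first recorded step.
name = trial.tensor_names()[0]
step = trial.steps()[0]
print(trial.tensor(name).value(step))
```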

Just How Far Has Explainable AI Come?

In December at NeurIPS 2019, DarwinAI presented academic research on the question of how enterprises can trust AI-generated explanations. The study, described in the paper Do Explanations Reflect Decisions? A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms, explored a more machine-centric strategy for quantifying the performance of explainability methods on deep convolutional neural networks.

The team behind the research quantified the importance of critical factors identified by an explainability method for a given decision made by a network; this was accomplished by studying the impact of the identified factors on the decision and the confidence in the decision.
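
In code, that idea amounts to an ablation test: remove the input regions an explanation flags as critical and measure how much the network’s decision and confidence change. The sketch below is an illustrative rendering of this strategy under simple assumptions (zero-masking, a softmax classifier), not the paper’s exact metrics; the function name is hypothetical.

```python
# Illustrative sketch of a machine-centric check on an explanation: ablate the
# input regions the explainability method flagged as critical and measure the
# effect on the prediction and its confidence. Zero-masking is an assumption.
import torch
import torch.nn.functional as F

def impact_of_explanation(model, image, critical_mask):
    """image: (1, C, H, W) tensor; critical_mask: (1, 1, H, W) with 1s on flagged pixels."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(image), dim=1)
        label = probs.argmax(dim=1)
        confidence = probs[0, label].item()

        ablated = image * (1 - critical_mask)           # remove the "critical" evidence
        probs_ablated = F.softmax(model(ablated), dim=1)
        new_label = probs_ablated.argmax(dim=1)
        new_confidence = probs_ablated[0, label].item()

    decision_changed = bool(new_label.item() != label.item())
    confidence_drop = confidence - new_confidence
    return decision_changed, confidence_drop
```

If zeroing out the flagged pixels barely changes the prediction, the explanation probably did not capture what the network was actually relying on; a large drop suggests it did.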

Applying this approach to explainability methods including LIME, SHAP, Expected Gradients, and DarwinAI’s proprietary GSInquire technique, the analysis:

“Showed that, in the case of visual perception tasks such as image classification, some of the most popular and widely-used methods such as LIME and SHAP may produce explanations that may not be as reflective as expected of what the deep neural network is leveraging to make decisions. Newer methods such as Expected Gradients and GSInquire performed significantly better in general scenarios.”

That said, the paper notes that there is significant room for improvement in the explainability area.

AI Must Be Trustworthy

Gartner addressed the explainability problem in its recent report, Cool Vendors in Enterprise AI Governance and Ethical Response. “AI adoption is inhibited by issues related to lack of governance and unintended consequences,” the research firm said. It names DarwinAI, Fiddler Labs, KenSci, Kyndi, and Lucd as its cool vendors for their application of novel approaches that help organizations increase the governance and explainability of AI solutions.

The profiled companies employ a variety of AI techniques to transform “black box” ML models into easier-to-understand, more transparent “glass box” models, according to Gartner.

“The ability to trust AI-based solutions is critical to managing risk,” the report says, advising those responsible for AI initiatives as part of data and analytics programs “to prioritize using AI platforms that offer adaptive governance and explainability to support freedom and creativity in data science teams, and also to protect the organization from reputational and regulatory risks.”

Gartner predicts that by 2022, enterprise AI projects with built-in transparency will be 100% more likely to get funding from CIOs.

Explainable AI for All

Explainability isn’t just about helping software developers understand at a technical level what’s happening when a computer program doesn’t work; it’s also about explaining the factors that influence decisions in a way that makes sense to non-technical users, Fernandez says – why their mortgage was rejected, for example. It’s “real-time explainability.”

Supporting that need will only grow in importance as consumers increasingly are touched by AI in their everyday transactions. Followers are coming up on the heels of early-adopter industries like automotive, aerospace, and consumer electronics. “They are starting to figure out that investment in AI is becoming an existential necessity,” says Fernandez.

AI already is transforming the financial services industry, but it hasn’t reached every corner of it yet. That’s starting to change. As an example, Fernandez points to even the most conservative players getting the message:

“Banks in Canada rarely embrace new and emerging technologies,” he says, “but we are now talking to two of the Big Five who know they have to move quickly to be relevant to consumers and how they do business.”

DarwinAI has plans to significantly enhance its solution’s explainability capabilities with a new offering in the next few months.

Image used under license from Shutterstock.com
