Exploring AI part three - Explainable AI - Lifting the lid off the black box

28 February 2019 / Martin Benson

Other parts of this series:

Exploring AI part one: What on earth is AI anyway?

Exploring AI part two: Unpacking Natural Language Processing 

Exploring AI part three: Explainable AI – lifting the lid off the black box

 

Over the last few years, Artificial Intelligence (AI) and machine learning technologies have made huge advances, with 83% of businesses saying AI is a strategic priority for them today*. What’s more, the Government has AI high on its agenda and has released plans to grow AI in the UK in its latest paper.

As AI grows in complexity, sophistication and autonomy, it opens up a huge number of opportunities for both businesses and society as a whole. However, as more of these opportunities are created, the need to explain how AI programs arrive at a decision becomes ever more pressing, with 76% of CEOs saying they are concerned about the potential for bias and the lack of transparency**.

As AI becomes more prominent in our daily lives and is used to make more important decisions, it’s crucial to be able to trust AI and understand how decisions are made. The ‘black box’ problem surrounding AI is already raising questions about how we can ensure that it is being applied in an ethical way, and this is (rightly) holding back more widespread use of AI systems in some areas. When the decisions that AI systems make affect people’s lives or health, having systems that are trusted and provide accountability for those decisions has never been more important.

Why is this hindering AI adoption?

Currently, due to a lack of explainability in AI and machine learning, it is difficult to know how an algorithm arrives at its decision, and therefore difficult to know when an error has occurred. If that decision determines whether a person is able to buy a house, receive treatment for cancer or be released from prison, for example, then it’s critical that the basis for the decision is understood and stands to reason.

Separately, there is a common misconception that AI is completely objective, but it is only as objective as the data fed into the system. Machine learning relies heavily on data, so if the input data is biased, the outcome will also be biased. If care is not taken to prevent it, machine learning systems can encode and replicate existing human biases, potentially propagating them at far greater scale. This makes it possible for sexism, racism and other forms of discrimination to be built into the algorithms of intelligent systems that shape how humans are categorised and advertised to.

What is explainable AI?

Most AI and machine learning systems are unable to explain the process or reasoning behind the decisions they make, and generally speaking, their inner workings are too complex for us to examine and rationalise.

Interpretability, sometimes referred to as human-interpretable interpretations (HII) of a machine learning model, is the extent to which a human (including non-experts in machine learning) can understand the choices a model makes in its decision-making process: the how, why and what.

Explainable AI is a field that is focused on shedding light on the way that AI systems make decisions. This might include any of the following:

  • Strengths and weaknesses of the program
  • The criteria the program uses to arrive at its decision
  • Rules that constrain behaviour of the system
  • Perturbation and sensitivity analysis that investigates how the system behaves as inputs vary (see the sketch after this list)
  • Diagnosis of errors the system may be prone to
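To make the perturbation and sensitivity idea concrete, here is a minimal sketch in Python, assuming a scikit-learn style model; the synthetic data, the random forest and the one-standard-deviation nudge are all illustrative choices rather than a prescribed method.

```python
# Minimal perturbation-based sensitivity sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in "black box": a random forest trained on synthetic data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Choose one decision to explain and record the model's original score.
x = X[0].copy()
baseline = model.predict_proba([x])[0, 1]

# Perturb each input in turn and measure how far the prediction moves;
# bigger moves suggest the feature mattered more for this decision.
for i in range(X.shape[1]):
    nudged = x.copy()
    nudged[i] += X[:, i].std()  # shift feature i by one standard deviation
    shifted = model.predict_proba([nudged])[0, 1]
    print(f"feature {i}: prediction changed by {shifted - baseline:+.3f}")
```

Libraries such as LIME and SHAP formalise this kind of analysis, but the underlying idea is the same: vary the inputs and observe how the decision responds.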

In light of recent regulatory changes such as GDPR, as well as business risk and ethical concerns, explainable AI will play an increasingly important role.

Why is explainable AI so important?

The lack of visibility into how AI algorithms work leads to challenges, particularly in fields where the wrong decision being made can cause irreparable harm or where decision makers are required to provide explanations for the outcome.

This is particularly true in finance, insurance and banking, where the business must be able to explain each and every decision taken by a model to both regulators and customers. There are also many real-world scenarios in which biased or incorrect models could have significant effects, such as predicting potential criminals, credit scoring, fraud detection, loan assessment and self-driving cars, where understanding and interpreting the model are extremely important.

Where will explainable AI be used?

1. Marketing campaigns

Explainable AI can be used by marketers to create much more relevant messaging, offers and creative to match the interests of potential customers.

2. Online recommendations

AI is used to recommend products or services to customers based on their previous viewing or purchase history.

3. Credit and loan application decisions

If an application is rejected by an AI system, the bank should be able to trace the decision back to the specific step where the denial occurred and provide the reasoning behind the AI system’s decision at that particular step. In addition, these models must show regulators how a given decision was arrived at, due to the need for transparency of decisions made within the sector.
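As a concrete illustration of tracing a decision back to the step where it occurred, here is a minimal sketch using a small decision tree; the feature names, thresholds and training data are entirely hypothetical and not representative of any real lending criteria.

```python
# Minimal sketch: replaying the path a loan decision took (illustrative only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

features = ["income", "debt_ratio", "years_employed"]
# Tiny hypothetical training set: 1 = approve, 0 = decline.
X = np.array([[55000, 0.2, 6], [32000, 0.6, 1], [80000, 0.3, 10], [28000, 0.7, 0]])
y = np.array([1, 0, 1, 0])
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Walk the path the applicant takes through the tree and report each test,
# so the outcome can be traced back to the specific step where it occurred.
applicant = np.array([[30000, 0.65, 2]])
for node in tree.decision_path(applicant).indices:
    if tree.tree_.children_left[node] == -1:  # leaf node: final decision
        decision = "approve" if tree.predict(applicant)[0] == 1 else "decline"
        print(f"decision: {decision}")
    else:
        f, t = tree.tree_.feature[node], tree.tree_.threshold[node]
        print(f"checked {features[f]} <= {t:.2f} (applicant value: {applicant[0, f]})")
```

Real credit models are far more complex, but the principle of being able to replay the path from inputs to outcome is what regulators and customers are asking for.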

4. Healthcare

There are some areas of medicine where AI is being used to generate automated diagnoses, most notably tumour detection. While AI can perform these tasks at a superhuman level, it is nevertheless critical to understand how the systems produce their answers, both because the cost of a false negative is so high and also to inform treatment decisions.

Conclusion

As AI becomes more explainable, trust and confidence in its abilities build, which should rapidly increase adoption rates. This in turn puts businesses in a strong position to innovate and stay ahead of competitors while remaining transparent and ethical. Explainable, more responsible AI will be the backbone of the intelligent systems of the future that enable the intelligent enterprise. Explainable AI won’t replace people, but will complement and support them so they can make better, faster, more accurate and more consistent decisions.

 

Sources:

* Forbes: https://www.forbes.com/sites/louiscolumbus/2017/09/10/how-artificial-intelligence-is-revolutionizing-business-in-2017/#7a7386d35463

** PwC: https://www.pwc.com/us/en/services/consulting/library/artificial-intelligence-predictions/responsible-ai.html