Fujitsu Laboratories and Hokkaido University have announced the development of a new technology based on the principle of “explainable AI” that automatically presents users with the steps needed to achieve a desired outcome, based on AI results about data from, for example, medical checkups.

“Explainable AI” represents an area of increasing interest in the field of artificial intelligence and machine learning. While AI technologies can automatically make decisions from data, “explainable AI” also provides individual reasons for these decisions. This helps avoid the so-called “black box” phenomenon, in which AI reaches conclusions through unclear and potentially problematic means.

While certain techniques can suggest hypothetical changes that might improve an undesirable outcome for individual items, they do not provide any concrete steps the user can take.

For example, if an AI that judges a subject’s health status determines that a person is unhealthy, the new technology can first explain the reason for that outcome from health examination data like height, weight, and blood pressure. It can then offer the user targeted suggestions about the best way to become healthy: by identifying the interactions among a large number of complicated medical checkup items from past data, it shows specific steps to improvement that take into account the feasibility and difficulty of implementation.

Ultimately, this new technology offers the potential to improve the transparency and reliability of decisions made by AI, allowing more people in the future to interact with technologies that utilise AI with a sense of trust and peace of mind.

Developmental background

Currently, deep learning technologies widely used in AI systems for advanced tasks such as face recognition and autonomous driving make various decisions automatically from large amounts of data, using a kind of black-box predictive model. Going forward, however, ensuring the transparency and reliability of AI systems will become an important issue as AI is asked to make important decisions and proposals for society. This need has led to increased interest and research into “explainable AI” technologies.

For example, in medical checkups, AI can successfully determine the level of risk of illness based on data like weight and muscle mass (Figure 1 (A)). Attention has increasingly focused on “explainable AI,” which presents, in addition to the judgment on the level of risk, the attributes that served as the basis for that judgment.

Because the AI determines that health risk is high based on the attributes of the input data, it is in principle possible to change the values of those attributes to obtain the desired result of low health risk.
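The idea can be illustrated with a toy sketch. The risk model, its coefficients, and the thresholds below are all invented for illustration; this is not the actual Fujitsu/Hokkaido method, only the general notion of searching for the smallest attribute change that flips a prediction:

```python
# Toy counterfactual search: find the smallest attribute change that
# flips a simple risk model's prediction from "high risk" to "low risk".
# The model and all numbers here are illustrative, not the real system.

def risk_score(weight, muscle_mass):
    """Hypothetical risk model: more weight raises risk, muscle lowers it."""
    return 0.05 * weight - 0.08 * muscle_mass

def is_high_risk(weight, muscle_mass, threshold=1.0):
    return risk_score(weight, muscle_mass) > threshold

def find_counterfactual(weight, muscle_mass, step=0.5, max_change=30.0):
    """Grid-search the cheapest total attribute change (weight loss plus
    muscle gain) that turns a high-risk prediction into a low-risk one."""
    best = None
    n = int(max_change / step)
    for dw in range(n + 1):          # candidate weight reductions
        for dm in range(n + 1):      # candidate muscle-mass gains
            w_new = weight - dw * step
            m_new = muscle_mass + dm * step
            if not is_high_risk(w_new, m_new):
                effort = dw * step + dm * step   # total change as "effort"
                if best is None or effort < best[0]:
                    best = (effort, w_new, m_new)
    return best

# A hypothetical person flagged as high risk:
print(is_high_risk(85, 30))   # True
effort, new_weight, new_muscle = find_counterfactual(85, 30)
print(effort, new_weight, new_muscle)
```

Because muscle mass has the larger per-unit effect in this toy model, the cheapest counterfactual changes muscle mass alone, leaving weight untouched.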

About the technology

Through joint research on machine learning and data mining, Fujitsu Laboratories and Arimura Laboratory at the Graduate School of Information Science and Technology, Hokkaido University have developed new AI technologies that can explain the reasons for AI decisions to users, leading to the discovery of useful, actionable knowledge.

AI technologies such as LIME and SHAP, which were developed to support the decision-making of human users, make an AI's decision convincing by explaining why that decision was made. The jointly developed new technology is based on the concept of counterfactual explanation (3) and presents the required attribute changes, and the order in which to execute them, as a procedure.
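As a rough illustration of the attribution idea behind tools like LIME and SHAP, one can perturb each attribute of a single input and measure how much the model's score moves. This sketch is not the actual LIME or SHAP algorithm, and the linear model and its coefficients are made up:

```python
# Toy local feature attribution: nudge one attribute at a time and record
# how much the model's score changes. Illustrates the general idea behind
# local explanation tools like LIME/SHAP, not their actual algorithms.

def risk_score(record):
    """Hypothetical linear risk model over checkup attributes."""
    coefs = {"weight": 0.05, "muscle_mass": -0.08, "blood_pressure": 0.02}
    return sum(coefs[k] * v for k, v in record.items())

def local_attributions(record, delta=1.0):
    """Score change caused by nudging each attribute by `delta`."""
    base = risk_score(record)
    effects = {}
    for key in record:
        perturbed = dict(record)
        perturbed[key] += delta
        effects[key] = risk_score(perturbed) - base
    return effects

person = {"weight": 85.0, "muscle_mass": 30.0, "blood_pressure": 130.0}
for attr, effect in sorted(local_attributions(person).items(),
                           key=lambda kv: -abs(kv[1])):
    print(f"{attr}: {effect:+.2f}")
```

Note that such attributions only say which attributes mattered for the decision; unlike counterfactual explanation, they do not by themselves yield a procedure of actions to change the outcome.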

By analysing past cases, the AI avoids unrealistic changes, estimates how changing one attribute value affects other attribute values (for example, through causal relationships), and calculates the amount the user actually has to change. This enables it to present actions that achieve the optimal result in the proper order and with the least effort.
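A minimal sketch of the ordering idea, with entirely made-up actions, costs, and interaction effects: if losing weight also lowers blood pressure somewhat (a side effect the system would estimate from past cases), a cheap action can do double duty, and a greedy least-effort sequencing can stop as soon as the goal is reached. The real technology's estimation and optimisation are more sophisticated than this toy:

```python
# Toy action-ordering sketch: each candidate action has an effort cost and
# direct effects on attributes; changing one attribute may also move
# another (an interaction estimated from past cases). Numbers are
# illustrative only.

# action -> (effort, direct effects on attributes)
actions = {
    "increase_exercise": (2.0, {"weight": -3.0, "muscle_mass": +2.0}),
    "reduce_salt":       (1.0, {"blood_pressure": -10.0}),
    "strength_training": (3.0, {"muscle_mass": +4.0}),
}

def apply_action(state, action):
    effort, effects = actions[action]
    new_state = dict(state)
    for attr, change in effects.items():
        new_state[attr] += change
    # Estimated interaction from past cases (illustrative): a drop in
    # weight also lowers blood pressure by half the weight change.
    if "weight" in effects:
        new_state["blood_pressure"] += 0.5 * effects["weight"]
    return new_state, effort

def plan(state, goal_ok):
    """Greedy least-effort ordering: take actions cheapest-first,
    stopping as soon as the goal predicate is satisfied."""
    cheapest_first = sorted(actions, key=lambda a: actions[a][0])
    sequence, total = [], 0.0
    for action in cheapest_first:
        if goal_ok(state):
            break
        state, effort = apply_action(state, action)
        sequence.append(action)
        total += effort
    return sequence, total, state

start = {"weight": 85.0, "muscle_mass": 30.0, "blood_pressure": 140.0}
goal = lambda s: s["blood_pressure"] <= 130.0 and s["weight"] <= 83.0
seq, total_effort, final = plan(start, goal)
print(seq, total_effort)
```

In this run the plan stops after two of the three actions: salt reduction plus exercise already satisfy the goal (exercise's weight loss also nudges blood pressure down via the interaction), so the costly third action is never recommended.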


Using the jointly developed counterfactual-explanation AI technology, Fujitsu and Hokkaido University ran verification on three data sets covering the following use cases: diabetes, loan credit screening, and wine evaluation. Combining three key machine learning algorithms (Logistic Regression, Random Forest, and Multi-Layer Perceptron) with the newly developed techniques, they verified that the appropriate actions and their sequence for changing the prediction to a desired result can be identified with less effort than the actions derived by existing technologies, across all dataset and algorithm combinations. The technique proved especially effective in the loan credit screening use case, where it changed the prediction to the preferred result with less than half the effort.

Using this technology, when an undesirable result is expected from AI's automatic judgment, the actions required to change that result to a more desirable one can be presented. This will allow applications of AI to expand beyond judgment alone to supporting improvements in human behaviour.