TU/e to visualize the ‘black box’ of predictive models

Models that predict future events are often so complex that hardly anyone still understands how a particular recommendation comes about, which raises growing doubts among those affected by the outcomes. To give more insight into how a prediction actually comes about, Eindhoven University of Technology (TU/e) will literally show what happens inside such a model. Professor of Visualization Jack van Wijk has received an NWO TOP grant of almost €700,000 to do so.

Predictive models use algorithms that analyze past events in order to predict future ones. These models are increasingly being used for a wide range of applications in industry, government, healthcare and education. Examples include predicting required machine maintenance, detecting fraud, granting credit and determining the medical treatment with the best chance of success.
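
By way of illustration (not part of the TU/e project itself), a minimal sketch of such a model, using the scikit-learn library and invented data for a hypothetical fraud-detection setup, might look like this:

    # Minimal sketch: a model trained on past cases to score new ones.
    # Data and the "fraud" rule are invented for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))                 # past cases, 4 invented features
    y = (X[:, 0] + 0.5 * X[:, 2] > 1).astype(int)  # synthetic "fraud" label

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # The model now scores future cases, but its internals (hundreds of
    # decision trees) are exactly the kind of "black box" at issue here.
    print("probability of fraud for a new case:",
          model.predict_proba(X_test[:1])[0, 1])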

Insight into complex models

Decisions and recommendations based on predictive models can have a major impact on the lives of those involved. In recent years, however, the scientific community, as well as government and society at large, has raised more and more questions about the use of such methods. Often the models are so complex that no one can understand how a recommendation came about. Van Wijk: “When it comes to suggesting a movie, we think this is fine. But when it comes to a serious medical intervention, rejecting a mortgage or sending a helicopter to investigate a suspicious ship, few will blindly accept and follow it.” In addition, automatically generated models may lead to undesirable side effects such as discrimination, for example when decisions are based on incorrect or biased data.

For this reason, Van Wijk is using NWO’s TOP grant to develop new methods and techniques that reveal which choices and which information led to a particular recommendation by an automated decision model: what information was used and whether it is correct, which aspects are considered most relevant, which aspects influence each other, why two nearly identical cases receive different recommendations, and how certain a recommendation is.
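
As a rough, hypothetical illustration of two of these questions (which aspects mattered most, and how certain a recommendation is), here is how they could be probed with standard off-the-shelf tools such as scikit-learn's permutation importance; the project's own techniques are not described in the source:

    # Hedged sketch with invented data: only feature 0 truly matters.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] > 0.5).astype(int)      # synthetic rule, for illustration

    model = RandomForestClassifier(random_state=0).fit(X, y)

    # "Which aspects are considered most relevant?" -> permutation importance
    imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, score in enumerate(imp.importances_mean):
        print(f"feature {i}: importance {score:.3f}")

    # "How certain is the recommendation?" -> predicted class probability
    print("certainty for one case:", model.predict_proba(X[:1]).max())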

Users central

This insight is important for the various parties involved: developers can improve future decision models, while domain experts can assess the plausibility of the choices made. But central to the research are the people about whom the models make decisions. The aim is to develop tools that let them navigate a decision model intuitively and simply, using interactive visualization, for example to judge for themselves whether the model’s recommendation was fair and to see how they could get the model to make a different decision next time around.
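
A toy sketch of that last idea, a counterfactual search asking what would need to change for the decision to flip, again with invented data and scikit-learn rather than the project's actual tools:

    # Brute-force counterfactual sketch: how much must one feature of a
    # rejected case change before the model's decision flips? Purely
    # illustrative; the interactive visualization tools do not exist yet.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X = rng.normal(size=(400, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic approval rule
    model = LogisticRegression().fit(X, y)

    case = np.array([[-1.0, -0.2]])           # a case the model rejects
    print("original decision:", model.predict(case)[0])  # expect 0 (rejected)

    # Increase feature 0 in small steps until the decision flips.
    for delta in np.arange(0.0, 3.0, 0.05):
        candidate = case + np.array([[delta, 0.0]])
        if model.predict(candidate)[0] == 1:
            print(f"decision flips when feature 0 increases by {delta:.2f}")
            break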

Cooperation

Visualization is central to the proposal, but collaboration with experts in the fields of data analysis, human-computer interaction and psychology is crucial to its success. The proposal grew out of a collaboration within JADS, the Jheronimus Academy of Data Science. The research will be carried out by three PhD students, supervised by a team of researchers from different backgrounds.