Saliency Cards: Making Sense of Machine-Learning Models
Researchers from MIT and IBM Research have developed a tool to help users choose the best saliency method for their particular task. Saliency cards provide standardized documentation of how a method operates, including its strengths and weaknesses, along with explanations that help users interpret its output correctly. The researchers hope that, armed with this information, users can deliberately select a saliency method suited to both the type of machine-learning model they are using and the task that model performs.
What are Saliency Methods?
Machine-learning models are often so complex that even the scientists who design them may not understand exactly how they make predictions. Saliency methods are techniques that explain model behavior by highlighting which input features drive a prediction. They are critical in real-world scenarios where human users need to know when to trust a model's predictions, such as in medical imaging.
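To make the idea concrete, here is a minimal sketch of one of the simplest saliency methods, vanilla gradient saliency, applied to a toy linear model. The model, its weights, and the finite-difference gradient are illustrative assumptions, not part of the researchers' work.

```python
import numpy as np

# Toy stand-in for a trained model: a fixed linear scorer over 4 features.
# (Hypothetical weights chosen for illustration.)
weights = np.array([0.1, 2.0, -0.5, 0.0])

def model(x):
    return float(weights @ x)

def gradient_saliency(x, eps=1e-4):
    """Vanilla gradient saliency via finite differences:
    how much does the score change when each feature moves slightly?"""
    saliency = np.zeros_like(x)
    for i in range(len(x)):
        bumped = x.copy()
        bumped[i] += eps
        saliency[i] = (model(bumped) - model(x)) / eps
    return saliency

x = np.array([1.0, 1.0, 1.0, 1.0])
print(gradient_saliency(x))  # largest value on the heaviest-weighted feature
```

For a linear model the saliency map simply recovers the weights; for a deep network, the same idea (the gradient of the output with respect to the input) highlights which pixels or features most influence the prediction.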
Picking the Right Method
With so many saliency methods available, users often settle on a method because it is popular or because a colleague has used it. However, choosing the wrong saliency method can have serious consequences. For example, one popular saliency method, known as integrated gradients, measures the importance of each feature relative to a meaningless baseline input: features whose importance rises most above the baseline are deemed most influential to the model's prediction. This method typically uses an all-zero baseline, but for images an all-zero input is solid black. That can be a serious problem when analyzing medical images, where black pixels may carry meaning.
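The baseline pitfall can be demonstrated in a short sketch. Integrated gradients attributes a prediction by averaging gradients along a straight path from the baseline to the input and scaling by the input-baseline difference. The toy linear model and the pixel values below are hypothetical, chosen only to show how a black (all-zero) baseline silences black pixels.

```python
import numpy as np

# Toy linear "model" over 3 pixel intensities (hypothetical weights).
weights = np.array([1.0, -2.0, 0.5])

def model(x):
    return float(weights @ x)

def integrated_gradients(x, baseline, steps=50, eps=1e-4):
    """Average the gradient along the straight path from baseline to x,
    then scale by (x - baseline)."""
    grads = np.zeros_like(x)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (x - baseline)
        # Finite-difference gradient at this point on the path.
        for i in range(len(x)):
            bumped = point.copy()
            bumped[i] += eps
            grads[i] += (model(bumped) - model(point)) / eps
    avg_grad = grads / steps
    return (x - baseline) * avg_grad

x = np.array([0.0, 0.8, 0.9])  # first "pixel" is already black (0)
ig_black = integrated_gradients(x, baseline=np.zeros(3))
ig_gray = integrated_gradients(x, baseline=np.full(3, 0.5))

# With a black baseline, the black pixel gets zero attribution even
# though the model weights it heavily; a gray baseline does not.
print(ig_black)  # first entry is exactly 0
print(ig_gray)   # first entry is nonzero
```

This is exactly the failure mode described above: if the clinically meaningful region of an X-ray is black, an all-zero baseline can make it appear unimportant regardless of how the model actually uses it.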
Saliency Cards to Help Users
To help users avoid problems like this, the researchers created saliency cards that summarize how a saliency method works in terms of 10 user-focused attributes. The attributes capture the way saliency is calculated, the relationship between the saliency method and the model, and how a user perceives its outputs. For example, one attribute is hyperparameter dependence, which measures how sensitive that saliency method is to user-specified parameters. With the card, a user could quickly see that the default parameters may generate misleading results when evaluating X-rays.
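One way to picture a saliency card is as a small structured record per method. The schema below is a hypothetical illustration, not the researchers' official card format; the attribute names and helper function are assumptions made for this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class SaliencyCard:
    """Hypothetical, simplified saliency-card record (illustrative only)."""
    method: str
    hyperparameter_dependence: str  # e.g. "low", "medium", "high"
    notes: dict = field(default_factory=dict)

card = SaliencyCard(
    method="integrated_gradients",
    hyperparameter_dependence="high",
    notes={
        "baseline": "Defaults to all zeros (black for images); may "
                    "mislead when black pixels are meaningful, e.g. X-rays.",
    },
)

def flag_risky_defaults(card):
    """List documented default-parameter caveats when the method is
    highly hyperparameter-dependent."""
    if card.hyperparameter_dependence == "high":
        return sorted(card.notes)
    return []

print(flag_risky_defaults(card))  # → ['baseline']
```

The point of the card is exactly this kind of at-a-glance lookup: a user evaluating X-rays could check the hyperparameter-dependence attribute and see immediately that the default baseline deserves scrutiny.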
Conducting a User Study
The team conducted a user study with eight domain experts, ranging from computer scientists to a radiologist who was unfamiliar with machine learning. During interviews, all participants said the cards' concise descriptions helped them prioritize attributes and compare methods. The interviews also revealed some surprises: researchers often assume that clinicians want a sharp method, one that focuses on a particular object in a medical image, but the clinician in this study actually preferred some noise in medical images to help them attenuate uncertainty.
Future Directions
Moving forward, the researchers want to explore some of the more under-evaluated attributes and perhaps design task-specific saliency methods. They also want to develop a better understanding of how people perceive saliency method outputs, which could lead to better visualizations. In addition, they are hosting their work on a public repository so others can provide feedback that will drive future work. In the end, this is the start of a larger conversation around what the attributes of a saliency method are and how those play into different tasks.
Conclusion
Saliency cards are designed to give a quick, glanceable summary of a saliency method, breaking it down into its most critical, human-centric attributes. They are intended for everyone, from machine-learning researchers to lay users trying to choose a method for the first time. By using saliency cards to guide the choice of an appropriate method for a given model and task, we can more accurately judge when to trust a model's predictions.