

When should you trust an AI assistant’s predictions?

A busy hospital radiologist uses an artificial intelligence system to diagnose medical conditions from patients’ X-ray images. The AI system can speed up her diagnosis, but how does she know when to trust its predictions?

 

She doesn’t. Instead, she may rely on her own expertise, the system’s confidence level, or an explanation of how the algorithm made its prediction to estimate whether it is correct. The explanation may seem convincing, yet the prediction may still be wrong.

 

MIT researchers have created an onboarding method that helps humans understand when an AI “teammate” is trustworthy. It guides them to better discern the situations in which a machine can make correct predictions and those where it makes mistakes.

 

The training technique could help people make better decisions and reach conclusions faster when working with AI agents.

 

“We propose a teaching phase where we gradually introduce the human to this AI model so they can, for themselves, see its strengths and weaknesses,” says Hussein Mozannar, a graduate student in the Social and Engineering Systems doctoral program within the Institute for Data, Systems, and Society (IDSS), who is also a researcher at the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Institute for Medical Engineering and Science. “We mimic the way the human will interact with the AI in practice, but we also give feedback to help them understand each interaction with the AI.”

 

Mozannar co-authored the paper with Arvind Satyanarayan, an assistant professor of computer science in CSAIL, and senior author David Sontag, an associate professor of electrical engineering and computer science at MIT and leader of the Clinical Machine Learning Group. The research will be presented at the Association for the Advancement of Artificial Intelligence conference in February.

 

Mental models

 

This research builds on the mental models people form of others. If the radiologist is unsure about a case, she may consult a colleague who has expertise in that area. From past experience and her knowledge of that colleague, she has built a mental model of his strengths and weaknesses that helps her assess his advice.

 

Mozannar says humans build similar mental models when they interact with AI agents, so it is crucial that those models be accurate. Cognitive science suggests that humans make complex decisions by drawing on past experiences and interactions. The researchers therefore devised an onboarding process that shows examples of the AI and the human working together; these examples serve as reference points for the future. They developed an algorithm to identify the examples that will best teach the human about the AI.

 

“We first learn a human expert’s biases and strengths, using observations of their past decisions made without AI guidance,” Mozannar says. “We combine our knowledge of the human with our knowledge of the AI to determine where it will be most helpful for the human to rely on the AI. Then we obtain cases where the AI is needed by the human and cases where it is not.”
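
To make the idea concrete, here is a minimal sketch, in Python, of how teaching examples might be chosen by contrasting estimated AI accuracy with estimated human accuracy on past cases. The `Case` structure, the probability estimates, and `select_teaching_examples` are hypothetical illustrations of the general approach, not the authors’ algorithm.

```python
# Hypothetical sketch: pick teaching examples by contrasting where the AI
# tends to be right with where the human tends to be right on past cases.
# Names and structure are illustrative, not the authors' implementation.

from dataclasses import dataclass

@dataclass
class Case:
    question: str
    p_ai_correct: float     # estimated probability the AI answers correctly
    p_human_correct: float  # estimated probability the unaided human answers correctly

def select_teaching_examples(cases, k=2):
    """Return k cases where relying on the AI helps most and k where it hurts most."""
    ranked = sorted(cases, key=lambda c: c.p_ai_correct - c.p_human_correct)
    rely_on_ai = ranked[-k:]   # AI likely right where the human is likely wrong
    rely_on_self = ranked[:k]  # human likely outperforms the AI
    return rely_on_ai, rely_on_self

if __name__ == "__main__":
    history = [
        Case("Which of two plants is native to more continents?", 0.90, 0.55),
        Case("Which fruit ripens faster at room temperature?", 0.40, 0.80),
        Case("Which rock layer is older?", 0.85, 0.60),
        Case("Which river is longer?", 0.50, 0.75),
    ]
    helpful, unhelpful = select_teaching_examples(history, k=1)
    print("Show as 'rely on the AI':", [c.question for c in helpful])
    print("Show as 'answer yourself':", [c.question for c in unhelpful])
```

Sorting by the gap between the two estimates surfaces both kinds of reference points the paragraph describes: cases where deferring to the AI pays off, and similar-looking cases where it does not.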

 

The researchers tested their onboarding method on a passage-based question-answering task: the user receives a written passage and a question whose answer is contained in the passage. The user can answer the question herself or click a button to “let the AI answer.” She cannot see the AI’s answer in advance, however, so she must rely on her mental model of the AI. The onboarding process begins by showing these examples to the user, who tries to make a prediction with the AI system’s help. The human may be right or wrong, and the AI may be right or wrong, but in either case, after solving the example, the user sees the correct answer and an explanation of why the AI made its prediction. Two contrasting examples then help the user understand why the AI got it right or wrong.
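
The flow of a single teaching example might look roughly like the following sketch. The `onboarding_round` function and its fields are assumptions made for illustration; they are not taken from the study’s interface.

```python
# Hypothetical sketch of one onboarding round in the question-answering task
# described above. Function and field names are illustrative only.

def onboarding_round(example, user_defers, user_answer=None):
    """Score one teaching example and build the feedback shown to the user.

    The user commits to answering or deferring before the AI's answer is
    revealed, so the decision must rest on their mental model of the AI.
    """
    final_answer = example["ai_answer"] if user_defers else user_answer
    feedback = {
        "correct_answer": example["correct_answer"],
        "ai_answer": example["ai_answer"],
        "ai_explanation": example["ai_explanation"],  # why the AI predicted what it did
        "you_were_right": final_answer == example["correct_answer"],
    }
    return feedback

if __name__ == "__main__":
    example = {
        "passage": "A convoluted paragraph from a botany textbook ...",
        "question": "Which of the two plants is native to more continents?",
        "correct_answer": "plant A",
        "ai_answer": "plant A",
        "ai_explanation": "Highlighted words: 'found across Europe, Asia, and Africa'.",
    }
    # The user chooses to let the AI answer without seeing its prediction first.
    print(onboarding_round(example, user_defers=True))
```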

 

Perhaps the training question asks which of two plants is native to more continents, based on a convoluted paragraph from a botany textbook. The user can answer the question herself or let the AI system answer. She then sees two follow-up examples that help her get a better sense of the AI’s abilities. Maybe the AI is wrong on a question about fruits but right on a question about geology. In each case, the words the system used to make its prediction are highlighted. Seeing the highlighted words helps the user understand the limits of the AI agent, Mozannar explains.

 

The user also writes down the rule she learns from each teaching example. Writing the rules down helps her retain them, and she can refer to them later to guide her interactions with the agent. These rules are also a formalization of the user’s mental model of the AI.

 

The effects of teaching

 

The researchers tested the teaching method with three groups of participants. One group went through the entire onboarding process, another did not receive the follow-up comparison examples, and a baseline group received no teaching but could see the AI’s answer in advance.

 

“The participants who received teaching did just as well as those who received no teaching but could see the AI’s answer,” Mozannar says. In other words, they could simulate the AI’s answers as well as if they had actually seen them.

 

The researchers dug deeper into the data to see which rules individual participants wrote. They found that nearly half of those who received training wrote accurate lessons about the AI’s abilities. Those with accurate lessons were right on 63 percent of the examples, while those without accurate lessons were right on only 54 percent. Participants who received no teaching but could see the AI’s answers were right on 57 percent.

 

The key takeaway is that successful teaching has a significant impact. He says that when participants are taught effectively, they perform better than if they had simply been given the AI’s answer.

 

However, the results also show there is room for improvement. Only about half of the trained participants built accurate mental models of the AI, and even those who did were right only 63 percent of the time. Mozannar says that even though they had learned the correct lessons, they didn’t always follow their own rules.

 

That question leaves the researchers scratching their heads: even when people know the AI should be right, why don’t they listen to their own mental models? They want to explore this question in future work and also plan to refine the onboarding process so it takes less time. They are interested as well in running user studies with more complex AI models, particularly in health care settings.
