
Although many people are worried about an artificial intelligence (AI) takeover, technology is not yet able to entirely replace a human at certain tasks, and is likely quite far from it. Although we have developed algorithms capable of what looks like highly complex behavior, there are still hard limits on what they can do. Even at the forefront of machine learning and artificial intelligence, enormous challenges would need to be overcome before a computer could interpret information as well as its human counterparts can.
One of the most obvious benefits of automation is its exactness: given a specific program, automation can follow the same instructions millions of times without ever deviating. The downside is that automation will only ever follow its program, so if it encounters a problem or an unfamiliar situation, it can become stuck. Humans are different: over time we can become complacent or fatigued, making a mistake more likely, but our strength is the ability to solve novel problems.
Because of these differences, it is often beneficial to create a system that incorporates both the human and the automation. Emerging research on Human-AI teams has started to show their benefits: these teams can often outperform either the human or the AI performing the task independently.
Since this partnership works so well, why don’t we implement Human-AI teams everywhere? Unfortunately, it’s not that simple. The same research showing that Human-AI teams can outperform has also shown that in some situations they underperform. One of the critical factors determining whether a team over- or underperforms is trust.
How can trust, or the lack of it, have such a detrimental effect on the team? The same principles that govern trust between humans apply to humans trusting AIs: if someone’s trust is not set at an appropriate level, they may do their objective more harm than good.
Imagine that you were working with an AI and had to trust it to complete some tasks for you. If you had complete trust in that AI, you would probably never check its work; if the AI then made a mistake, there’s a good chance you would miss it. However, the solution isn’t to never trust the AI either. If you did not trust the AI at all, you would have to check all of its work, an inefficient use of time and resources.
What we know is that trust in automation must be set appropriately: if it is too high or too low, the team’s performance degrades. With the inevitable rise of Human-AI teams, we need to better understand their dynamics and what contributes to the trust humans place in their machine partners.
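To make the trade-off concrete, here is a minimal numerical sketch in Python. It is a toy model of my own, not taken from the research mentioned above: it treats distrust as verification effort, assumes an illustrative AI error rate, and assumes that checking has diminishing returns. All parameters (P_ERR, C_MISS, C_EFFORT, K) are made-up values for illustration.

```python
# Toy sketch (illustrative, not from any cited study) of why both
# extremes of trust hurt: expected cost per task as a function of how
# much effort a human spends verifying AI output.

import numpy as np

P_ERR = 0.05      # assumed chance the AI makes a mistake on a task
C_MISS = 100.0    # assumed cost of a mistake that slips through unchecked
C_EFFORT = 2.0    # assumed cost of one unit of verification effort
K = 3.0           # assumed diminishing returns on checking

def expected_cost(effort):
    """Cost of verifying plus cost of the mistakes verification misses.

    The chance an error slips past falls off as exp(-K * effort), so
    checking has diminishing returns, the total cost is convex, and the
    minimum sits at an intermediate effort level (calibrated trust).
    """
    return C_EFFORT * effort + P_ERR * C_MISS * np.exp(-K * effort)

efforts = np.linspace(0.0, 3.0, 301)
costs = expected_cost(efforts)
best = efforts[np.argmin(costs)]

print(f"cost with blind trust (effort 0.0): {expected_cost(0.0):.2f}")
print(f"cost with no trust    (effort 3.0): {expected_cost(3.0):.2f}")
print(f"lowest cost at effort {best:.2f}:       {costs.min():.2f}")
```

Running the sketch, blind trust costs about 5.0 and exhaustive checking about 6.0, while the calibrated middle costs about 2.0, mirroring the claim that over- and under-trust both degrade the team’s performance.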
Image created with a text-to-image AI from the prompt “AI and human working together”