In a recent study conducted through the Cognitive and Motor Performance Lab, we showed what happens over many interactions with automation, comparing automation that learns (i.e. gets smarter) over time with automation that degrades and gets worse over time.
One well-documented finding about automation is that when people interact with it over longer periods of time, they tend to start distrusting it. This effect is even stronger when the automation is very faulty: automation below 70 per cent reliability causes a continued decline in trust in the system. Automation above 70 per cent reliability can typically maintain some level of trust, although this depends on the circumstances.
However, in our most recent study we found that automation crossing this 70 per cent threshold does not necessarily regain any trust. We used automation whose reliability increased from 50 per cent to 100 per cent over the course of 300 trials. Unexpectedly, users never regained trust in the system. In fact, trust continued to drop throughout all 300 trials, even once the automation had become 100 per cent reliable!
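To make the study's schedule concrete, here is a minimal simulation sketch of an automation aid whose reliability climbs from 50 per cent to 100 per cent across 300 trials. The linear ramp and the function names are assumptions for illustration; the actual reliability schedule used in the experiment may have differed.

```python
import random

def run_trials(n_trials=300, start=0.5, end=1.0, seed=0):
    """Simulate an automation aid whose reliability rises linearly
    from `start` to `end` over n_trials. Each trial's outcome is
    True when the aid's recommendation is correct. This is an
    illustrative sketch, not the lab's actual procedure."""
    rng = random.Random(seed)
    outcomes = []
    for t in range(n_trials):
        # Linear interpolation from starting to ending reliability.
        reliability = start + (end - start) * t / (n_trials - 1)
        outcomes.append(rng.random() < reliability)
    return outcomes

outcomes = run_trials()
print(f"correct in first 50 trials: {sum(outcomes[:50])}/50")
print(f"correct in last 50 trials:  {sum(outcomes[-50:])}/50")
```

Running the sketch shows many errors early on and almost none by the end, which is exactly the pattern that would let an initially distrustful user write the aid off before it improves.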
At first this is a confusing result, but there are some theories as to why it may have happened. The first is that the earliest interactions with the automation have a persisting effect. If you were interacting with highly faulty automation, you would quickly learn that its recommendations weren’t worth considering, since your own judgment was likely better. Once in this mindset, you would be less and less likely to check the automation’s performance, even as it improved. Contributing to this effect, our study did not provide users with direct performance feedback, so unless they assessed the automation’s performance themselves, they had no way of knowing it was getting better.
Although this tells us something new about how people’s trust in automated systems may change (or not change) over time, there is little practical utility in a system that can never regain the user’s trust. However, it sets a baseline from which it will be easy to introduce new variables and see what may help a system regain a user’s trust. Ideally, this will be done through the lens of practical applications of automation, to determine what changes would improve the overall performance of the system.
Image generated with a text-to-picture AI using the inspiration “The Computer Is Getting Smarter, It’s Learning”