A large component of my research is understanding how someone’s trust in an automated system can be measured. For those keeping up with some of my previous articles, you will likely have gotten the impression that trust is a fairly critical component of human-AI teams. Although it is not the sole driver, it has a huge influence on the performance of the team and can cause performance decreases in both over-trust and under-trust scenarios. For this reason, it’s important to understand both the amount of trust someone has in automation and how that trust changes while they interact with it.
Unsurprisingly, trust is a fairly difficult thing to measure in humans. There are still many unanswered questions about trust, and even our best definitions often miss important components. For example, we would expect someone who trusts an automated system to also rely on that system for assistance or take its recommendations into consideration; however, that is not always the case. The opposite can also happen, where a user may not trust the automation at all yet still behave in a way that suggests they are relying on its decisions whenever they themselves are uncertain.
The exact reason this disconnect sometimes appears is still relatively unknown, but it is often seen when conducting research that uses surveys as a proxy for trust. Through strategic questions, it’s possible to capture an overall sense of the trust someone has towards automation simply through self-reporting. This method of measurement is fairly convenient due to its simplicity; however, it also has a few drawbacks. The first is that someone’s self-reported perception of trust may not be in line with the actions or behaviours they actually exhibit towards the automation. The second major issue is that being asked periodic questions about how much you trust an automated system is not a practical way to measure trust during real-world interactions with automation. Could you imagine if your car asked you every 5 minutes how much you trusted it? Not only would it be distracting and bothersome, but it would also be a little eerie and might make you question whether you actually should trust it.
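For a concrete sense of how survey-based measurement works in practice, here is a minimal sketch of scoring a short trust questionnaire. The items, the 7-point scale, and the reverse-coded wording are all hypothetical, loosely in the spirit of published trust-in-automation scales rather than any specific instrument.

```python
# A minimal sketch of scoring a self-reported trust questionnaire.
# The items and 7-point Likert scale here are hypothetical examples.

def score_trust_survey(responses, reverse_coded, scale_max=7):
    """Average Likert responses into a single trust score in [1, scale_max].

    responses     -- dict mapping item id -> rating (1..scale_max)
    reverse_coded -- set of item ids whose wording is distrust-flavoured,
                     so their ratings are flipped before averaging
    """
    adjusted = [
        (scale_max + 1 - rating) if item in reverse_coded else rating
        for item, rating in responses.items()
    ]
    return sum(adjusted) / len(adjusted)

# Example: four hypothetical items, one of them reverse-coded.
responses = {"reliable": 6, "predictable": 5, "wary_of_it": 2, "dependable": 6}
print(score_trust_survey(responses, reverse_coded={"wary_of_it"}))  # 5.75
```

Even a clean score like this still inherits both drawbacks above: it reflects perception rather than behaviour, and collecting it interrupts the task.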
In an attempt to circumvent these issues, other methods have been explored as a means of capturing someone’s level of trust. Many of these rely on physiological measures that don’t require a user to divert their attention from the task at hand.
A very basic physiological measure of trust is heart rate. Not only is it a simple measure that can be taken continuously, but it is also minimally intrusive to a user, especially with the advent of smartwatches. With the direction research has been going recently, it may become even less intrusive, with AI using video footage of your face to measure your heart rate (yes, that’s a real thing!). However, one of the major limitations of using heart rate as a proxy for trust is that heart rate depends on many physiological states that aren’t specifically related to trust. One example is workload: people experiencing higher workloads feel more stress and thus show a higher heart rate, which does not directly correlate with the trust they have in the automated system. This issue can be mitigated to an extent by including other physiological measures such as skin conductance (basically just how much you sweat); however, the interpretation is still limited.
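To illustrate why researchers pair signals together, here is a minimal sketch that blends heart rate with skin conductance into a single arousal index. The weighting and the toy numbers are hypothetical, and note what the comments say: the blend reduces single-sensor noise but still cannot, on its own, separate trust from workload.

```python
# A minimal sketch of combining heart rate with skin conductance into a
# cleaner arousal signal. The weighting and sample values are hypothetical;
# neither signal measures trust directly, which is exactly the limitation
# described above.

def zscore(samples):
    """Standardise a signal relative to its own baseline statistics."""
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    return [(x - mean) / (var ** 0.5 or 1.0) for x in samples]

def arousal_index(heart_rate, skin_conductance, w_hr=0.5):
    """Weighted blend of two z-scored signals; higher = more aroused.

    A spike driven by workload tends to show up in both channels, while a
    trust-related change may not, so the blend alone still can't separate
    the two -- it only reduces single-sensor noise.
    """
    hr_z, sc_z = zscore(heart_rate), zscore(skin_conductance)
    return [w_hr * h + (1 - w_hr) * s for h, s in zip(hr_z, sc_z)]

hr = [72, 74, 71, 90, 95, 73]          # beats per minute
sc = [2.1, 2.0, 2.2, 3.4, 3.6, 2.1]    # microsiemens
print(arousal_index(hr, sc))
```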
Another method recently being explored is the use of neurological measures as a proxy for trust. It has been shown that certain neurological markers are related to trust, making it possible to estimate how much a person trusts automation. The benefit of this type of system is that it can measure continuously throughout the interaction with the automation, and it does not depend on the subjective perception of trust. The downside of such systems, however, is that they are often large, expensive, and not viable in practical real-life scenarios.
Another up-and-coming measure is eye tracking. With advancements in the quality of video capture and in processing power, it is possible to track a person’s eye position, allowing for a fairly accurate interpretation of what they are looking at. Although this isn’t a direct measure of trust, what someone is attending to is highly indicative of their thought process, especially in tasks that require divided attention. An example would be driving a car, where you are expected to keep your eyes on the road while also checking your mirrors, your blind spots, the gauges on your dash, and sometimes the radio. The amount of time spent on, or the number of times checking, each subset of the task can reveal what the driver is experiencing or thinking in that moment. The same applies to automation. Specifically, if a user is performing a task but occasionally has to monitor an automation working on another task, it is possible to tell how much that user is willing to trust the automation based on how frequently they look at it, how long they keep looking, and which parts of the task they are actively looking at. There are still some limitations when it comes to interpreting all this information (especially in real-time applications); however, it appears to be a promising endeavour due to its relatively easy implementation, its non-invasive format, and the relatively inexpensive technology required.
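As a rough illustration, here is a minimal sketch that turns a stream of gaze samples into the two metrics mentioned above: how often the user glances at the automation and how long they dwell on it. The area-of-interest labels and the 60 Hz sample rate are hypothetical.

```python
# A minimal sketch of turning raw gaze samples into monitoring metrics:
# glance count and dwell time on one area of interest (AOI). The AOI
# labels and the 60 Hz sample rate are hypothetical.

def glance_metrics(gaze_labels, target="automation", hz=60):
    """Count glances at a target AOI and the total dwell time on it.

    gaze_labels -- per-sample AOI label, e.g. "road", "automation", "mirror"
    Returns (glance_count, dwell_seconds).
    """
    glances, dwell_samples, previous = 0, 0, None
    for label in gaze_labels:
        if label == target:
            dwell_samples += 1
            if previous != target:   # a new glance begins here
                glances += 1
        previous = label
    return glances, dwell_samples / hz

# Six seconds of toy data: two separate glances at the automation.
samples = (["road"] * 120 + ["automation"] * 60 + ["road"] * 90
           + ["automation"] * 30 + ["mirror"] * 60)
print(glance_metrics(samples))  # (2, 1.5)
```

Frequent, short glances and long, lingering ones tell very different stories, which is why both metrics tend to matter when inferring how much monitoring the user feels the automation needs.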
The overarching point of all this is that in the coming years, automation will likely begin to incorporate methods to measure how much you seem to trust it. Using this information, it will likely update how it chooses to behave so that it can best optimise the system as a whole and prevent accidents or errors. Although it may seem far-fetched, much of the technology required to non-invasively measure a human’s trust already exists, and with some tweaking it will become viable in many commercial applications.
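To make that idea concrete, here is a minimal sketch of such a feedback loop: the system smooths a noisy trust estimate over time and adjusts its interaction style accordingly. The thresholds, the smoothing factor, and the specific behaviours are all hypothetical design choices, not a description of any existing system.

```python
# A minimal sketch of a trust-adaptive feedback loop. The smoothing factor,
# thresholds, and behaviour mappings are hypothetical design choices.

def update_trust(estimate, observation, alpha=0.2):
    """Exponentially smooth noisy per-interaction trust observations."""
    return (1 - alpha) * estimate + alpha * observation

def choose_behaviour(trust):
    """Map an estimated trust level (0..1) to an interaction style."""
    if trust < 0.3:
        return "explain every action and ask for confirmation"
    if trust > 0.8:
        return "inject occasional status prompts to discourage over-reliance"
    return "act autonomously, surface only key decisions"

trust = 0.5
for obs in [0.6, 0.7, 0.75, 0.9]:   # e.g. derived from glance metrics
    trust = update_trust(trust, obs)
    print(round(trust, 2), "->", choose_behaviour(trust))
```

Notice that the high-trust branch deliberately pushes back: a system tuned this way would try to keep the team out of both the over-trust and under-trust failure modes discussed at the start.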