With the incorporation of artificial intelligence (AI) into workplaces, the expectation has been that more work can be accomplished, and accomplished more safely. AI is currently used in cutting-edge fields such as aircraft piloting and monitoring, self-driving cars, and stock market trading. But if AI really is so smart, why do we keep seeing so many catastrophic failures?
Current use of AI often takes the form of a human-AI team, in which the AI is allowed to perform a given set of pre-defined actions; however, the expectation is that a human is always watching over those actions to make sure they are correct, intended, and safe. A recent example comes from companies like Tesla, whose self-driving cars are allowed to drive themselves while the person in the driver’s seat is still expected to monitor the car. What’s important to note here is that this is a relationship in which the human has to trust the AI (at least enough to let it drive), but not so much that they stop monitoring it.
Research on human-AI teams has shown that trust is a critical component of how a human-AI team will perform. Specifically, trust must be correctly calibrated so that the human neither over-trusts nor under-trusts the system; either can lead to catastrophic failure.
Some of the downsides of under- and over-trusting are fairly evident. Under-trusting can lead to the automation simply not being used, meaning the human-AI team would likely perform no better than the human would have alone. On the other hand, over-trusting can lead to situations where the AI is almost entirely in control, and if it makes a wrong decision, the mistake can go unnoticed.
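To make that trade-off concrete, here is a minimal, purely illustrative simulation (not taken from any of the research mentioned above). The accuracy numbers, the monitoring rate, and the team_success helper are all hypothetical; the point is only that never checking the AI lets every one of its mistakes slip through, never using it caps the team at human-alone performance, and calibrated monitoring beats both.

```python
# Toy Monte Carlo sketch of a human-AI team. All numbers are hypothetical.
import random

AI_ACCURACY = 0.95      # assumed probability the AI's action is correct
HUMAN_ACCURACY = 0.80   # assumed probability the human's action is correct
TRIALS = 100_000

def team_success(monitoring_rate: float) -> float:
    """Fraction of trials that end well when the human checks the AI's
    decision on a given fraction of trials (monitoring_rate)."""
    successes = 0
    for _ in range(TRIALS):
        if random.random() < AI_ACCURACY:
            successes += 1
        elif random.random() < monitoring_rate:
            # The human was watching, catches the AI's mistake,
            # and falls back on their own (imperfect) judgment.
            successes += random.random() < HUMAN_ACCURACY
        # Otherwise the AI's mistake goes unnoticed and the trial fails.
    return successes / TRIALS

print(f"under-trust (never use the AI):   {HUMAN_ACCURACY:.2f}")
print(f"over-trust  (never check the AI): {team_success(0.0):.2f}")
print(f"calibrated  (always check):       {team_success(1.0):.2f}")
```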
Depending on the task at hand, over- or under-trusting an AI can be fairly inconsequential; typically, though, the more complex the task, the greater the potential for catastrophic error. In the case of over-trust, research has shown that this effect gets worse as the automation gets better. With better automation, the human watching over the task becomes more and more detached from the situation itself, so when the system inevitably fails, the human is immediately overwhelmed by not knowing the current situation. It’s as if you were taking a nap while your car was driving. When the error occurs, even if the AI could warn you that it was happening (see my colleague Colin McCormick’s article on attention), you would be so overloaded with information that it would be nearly impossible to make any decision that leads to a better outcome.
Although AI technology may be getting better and smarter, we are likely to keep seeing catastrophic failures, particularly in cases where the AI is over-trusted. By giving an AI more control over a task, we further remove ourselves from the specific circumstances of that task, as well as from the practice required to perform it manually. Inevitably, the AI will fail, and if we haven’t been watching carefully along the way, no one will be able to recover from the catastrophic failure.
Image created by a text-to-image AI with the prompt “Catastrophic System Failure”