
AI Errors vs. Human Errors


Written By: David Winter – International Director

There is a general concern that intelligent machines can be dangerous to humanity. Yet an examination of human behavior and machine behavior reveals that the former can inherently carry a greater risk to human welfare than the latter. An analysis of errors committed by artificial intelligence (AI) and errors committed by humans shows that the types, probabilities, implications and consequences of these two categories of errors differ considerably, despite their commonalities.

AI errors

As our machines become more advanced, their errors can become more serious. According to Roman Yampolskiy, director of the Cyber Security Lab at the University of Louisville, the errors that AI systems make are a direct result of the intelligence those machines are required to exhibit. Those mistakes can originate either in an AI system's learning phase or in its performance phase.

Contrary to the common belief that machines are more objective and unbiased, algorithms can often be prejudiced. For example, an AI system designed to predict the likelihood of an offender reoffending produced predictions that were influenced largely by race: it falsely flagged black men as more likely to commit further crimes. Aside from being racially biased, the system was inaccurate in its predictions overall. This and many other examples show that AI predictions can be bigoted and unethical.

But not all AI errors stem from flawed learning; some occur during performance. One example is the fatal crash of a Tesla car running on Autopilot. The car crashed into a tractor trailer on a Florida highway, killing the driver on board. The incident prompted Tesla to upgrade the car's Autopilot, but it also brought attention to the risks involved in self-driving cars.

As AI proliferates and becomes a major factor of production in many industries, the consequences of those mistakes could be grave. In particular, the use of AI in military applications may not bode well for entire populations. Yet the AI revolution shows no signs of abating.

The source of AI errors

The main challenge for AI originates in the fact that a considerable share of the data on which AI systems rely comes from humans. Such data carries with it the irrationality and subjectivity of humans, who are largely driven by self-interest.

Often the data comes from the masses, who are known for their erratic behavior. In financial markets, for example, crowds have caused, and continue to cause, bubbles that later burst, wiping out large sums of investors' capital. These crowds are driven by sentiment not only in the financial sphere but almost everywhere else. Machines learn from data that, for the most part, has not been filtered, which effectively transfers human propensities to machines and produces AI errors. The sketch below illustrates this transfer.
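To make the mechanism concrete, here is a minimal sketch, using entirely synthetic data, of how a model trained on biased historical decisions reproduces that bias. The feature names, numbers and model choice are illustrative assumptions, not a reconstruction of any real system:

```python
# Minimal sketch: bias in training data transfers to a model.
# All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One legitimate feature and one sensitive attribute (0 or 1).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Biased historical labels: outcomes depend partly on the sensitive
# attribute, mimicking prejudiced past decisions.
logits = 1.5 * skill - 1.0 * group
labels = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# The model is never told to discriminate; it simply fits the data.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, labels)

# Predicted positive rates per group reveal the learned disparity.
preds = model.predict(X)
for g in (0, 1):
    rate = preds[group == g].mean()
    print(f"group {g}: predicted positive rate = {rate:.2f}")
```

The model here is not malicious; it faithfully reproduces the skew baked into its unfiltered training labels, which is precisely how human propensities pass into machines.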

Cognitive basis of human errors

Psychologists and neurologists alike have done a great deal of work to shed light on the sources of human misjudgment. Human biases come to the forefront when discussing the factors that lead to errors. People tend to overestimate the importance of what they know and remain unaware of the effects of what they do not know. They see patterns where there are none, give undue weight to more recent events, confuse chronological order with causation and cling to certainty even when it is far costlier than uncertainty. Those tendencies are costly in terms of the quality of our decisions and their outcomes.

Types of human errors

The costs of such biases increase when the person involved holds a higher position with greater power, especially if the number of people affected by his or her decisions is large. Decision- and policy-makers have a greater responsibility to catch their biases before those biases radically degrade the quality of their decisions and cause negative consequences. Those in the field, on the other hand, need to be aware of execution failures. And both need to realize that miscommunication dramatically increases the chance of failure in both planning and execution.

Human errors can be skill-based, rule-based or knowledge-based. Skill-based errors, such as slips and lapses, are errors of execution, whereas the other two are errors of planning. Most human errors (61 percent) are skill-based, and they have a better chance of being detected. Rule-based and knowledge-based errors are less frequent but also less likely to be detected.

Impact of errors

A factor common to human and AI errors is that both affect humans, though the degree of impact varies. Thus far, human errors have proven far costlier and more catastrophic. They range from allowing a controlled fire to get out of hand and burn down hundreds of houses, to a doctor making a diagnostic or procedural mistake and causing a patient's death. Classifications of human errors vary: they can result from poor planning, poor execution or both. Whatever their form, those errors are a major cause of disasters across industries, aviation and nuclear-power generation among them.

The element of predictability

One key difference separating AI errors from human errors is predictability. AI errors are far more predictable than human errors, partly because they are systematic and partly because the behavior of machines, unlike that of humans, can be modeled. Because AI errors are easier to predict, they are also easier to fix and even to prevent. Regardless of how advanced AI systems become, they will always remain less creative than humans, which is a positive implication overall: robots will not stray far before humans can intervene and correct their course. The sketch below shows how a reproducible machine error can be locked down once it is found.
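As a minimal illustration of this predictability: a deterministic system reproduces the same error on the same input, so a failure observed once can be frozen into a regression test. The toy controller and numbers below are hypothetical assumptions, not any real autopilot logic:

```python
# Minimal sketch: freezing a known, reproducible machine error into a
# regression test. The controller and inputs are hypothetical.

def brake_decision(distance_m: float, speed_mps: float) -> bool:
    """Toy stand-in for a learned controller: brake if time-to-contact
    is under 2 seconds."""
    return distance_m / max(speed_mps, 0.1) < 2.0

def test_known_failure_case():
    # A case the system once got wrong. Because the controller is
    # deterministic, the error would recur identically, so this test
    # guarantees the fix stays in place.
    assert brake_decision(distance_m=30.0, speed_mps=20.0), \
        "Must brake: only 1.5 s to contact"

test_known_failure_case()
print("regression test passed")
```

Human lapses offer no such handle: a person may err on a case today and handle it correctly tomorrow, which is why machine errors are the easier kind to prevent.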

What is important when approaching AI is to acknowledge that humans make mistakes and to become aware of the limitations of the data they generate. While many fear that AI may run rampant because of its intelligence, the solution is to make it even more intelligent. Only by becoming more intelligent can it gain the capability to discover and address human errors proactively.

Conclusion

The margin of human error remains wider than the margin of AI error. And in essence, the source of AI errors is human error. Investment is needed to enhance error detection for both types of errors, with the aim of mitigating their impacts. As both humans and machines evolve, the likelihood of new errors increases (and that of old errors decreases), which warrants adequate risk-management efforts. While costly, those errors offer a great learning opportunity and carry the seed of further improvement, which is why they should not be ignored. Errors can be a sign of intelligence, and hopefully they are a small price to pay in exchange for great advancement. Now that machines can learn, they can also be taught to deal with those errors more effectively.

 
