AI Camera Misidentifies Cell Phone Use, and Driver Is Fined for Merely Scratching His Head While Driving
An unusual case drew attention in the Netherlands. A driver was fined 380 euros for allegedly talking on his cell phone while driving. According to him, however, he was just scratching his head. The mistake was made by an AI-powered camera.
Accusatory Photo and Surprise at the Fine
The incident occurred in November of last year. Tim Hansen received the violation notice a month after driving on a Dutch highway.
The fine indicated use of a cell phone while driving. Hansen, however, did not recall making any calls during that trip.
Curious, he accessed the website of the Central Judicial Collection Agency and examined the photo generated by the automatic system.
At first glance, it did look like he was holding a phone. But upon closer inspection, he realized that his hand was empty. He was just scratching the side of his head.
Despite this, the fine was validated by a human employee after checking the image. In other words, the system’s error was not identified during the review.
Expert Explains How Errors Occur
Tim Hansen works with information technology. He develops algorithms for image analysis and editing.
He used this experience to explain how the technology used by the Dutch police works.
According to him, these AI models must produce a binary answer: yes or no. And in many cases, the system gets it wrong. In his case, the algorithm concluded there was a cell phone in his hand, when in fact there was nothing.
“This is a false positive,” Tim explained. “The model predicted that I was holding a phone, but it was wrong. A perfect model only gets it right, but that is very rare.”
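The false positive Hansen describes is one of the four standard outcomes of a binary classifier. A minimal sketch, using invented labels rather than any real data from the Dutch system, shows how predictions break down into those categories:

```python
# Toy ground-truth labels and model predictions for five drivers.
# These values are illustrative only, not from the actual enforcement system.
truths      = ["phone", "no_phone", "no_phone", "phone", "no_phone"]
predictions = ["phone", "phone",    "no_phone", "phone", "no_phone"]

outcomes = {"true_positive": 0, "false_positive": 0,
            "true_negative": 0, "false_negative": 0}

for truth, pred in zip(truths, predictions):
    if pred == "phone":
        # Model said "phone": correct only if the driver really held one.
        key = "true_positive" if truth == "phone" else "false_positive"
    else:
        # Model said "no phone": correct only if the hand was truly empty.
        key = "true_negative" if truth == "no_phone" else "false_negative"
    outcomes[key] += 1

print(outcomes)
```

The second driver in the lists above is in Hansen's situation: the model predicted "phone" while the truth was "no phone", which is counted as a false positive. A "perfect model", as he puts it, would leave both the false-positive and false-negative counts at zero.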
Incomplete Data Can Induce Error
Hansen also pointed out a possible technical reason for the failure. The technology was likely trained with many images of people using cell phones near their faces.
But it may have been trained on few images of people who simply have a hand near their head.
This causes the system to automatically associate the hand’s position with cell phone use, even when there is no device at all. He suggested that the training database be expanded with more varied images.
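Hansen's hypothesis is a class-imbalance problem. A toy sketch, with counts invented purely for illustration, makes the imbalance concrete:

```python
# Hypothetical composition of a training set, as Hansen's explanation
# suggests: many "phone near face" examples, almost no "empty hand
# near face" examples. All numbers here are made up.
training_counts = {
    "hand_near_face_with_phone": 9000,
    "hand_near_face_no_phone": 200,   # e.g. scratching one's head
    "hands_on_wheel": 8000,
}

total = sum(training_counts.values())
share_no_phone = training_counts["hand_near_face_no_phone"] / total
print(f"Share of 'hand near face, no phone' examples: {share_no_phone:.1%}")
```

A model trained on data like this almost never sees an empty hand near a face, so it learns to map the hand's position itself to "phone use" — which is why Hansen suggests expanding the training set with more varied images.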
Human Filter Also Fails
Even with the use of artificial intelligence, there is still a human reviewer.
This person must check the image and decide if the violation actually occurred. But, in Hansen’s case, the reviewer also confirmed the error.
For him, this shows that human supervision, although important, does not guarantee that all mistakes will be corrected. And it reinforces the need for more caution in using technologies that directly impact people’s lives.
