Like AI, Honey Bees Can Recognize Human Facial Attributes
It’s pretty rare in the animal kingdom to have a complex neural memory system that can recognize unique facial features, but it is a trait found in Apis mellifera, the honey bee. Honey bees can visit up to 5,000 flowers in a day, so the ability to recognize nectar flowers is mission-critical. Bees also have a mathematical dance, known as the waggle dance, that gives other bees in the hive directions to nectar-producing flowers up to a mile away.
Although face recognition is not part of the honey bee’s daily needs, a 2005 study by Adrian Dyer of Monash University revealed that honey bees can recognize different human faces—with the help of some reward-based training with sugar water. And while Martin Giurfa of the Université de Toulouse, France, argues that the bees are not mentally registering these images as faces but rather perceiving them as strange flowers, the ability to recognize the patterns remains.
Does this sound familiar?
Facial recognition has come under global scrutiny for the bias that can creep into AI models over time. Yet the demand for it keeps growing, especially as our society moves closer to accepting that cameras are involved in every part of our world and to demanding actual video proof of crimes.
I am an optimist and am aligned with John Locke’s philosophy that people are good at their core. This extends into my philosophy of technology. When people worry about big brands or the government having their email content or social posts, I often laugh to myself and wonder what they would think looking at how completely diverse yet holistically boring my communications and social posts are.
A friend and I were driving from California to Arizona for the 2009 Tour de Tucson, and we received a speeding ticket from a camera system on the freeway. We were never pulled over, nor did we know until we got home that we had a speeding infraction. Yes, we were speeding, and yes, we broke the law.
Yes, the picture of my friend driving with me in the passenger seat sawing logs was both frustrating and funny.
A year later on the same journey, the expensive system was deactivated out of public outcry of an invasion of privacy. In one year, the ticketing system generated $87 million in traffic fines. According to the National Highway Traffic Safety Administration, fixed speed cameras reduce injury crashes by 20–25%, and mobile speed cameras reduce injury collisions by 21–51%.
As a society, we want cameras that reduce harm and risk to human lives, yet we want them to look the other way when it comes to our own infractions. We cannot turn off technology that helps protect us simply because we fear it may catch us doing something like speeding.
That being said, we cannot afford or allow for software that produces false positives with bias on racial profiling.
Facial recognition will get better, but we need to benchmark software more often to ensure no bias has crept into the models. Attribution recognition is the next phase of this unstructured-video AI processing journey.
Instead of using the eye, nose, and mouth to recognize a person, attribution recognition collectively measures and tracks smaller details: the distance from the ears to the nose, details found on shoes, tattoos, and the shapes of glasses. It is more like mapping the facets of a diamond than traditional facial recognition, and it is a task best performed by AI.
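To make the idea concrete, here is a minimal sketch of attribute-based matching, assuming each observation has already been reduced to a vector of normalized measurements (the attribute names and values below are hypothetical, chosen only to mirror the examples in the text):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two attribute vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical normalized measurements for two sightings:
# [ear-to-nose distance, glasses-frame width, tattoo-region score, shoe-pattern score]
sighting_1 = [0.62, 0.41, 0.90, 0.33]
sighting_2 = [0.60, 0.43, 0.88, 0.35]

score = cosine_similarity(sighting_1, sighting_2)
is_same_person = score > 0.98  # threshold is an illustrative choice, not a standard
```

In a real system each "facet" would come from a separate detector, and the threshold would be calibrated against a labeled benchmark rather than picked by hand.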
Another benefit AI offers is resource conservation. The cost of processing the amount of video being captured today would be staggering if it were all managed by humans, who can also make mistakes and have biases.
Today, government and legal entities are using facial recognition to track suspects or their movements after the fact and help prosecute perpetrators. Attribution recognition must be further developed, measured for bias, and accepted by society as a way to reduce crime and to cut costs within the legal system.
New AI automation within these systems has also improved at identifying and blocking bias. By benchmarking within the AI model, the automation can test against its own processing to see whether bias has crept into the learning model over time and through exposure to content.
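One common way to run such a benchmark is to compare the model's positive-match rate across demographic groups and watch the gap over successive runs. The sketch below illustrates that check with a demographic-parity gap; the group labels, decisions, and alert threshold are all hypothetical:

```python
def positive_rate(decisions):
    """Fraction of positive (match) decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(groups):
    """Largest difference in positive-match rate between any two groups.

    groups: dict mapping group label -> list of 0/1 model decisions.
    A gap that creeps upward across benchmark runs is one signal that
    bias has entered the model.
    """
    rates = [positive_rate(d) for d in groups.values()]
    return max(rates) - min(rates)

# Hypothetical benchmark decisions per group (1 = flagged as a match)
benchmark_run = {
    "group_a": [1, 0, 1, 0, 1, 0, 0, 0],  # 3 of 8 flagged
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2 of 8 flagged
}

gap = demographic_parity_gap(benchmark_run)
ALERT_THRESHOLD = 0.1  # illustrative tolerance
needs_review = gap > ALERT_THRESHOLD
```

Demographic parity is only one of several fairness metrics; a production benchmark would track several (e.g., false-positive rates per group) and alert on trends, not single runs.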
Bias in nature can save a life (e.g. the vibrant colors of a poison dart frog or the owl-like eyes on the Polyphemus moth’s wings), but bias within AI can ruin a life. While we simply cannot afford to turn off the system because we don’t like being caught breaking the law, what we can and need to do is hold our technology creators, the AI models, and our government accountable if they are going to record us and use technology to track unique individuals.
To support specific cases, employee workloads, and anti-bias efforts in state and federal government, we are working with several AI attribution models for autonomous face and voice recognition, demonstrating a commitment to delivering the best technology possible while benchmarking that technology to avoid bias.