Take On Payments, a blog sponsored by the Retail Payments Risk Forum of the Federal Reserve Bank of Atlanta, is intended to foster dialogue on emerging risks in retail payment systems and enhance collaborative efforts to improve risk detection and mitigation. We encourage your active participation in Take on Payments and look forward to collaborating with you.
Facial Recognition Biometrics: Bruised but Still Standing
So far, 2020 has been a rocky year for facial recognition biometrics. In June, Amazon, Microsoft, and IBM delivered a body blow, announcing they would not sell their facial recognition software to law enforcement agencies. They cited a lack of accuracy, a potential for misuse or abuse, and the absence of federal privacy legislation to safeguard individual rights. Widespread use of facial masks during the COVID pandemic dealt another punch. Masks have generally rendered facial recognition inoperable for many mobile phone applications, and they have also hobbled the Transportation Security Administration's plans to further automate passenger authentication and check-in processes. Will the technology be able to recover and go another round?
Unfortunately, there is a great deal of misinformation and misinterpretation of studies about the technology behind facial recognition and its use, particularly with regard to claims of racial and gender bias. Critics often point to a 2018 study by MIT and Microsoft researchers in which three facial classification algorithms misclassified the gender of light-skinned males at a rate of less than 1 percent but misclassified darker-skinned females at rates as high as 34 percent. Critics of facial biometrics technology have pointed to the research as evidence of bias against various minority groups.
It is important to note that "gender classification" is very different from "facial recognition," although they are often lumped together in the media. In a gender classification process, a digital facial image of an individual is captured and processed through an algorithm that determines whether the image is that of a male or female. Numerous studies have shown that the accuracy of such classification systems is largely based on the database of images used to "train" the algorithm—that is, to teach it to properly classify an image. The smaller the database, the less accurate the classification.
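To make the train-then-classify idea concrete, here is a minimal sketch of a classifier learned from labeled examples. The feature vectors and the nearest-centroid method are hypothetical stand-ins: real gender classification systems use deep networks trained on millions of labeled face images, and, as the studies above show, their accuracy depends heavily on how large and representative that training database is.

```python
import numpy as np

# Hypothetical training data: small sets of face feature vectors with
# known labels (real systems extract features from millions of images).
rng = np.random.default_rng(0)
train_a = rng.normal(loc=0.0, scale=1.0, size=(50, 8))  # label "A"
train_b = rng.normal(loc=1.0, scale=1.0, size=(50, 8))  # label "B"

# "Training" here is just averaging each class into a centroid.
centroid_a = train_a.mean(axis=0)
centroid_b = train_b.mean(axis=0)

def classify(features):
    # Assign the label of whichever class centroid the image sits closer to.
    dist_a = np.linalg.norm(features - centroid_a)
    dist_b = np.linalg.norm(features - centroid_b)
    return "A" if dist_a < dist_b else "B"
```

The key point the sketch illustrates: the classifier's decision boundary is entirely determined by the training examples, so a small or skewed training database shifts the centroids and degrades accuracy for underrepresented groups.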
In a facial recognition process, the digital image captured by the camera is run through a recognition algorithm to determine whether it matches the individual's image in a database or on an identification document. While the top-performing algorithms are highly accurate, studies have found that results can vary with lighting, camera resolution, viewing angle, and other factors. And while most people think facial recognition is a new technology, the casino industry has used it to identify banned players since the 1990s.
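A common way to implement the matching step is to reduce each face image to a numeric "embedding" vector and compare embeddings by similarity. The vectors and threshold below are hypothetical stand-ins for what a production system would extract with a deep network; this is only a sketch of the comparison logic.

```python
import numpy as np

def cosine_similarity(a, b):
    # 1.0 means identical direction; values near 0 mean unrelated faces.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled, threshold=0.8):
    # Declare a match when the two embeddings are sufficiently similar.
    return cosine_similarity(probe, enrolled) >= threshold

# Hypothetical embeddings: the enrolled image, a new capture of the same
# person (slightly different lighting/angle), and a different person.
enrolled = np.array([0.9, 0.1, 0.4, 0.2])
same_person = np.array([0.85, 0.15, 0.45, 0.18])
other_person = np.array([0.1, 0.9, 0.2, 0.7])

print(verify(same_person, enrolled))   # True  (match)
print(verify(other_person, enrolled))  # False (no match)
```

The threshold is where the factors mentioned above come in: poor lighting or an off-angle capture lowers the similarity score, so system designers must tune the threshold to balance false accepts against false rejects.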
In a future post, I will discuss the findings of the National Institute of Standards and Technology in its 2020 evaluation of more than 200 facial recognition algorithms. The promising news is that the top performing algorithms showed no discernible bias.
While there are certainly privacy and other issues connected to facial recognition and other biometric technologies, I believe objective education and discussions can address these issues. So I think the technology is not on the ropes but is ready to go another couple of rounds.