Facial Recognition Technology
Contributors
Emani Fung is a senior at Columbia University from West Orange, New Jersey. As a student of Economics and Political Science, she aims to use research as a tool to improve policy outcomes at every level of society. Her recent experience includes an internship in the US House of Representatives in the legislative office of Rep. Mikie Sherrill, as well as internships in portfolio analytics and corporate sustainability research with a financial services company.
Key Things to Know
Facial Recognition Technology (FRT) utilizes a branch of AI called computer vision, which enables a computer to detect and identify faces. It is most commonly used in mobile phone face unlock, smart door locks, and, perhaps more insidiously, in law enforcement.
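To make the mechanics concrete, here is a minimal sketch of a one-to-many face search, the core operation behind both phone face unlock and law enforcement identification. It uses the open-source face_recognition Python library; the gallery, file names, and threshold are illustrative assumptions, not any agency's actual pipeline.

```python
# Minimal sketch of one-to-many face identification with the open-source
# face_recognition library. Gallery and file names are hypothetical.
import face_recognition

# Hypothetical gallery of already-enrolled individuals.
gallery = {"person_001": "person_001.jpg", "person_002": "person_002.jpg"}

known_ids, known_encodings = [], []
for person_id, path in gallery.items():
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)  # 128-d vector per detected face
    if encodings:  # skip images where no face was detected
        known_ids.append(person_id)
        known_encodings.append(encodings[0])

# A probe image, e.g. a frame from surveillance footage.
probe = face_recognition.load_image_file("probe.jpg")

for encoding in face_recognition.face_encodings(probe):
    # Euclidean distance to every enrolled face; lower means more similar.
    distances = face_recognition.face_distance(known_encodings, encoding)
    best = distances.argmin()
    if distances[best] <= 0.6:  # 0.6 is the library's default match tolerance
        print(f"Candidate match: {known_ids[best]} (distance {distances[best]:.2f})")
    else:
        print("No match below threshold")
```

Note that the output is a nearest neighbor under a similarity threshold, not a certainty, which is why the error modes discussed below matter.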
Like other biometric technologies, FRT is frequently used by federal, state, and local law enforcement agencies to identify criminal suspects, victims, and persons of interest. Typically, these systems pool images from mugshots, driver’s licenses, police body cameras, public surveillance footage, and even social media to compile extensive image datasets of civilian faces.
According to one study by the Government Accountability Office, twenty of the forty-two surveyed federal agencies that employ law enforcement officers make use of FRT.
Another study by the Congressional Research Service found the Federal Bureau of Investigation to be a leading federal user of FRT, applying it most frequently to the surveillance of US persons.
FRT is also used by the Department of Homeland Security, including Customs and Border Protection and other border enforcement agencies, to identify and verify the identities of migrants entering the US.
Relevant Risks
Although FRT may provide a useful shortcut for law enforcement agencies, with the capacity to simulate human intelligence also comes the ability to replicate human error. Many advocacy groups and government officials at the federal, state, and local levels have criticized the use of FRT due to the risk of 1) furthering inequalities, 2) misidentification, 3) privacy violations, and 4) a lack of transparency.
Furthering Inequalities
FRT is frequently criticized for its tendency to promote ‘algorithmic discrimination.’ The White House defines algorithmic discrimination as the ability of an automated system to “contribute to unjustified different treatment or impacts disfavoring people based on their actual or perceived race, color, ethnicity, sex” or other legally protected classification. This discrimination often manifests in the use of these technologies on poorer communities and communities of color that already experience disproportionate police surveillance. These communities are often subject to more frequent stops and arrests based on lower standards of reasonable suspicion, higher incarceration rates, and other forms of discrimination that cost people jobs, educational opportunities, livelihoods, and lives. Some advocacy groups, like the ACLU, also fear that, for people of color, FRT can lead to the denial of access to essential needs and services based on mere facial scans. The Council on American-Islamic Relations raised a similar concern, warning that “If we let face recognition spread, we will see more deportations, more unjust arrests, and mass violations of civil rights and liberties.”
Misidentification
To make matters worse, some reports have found another problem with FRT: it does not always work. According to one 2018 study by the National Institute of Standards and Technology (NIST), even algorithms developed by the leading facial recognition vendors are “not close” to achieving perfect identification rates, despite improvements in accuracy over the years. Not only is the technology imperfect; its error rates are also disproportionately higher for the communities of color already marginalized by the criminal justice system. One NIST study found that Black Americans were up to 100 times more likely to be misidentified than white men. Another study, conducted by the ACLU using Amazon’s Rekognition software, a common tool in many state and local agencies, found that the system incorrectly matched 28 members of Congress with mugshot images; the false matches were disproportionately people of color. These discrepancies are likely due in part to FRT’s reliance on mugshot databases, in which Black and brown faces are overrepresented because of racial discrimination in the criminal justice system. Law enforcement use of FRT thus not only subjects people of color to more frequent, often unwarranted and invasive surveillance, but also puts them at greater risk of being falsely identified for crimes they did not commit.
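The disparities NIST and the ACLU describe are typically quantified as false match rates: how often the system declares two different people to be the same person. The following is a simple sketch of how such rates might be computed per demographic group; the records, field names, and group labels are hypothetical.

```python
# Sketch: per-group false match rates from labeled evaluation records.
# All records and group names are hypothetical.
from collections import defaultdict

# Each record: (group, system_declared_match, pair_is_same_person)
records = [
    ("group_a", True, False),   # a false match
    ("group_a", False, False),
    ("group_a", False, False),
    ("group_b", True, True),    # a correct match
    ("group_b", True, False),   # a false match
    ("group_b", False, False),
]

false_matches = defaultdict(int)
non_mated_trials = defaultdict(int)
for group, declared_match, same_person in records:
    if not same_person:              # only non-mated pairs can produce false matches
        non_mated_trials[group] += 1
        if declared_match:
            false_matches[group] += 1

for group in sorted(non_mated_trials):
    fmr = false_matches[group] / non_mated_trials[group]
    print(f"{group}: false match rate = {fmr:.1%}")
```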
Privacy Violations
Most people whose faces appear in FRT databases are unaware that these systems exist at all. The technologies attach personal information to the faces of private citizens, granting law enforcement agencies extensive access to private civilian data. This sensitive data is often loosely protected, endangering privacy rights. At times, police forces have even used FRT to identify nonviolent protesters, in violation of First Amendment freedoms of expression, association, and assembly.
Lack of Transparency
One final problem with FRT is that it is trained on data that can shift over time in unexpected and significant ways, which can degrade system functionality and trustworthiness. Compounding this is what is termed the ‘black-box problem’: a user’s inability to trace the steps of a machine’s decision-making. Not unlike the human brain, an AI system often cannot account for the inputs that informed a given decision. If a system malfunctions, as in the case of a false facial identification, it can therefore be difficult to determine why the error occurred, how best to address it, and whom to hold accountable.
Previous Legislation, Bills, and Frameworks
Various bills and government frameworks have attempted to address the previously defined risks.
In Congress:
The House of Representatives
The George Floyd Justice in Policing Act of 2020 included the Federal Police Camera and Accountability Act and the Police CAMERA Act, which would restrict the use of facial recognition in police officer body cameras
The Facial Recognition and Biometric Technology Moratorium Act of 2020 proposed a moratorium on federal use of biometric surveillance technologies and would condition certain federal grants on state and local moratoriums
The FACE Protection Act of 2019 would require a federal court authorization determining that there is ‘probable cause’ for the use of FRT before any such technology could be used at the federal level
The Facial Recognition Act of 2022 called for extensive regulation of FRT at the federal level
The Senate
The Commercial Facial Recognition Privacy Act of 2019 proposed regulations on the commercial use of FRT data
The Ethical Use of Facial Recognition Act proposed restricting the use and purchase of FRT data at the federal, state, and local levels
The Facial Recognition Technology Warrant Act of 2019 would require federal court authorization before federal agencies could use FRT for ongoing surveillance, similar to the House’s FACE Protection Act
Other Congressional Proposals
Calls for FRT regulation have been made at several House Oversight Committee hearings and by notable members of Congress such as Senate Majority Leader Chuck Schumer
In the Executive Branch:
The White House
The White House notably issued a Blueprint for an AI Bill of Rights in 2022
Executive Agencies
Separate frameworks for ethical AI use have been put forth by federal agencies such as the Department of Defense, the Department of Energy, NIST, and the US Intelligence Community, among others
State, Local, and International Governments
In the US
FRT moratoriums have been both proposed and enacted across states and municipalities around the country
In Europe
The EU has regulated data privacy since 2016 with the passage of the General Data Protection Regulation (GDPR)
This regulation helped inform the EU’s AI Act, which is set to become the world’s first comprehensive legal framework for AI
Regulation Proposals
To address the aforementioned risks, I propose five key regulatory instruments to be implemented across all federal agencies in order to promote a culture of fairness and radical transparency around the use of FRT. I also propose that any state, local, or tribal government requesting federal grants be required to implement these regulations.
Mandated Human Review
All agencies using FRT must ensure that each positive identification or identity verification conducted by an automated system is assessed by a trained human reviewer, in order to protect against algorithmic bias or inaccuracy and to improve agency accountability in the event of a misidentification.
Internal Performance Reviews and Impact Assessments
All agencies using FRT must implement a yearly internal performance evaluation to assess the accuracy and effectiveness of their systems and to ensure parity in performance across subgroups of race, ethnicity, age, and gender. If a statistically significant discrepancy is found, all system use must be halted until the discrepancy is addressed. Results from each performance evaluation must be submitted annually to Congress.
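As a sketch of what such a parity check could look like in practice, the following compares false match counts between two subgroups with a two-proportion z-test, using the statsmodels library. The counts and the 0.05 threshold are illustrative assumptions; the actual test and threshold would be policy choices.

```python
# Sketch of the proposed parity check: is the false match rate
# significantly different between two subgroups? Counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

false_matches = [38, 12]          # false matches observed in each subgroup
non_mated_trials = [5000, 5000]   # non-mated comparison trials per subgroup

z_stat, p_value = proportions_ztest(false_matches, non_mated_trials)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:  # the significance threshold itself is a policy choice
    print("Statistically significant discrepancy: halt system use pending review.")
else:
    print("No significant discrepancy detected at this threshold.")
```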
Public Statements and Disclosures
All findings from internal performance reviews and impact assessments must be made available to the public, along with detailed documentation of model training data and other inputs affecting the software’s decision-making, so as to increase transparency and encourage FRT developers to prioritize model comprehensibility. Additionally, a positive FRT match cannot be the sole criterion for any decision made by an agency; a detailed internal report of all evidence used in agency decision-making must be required for all matters involving the use of FRT.
Federal Court Authorizations
All agencies must be prohibited from using FRT in the ongoing surveillance of one or more individuals without authorization by a federal court order. Such authorizations are to be granted only if it is determined that the agency has probable cause for the use of FRT. Agencies are to be prohibited from sharing any information obtained under these circumstances with any other agency or entity that has not been similarly authorized by a federal court.
Enforced Data Protection
Information stored in agency FRT software must be rigorously protected. For long-term storage, face templates should be stored under numeric labels rather than names, minimizing the information connecting stored records back to individual civilians.
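A minimal sketch of this separation, assuming a simple in-memory store (the table names and enroll function are illustrative): face templates are keyed by opaque numeric labels, while the label-to-identity mapping lives in a separate, more tightly controlled store.

```python
# Sketch of pseudonymized template storage: the template store holds no
# names, and resolving a label to a person requires a separate protected
# store. All names here are illustrative.
import secrets

template_store = {}   # numeric label -> face encoding (no personal data)
identity_store = {}   # numeric label -> name; kept separate and access-controlled

def enroll(name, encoding):
    label = secrets.randbelow(10**12)  # random label, not derived from identity
    template_store[label] = encoding
    identity_store[label] = name
    return label

# Stand-in for a real 128-dimensional face encoding.
label = enroll("Jane Doe", [0.12, -0.07, 0.33])
print(f"Template stored under label {label}; the template store contains no names.")
```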
All of these policies should help construct an environment around FRT usage that prioritizes fairness, transparency, and accountability. If policymakers begin advocating for these principles at the federal level, then they can encourage other private, state, municipal, and even international entities to do the same.
Further Reading
Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots
California blocks bill that could’ve led to a facial recognition police-state
California: Tell Governor Newsom to Stop Face Surveillance on Police Body Cams
Federal Law Enforcement Use of Facial Recognition Technology
NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software
Ongoing Face Recognition Vendor Test (FRVT) Part 2: Identification
Police are using protests as an excuse to unleash new surveillance tech