Government watchdog finds little oversight over the use of facial recognition technology by US agencies
20 U.S. agencies are using facial recognition with a near-total lack of accountability for how the systems are deployed
More than half of the 20 U.S. federal agencies that have used facial recognition technology in the last several years could not tell congressional investigators which systems they were using and had not assessed the privacy risks.
The revelations were published in a sweeping new report by the U.S. Government Accountability Office, a federal watchdog agency, which examined how a network of federal agencies, including the Department of Veterans Affairs, the FBI and the Internal Revenue Service, used controversial facial recognition technologies with little to no oversight or regulation.
Andrew Ferguson, a law professor at the American University Washington College of Law specializing in privacy, civil rights, and surveillance, called the first-of-its-kind study a “red flag” that exposed “how ad hoc and happenstance the adoption of all of these technologies are.”
The report surveyed 42 federal agencies that employ law enforcement officials about their use of facial recognition systems from January 2015 through March 2020. Nearly half of those surveyed — 20 — reported using the technology, investigators found. They included U.S. Customs and Border Protection, the Drug Enforcement Administration and the U.S. Secret Service, among others, as well as the U.S. Fish and Wildlife Service and the U.S. Postal Inspection Service.
Ten agencies reported using the controversial facial recognition start-up Clearview AI, which the New York Times called the “secretive company that might end privacy as we know it.”
Facial recognition has long been criticized by rights groups for misidentifying minorities and women at a higher rate. In April, a man sued the city of Detroit after he was falsely arrested — his driver’s license photo was erroneously matched to surveillance video of a shoplifter.
According to the GAO report, the majority of respondents said they use private companies to conduct facial recognition searches on their behalf. Thirteen of the agencies using third-party vendors admitted they did not know which privately owned facial recognition systems their employees were using — a revelation that Ferguson said “speaks of the dangers of an unregulated landscape where agencies can just get an idea and go with it without anyone watching.”
Of particular concern to Ferguson was numerous agencies’ acknowledgment that they used facial recognition technology during last summer’s protests against police brutality after the killing of George Floyd. The report revealed that six federal agencies — including the U.S. Capitol Police, Federal Bureau of Investigation and U.S. Marshals Service — used the technology on people who took part in the protests and were suspected of criminal activity. “All six agencies reported that these searches were on images of individuals suspected of violating the law,” the GAO said.
Ferguson said the use of facial recognition during protests highlighted the need for greater accountability measures. “One of the most shocking and troubling aspects of the report was the admission that this was used for First Amendment-protected activities,” he said.