Shekhar Yadav/The India Today Group via Getty Images

Indian police use facial recognition to persecute Muslims and other marginalized communities

After the 2020 riots in northeast Delhi, hundreds of arrests were made on the basis of surveillance footage. But the tech is dubious and reflects the biases and prejudices of the government

On a humid afternoon in July, Mohammad Shahid can barely be heard over the noise in the bylanes of Jaffrabad as life continues undimmed. Occasionally Shahid turns his face to the wall. He is telling me about the 17 months that he spent in a Delhi jail before he was eventually charged with participating in the riots in the northeast of the city in February 2020, while then U.S. President Donald Trump was on a two-day visit to India.  

So incensed were the Delhi Police by this coincidence that it noted in the chargesheet that there “could not have been a greater international embarrassment for the Government of India than to have communal riots raging in the national capital while a visit by the U.S. President was underway.” The riots began because supporters of the government’s arguably Islamophobic Citizenship Amendment Act (CAA) — which essentially enables a path to Indian citizenship for illegal immigrants from Pakistan, Bangladesh and Afghanistan, as long as they are not Muslim — attacked protestors.

Many observers alleged that the police aided and abetted the Hindutva mob. Fifty-three people, mostly Muslim, were killed in the violence and many hundreds were injured and displaced. Weeks after the riots, bodies were still being found in open drains.   

Shahid has been home for about a year now, waiting for his trial to begin. He is one of 2,456 people who have been arrested, though nearly half have yet to be charged with any crime. 

Sitting on a mattress in his apartment (one room and a tiny kitchen), Shahid describes his time in prison as a “blur of pain and panic.” He was eventually given bail after he suffered a stroke that left him partially paralyzed. Shahid was also wounded in the riots, shot in the shoulder, a wound that festered and rotted in jail.

His health has improved since he’s been home but he’s still weak and traumatized. While he talks to me, his eyes sometimes filling with tears, his children play games on their phones. The electronic bleeps from the phones, the lazy whir of the ceiling fan, and the voices in the street provide a surreal counterpoint as Shahid tells his story.

“Sochne samajhne ki taakat bilkul khatam ho gayi hai,” he says. He no longer has the will or strength to make sense of what happened to him.

Over two years after the riots, the Delhi police continue to make arrests. Just last week, police arrested a man they claim played a role in the death of a police constable during the riots. Two policemen were killed and around 50 injured. A police spokesperson told reporters that many arrests are being made through facial recognition technology used on images from over 100 closed circuit cameras in the area.

The Delhi police continues to conflate the protests against the Citizenship Amendment Act with “radical” Islamist groups and ideologies. Last month, the government of India banned a Muslim group called the Popular Front of India. Delhi police argued that the national ban vindicated its claims that the PFI had funded the anti-Citizenship Act protests as part of a conspiracy to foment communal violence. But the allegations remain unfounded and appear to portray Muslims en masse as having an agenda. 

“This is about injustice,” says Shaziya, Mohammad Shahid’s wife. “We’ve been troubled ever since the day he was shot. I haven’t spent a day in peace in the last two years.” Shahid faces charges under 16 sections of the Indian Penal Code, including rioting and murder. And, his lawyers say, the police have little to go on except the surveillance technology they have used to identify people like Shahid. “Extensive use of technology in identification and arrest was the hallmark of investigation” into the riots, the Delhi Police wrote in its 2020 annual report.

A drone hovers over a Delhi neighborhood during communal riots in 2020 that left 53 dead and hundreds injured. Photo by Sanchit Khanna/Hindustan Times

Last year, the U.K.-based cybersecurity website Comparitech said Delhi was the most watched city in the world in 2021, with 1,826 CCTV cameras per square mile. Cities in China have since taken over, but Delhi remains among the ten most “surveilled” cities outside China, alongside the likes of Singapore, London, New York and Los Angeles.

This has been hailed as an achievement by authorities who have promised the installation of more cameras. And in the last few years, CCTV cameras and facial recognition technology have been used not just to police streets and public spaces but also in public school classrooms.

Experts say weak surveillance and data protection regulations mean that this emphasis on CCTV cameras is likely to cause more harm than good.

“An increase in the efficiency of policing is not always good news for all people,” says Jai Vipra, a research fellow at the Center for Applied Law and Technology Research, an Indian think tank. “Because,” Vipra adds, “you have to realize how differently groups of people in India experience policing. Some groups are victimized by the police, as is evident from our very skewed prisoner statistics.”

According to recent government data from the year 2020, the majority of Indians in jail are from marginalized and minority communities. Most are either illiterate or have not finished high school. In 2019, almost half of the 12,000 police personnel interviewed for a study felt that Muslims were naturally prone to committing crimes and about 60% of police felt similarly about migrants. 

Vipra found in her research that the use of facial recognition technology in Delhi was likely to disproportionately affect Muslims because of both police prejudice and the over-policing and over-surveillance of areas with a significant Muslim population. “Crime exists in our socio-political reality,” Vipra told me, “and to add technology to it can be really, really dangerous if not done with a lot of care.”

A few months after the riots in northeast Delhi, a fact-finding committee from the Delhi Minorities Commission published a report that described the police as complicit in violence targeted towards Muslims, and in at least one instance as having attacked Muslims themselves. In several court proceedings since, judges have remarked on the police’s shoddy standards of investigation and noted that police incompetence (or malevolence) has meant that several of the accused continue to languish in jails without bail or hope of a speedy trial. Human Rights Watch has said that the authorities in Delhi are pursuing a political vendetta, that they should be “impartially investigating allegations that BJP leaders incited violence,” and that they should “stop using these investigations to silence critics of the government.”

But the Delhi police deny any allegations of bias. The Delhi police commissioner at the time of the riots, SN Shrivastava, said that 231 people were arrested on the basis of CCTV or video footage and that 137 of those people were identified through the use of facial recognition technology. The police had recovered and analyzed a total of 945 video recordings and “many rioters were identified on the basis of the clothes they were wearing,” he added.

In August, the Internet Freedom Foundation, an organization that advocates for digital rights, published the results of a Right to Information request it had filed, which revealed that the “accuracy of their FRT depends on light conditions, distance, and angle of face.” According to the police, all “matches above 80% similarity are treated as positive results.” The Internet Freedom Foundation said no reason was given for the arbitrary 80% threshold, or for why results under 80% were classified as “false positives” rather than as negatives, a labeling that suggests they could still be used as evidence alongside further corroboration.
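The policy described above — a fixed similarity cutoff, with below-threshold scores relabeled rather than discarded — can be sketched in a few lines of Python. This is an illustrative assumption of how such a rule behaves, not the Delhi Police’s actual system; the threshold value, scores and function name are hypothetical.

```python
# Illustrative sketch of a fixed-threshold face-match policy
# (hypothetical; not the actual Delhi Police FRT system).

THRESHOLD = 0.80  # the arbitrary cutoff cited in the RTI response

def classify_match(similarity: float, threshold: float = THRESHOLD) -> str:
    """Label a similarity score the way the reported policy does:
    at or above the cutoff is a 'positive'; below it is not treated
    as a non-match but relabeled a 'false positive'."""
    return "positive" if similarity >= threshold else "false positive"

# Note that no score is ever labeled a plain negative, so even a
# weak match can, in principle, be kept for "corroboration".
scores = [0.95, 0.81, 0.79, 0.40]
print([classify_match(s) for s in scores])
# ['positive', 'positive', 'false positive', 'false positive']
```

The point the sketch makes is the one digital-rights advocates raise: the cutoff is a policy choice, and because sub-threshold scores are never classed as negatives, the system has no category for “this is not the person.”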

“In light of this information,” the Internet Freedom Foundation tweeted, “facial recognition tech surveillance WILL lead to human rights violations.” In August 2020, just months after the northeast Delhi riots, a court in the United Kingdom recognized that using facial recognition technology in such circumstances as the peaceful attendance of a protest march is a human rights violation.

Jaffrabad was ground zero in the 2020 Delhi riots. Police arrested over 2,000 protestors. But observers have argued that the police investigation has reflected the Narendra Modi government’s Islamophobia. Photo by Raj K Raj/Hindustan Times

On the afternoon of February 25, 2020, Shahid stepped out of his small apartment to join his wife and children who were visiting relatives nearby. That evening, Shahid was supposed to catch a train to Amritsar in Punjab where he worked. The riots that engulfed his neighborhood had already begun two days before, though the streets around his house, Shahid said, appeared relatively peaceful. 

Just as Shahid was trying to hail a motorized rickshaw, he says, a crowd came running down the street. Some people were throwing stones at the police and the police were firing tear gas shells into the crowd. Shahid felt something hit him on the back of his right shoulder. He fell to the ground and the people around him pulled his prone body out of the way. Shahid had been shot.

Like many others injured that day, Shahid was rushed to hospital by a stranger on a motorcycle. Another young man was brought into the same hospital. Weeks later, Shahid would learn that the man’s name was Aman and that Shahid was being charged for his murder. 

In early April, when his wounds from the gunshot still required three bandage changes a day, several Delhi police officers came to Shahid’s one-room Jaffrabad dwelling. At the time, a harsh nationwide lockdown was in place to combat the coronavirus. The police shoved Shahid into one of four vehicles and took him to the station to be interrogated.

The police told Shahid they were going to be searching video footage for his face. “If I had picked up a single stone,” Shahid told me, “I might have been afraid. But I did no such thing.” He says he told the police he was blameless. “I was a casualty myself,” he recalls telling them.

Shahid says the police showed him a dozen or so pieces of footage. They asked him to identify himself and others. He said he couldn’t. “At least tell us what you were wearing,” the police reportedly said. Shahid claims he pointed to a man on the video wearing all black and said he had worn similar clothes though not the same headgear. The police, Shahid says, circled the man and zoomed in. His face could not be clearly seen, nor was the man engaging in any violence. 

The next day, though, Shahid was put in a cell. The pixelated image of a figure in black clothes was attached to Shahid’s chargesheet as evidence that he was involved in rioting.

Shahid’s lawyer, Bilal Anwar Khan, told me there was no information made available to him about the source or the authenticity of the video. No witnesses were asked to identify Shahid. “It is a very shallow piece of evidence,” Khan said. “It’s purely conjecture. That they have taken such an approach to identify him is contrary to the law.”

But it is the kind of evidence that the Delhi police continues to use to justify the arrests of those they say were at the scene of the riots, even if not actually rioting.

Last year, the Chief Minister of Delhi, Arvind Kejriwal, announced his plans to install 140,000 more CCTV cameras across the city, including in schools and in gated neighborhoods and apartment complexes. Kejriwal’s party was only founded in November 2012, but has already been elected twice by resounding margins in Delhi. In February, the party easily won the state election in Punjab, giving it control of two states and making the party a small but significant player in national politics. Its relationship with the BJP, particularly in Delhi, is adversarial and marked by a vicious pettiness, in word and deed.

The central government, which controls the Delhi police, is also installing CCTV cameras around the city. It is using funds from the Safe City project which was launched in eight cities in 2018 and included several tech measures as “minimum desirable components” to ensure women’s safety. The government allocated a little under half a billion dollars to the project from the Nirbhaya Fund, set up in 2013 following the horrifying rape and murder of a 23-year-old woman in Delhi which received worldwide attention. A 2021 report by Oxfam noted that the fund was being predominantly allocated to services that don’t specifically help women.

“The entire perspective in which CCTV cameras are the answer to women’s safety is flawed,” says Kalpana Viswanath, co-founder of Safetipin, a civil society organization focused on making public spaces safer for women. “The CCTV camera is useful for the police, plain and simple,” she told me. “We don’t need fear to drive our lives. We need freedom to drive our lives.” 

Several Indian cities are among the most surveilled in the world. In 2021, it was estimated that Delhi had more CCTV cameras per square mile than any other city. Photo: Anshuman Poyrekar/Hindustan Times via Getty Images

Privacy experts in India have also raised concerns about “function creep,” wherein technology acquired for one purpose ends up being used mostly for an entirely different one. In 2017, a Delhi High Court order directed the government and police to work together to promptly procure facial recognition technology that would help trace and rescue missing children in the city.

A year later when the court sought to know why the technology had been used in less than one percent of cases, the police said that the technology was so poor that it was often unable to distinguish between boys and girls.

Even if the technology has been upgraded since then, there is little evidence to show that the facial recognition technology used by police in India is now particularly accurate. Anushka Jain, a lawyer at the Internet Freedom Foundation who specializes in transparency issues, told me that given how facial recognition technology is used in India, “an entire community could end up being targeted.” In the United States, for instance, the technology has been used in the wrongful arrests of several Black men.

By some estimates, there are at least 125 government-authorized facial recognition systems in use in India today. “I don’t think there is any situation in which law enforcement’s use of this technology can be justified because it is always going to lead to violation of fundamental rights,” Anushka Jain told me. But if the authorities are going to insist on its use, she added, people need to have legal recourse when they are wrongly targeted and faced with criminal action.

“That surveillance falls differently on different people is a fact,” says Vidushi Marda, senior program officer at Article 19, the international human rights organization. “If you are a Dalit woman in India, for instance, the nature and extent of the surveillance you are under are far greater than for an upper caste Hindu man. There is a disproportionate impact on communities and groups that have been historically marginalized.”

Marda conducted an ethnographic study on Delhi’s Crime Mapping and Predictive System (CMAPS) and found, for instance, that inputs fed into the system reflected institutional assumptions about poorer immigrants and those living in poorer parts of the city being “de facto criminals” and also assumptions that emergency calls made from those areas would contain exaggeration and misreporting. 

“There are so many subjective human decisions that are made before the technology even comes into play,” she says. “All of these assumptions are not written in any manual, they are imbibed into the everyday act of policing. The technology, at the very minimum, will just embed those ideas.”

A recent project report by the National Police Mission proposed predictive policing for women’s safety in urban areas across India. This, even though in the European Union, after considerable evidence that predictive policing reinforces existing discrimination against some communities, rapporteurs have recommended a ban as part of the upcoming Artificial Intelligence Act. But in India it is clear that facial recognition technology and predictive policing — introduced as measures to protect women and children — are useful tools for a government that brooks little political opposition and is increasingly focused on control.

In Jaffrabad, Shahid is an example of the system’s collateral damage. He tells me he is focused now on piecing the shards of his broken life back together. A life that now includes mandatory court and hospital visits. “Even if I go down the road,” he told me, “I suddenly start feeling breathless and become drenched in sweat.”

So much has changed, his wife says.  “And it’s going to be a long time before our lives get any better.”
