
Face Recognition Is Not the Enemy

By Bill Bratton, as seen in the NY Daily News


Face recognition fearmongering is in full swing, with at least a small contingent of Americans breathlessly warning that its use by law enforcement portends the end of privacy as we know it.


Slow down. This technology is a great advance over the way we used to identify suspects: eyewitness memory. It promises to identify the guilty with greater accuracy and to exonerate the innocent. And though we should guard against private-sector abuses, we should welcome its responsible use by police and prosecutors.


The state of the debate was nicely captured in a recent New York Times front-page, above-the-fold story headlined “The Secretive Company That Might End Privacy as We Know It.”

The article was about a company called Clearview AI, which has collected millions or even billions of images by scraping the internet’s social media platforms and applying facial recognition software to search them. So far, the product has been marketed to law enforcement.


While the story is laced with success stories from police agencies that tested the product on cold cases that were almost instantly solved by the tool, it is also peppered with ominous warnings and “what ifs.” Here is one example: “The weaponization possibilities of this are endless,” said Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University. “Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.”


Point taken, but law enforcement officers already have all the tools, from DMV records and photos to fingerprint and DNA databases, to do “rogue” searches now. One reason that rarely happens is that most cops are honest. Another is that the tools used by law enforcement are audited, and the penalties for misusing them range from discipline to firing or arrest.


We shouldn’t dismiss all worries about facial recognition. When you get past the loud and menacing parade of horribles, the Times piece gives us a number of important issues to think about.


Let’s start with the moving target of privacy. How do we define privacy in a brave new world where a generation has become quite accustomed to living out loud? People young and old collect and even compete for followers on Twitter and Instagram. Kids post incessantly on TikTok, and Facebook (now considered the Land of the Grandparents) is where you live and die by how many friends you’ve signed up and how many likes you rack up.


A more pointed question may be this: In a world where so many of us use social media to yell “look at me!”, who is responsible for individual privacy? Has the person who posts thousands of selfies on publicly accessible internet platforms given up some part of their privacy? Or is the problem with Google or a company like Clearview AI picking up those pictures to make them searchable?


For now at least, personal responsibility seems to play only a minor role in this conversation. Almost no one who posts every waking moment, meal and jet-ski ride reads the fine print from the provider about how those posts can be seen or used. Not many more adjust their own privacy settings to protect those images. We have met the enemy, and it is us.

The speculative handwringing in the Times piece goes beyond the supposed threat of the police misusing this tool: “Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable — and his or her home address would be only a few clicks away. It would herald the end of public anonymity.”


That is a bit of a leap. How would these “strangers” be able to listen in on sensitive conversations, take photos of the participants and “know personal secrets”? Ironically, this sounds a lot more like something reporters would do than an unspecified “stranger” or the police.


Make no mistake: The broad issues of private and corporate use of facial recognition technology are complicated legally and socially. I don’t want to suggest that we should blithely waltz into a world in which corporations and people can identify anyone they see and therefore know all kinds of things about them.


That said, I want to focus for a moment on the value of these tools to law enforcement, because the use of the technology is not for profit, but for public safety. The benefits are overwhelming. And yet, police seem to be the first target of privacy advocates.

Some cities, among them San Francisco, Oakland and Berkeley on the West Coast and Somerville, Mass., on the East, have banned the use of facial recognition tools by police. Congress is holding hearings to consider federal legislation to do the same nationally. Privacy advocates pushing these bans stoke fears of all the terrible things that might happen if police continue to use these tools.


The Times quotes Woodrow Hartzog, a law professor at Northeastern University, as concluding: “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”


But the Times story also cites several police officers who used the Clearview product to generate productive leads and then very quickly obtained evidence beyond those matches to satisfy prosecutors and courts that an arrest was warranted. I know from my time as New York City police commissioner that the NYPD used facial recognition technology to solve murders, rapes and assaults that might never have been solved absent this technology.

Here is the important part: Our facial recognition tools ran only against the mugshots of people already arrested by the police. To be clear, our tool did not run against social media, or even against driver’s license photos.


Also, no one has ever been arrested by the NYPD based on facial recognition alone. It is just a tool that leads to clues, like so many other pieces of the puzzle. Detectives had to take that clue as a starting point and search for hard evidence.


A common example: A security camera captures a photo of someone committing a robbery. The NYPD facial recognition tool would run that picture against the mugshots in our own files. If there was a match, a detective would examine it. That detective might look at the crimes that put that person in the NYPD’s files.

If the crime was a robbery with a similar m.o., the detective might look further. Does the person live near or frequent the place of occurrence? Does the crime match past crimes committed by the individual? Is there a picture on the suspect’s social media page of him on the day of the robbery wearing the same distinctive black and yellow coat? Does he have the same tattoo on the left side of his neck as the perpetrator in the original security camera shot?


Detectives will still have to look for evidence to be presented to a prosecutor and a court, but the lead is a good place to start.


One more thing: The advocates will tell you that because the technology is not conclusive like DNA, it will result in the arrest of innocent people. Consider this: The single largest contributor to the conviction of innocent people today is mistaken eyewitness identification; it has played a role in more than 70% of the wrongful convictions later overturned by DNA evidence.


These IDs are made by people who had never seen the suspect before, who were in fear and under tremendous pressure during the incident, and who then picked a photo or pointed to someone in a lineup.


With that as the primary evidence, it is not clear whom the victim really remembers. Does the victim remember the suspect from the night of the robbery, or the face from the mugshot they picked without being a hundred percent certain, the one that is now seared into their memory? Victims are human.


Facial recognition is based on an image matched by a dispassionate algorithm. The computer has no investment in the outcome. Unlike the human witness, whose identification can lead to an immediate arrest, facial recognition is a layer of science, or process, that other evidence can be built upon.


Privacy advocates and civil libertarians are right to raise questions about how the technology is used, but they don’t always balance the tangible good these tools deliver against managing the risks they see.


The NYPD has successfully used facial recognition to identify the bodies of murder victims and of missing persons who died of natural causes. We have used it to identify living people with dementia who didn’t know their own last names. Responsible use of facial recognition technology by the NYPD has helped bring closure not only to victims but also to their families.


I’m not entirely sure why the benefit to victims and justice is not included in the risk-reward calculus of privacy advocates, except that it might be bad salesmanship if you are looking to ban police from using this tool. But I do know this: Police departments that use facial recognition need to have guidelines in place before they deploy these tools.
