As revealed by New York Times reporter Kashmir Hill, hundreds of police departments and law enforcement agencies across the US have begun signing up to access the services of Clearview AI.
Clearview is a company that provides specialized facial recognition services, built on a database of three billion images scraped from across the internet, including sites such as Facebook and YouTube, in what may well be a breach of those sites’ terms of service (Twitter explicitly forbids use of its images for facial recognition).
From this database, Clearview is then able to run astonishingly efficient searches on any given image of an individual’s face, finding matching images (and their source pages) from across the internet.
These may include social media accounts, where people may have given away far more information about themselves than just their appearance (job, address, hobbies, hours of work, who their friends and family are, and so on).
Now, this may sound pretty terrifying, but there are good arguments for why it may be an incredible tool for law enforcement. The Times’ report showed that the use of Clearview had already led to arrests of suspects in shoplifting, identity theft, credit card fraud, murder, and child sexual exploitation cases, in one instance producing an arrest within twenty minutes. This technology could revolutionize detective work and policing. But it could also transform our relationship with the state – and not in a good way.
And that is precisely why companies and governments have fought shy of using such technology. Google has suspended research in the area and is calling for a temporary ban, while San Francisco’s police department is specifically forbidden from using facial recognition software.
Even in the last few days, the EU has mooted the possibility of a five-year moratorium on facial recognition, in spite of (or perhaps because of) the inexorable advance of the technology.
Its use by police in the UK has already led to debates about its legality, and about how fit for purpose other new hi-tech tools of detection are when it comes to protecting public freedoms. Seventy years after George Orwell’s death, now might not be the most appropriate time to usher in an all-seeing Big Brother state (or would it…?).
Law enforcement agencies have already been compiling their own databases of images for use with their own facial recognition software. They have been doing so for the best part of two decades, but senior politicians in the UK have challenged even this.
There are also numerous practical issues to consider, especially around the risk of false-positive identifications and the many studies that show most software’s ‘rampant’ racial bias when it comes to misidentification.
The New York Times has been doing sterling work busting open the almost accidental private sector surveillance state that digital companies have been building up around us, sharing their findings at their Privacy Project. Still, Clearview is a new and particularly worrying case.
They seem to be a rogue operator, with shaky publicly available company information and little grasp of the legality or otherwise of their own information-gathering techniques (plus a willingness to access images uploaded by law enforcement agencies, against promises not to, something reporter Kashmir Hill discovered after she asked cooperating police to run searches on her own image).
Combine this loose sense of legal propriety with many police departments’ limited grasp of the Clearview technology they are using (and the fact that no independent authority, such as the National Institute of Standards and Technology, has yet vetted this software for public use), and you get even deeper into risky territory, where horrible miscarriages of justice and invasions of privacy might occur.
There are also worries around escalation. Can facial recognition software be restricted to law enforcement alone? If not, the ability to quickly and easily obtain personal data on anyone you have a photo of could become a boon for fraudsters, blackmailers, and identity thieves, not to mention stalkers and similar predators.
But it seems there is little politicians can do, or are willing to do, to stop the relentless march of AI. If software can do it, some rogue entrepreneurs will do it, which seems to be the inevitable logic. And so we all march on into dystopia, one misjudged app at a time.