Clearview AI, the American facial recognition technology provider reportedly used by thousands of government and law enforcement agencies around the world, faced an onslaught of legal complaints across Europe on Thursday for allegedly violating the bloc’s tough privacy laws.
European data protection groups complained about Clearview AI’s practices.
A group of privacy activists – including Privacy International (PI) and noyb, whose founder’s legal challenges fundamentally changed the flow of data between the EU and the U.S. – filed complaints with the data protection authorities in France, Austria, Italy, Greece and the UK, accusing the company of illegally using personal data.
Clearview AI scrapes images from publicly accessible websites and social media without consent and sells access to the image database to law enforcement agencies and private companies for use as a facial recognition tool.
noyb described its practices as “dishonest” and “extremely intrusive,” saying they amount to constant surveillance and pose a serious threat to personal freedom.
“Extracting our unique facial features, or even sharing them with the police and other companies, is way beyond what we as online users could ever expect,” said Ioannis Kouvakas, legal officer at PI.
Forbes has approached Clearview AI for comment.
What To Watch For
Under EU rules, the data protection authorities have up to three months to respond to the complaints. PI said it expected them to band together to “decide that Clearview’s practices have no place in Europe.” Should European regulators determine that Clearview’s practices do not comply with the strict GDPR, fines could amount to up to 4% of the company’s annual global revenue, and its operations in the bloc could be severely restricted.
“Just because something is online doesn’t mean it’s a fair game to be appropriated by others in any way they want – neither morally nor legally,” said Alan Dahi, a data protection attorney at noyb.
Clearview seemed to appear out of nowhere when a 2020 New York Times investigation revealed its surprisingly wide reach. The investigation uncovered more than 600 law enforcement agencies using its app – which can take a picture of a person and find other public photos of them – usually in secret and without the public scrutiny that typically accompanies the rollout of new technology. The secretive company, founded by Hoan Ton-That, had some big backers, including Richard Schwartz, a New York politician and former aide to Rudy Giuliani, and venture capitalist Peter Thiel, best known for backing Palantir and Facebook. Since the Times exposed its practices and widespread law enforcement engagement, Clearview has drawn widespread rebuke for its covert operations and its use of facial recognition, a technology known to be biased against marginalized communities, though its roster of law enforcement customers is reportedly still growing. Tech companies including Facebook, Microsoft and Twitter have objected to Clearview’s practices, and activists have called it a “privacy nightmare.” In the face of a host of legal disputes and regulatory scrutiny, Clearview decided to stop selling to private companies, although its technology is reportedly used by nearly 2,000 public entities in the U.S., many of which are law enforcement agencies. Clearview’s friendly approach to law enforcement stands in stark contrast to several large tech companies, many of which have stopped selling facial recognition to agencies while no regulations are in place to control its use.
Clearview has repeatedly defended its practices, arguing that it has a First Amendment right to collect publicly available data.
Usage of Clearview’s facial recognition app rose 26% the day after the riot at the Capitol, the company’s CEO told the New York Times.
Further Reading

Clearview AI hit by wave of European privacy complaints (Bloomberg)
The Secret Company That Could End Privacy As We Know It (NYT)
Clearview AI, the company whose database amassed 3 billion photos, hacked (Forbes)
A U.S. Government Study Confirms Most Facial Recognition Systems Are Racist (MIT Tech Review)