French Watchdog CNIL: Facial Recognition Company Clearview AI Violated Privacy Law

Clearview AI, a facial recognition company that has collected 10 billion images worldwide, has been ordered by France's data privacy watchdog CNIL to stop collecting and using data from residents of the country.

The CNIL stated in a formal demand released on Thursday that Clearview's collection of publicly available facial images on social media and the Internet had no legal basis and was in violation of European Union data privacy rules.

The company, whose software functions as a face search engine used by law enforcement and intelligence agencies in their investigations, failed to obtain the prior consent of those whose images it collected online, according to the regulator.

"These biometric data are particularly sensitive," the authority said in a statement, "particularly because they are linked to our physical identity (who we are) and allow us to be identified in a unique way."

It went on to say that the New York-based firm failed to provide those affected with proper access to their data, limiting access to twice a year without justification and limiting this right to data accumulated in the 12 months prior to any request.

Under EU law, citizens can request that their personal data be removed from a privately owned database. The CNIL gave Clearview two months to comply with its demands or face sanctions.

The decision was made in response to several complaints, including one filed by the advocacy group Privacy International. It follows a similar directive from the CNIL's Australian counterpart, which told the company to stop collecting images from websites and to destroy data gathered in the country.

Last month, the UK Information Commissioner's Office, which collaborated with the Australians on the Clearview investigation, announced it planned to fine Clearview 17 million pounds ($22.59 million) for alleged data protection law violations.