Clearview AI slammed for breaching Australians’ privacy on numerous fronts

Australia’s Information Commissioner has found that Clearview AI breached Australia’s privacy laws on numerous fronts, after a bilateral investigation uncovered that the company’s facial recognition tool collected Australians’ sensitive information without consent and by unfair means.

The investigation, conducted by the Office of the Australian Information Commissioner (OAIC) and the UK Information Commissioner’s Office (ICO), found that Clearview AI’s facial recognition tool scraped biometric information from the web indiscriminately and has collected data on at least 3 billion people.

The OAIC also found that some Australian police agency users who trialled the tool, and who were themselves Australian residents, searched for and identified images of themselves, as well as images of unknown Australian persons of interest, in Clearview AI’s database.

Considering these factors together, Australia’s Information Commissioner Angelene Falk concluded that Clearview AI breached Australia’s privacy laws by collecting Australians’ sensitive information without consent and by unfair means. In her determination [PDF], Falk explained that the fact that affected Australians’ facial images were already available online did not amount to consent, as there was no unambiguous indication that individuals had agreed to this kind of collection.

“I consider that the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes,” the Information Commissioner wrote.

“Consent also cannot be implied if individuals are not adequately informed about the implications of providing or withholding consent. This includes ensuring that an individual is properly and clearly informed about how their personal information will be handled, so they can decide whether to give consent.”

Other breaches of Australia’s privacy laws found by Falk were that Clearview AI failed to take reasonable steps to either notify individuals of the collection of personal information or ensure that personal information it disclosed was accurate.

She also slammed the company for not taking reasonable steps to implement practices, procedures, and systems to ensure compliance with the Australian Privacy Principles.

These breaches stemmed from Clearview AI removing access to an online form that had allowed Australians to opt out of being searchable on the company’s facial recognition platform.

The form itself also raised privacy issues, as it required Australians to submit a valid email address and an image of themselves, which would then be converted into an image vector. Falk said this allowed Clearview AI to collect additional information about Australians.
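
The “image vector” referred to here is, in general terms, a numerical embedding derived from a face photo so that it can be compared against other faces. A minimal sketch of that idea, using the open-source face_recognition library purely for illustration (the filename and workflow are assumptions, and this is not Clearview AI’s actual system):

# Illustrative only: turning a submitted photo into a face embedding
# using the open-source face_recognition library, not Clearview AI's system.
import face_recognition

# Load the photo a person might submit with an opt-out form (hypothetical filename).
image = face_recognition.load_image_file("submitted_photo.jpg")

# Compute a 128-dimensional embedding ("image vector") for each face found.
encodings = face_recognition.face_encodings(image)

if encodings:
    vector = encodings[0]
    # Retaining this vector alongside a submitted email address is itself a
    # collection of additional biometric and personal information -- the
    # concern Commissioner Falk raised about the opt-out form.
    print(f"Derived a {len(vector)}-dimensional face vector")
else:
    print("No face detected in the submitted image")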

The form was created at the start of 2020, but now Australians can only make opt-out requests via email, Falk said.

After making these findings, Falk ordered Clearview AI to destroy the existing biometric information it has collected from Australia. She also ordered the company to cease collecting facial images and biometric templates from individuals in Australia.

“The covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” Falk said.

“It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database.”

Despite the investigation being finalised, the exact number of affected Australians is unknown. Falk expressed concern that the number was likely to be very large given that it may include any Australian individual whose facial images are publicly accessible on the internet.

Providing an update on another Clearview AI-related matter, Falk said she was finalising a separate investigation into the Australian Federal Police’s (AFP) trial of Clearview AI’s facial recognition tool.

In April 2020, the AFP admitted to trialling the Clearview AI platform from October 2019 to March 2020. State police in Victoria and Queensland also trialled the tool, with all three law enforcement agencies admitting they had successfully conducted searches using facial images of individuals located in Australia.

Falk said she would soon provide a determination on whether the AFP breached its obligations under the Australian Government Agencies Privacy Code to assess and mitigate privacy risks.
