Australian Federal Police (AFP) officers trialled controversial facial recognition technology Clearview AI from late 2019, despite the agency initially denying any association with the company.
Founded by Australian Hoan Ton-That, the New York-based start-up claims to have created an unprecedented database containing billions of photos scraped from platforms such as Facebook and Instagram, and even from employment websites.
The tool allows those with an account to scan a photo of an unknown person and locate additional images and identifying information about them from across the internet.
In response to a question on notice from Shadow Attorney-General Mark Dreyfus in February, the law enforcement agency admitted on Tuesday that officers had used the face-matching software.
Between November 2, 2019 and January 22, 2020, members of the AFP-led Australian Centre to Counter Child Exploitation (ACCCE) registered for free trials and undertook searches, raising questions about how their activities were supervised and why AFP leadership were not aware.
Labor leaders including Mr Dreyfus called on Home Affairs Minister Peter Dutton to explain whether he knew AFP officers were using what they called "a deeply problematic service".
"The Home Affairs Minister must explain whether the use of Clearview without legal authorisation has jeopardised AFP investigations into child exploitation," they said in a statement.
"The use by AFP officers of private services to conduct official AFP investigations in the absence of any formal agreement or assessment as to the system's integrity or security is concerning."
Seven AFP officers used Clearview AI
This is the first official confirmation that any Australian law enforcement body has taken up Clearview AI's software.
Mr Ton-That told the ABC in January he had "a few customers in Australia who are piloting the tool, especially around child exploitation cases", but at the time state and federal police forces either denied they used the technology or declined to comment.
The company issued nine invitations to AFP officers for a "limited pilot", according to the agency. Seven activated the trial and conducted searches.
"These searches included images of known individuals and unknown individuals related to current or past investigations relating to child exploitation," the AFP said.
In February, internal company documents obtained by BuzzFeed News reportedly showed several registrations linked to AFP email addresses, as well as to email addresses from state police forces.
Concerns grow about facial-recognition transparency
In January, The New York Times reported that Clearview AI was in use by police forces across the United States, sparking a debate about privacy and the use of images shared by users of Facebook and other platforms in face-matching systems without their consent.
Digital rights groups have also voiced concerns about the lack of regulation and limited transparency around police use of facial recognition technology in Australia.
The AFP rejected several Freedom of Information requests related to Clearview AI, saying no relevant documents existed, before now confirming that the ACCCE did, in fact, hold information about the company.
David Paris, campaign manager at Digital Rights Watch, said it was "deeply concerning" the AFP was not previously aware of the conduct of its own staff.
It is not clear how the trial was overseen, or whether a privacy or security assessment of the software was undertaken.
Despite the trial undertaken by its officers, the agency said it had not formally adopted the software more broadly.
"The AFP seeks to balance the privacy, ethical and legal challenges of new technology with its potential to solve crime and even save victims," it said in its response to Mr Dreyfus.
The Office of the Australian Information Commissioner (OAIC) is making inquiries with Clearview AI about whether it holds personal information about Australians, and if it is being used in Australia.
The AFP, the OAIC and Clearview AI were approached for comment.