Google reports scale of complaints about AI deepfake terrorism content to Australian regulator

Google has informed Australian authorities it received more than 250 complaints globally over nearly a year that its artificial intelligence software was used to make deepfake terrorism material.

The Alphabet-owned tech giant also said it had received dozens of user reports warning that its AI program, Gemini, was being used to create child abuse material, according to the Australian eSafety Commission.

Under Australian law, tech firms must periodically supply the eSafety Commission with information about their harm minimisation efforts or risk fines. The reporting period covered April 2023 to February 2024.

Since OpenAI’s ChatGPT exploded into the public consciousness in late 2022, regulators around the world have called for better guardrails so AI can’t be used to enable terrorism, fraud, deepfake pornography and other abuse.

Read more: Reuters