Accuracy of Deepfake Detection Technologies and Differentiating Between Harmful Deepfakes and Legitimate Political Satire or Memes
Debated in Parliament on 7 Aug 2024.
Summary
- He Ting Ru inquired about the accuracy of the Government's deepfake detection technologies, how it distinguishes harmful deepfakes from legitimate content, and the consequences of wrongly identifying videos as deepfakes.
- Minister Josephine Teo stated that the Government uses various tools for detecting manipulated content but does not disclose their accuracy rates to prevent exploitation by malicious actors.
- Action can be taken against online falsehoods under the Protection from Online Falsehoods and Manipulation Act (POFMA) if they harm the public interest; satire and parody do not automatically qualify unless they contain falsehoods that harm the public interest.
- The Government is evaluating the need for additional safeguards against the malicious use of AI and deepfakes, particularly during elections, and will provide updates on this assessment in due course.
Full Transcript
He Ting Ru
Ms He Ting Ru asked the Minister for Digital Development and Information (a) what is the current accuracy rate of the Government’s deepfake detection technologies for AI-generated content; (b) how will the Government differentiate between harmful deepfakes and legitimate political satire or memes using similar technologies; and (c) what happens if videos are wrongly identified as deepfakes.
Mrs Josephine Teo
There are a variety of tools and techniques available to the Government to detect, identify and assess manipulated content, including artificial intelligence (AI)-generated content such as deepfakes. These may be sourced commercially, developed in-house or in partnership with researchers such as those at the Centre for Advanced Technologies in Online Safety. We do not publish their accuracy levels as our tools are constantly being updated to keep up with technology. It is also not in the public interest to reveal the full extent of our capabilities, as malicious actors may exploit this information.
The Government can take action against online falsehoods when certain thresholds are met, including falsehoods generated with the help of AI. Action may be taken under the Protection from Online Falsehoods and Manipulation Act (POFMA) if such content is false and against the public interest. Satire and parody do not by themselves meet the criteria for POFMA action, unless they contain falsehoods that harm the public interest. Individuals who disagree with POFMA directions issued to them, including those for deepfake content, can file an appeal in court.
Many countries have recognised the need to mitigate the harms and risks from AI use and application, including the malicious use of deepfakes. Some countries have already put in place safeguards, especially during elections, in order to protect the integrity of the electoral process. We are studying if further safeguards are required and will provide an update when ready.