By Nurudeen Akewushola, FactCheck Hub
The rise of artificial intelligence in 2023 transformed the misinformation landscape, giving bad actors new tools to create manipulative articles, images, audio, videos, and websites, a misinformation monitor report by NewsGuard has revealed.
The report, released on December 27, 2023, revealed how AI tools were used to push Russian, Chinese, and Iranian propaganda, healthcare hoaxes, and false claims about the wars in Ukraine and Gaza.
Part of the report read, “NewsGuard analysts conducted the early red teaming to test how language models, such as OpenAI’s ChatGPT, could be prompted (or weaponized) to generate falsehoods by bypassing safeguards. In August, for example, NewsGuard tested ChatGPT-4 and Google’s Bard with a random sample of 100 leading prompts derived from NewsGuard’s database of falsehoods, known as Misinformation Fingerprints. ChatGPT-4 generated 98 out of the 100 myths while Bard produced 80 out of the 100.”
The report also highlighted a growing phenomenon of bad actors creating entire websites generated by AI and operating with little to no human oversight.
“To date, NewsGuard’s team has identified 614 such Unreliable AI-generated news and information websites, labeled ‘UAINS,’ spanning 15 languages: Arabic, Chinese, Czech, Dutch, English, French, German, Indonesian, Italian, Korean, Portuguese…,” it stated.
The report further highlighted some of the most impactful uses of AI to generate falsehoods in 2023, including a TikTok video featuring an AI-generated “Obama” reading a fake “statement” on the death of Tafari Campbell, the Obamas’ former personal chef, and the use of ChatGPT to rewrite content without detection.
It further revealed that Unreliable AI-Generated News Sites (UAINS) increased from 49 domains in May 2023 to more than 600 as of December 2023.