Sunday, February 23, 2025

AI can generate volumes of misleading health content, say researchers

AI can create over 100 false blog posts, 20 deceptive images, and a convincing deepfake video about vaping and vaccines in about an hour, according to researchers at Flinders University, Australia. The researchers generated large volumes of health-related falsehoods using OpenAI’s GPT Playground and other generative AI platforms. The report stresses the need for AI vigilance and for proactive risk monitoring by healthcare workers.

A recent study found that AI tools can create over 100 false blog posts, 20 deceptive images, and a convincing deepfake video about vaping and vaccines in an hour, enabling the spread of health disinformation at scale. Medical experts from Flinders University, Australia, warned that the video could be translated into more than 40 languages, multiplying its reach.

“Our study demonstrates how easy it is to use currently accessible AI tools to generate large volumes of coercive and targeted misleading content on critical health topics, complete with hundreds of fabricated clinician and patient testimonials and fake, yet convincing, attention-grabbing titles,” said Bradley Menz, a university researcher and first author of the JAMA Internal Medicine study.

The researchers tested OpenAI’s GPT Playground, a large language model (LLM), for its ability to generate health-related misinformation. An LLM is an AI system trained on enormous textual datasets that can recognize, translate, predict, and generate text.

The team also studied publicly accessible generative AI tools like DALL-E 2 and HeyGen.

Their tests showed that GPT Playground created 102 blog posts containing almost 17,000 words of vaccination and vaping falsehoods in just 65 minutes.

The researchers also used AI avatar technology and natural language processing to create a disturbing deepfake video of a health professional spreading vaccination misinformation in under five minutes, and noted that the video could be adapted into more than 40 languages.

The findings were alarming, the experts added, and underscore the need for strong AI oversight.

They said the results show why healthcare workers should proactively monitor and mitigate the risks posed by AI-generated health misinformation.

“The implications of our findings are clear: society currently stands at the cusp of an AI revolution, yet in its implementation governments must enforce regulations to minimise the risk of malicious use of these tools to mislead the community,” said Menz.

Conclusion


AI tools can create over 100 false blog posts, 20 deceptive images, and a convincing deepfake video about vaping and vaccines in about an hour, according to Flinders University, Australia. OpenAI’s GPT Playground, a large language model (LLM), generated large volumes of health-related falsehoods; such models can recognize, translate, predict, and produce natural-language text. The researchers created 102 blog posts totaling roughly 17,000 words in 65 minutes, and a troubling deepfake video in under five minutes. The study emphasizes the need for AI awareness and shows how healthcare practitioners can monitor and mitigate the dangers of AI-generated health misinformation. To prevent misuse of these technologies, governments must regulate them.

Taushif Patel
https://taushifpatel.com
Taushif Patel is an author and entrepreneur with 20 years of media industry experience. He is the co-founder of Target Media, publisher of INSPIRING LEADERS Magazine, and Director of Times Applaud Pvt. Ltd.
