AI for Detecting Misinformation: A Discussion on the Case of COVID-19 in Indonesia

Authors

  • Santi Indra Astuti The Faculty of Communication Science, Universitas Islam Bandung (UNISBA), Indonesia http://orcid.org/0000-0003-1776-8182
  • Dyaning Pangestika Integrated Marketing Communication Department, School of Communication, Universiti Sains Malaysia (USM) Kuala Lumpur, Malaysia.

Keywords:

artificial intelligence, pandemic, verifying, health issues, information

Abstract

The advent of generative Artificial Intelligence (AI) was viewed as a threat to the information ecosystem because of generative AI's ability to create 'stories' that might easily be twisted into misinformation. Generative AI was alleged to be a high-risk tool for fact-checkers, journalists, public officials, and others responsible for verifying and sharing correct information, increasing the possibility of widespread misinformation and disinformation in society. However, as the technology evolves, so does human comprehension of the new machines. Generative AI offers many benefits worth exploring, as the platforms present a plethora of potential applications. To explore this possibility, this research investigates the potential of AI to detect health misinformation. Focusing on COVID-19 misinformation in Indonesia, the study employs Qualitative Content Analysis (QCA) to examine the outputs of three AI platforms, namely Copilot, ChatGPT, and Gemini, using specific prompts in two languages (Indonesian and English) applied at two different times (August and September 2024). This study concludes that the aforementioned AI platforms are capable of detecting misinformation while providing supporting claims to substantiate their reasoning. However, while generative AI has the potential to be used as a tool for detecting misinformation, further improvements should be made to refine the output. Furthermore, because generative AI learns through deep learning, users who wish to use generative AI platforms to debunk hoaxes must perform additional research to complement their findings.
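The study design described above (three platforms, two languages, two time points) can be sketched as a simple test matrix. The snippet below is a hypothetical illustration only, not the authors' actual instrument: the function name `build_test_matrix` and the structure of each entry are assumptions introduced here for clarity.

```python
from itertools import product

# Hypothetical sketch of the study's design: each COVID-19 claim would be
# submitted to every platform, in both languages, at both time points.
platforms = ["Copilot", "ChatGPT", "Gemini"]
languages = ["Indonesian", "English"]
rounds = ["August 2024", "September 2024"]

def build_test_matrix():
    """Enumerate every platform/language/round combination to query."""
    return [
        {"platform": p, "language": lang, "round": r}
        for p, lang, r in product(platforms, languages, rounds)
    ]

matrix = build_test_matrix()
print(len(matrix))  # 3 platforms x 2 languages x 2 rounds = 12 queries per claim
```

Each of the twelve combinations would then be paired with a prompt and the platform's response recorded for the qualitative coding step.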

Author Biographies

Santi Indra Astuti, The Faculty of Communication Science, Universitas Islam Bandung (UNISBA), Indonesia

Lecturer at the Faculty of Communication Science, Bandung Islamic University (UNISBA), Indonesia. Digital literacy enthusiast. Co-founder of JAPELIDI (Indonesia Digital Activist Network).

Dyaning Pangestika, Integrated Marketing Communication Department, School of Communication, Universiti Sains Malaysia (USM) Kuala Lumpur, Malaysia.

Graduate of the Master's program in Integrated Marketing Communications at Universiti Sains Malaysia's Kuala Lumpur branch (USM@KL). Currently working as a Communication Specialist at Brightminds Communication, Indonesia.

References

Akhtar, P., Ghouri, A. M., Khan, H. U. R., Amin ul Haq, M., Awan, U., Zahoor, N., ... & Ashraf, A. (2023). Detecting fake news and disinformation using artificial intelligence and machine learning to avoid supply chain disruptions. Annals of Operations Research, 327(2), 633-657.

Amnesty International. (2024). A Web of Surveillance: Unravelling a Murky Network of Spyware Exports to Indonesia.

Assarroudi, A., Heshmati Nabavi, F., Armat, M. R., Ebadi, A., & Vaismoradi, M. (2018). Directed qualitative content analysis: The description and elaboration of its underpinning methods and data analysis process. Journal of Research in Nursing, 23(1), 42-55.

Avey, C. (2024). AI Misinformation: Concerns and Prevention Methods. GlobalSign Blog. https://www.globalsign.com/en/blog/ai-misinformation-concerns-and-prevention

Bengtsson, M. (2016, January). How to plan and perform a qualitative study using content analysis. NursingPlus Open, 2, 8–14.

Bereskin, C. (2023, December). Parliamentary Handbook on Disinformation, AI and Synthetic Media. Commonwealth Parliamentary Association (CPA).

Bontridder, N., & Poullet, Y. (2021). The role of artificial intelligence in disinformation. Data & Policy, 3, e32.

Bozkurt, A., & Sharma, R. C. (2023). Generative AI and prompt engineering: The art of whispering to let the genie out of the algorithmic world. Asian Journal of Distance Education, 18(2), i-vii.

Dalkir, K. (2021). Fake news and AI: Fighting fire with fire. In CEUR Workshop Proceedings (Vol. 2942, pp. 112-115).

Fatimah, R., Mumtaz, A., Fahrezi, F. M., & Zakaria, D. (2024). AI-Generated Misinformation: A Literature Review. Indonesian Journal of Artificial Intelligence and Data Mining, 7(2), 241-254.

Gifu, D. (2023). An intelligent system for detecting fake news. Procedia Computer Science, 221, 1058-1065.

Imtiaz, A., Pathirana, N., Saheel, S., Karunanayaka, K., & Trenado, C. (2024). A Review on the Influence of Deep Learning and Generative AI in the Fashion Industry. Journal of Future Artificial Intelligence and Technologies, 1(3), 201-216.

Irwansyah. (2024). ASEAN Guideline On Management Of Government Information In Combating Fake News and Disinformation In The Media. Ministry of Communications and Informatics Republic of Indonesia.

Khan, S., Hakak, S., Deepa, N., Prabadevi, B., Dev, K., & Trelova, S. (2022). Detecting covid-19-related fake news using feature extraction. Frontiers in Public Health, 9, 1–9. https://doi.org/10.3389/fpubh.2021.788074

Khasanah, U. (2024). 7 AI Terpopuler 2024, Mana yang Paling Banyak Digunakan? [7 most popular AIs of 2024: Which is the most widely used?]. IDN Times. https://www.idntimes.com/tech/gadget/ai-terpopuler-2024-00-vs37w-8fchm6

Knoth, N., Tolzin, A., Janson, A., & Leimeister, J. M. (2024). AI literacy and its implications for prompt engineering strategies. Computers and Education: Artificial Intelligence, 6, 100225.

Kreps, S., McCain, R. M., & Brundage, M. (2022). All the news that's fit to fabricate: AI-generated text as a tool of media misinformation. Journal of Experimental Political Science, 9(1), 104-117. https://doi.org/10.1017/XPS.2020.37

Monteith, S., Glenn, T., Geddes, J. R., Whybrow, P. C., Achtyes, E., & Bauer, M. (2024). Artificial intelligence and increasing misinformation. The British Journal of Psychiatry, 224(2), 33-35.

Nazar, S., & Bustam, M. R. (2020). Artificial intelligence and new level of fake news. In IOP Conference Series: Materials Science and Engineering (Vol. 879, No. 1, p. 012006). IOP Publishing.

Purnat, T. D., Vacca, P., Czerniak, C., Ball, S., Burzo, S., Zecchin, T., Wright, A., Bezbaruah, S., Tanggol, F., Dubé, È., Labbé, F., Dionne, M., Lamichhane, J., Mahajan, A., Briand, S., & Nguyen, T. (2021). Infodemic signal detection during the COVID-19 pandemic: Development of a methodology for identifying potential information voids in online conversations. JMIR Infodemiology, 1(1). https://doi.org/10.2196/30971

Robertson, J., Ferreira, C., Botha, E., & Oosthuizen, K. (2024). Game changers: A generative AI prompt protocol to enhance human-AI knowledge co-construction. Business Horizons, 67(5), 499-510.

Santos, F. C. C. (2023). Artificial intelligence in automated detection of disinformation: A thematic analysis. Journalism and Media, 4(2), 679-687.

Omdena. (2024). The ethical role of AI in media: Combating misinformation. https://www.omdena.com/blog/the-ethical-role-of-ai-in-media-combating-misformation

Xu, D., Fan, S., & Kankanhalli, M. (2023). Combating misinformation in the era of generative AI models. In Proceedings of the 31st ACM International Conference on Multimedia (pp. 9291-9298).

Published

2025-12-15