Disinformation in the era of digital transformation
Generative AI as a tool for generating high-risk synthetic media – case study and strategic solutions
Vol. 16, No. 1 (2024)
Purpose – The contribution examines the growing phenomenon of generative AI and its impact on the creation and dissemination of disinformation, fake news, and deepfake content in the online environment. The aim is to provide a comprehensive view of the challenges associated with the unethical use of generative AI and to present strategic solutions to address this issue.
Design / methodology / approach – The main method used was a descriptive case study with elements of explanation and exploration, with content analysis of individual cases, induction, and interpretation of findings as partial methods.
Results – The case study demonstrated that generative AI represents a significant tool in the hands of disseminators of disinformation and misinformation. The ability of AI to generate convincing content that is increasingly difficult to distinguish from reality allows for unprecedented manipulation of information. The findings of the case study have important theoretical and practical implications. Theoretically, they expand the understanding of the dynamics between technological advancement and its impact on society. Practically, they highlight the urgent need for the application of technological, legal, and intellectual solutions. The limitations of the study relate to potential subjectivity in the selection of cases and interpretation of data.
Originality / value – The contribution provides a comprehensive view of the issue of unethical use of generative AI for creating and spreading disinformation and deepfake content in the online environment. In addressing the issue, the contribution suggests the need for the implementation of an approach focused on combining technological solutions, legal regulation, and intellectual approaches.
Keywords – generative artificial intelligence; disinformation; fake news; deepfake; synthetic media
Tomáš Mirga
Katedra knižničnej a informačnej vedy, Univerzita Komenského v Bratislave
Tomáš Mirga is an internal doctoral student at the Department of Library and Information Science, Comenius University Bratislava. His academic work focuses on problematic areas of the online environment, such as information bubbles, disinformation, fake news, and deepfake content. His broader interests include information ethics, information and media literacy, social psychology, cognitive biases, social media, and artificial intelligence.
AFFSPRUNG, D. (2023). The ELIZA defect: constructing the right users for generative AI. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 945–946. https://dl.acm.org/doi/10.1145/3600211.3604744
ALLYN, B. (2022). Deepfake video of Zelenskyy could be 'tip of the iceberg' in info war, experts warn. NPR. https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia
ALPHONSO, A. (2023). Photo of Pope Francis with two women in a hot tub is an AI-generated fake. Boom. https://www.boomlive.in/fact-check/world/fake-news-viral-photo-pope-francis-with-two-women-in-a-bathtub-midjourney-factcheck-21551
BAIDOO-ANU, D. & OWUSU, L. (2023). Education in the era of generative artificial intelligence (AI): understanding the potential benefits of ChatGPT in promoting teaching and learning. SSRN. http://dx.doi.org/10.2139/ssrn.4337484
BARANI, M. & DYCK, P. (2023). Generative AI and the EU AI Act - a closer look. Allen & Overy. https://www.allenovery.com/en-gb/global/blogs/tech-talk/generative-ai-and-the-eu-ai-act-a-closer-look
BARREDO, D., JAMIL, S. & MONTEMAYOR, D. (2023). Disinformation and artificial intelligence: the case of online journalism in China. Estudios sobre el Mensaje Periodístico, 29(4), 761-770. https://revistas.ucm.es/index.php/ESMP/article/view/88543
BAXTER, K. & SCHLESINGER, Y. (2023). Managing the risks of generative AI. Harvard Business Review. https://hbr.org/2023/06/managing-the-risks-of-generative-ai
BRANDON. (2024). Taylor Swift became a victim of deepfake porn. Social Bites. https://socialbites.ca/culture/480158.html
BUȘINCU, C. & ALEXANDRESCU, A. (2023). Blockchain-based platform to fight disinformation using crowd wisdom and artificial intelligence. Applied Sciences, 13(10). https://www.mdpi.com/2076-3417/13/10/6088
C2PA. (2023). Wait, where did this image come from? https://contentcredentials.org/
CABINET OFFICE. [s. a.]. Society 5.0. https://www8.cao.go.jp/cstp/english/society5_0/index.html
CENTRE FOR DATA ETHICS AND INNOVATION. [s. a.]. About us. https://www.gov.uk/government/organisations/centre-for-data-ethics-and-innovation/about
Coalition for Content Provenance and Authenticity. (2024). https://c2pa.org/
DAHLSTROM, M. F. (2020). The narrative truth about scientific misinformation. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3497784
Department of Industry, Science and Resources. [s. a.]. Australia’s AI ethics principles. https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles
European Commission. (2024). AI Act. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
European Commission. (2018). A multi-dimensional approach to disinformation: report of the independent High Level Group on fake news and online disinformation. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=50271
Európska komisia. [s. a.]. Akt o digitálnych službách. https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_sk
Ethics4EU Consortium. (2021). European values for ethics in digital technology. https://arrow.tudublin.ie/scschcomrep/12/
European Parliament. (2023). Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI. https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai
Európsky parlament – Kancelária na Slovensku. (2023). Martin Hojsík na tému pravidiel riadenia umelej inteligencie (jún 2023). YouTube. https://www.youtube.com/watch?v=PhylE0ve4fU
FAGUY, A. (2023). Taylor Swift—who became a billionaire—caps year as Time’s Person of the Year. Forbes. https://www.forbes.com/sites/anafaguy/2023/12/06/taylor-swift-named-time-magazines-person-of-the-year/?sh=5dbfb3b91539
FÁTIMA, C. (2023). Artificial intelligence in automated detection of disinformation: a thematic analysis. Journalism and Media, 4(2), 679-687. https://www.mdpi.com/2673-5172/4/2/43
FAZLIOGLU, M. (2023). US federal AI governance: laws, policies and strategies. iapp. https://iapp.org/resources/article/us-federal-ai-governance/
FERRARA, E. (2024). GenAI against humanity: nefarious applications of generative artificial intelligence and large language models. Computers and Society. https://arxiv.org/abs/2310.00737
FEUERRIEGEL, S., DIRESTA, R., GOLDSTEIN, J. A., KUMAR, S., LORENZ-SPREEN, P., TOMZ, M. & PRÖLLOCHS, N. (2023). Research can help to tackle AI-generated disinformation. Nature Human Behaviour, 7, 1818–1821. https://www.nature.com/articles/s41562-023-01726-2
PARMAR, G., ZHANG, R. & ZHU, J. (2022). On aliased resizing and surprising subtleties in GAN evaluation. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11400-11410. https://ieeexplore.ieee.org/document/9880182
GIORDANI, J. (2023). Unmasking the dark side of generative AI: protecting your data from security threats. LinkedIn. https://www.linkedin.com/pulse/unmasking-dark-side-generative-ai-protecting-your-data-john-giordani/
GOODFELLOW, I., BENGIO, Y. & COURVILLE, A. (2016). Deep Learning. MIT Press. https://www.deeplearningbook.org/
Google DeepMind. [s. a.]. SynthID. https://deepmind.google/technologies/synthid/
GORDON, R. (2023). Using AI to protect against AI image manipulation. MIT News. https://news.mit.edu/2023/using-ai-protect-against-ai-image-manipulation-0731
Government of Canada. ([2022]). The Artificial Intelligence and Data Act (AIDA) – Companion document. https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document
GOWAL, S. & KOHLI, P. (2023). Identifying AI-generated images with SynthID. Google DeepMind. https://deepmind.google/discover/blog/identifying-ai-generated-images-with-synthid/
GUARNERA, L., GIUDICE, O., NASTASI, C. & BATTIATO, S. (2020). Preliminary forensics analysis of deepfake images. 2020 AEIT International Annual Conference (AEIT). https://ieeexplore.ieee.org/document/9241108/authors#authors
GUPTA, R. P. (2023). Prompt hacking - what we should know. LinkedIn. https://www.linkedin.com/pulse/prompt-hacking-what-we-should-know-ravi-prakash-gupta/
HAIGH, M., HAIGH, T. & MATYCHAK, T. (2019). Information literacy vs. fake news: the case of Ukraine. Open Information Science. https://www.degruyter.com/document/doi/10.1515/opis-2019-0011/html
HASANAIN, M., ALAM, F., MUBARAK, H., ABDALJALIL, S., ZAGHOUANI, W., NAKOV, P., MARTINO, G. & FREIHAT, A. A. (2023). ArAIEval shared task: persuasion techniques and disinformation detection in arabic text. Computation and Language. https://arxiv.org/abs/2311.03179
HEIKKILÄ, M. (2023). This new tool could protect your pictures from AI manipulation. MIT Technology Review. https://www.technologyreview.com/2023/07/26/1076764/this-new-tool-could-protect-your-pictures-from-ai-manipulation/
HEIKKILÄ, M. (2023a). This new data poisoning tool lets artists fight back against generative AI. MIT Technology Review. https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/
HELMUS, T. C. (2022). Artificial intelligence, deepfakes, and disinformation. Rand. https://www.rand.org/content/dam/rand/pubs/perspectives/PEA1000/PEA1043-1/RAND_PEA1043-1.pdf
HENDL, J. (2008). Kvalitativní výzkum: základní teorie, metody a aplikace. Portál.
HENDRICKSON, L. (2024). Deepfake AI: how verified credentials enhance media authenticity. identity. https://www.identity.com/deepfake-ai-how-verified-credentials-enhance-media-authenticity/#What_Are_Verifiable_Credentials
HIGGINS, E. (2023). Making pictures of Trump getting arrested while waiting for Trump's arrest. X. https://twitter.com/EliotHiggins/status/1637927681734987777
CHEN, Y., PAN, X., LI, Y., DING, B. & ZHOU, J. (2023). EE-LLM: large-scale training and inference of early-exit large language models with 3D parallelism. Machine Learning. https://arxiv.org/abs/2312.04916
Infoz. [s. a.]. Catfishing. https://www.infoz.cz/catfishing/
INGRAM, M. (2024). Taylor Swift deepfakes could be the tip of an AI-generated iceberg. Columbia Journalism Review. https://www.cjr.org/the_media_today/taylor_swift_deepfakes_ai.php
JEAN-PIERRE, K. (2024). White House ‘alarmed’ by AI deepfakes of Taylor Swift. YouTube. https://www.youtube.com/watch?v=-YIxFW9DoS4
SHU, K., SLIVA, A., WANG, S., TANG, J. & LIU, H. (2017). Fake news detection on social media: a data mining perspective. ACM SIGKDD Explorations Newsletter, 19(1). https://dl.acm.org/doi/10.1145/3137597.3137600
KANUNGO, P. (2024). Who is ZvBear? Memes erupt on Twitter amid viral Taylor Swift AI pictures scandal. SK POP. https://www.sportskeeda.com/pop-culture/news-who-zvbear-memes-erupt-twitter-amid-viral-taylor-swift-ai-pictures-scandal
Kaspersky. (2024). What is Biometrics? How is it used in security? https://usa.kaspersky.com/resource-center/definitions/biometrics
KAZAKHSTANDAZHASALGAN. (2024). [Deepfake images of the Eiffel Tower]. TikTok. https://www.tiktok.com/@kazakhstandazhasalgan/photo/7325512973080874246
KELLER, E. (2023). Pope Francis in Balenciaga deepfake fools millions: ‘Definitely scary’. New York Post. https://nypost.com/2023/03/27/pope-francis-in-balenciaga-deepfake-fools-
KIM, Y., XU, X., MCDUFF, D., BREAZEAL, C. & PARK, H. W. (2023). Health-LLM: Large language models for health prediction via wearable sensor data. Computation and Language. https://arxiv.org/abs/2401.06866
KOTHARI, A., ORAMA, A., MILLER, R., PEEKS, M., BAILEY, R. & ALM, C. (2023). News consumption helps readers identify model-generated news. 2023 IEEE Western New York Image and Signal Processing Workshop (WNYISPW), 1-10. https://ieeexplore.ieee.org/document/10349588
KÜÇKERDOĞAN, B. & TURĞAL, L. (2023). Artificial intelligence as a disinformation tool: analyzing news photos on climate change in the example of Bing search engine. İletişim Ve Diplomasi, (11), 57-82. https://dergipark.org.tr/tr/pub/iletisimvediplomasi/issue/81401/1376404
LECUN, Y., BENGIO, Y. & HINTON, G. (2015). Deep learning. Nature, 521(7553), 436-444. https://www.scirp.org/reference/referencespapers?referenceid=1911084
LEONARD, N. E. & LEVIN, S. A. (2022). Collective intelligence as a public good. Collective Intelligence, 1(1). https://journals.sagepub.com/doi/10.1177/26339137221083293#tab-contributors
LOON, B. (2017). Why we should hold ourselves responsible for fake news. Center for Digital Ethics & Policy. https://www.luc.edu/digitalethics/researchinitiatives/essays/archive/2017/whyweshouldholdourselvesresponsibleforfakenews/
LUO, Y., DAN, X. & SHEPHERD, N. (2023). China proposes draft measures to regulate generative AI. Inside Privacy. https://www.insideprivacy.com/artificial-intelligence/china-proposes-draft-measures-to-regulate-generative-ai/
MANCO, G., RITACCO, E., RULLO, A., SACCÁ, D. & SERRA, E. (2022). Generating synthetic discrete datasets with machine learning. SEBD 2022 Italian Symposium on Advanced Database Systems. https://dblp.org/rec/conf/sebd/0001RRSS22.html
MASTERSON, V. (2023). What is Nightshade – the new tool allowing artists to ‘poison’ AI models?. World Economic Forum. https://www.weforum.org/agenda/2023/11/nightshade-generative-ai-poison/
MICALLEF, N., AVRAM, M., MENCZER, F. & PATIL, S. (2021). Game intervention to improve news literacy on social media. Proceedings of the ACM on Human-Computer Interaction, 5. https://dblp.org/rec/journals/pacmhci/MicallefAMP21.html
MILLAR, B. (2021). Misinformation and the limits of individual responsibility. Social Epistemology Review and Reply Collective, 10(12), 8-21. https://philarchive.org/rec/MILMAT-22
Ministerstvo investícií, regionálneho rozvoja a informatizácie SR. (2024). Bezpečnejší digitálny svet: do účinnosti vstupuje akt o digitálnych službách. https://mirri.gov.sk/aktuality/digitalna-agenda/bezpecnejsi-digitalny-svet-do-ucinnosti-vstupuje-akt-o-digitalnych-sluzbach/
MITCHELL, G. (2024). Taylor Swift AI pictures Twitter controversy. Inside Inquiries. https://insightinquiries.com/taylor-swift-ai-pictures-twitter-2/
NOVAK, M. (2024). Pope Francis warns of AI dangers, citing fake image of him that went viral. Forbes. https://www.forbes.com/sites/mattnovak/2024/01/24/pope-francis-warns-of-ai-dangers-after-fake-image-of-himself-went-viral/?sh=6450b59f5aa0
O médiách. (2024). Objavilo sa deepfake video s tvárou a hlasom Pellegriniho a moderátora RTVS (VIDEO). https://www.omediach.com/internet/26046-objavilo-sa-deepfake-video-s-tvarou-a-hlasom-
OPENAI. (2023). ChatGPT-4. https://chat.openai.com/
PARK, S. (2023). Use of generative artificial intelligence, including large language models such as ChatGPT, in scientific publications: policies of KJR and prominent authorities. Korean Journal of Radiology, 24(8), 715-718. https://kjronline.org/DOIx.php?id=10.3348/kjr.2023.0643
PBS. (2023). Fake AI images of Putin, Trump being arrested spread online. https://www.pbs.org/newshour/politics/fake-ai-images-of-putin-trump-being-arrested-spread-online
PEARSON, J. & ZINETS, N. (2022). Deepfake footage purports to show Ukrainian president capitulating. Reuters. https://www.reuters.com/world/europe/deepfake-footage-purports-show-ukrainian-president-capitulating-2022-03-16/
PEÑA-FERNÁNDEZ S., MESO-AYERDI, K., LARRONDO-URETA, A. & DÍAZ-NOCI, J. (2023). Without journalists, there is no journalism: the social dimension of generative artificial intelligence in the media. Profesional de la información, 32(2). https://revista.profesionaldelainformacion.com/index.php/EPI/article/view/87329/63392
PODELL, D., ENGLISH, Z., LACEY, K., BLATTMANN, A., DOCKHORN, T., MÜLLER, J., PENNA, J. & ROMBACH, R. (2023). SDXL: improving latent diffusion models for high-resolution image synthesis. Computer Vision and Pattern Recognition. https://arxiv.org/abs/2307.01952
Polícia Slovenskej republiky. (2023). Varovanie: voľby do národnej rady sprevádza zneužívanie umelej inteligencie. Meta. https://www.facebook.com/policiaslovakia/posts/pfbid0G6mNQyP7e3AoQD17FPM8yDUnpNiFXX8hyrMqfyMDHYHaES5sQJXLRFyEWUqn1Wk6l
Pravda. (2024). Pellegrini varuje pred AI a deep fake videami. Tvrdí, že zásadne môžu ovplyvniť aj prezidentskú kampaň. https://spravy.pravda.sk/prezidentske-volby-2024/clanok/698225-pellegrini-varuje-pred-ai-a-deep-fake-videami-tvrdi-ze-zasadne-mozu-ovplyvnit-aj-prezidentsku-kampan/
PULLELLA, P. (2024). Pope Francis, victim of AI, warns against its 'perverse' dangers. Reuters. https://www.reuters.com/world/pope-francis-victim-ai-warns-against-its-perverse-dangers-2024-01-24/
RABINDER, H. (2019). Role of artificial intelligence in new media. CSI Communications, 23-25. https://www.academia.edu/49150704/Role_of_Artificial_Intelligence_in_New_Media
Reddit. (2023). Pope Francis bike week. https://www.reddit.com/r/midjourney/comments/134dybi/pope_francis_bike_week/
Regulation (EU) 2022/2065 of the European Parliament and of the Council. (2022). Official Journal of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R2065
REMOLINA, N. & SEAH, J. (2019). How to address the AI governance discussion? What can we learn from Singapore's AI strategy? SMU Centre for AI & Data Governance Research Paper No. 2019/03. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3444024
RICHARDSON, J. & MILOVIDOV, E. (2019). Digital citizenship education handbook. Council of Europe Publishing. https://rm.coe.int/digital-citizenship-education-handbook/168093586f
ROMAIN, S. (2023). Sentinel AI: the new frontier in deepfake detection. Romain Berg. https://www.romainberg.com/blog/artificial-intelligence/sentinel-ai-your-ultimate-deepfake-detection-solution/
RUSSELL, S. & NORVIG, P. (2010). Artificial intelligence: a modern approach. Prentice Hall. https://people.engr.tamu.edu/guni/csce421/files/AI_Russell_Norvig.pdf
SENTINEL. (2024). The Sentinel. https://thesentinel.ai/
SHAN, S., WENGER, E., ZHANG, J., LI, H., ZHENG, H. & ZHAO, B. Y. [s. a.]. Image "cloaking" for personal privacy. SAND Lab. https://sandlab.cs.uchicago.edu/fawkes/
SHIMPO, F. (2020). The importance of ‘smooth’ data usage and the protection of privacy in the age of AI, IoT and autonomous robots. Global Privacy Law Review, 1(1), 49-54. https://kluwerlawonline.com/journalarticle/Global+Privacy+Law+Review/1.1/GPLR2020006
SHOAIB, M. R., WANG, Z., AHVANOOEY, M. T. & ZHAO, J. (2023). Deepfakes, misinformation, and disinformation in the era of frontier AI, generative AI, and large AI models. 2023 International Conference on Computer and Applications (ICCA), 1-7. https://ieeexplore.ieee.org/document/10401723
SIMON, F. M., ALTAY, S. & MERCIER, H. (2023). Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown. Harvard Kennedy School (HKS) Misinformation Review, 4(5). https://misinforeview.hks.harvard.edu/wp-content/uploads/2023/10/simon_generative_AI_fears_20231018.pdf
STANLEY-BECKER, I. & NIX, N. (2023). Fake images of Trump arrest show ‘giant step’ for AI’s disruptive power. The Washington Post. https://www.washingtonpost.com/politics/2023/03/22/trump-arrest-deepfakes/
STOKEL-WALKER, C. (2023). We spoke to the guy who created the viral AI image of The Pope that fooled the world. BuzzFeed News. https://www.buzzfeednews.com/article/chrisstokelwalker/pope-puffy-jacket-ai-midjourney-image-creator-interview
STRICKLAND, E. (2023). Content credentials will fight deepfakes in the 2024 elections. IEEE Spectrum. https://spectrum.ieee.org/deepfakes-election
TASR. (2023). Europoslanci rokovali o pravidlách pre bezpečnú umelú inteligenciu. https://www.teraz.sk/priame-prenosy-a-videa-tasr-tv/europoslanci-rokovali-o-pravidlach-pre/722992-clanok.html
The Associated Press. (2023). AI-generated images of Trump being arrested circulate on social media. https://apnews.com/article/fact-check-trump-nypd-stormy-daniels-539393517762
TravelWise. (2024). Deepfake about the Eiffel tower fire went viral: what's wrong with it and why people believed it. https://www.travelwiseway.com/section-news/news-deepfake-about-the-eiffel-tower-fire-went-viral-whats-wrong-with-it-and-why-people-believed-it-28-01-2024.html
TREDINNICK, L. & LAYBATS, C. (2023). The dangers of generative artificial intelligence. Business Information Review, 40(2). https://journals.sagepub.com/doi/10.1177/02663821231183756
TRUMP, D. J. (2023). [Manipulated image]. Truth Social. https://truthsocial.com/@realDonaldTrump/posts/110475191660845818
TURING, A. (1950). Computing machinery and intelligence. Mind, New Series, 59(236), 433-460. https://phil415.pbworks.com/f/TuringComputing.pdf
ULMER, A. & TONG, A. (2023). With apparently fake photos, DeSantis raises AI ante. Reuters. https://www.reuters.com/world/us/is-trump-kissing-fauci-with-apparently-fake-photos-desantis-raises-ai-ante-2023-06-08/
VASWANI, A., SHAZEER, N., PARMAR, N., USZKOREIT, J., JONES, L., GOMEZ, A. N., KAISER, Ł. & POLOSUKHIN, I. (2017). Attention is all you need. Conference on Neural Information Processing Systems. https://arxiv.org/pdf/1706.03762.pdf
WAKEFIELD, J. (2022). Deepfake presidents used in Russia-Ukraine war. BBC. https://www.bbc.com/news/technology-60780142
WALKER, J., THUERMER, G., VICENS, J. & SIMPERL, E. (2023). AI art and misinformation: approaches and strategies for media literacy and fact checking. AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 26-37. https://dl.acm.org/doi/10.1145/3600211.3604715
WEAVER, A. (2024). [Fake video]. Facebook. https://www.facebook.com/reel/696473435970931
DESANTIS WAR ROOM. (2023). Donald Trump became a household name by FIRING countless people *on television*. X. https://twitter.com/DeSantisWarRoom/status/1665799058303188992?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1665799058303188992%7Ctwgr%5E8f403b7dc5f370ef80c618848409a300fa4f884d%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.reuters.com%2Fworld%2Fus%2Fis-trump-kissing-fauci-with-apparently-fake-photos-desantis-raises-ai-ante-2023-06-08%2F
WANG, X., GUO, H., HU, S., CHANG, M.-C. & LYU, S. (2022). GAN-generated faces detection: a survey and new perspectives. Frontiers in Artificial Intelligence and Applications, 372, 2533-2542. https://ebooks.iospress.nl/doi/10.3233/FAIA230558
XU, D., FAN, S. & KANKANHALLI, M. (2023). Combating misinformation in the era of generative AI models. Proceedings of the 31st ACM International Conference on Multimedia, 9291–9298. https://dl.acm.org/doi/10.1145/3581783.3612704
ZACH & WALKER, A. (2024). Eiffel tower on fire AI hoax. Know Your Meme. https://knowyourmeme.com/memes/eiffel-tower-on-fire-ai-hoax
This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright © 2024 Tomáš Mirga