Fast-evolving artificial intelligence offers plenty of powerful tools. As with any technology, however, AI can easily be abused in ways that produce consequences that are hard to foresee or contain.
A striking case in point is a fake photo of an explosion near the US Pentagon that went viral on Monday. The concocted image, likely generated by AI, triggered a brief dip in the US stock market, as some media and individual accounts on Twitter picked up the post and shared it with their followers.
The messy development is due partly, if not largely, to Twitter's new verification system, introduced by Elon Musk. Twitter used to award its blue badges to public figures, government organizations and news media only after a manual verification procedure, which allowed users and media outlets to treat viral reports from unverified accounts with due skepticism.
Now, anyone can buy the blue badge for $8 a month and potentially impersonate authoritative Twitter accounts. This makes it extremely difficult to rely on verified accounts or to refute hoaxes in time, especially when a doctored report goes viral within minutes.
Another factor that compounds the job of filtering out fake news is the advent of more sophisticated generative AI tools. ChatGPT can instantly create fake texts by mixing facts and fiction. AI image tools like Midjourney and Stable Diffusion allow even novice users to create incredibly realistic images with ease.
Fake news and photos are nothing new on Twitter. What matters now is that more viral images are being created by AI tools, and their impact on crucial sectors such as the financial market is feared to grow.
The possibility that a simple hoax can wreak havoc on social and economic infrastructure should be taken seriously, as easy-to-access generative AI programs are bound to spark more chaotic responses by churning out lifelike forgeries.
The implications of AI-powered misinformation can be much deeper and wider than one might assume. More fake videos are popping up on YouTube, making it hard to recognize a visual propaganda campaign produced with malicious intent. As major search engines adopt chatbot programs, there is concern that a torrent of misinformation could flood online communities in ways that disrupt elections and affect the stock market.
Last Friday, NewsGuard, a US firm that tracks online misinformation, said it had identified 125 websites that produce content entirely or mostly with AI tools, and the number of news and information sites generated by AI with little to no human oversight had more than doubled in two weeks.
As AI-generated content farms proliferate and threaten the reliability of news and online posts, regulators around the world are increasingly seeking to put laws and regulations in place to minimize the potential risks of AI and hold developers accountable for their programs.
South Korea is just beginning to recognize the perils of AI-based fake content and to take initial legislative action. On Monday, Rep. Lee Sang-heon of the main opposition Democratic Party of Korea proposed a revision to the Content Industry Promotion Act that would require content producers to clearly state when content is created through AI technology.
Korea is particularly vulnerable to the risks of viral news forged by AI programs. The nation boasts an advanced IT industry and extensive wired and mobile networks, with a huge number of people sharing large amounts of news and data on online platforms. Not only social media such as Twitter but also KakaoTalk, the country's biggest mobile messenger app, could be exploited as key channels through which AI-generated fake news circulates at an unstoppable pace.
There are already numerous online chat rooms on KakaoTalk and other major platforms where users exchange information on online banking and stock trading. As AI tools make it ever easier to create fake texts, images and videos, false content could readily spread through these channels.
Regulating rapidly evolving AI tools is no easy task, and too many regulations could stifle innovation in the local AI industry. Still, the government and lawmakers are urged to set up proper regulations to prevent AI-generated hoaxes and to draft guidelines on the development and application of generative AI technology.