The Korea Herald


[Herald Interview] 'AI success hinges on local nuances and governance, not just silicon'

Futurist Richard Yonck says Korea's AI ecosystem should focus on harmonizing innovation with local context, ethics and anticipatory governance

By Moon Joon-hyun

Published : Sept. 11, 2023 - 14:24


Futurist Richard Yonck (Richard Yonck)

The evolution of generative AI must give equal weight to technological, ethical and regulatory complexities, said Richard Yonck, an American futurist with over 25 years of experience in computer systems and program analysis.

In a recent interview with The Korea Herald, Yonck expressed concerns over the mounting global enthusiasm for AI and shared insights on the challenges of cultivating a localized AI ecosystem in South Korea.

"The pressure to yield immediate results is tangible, especially in dynamic IT hubs like Korea," he said.

Discussing the common urge to mirror global successes, Yonck said he often fields questions about replicating Silicon Valley's model. He drew attention to the importance of context-specific policies. "Silicon Valley germinated at the crossroads of geography, history and varied socio-political conditions," he said.

For nations like Korea, success isn't merely technological, he noted. It also involves harmonizing with local nuances.

The conversation then shifted to localized AI and Korea's Naver Clova X, touted as an answer to OpenAI's GPT. Yonck advised restraint in making early comparisons, saying it might be premature to equate OpenAI's models, which may draw on potentially copyright-bound web data, with Naver's, which could be held to stricter ethical and intellectual property standards. Prominent foreign media outlets such as the New York Times and the Guardian have already blocked OpenAI's web crawlers from accessing their content.

Yonck also acknowledged both the appeal of and the apprehension toward platforms like ChatGPT, rooted in their human-like conversational interface, as he detailed the evolution of the AI-human interface from basic GUIs to modern touch and voice interactions.

"However, beneath this facade of human intelligence lies what is not that much different from sophisticated pattern-matching," he said. Yonck insisted on clearly distinguishing the two, especially as AI becomes embedded in the daily life of tech-forward nations like Korea.

Yonck also warned that generative AI's susceptibility to misuse, ethical dilemmas and IP issues is greater than that of many prior technologies. Citing OpenAI's past practice of employing low-wage workers to filter toxic content, he said the moral challenges presented by such models are just as pressing as their technological promise.

As a potential solution, he suggested Korea adopt the concept of anticipatory governance, where experts in governance use foresight to mitigate risks. The Office of Technology Assessment, operational in the US from 1974 to 1995, exemplified this approach by forecasting threats from climate change to nuclear proliferation.

"A robust AI future demands expertise not just in the commercial sector, but critically in the rest of the supporting innovation ecosystem. Overvaluing immediate technological gains at the expense of other stakeholders, including anticipatory governance can endanger freedoms and equity in ways we might not recover from," he said.

Yonck recently headlined the Digital Economy Forum 2023 in Seoul in a presentation titled "Leveraging Cutting-edge Technologies to Build Tomorrow’s Innovation Ecosystem." He is the author of two bestselling books about the future of artificial intelligence, "Future Minds" and "Heart of the Machine."