Artificial intelligence experts from around the world have called off their planned boycott of South Korea’s top science and technology research university KAIST, which stemmed from concerns that it was collaborating with a local arms maker on the development of autonomous “killer robots.”
KAIST said Monday that the AI researchers who had signed on to the boycott had withdrawn their plans to sever ties with the Korean university, after the institute gave clear, public assurances that AI-based weapons would not be developed.
Last week, more than 50 of the world’s leading AI and robotics experts from 30 countries announced a boycott of KAIST after the university opened what they claimed was an AI weapons lab in partnership with Hanwha Systems, one of Korea’s top manufacturers of cluster munitions.
|A building at KAIST’s main campus in Daejeon, South Korea (123RF)|
They claimed that KAIST’s Research Center for the Convergence of National and Artificial Intelligence would accelerate the “global competition to develop autonomous arms” which would “search for and eliminate targets without human control.”
KAIST President Shin Sung-chul released a statement reaffirming that “KAIST does not have any intention to engage in the development of lethal autonomous weapons systems and killer robots” and that the university was well aware of the ethical concerns surrounding the application of AI technology.
The university also pledged to “not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control.”
With the boycott called off, the researchers who were signatories will “once again visit and host researchers from KAIST, and collaborate on scientific projects,” KAIST said.
The boycott had been led by Toby Walsh, a professor of artificial intelligence at the University of New South Wales in Sydney. It had come ahead of this week’s United Nations meeting in Geneva discussing the challenges posed by lethal autonomous weapons, often called “killer robots.”
The meeting, set to begin Monday, will consider the military applications of AI and options for addressing the humanitarian and international security challenges posed by lethal autonomous weapons systems.
By Sohn Ji-young (firstname.lastname@example.org)