
[Herald-Pioneer Essay Contest] Artificial Intelligence: Friend or Foe?


By Erin Jeong Markussen

Atherton International School

Below is a winning essay from The Herald-Pioneer Essay Contest. -- Ed.

Imagine a world in which artificial intelligence (AI) were used to assist military systems, perhaps gaining control over autonomous drones or missile-launching platforms. The rapid development of AI in recent years has been met with much skepticism and concern, largely centered on the debate over whether AI poses a threat to humanity. This concern stems partly from the notion that the mass implementation of AI, seen in inventions such as humanoid robots, may reduce the availability of jobs and careers in the future, as well as potentially endanger human lives. However, it is an incontrovertible fact that AI has the potential to alleviate a host of 21st-century problems.


Humanoid robots (AI-controlled machines designed to resemble human beings) provide numerous advantages to modern society. They have applications in the healthcare, construction, and space industries, among many others. The market for humanoid robots is currently worth US$1.5 billion (Humanoid Robot Market, 2022) and is expected to grow to at least US$3.9 billion by 2023 (Service Robots: Humanoid Robots, 2022). Within the medical field, humanoid robots are an invaluable resource, partly because they can complete tasks typically performed by humans at a fraction of the time and cost, freeing healthcare workers to focus on other, more crucial duties. Humanoid robots are also used in the development of sophisticated prosthetics: the Waseda Bipedal Humanoid No. 2 Refined (WABIAN-2R) is a medical robot designed to operate as a human motion simulator (Biped Humanoid Robot WABIAN-2R, 2013), providing researchers with quantitative data for creating prosthetics for patients requiring lower-limb rehabilitation. It is therefore no longer wishful thinking to assert that humanoid robots will soon provide companionship to the sick and elderly, serving as robotic nurses.


Despite all of the beneficial applications of AI-controlled humanoid robots, there are also numerous drawbacks. One of the main concerns regarding the mass adoption of this advanced technology is its potential adverse impact on employment. According to the World Economic Forum's (WEF) Future of Jobs Report, AI machinery may replace 85 million jobs worldwide by 2025 (Ascott, 2021), massively reducing the number of available jobs in a world where unemployment is already a major dilemma: roughly 214 million people were unemployed as of 2021 (Statista, 2022). To compound this, a rapidly growing world population could force millions of people to compete fiercely for positions in highly competitive industries. This could lead to an even greater increase in unemployment, as well as worsen other critical global issues such as poverty, debt, and homelessness. The WEF, however, cautions against the misconception that AI will inevitably lead to a dramatic increase in unemployment, indicating instead that the introduction of AI into the workforce will prompt a significant rise in new positions. According to the same report, an estimated 97 million new jobs will be created, since growing demand for AI in the workplace will also create higher demand for roles such as robotics engineers, machine-learning experts, and data scientists. This implies that AI will take over many laborious and dangerous jobs while allowing humans to pursue other careers in safer working environments.


Another major concern surrounding the advancement of AI is how much influence AI will be granted and how this will affect societal safety. Isaac Asimov famously proposed the ‘Three Laws of Robotics’ (1942), designed to protect humans from rogue robots. These three laws beautifully encapsulate robots’ proper role: robots must not injure humans or allow them to come to harm; robots must obey all orders given by humans except those that would conflict with the First Law; and finally, robots must protect their own existence as long as doing so does not conflict with the first two laws. Asimov understood that the most advantageous traits of AI technology may also very well be the most dangerous.


Ghost Robotics, a US-based company specializing in state-of-the-art military solutions, has experimented with strapping guns to robot dogs (Vincent, 2021). This is a truly alarming development and reveals how easily AI can be weaponized. Other companies are producing robots made of bulletproof material, capable of lifting colossal weights or reaching high speeds relative to humans (Hambling, 2021). Throughout human history, authoritarian governments, terrorist groups, and heinous individuals have attacked human liberty. In the hands of those with malicious intent, weaponized AI could be deployed with calamitous consequences. Rogue states are also exploiting AI tools to suppress individuals. This is already happening, albeit to a much lesser extent, as remote systems such as video surveillance monitor people using government databases of personal information (Video-surveillance, 2022). It is argued that this is to “keep the people safe,” but who, exactly, is it that we need to be kept safe from?


That being said, by taking over simple but time-consuming tasks, humanoid robots can simplify the lives of healthcare workers and allow them to focus on more significant duties. These robots can also aid many other projects, such as the development of lower-limb prosthetics. Moreover, the advancement of AI can lead to a surge in available jobs and relieve people of hazardous work, despite the popular belief that introducing this technology into the workforce will have the reverse effect. The menace of AI is that it could be exploited by those with power and influence, giving them even greater control and dominion over others. Given the potential power AI can hold, the violation of just one of Asimov’s Three Laws could spell absolute catastrophe for humanity. We must agree on rigorous controls and appropriate measures to ensure that AI creates a more harmonious and abundant world for all, without endangering humanity.


* The competition was jointly organized by The Korea Herald and Pioneer Academics. This essay was submitted to and selected by the Pioneer Academics Research Program, the world’s only fully accredited online research program for high school students. The essay was reviewed by Mr. Brian Cooper, former director of Duke University’s Talent Identification Program (TIP), who currently leads the Research & Development department at Pioneer Academics. The Pioneer Academics Research Program offers the world’s only online academic system that is trusted and recognized by the most selective universities and colleges.
