The Korea Herald


[Mohammad Hosseini, Kristi Holmes] Beware inherent biases and inequities in AI tools

By Korea Herald

Published : Dec. 12, 2023 - 05:28


A year ago, OpenAI released ChatGPT -- a free generative artificial intelligence chatbot that creates text in response to user prompts.

With its launch, millions of people started using ChatGPT for tasks such as writing school essays, drafting emails and personal greetings, and retrieving information. Increasingly, people and public offices are using ChatGPT to improve productivity and efficiency, instantly performing sophisticated tasks that are typically beyond human abilities.

Publicly available reports show that in this year alone, 21 federal departments have used ChatGPT or similar systems to serve Americans, with the Departments of Energy, Health and Human Services, and Commerce being the top three users. Governmental uses of these systems may benefit the public by reducing costs or improving services. For example, US Customs and Border Protection has improved the speed and trustworthiness of its data entry and analysis, and the Department of Veterans Affairs has developed physical therapy support tools.

ChatGPT and similar systems can change work processes and human interactions across many domains -- and, in doing so, create ethical, legal, social and practical challenges. One challenge involves an unequal distribution of benefits and burdens.

Companies such as the Silicon Valley giants that develop these systems, or integrate them into existing workflows, continue to benefit most of all. Even when users can complete their daily tasks faster, their employers benefit more from ChatGPT in the long run, because with more efficiency come lower labor costs. More importantly, once these technologies are fully incorporated, they can even replace workers with cheap and reliable robots, which is already happening in spaces such as Amazon warehouses.

A typical response to these changes is that new technologies have disrupted work throughout history and humans have always adapted. But this argument sidesteps the nuances of the current situation. It also distracts us from understanding the impacts on people and from holding accountable those who contribute to, and stand to benefit most from, this new dynamic.

As for benefits, by helping us write more clearly and quickly, or by assisting with digital tasks, ChatGPT and similar tools free up time for more interesting work such as ideation and innovation. Here, the short-term impact seems positive.

However, the very use of this technology creates ethical challenges. Ultimately, the companies advancing these technologies are disproportionately better off: they not only collect valuable user data -- which can make users vulnerable in the future -- but also have their systems trained through users' free labor.

These gains will allow them to offer specialized secondary services, for example, to companies that employ workers for office-based jobs. This lucrative future helps explain why shares of Microsoft, OpenAI's largest investor, rose about 52 percent over the past year -- from $247.49 on Nov. 25, 2022, to $377.44 on Nov. 20, 2023 -- and were not affected even by OpenAI's bitter power struggle, which led to the firing of its CEO, Sam Altman, on Nov. 17. In another twist, Altman is returning as CEO and will answer to a new board.

In the past year, domestic and international organizations have moved to encourage responsible use of these systems. For example, the White House recently issued an executive order on AI with directives to guide its use. Likewise, the Organization for Economic Cooperation and Development launched the OECD AI Policy Observatory, which offers information about trustworthy AI policies and data-driven analysis through linked resources and country-level dashboards.

However, governments' inability to compel developers to disclose the data used to train these systems demonstrates that policies can go only so far. Transparency and equity are key to improving generative AI tools: without knowing a model's inputs, how can we know how to improve it?

Indeed, training data should be inclusive and come from reputable and reliable sources, and algorithms should be unbiased and continually scrutinized. Such measures are essential for future development of these tools and can be achieved only with more transparency from all involved parties.

Furthermore, if developers of these tools hope to make a major and meaningful social impact, they should carefully consider the influence of these systems on current and future affairs. Our biggest global challenges include climate change, pandemics, immigration and conflicts in Africa, the Middle East and Europe, to name but a few. Although ChatGPT can evaluate existing data or generate new text to help us analyze and communicate about these critical issues, it can also be used nefariously.

The use of ChatGPT to create misinformation challenges democratic values. The deluge of misinformation has already compromised our ability to understand and engage meaningfully with global challenges, and it will likely grow in severity.

For example, AI-generated images and troves of false texts have contributed to misinformation about global conflicts, influencing the public's perception of events and polarizing opinions. When misinformation and fake news are created about climate change, pandemics, immigration, health topics and more, it becomes increasingly difficult to unite people and mobilize them to address these issues.

Whether or not we personally use ChatGPT and other AI systems, our lives will be affected by them. We may wonder what we can do to stay informed in this new age of AI. We can begin by advocating for information and media literacy, and by using technologies such as ChatGPT critically, keeping in mind the inherent biases and inequities in these tools and in the data used to train them.

Generative AI is here to stay, and to realize the full promise of these systems, we must leverage them safely and responsibly.

Mohammad Hosseini, Kristi Holmes

Mohammad Hosseini and Kristi Holmes are assistant professor and professor, respectively, in the Department of Preventive Medicine at Northwestern’s Feinberg School of Medicine. They wrote this for the Chicago Tribune. -- Ed.

(Tribune Content Agency)