Every day, it seems there’s fresh news about artificial intelligence: self-driving cars, cloud-based services, generative AI that can produce art and text, and even robots with synthetic muscles. The pace of change is dizzying, filling us with hope for a better future as well as worries about deepfakes, misinformation and ethical lapses. It can feel like we’re driving on a foggy highway or drifting on a vast, uncharted ocean. In this column, I hope to clear some of that haze by looking at how our digital technology has evolved — and what might lie ahead. Broadly, there have been three pivotal chapters in this unfolding story: the rise of computing and programming, the era of machine learning and other AI systems, and now the dawn of AI agents.

The story begins when personal computers (with Bill Gates as the poster boy) first entered our lives. Back then, we learned not just how to operate these machines, but also how to harness them for problem-solving. In 2006, Jeannette Wing of Carnegie Mellon University described Computational Thinking (CT), highlighting how breaking down problems into smaller tasks, identifying patterns, abstracting the core principles and devising systematic algorithms can help us navigate complexity. At a glance, CT mirrors the logic of programming and software development, but it also reflects our broader desire to adopt a machine-like mindset for tackling complex, routine problems.
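To make those four habits concrete, here is a minimal sketch in Python (my own illustration, not an example from Wing's paper): a mundane chore, tidying a contact list, decomposed into small steps, with a pattern test and an abstracted rule combined into a simple algorithm.

    # Computational thinking in miniature: decompose a chore into steps.

    def normalize_name(raw: str) -> str:
        # Abstraction: one rule that treats every name the same way.
        return " ".join(part.capitalize() for part in raw.strip().split())

    def looks_like_email(raw: str) -> bool:
        # Pattern recognition: valid entries share a recognizable shape.
        return "@" in raw and "." in raw.split("@")[-1]

    def clean_contacts(records):
        # Algorithm: apply the steps systematically to every record.
        return [(normalize_name(name), email.lower())
                for name, email in records if looks_like_email(email)]

    print(clean_contacts([("  ada LOVELACE ", "Ada@Example.org"), ("x", "not-an-email")]))
    # [('Ada Lovelace', 'ada@example.org')]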

Then came the next wave: machine learning. Thanks to Dr. Geoffrey Hinton and other researchers in data science — particularly in the field of deep learning — computers could be trained to recognize speech and images and even hold conversations, tasks once assumed to be uniquely human. Deep learning, with its multi-layered neural networks, propelled these breakthroughs by enabling AI systems to handle massive datasets, detect patterns, and refine their predictions over time. The AI4K12 initiative offers a useful framework for understanding this phase. Its “Five Big Ideas in AI” (perception, representation and reasoning, learning, natural interaction, and societal impact) demonstrate how machines learn from data and mimic human intelligence.
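For the curious, here is a minimal sketch of that idea in code (assuming only the numpy library): a tiny two-layer network that learns the XOR function from four examples. Production systems are vastly larger, but the loop is the same: predict, measure the error, adjust the weights.

    import numpy as np

    # Four examples of XOR: the network must learn the rule from data alone.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # hidden layer
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # output layer
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        hidden = sigmoid(X @ W1 + b1)      # detect intermediate patterns
        out = sigmoid(hidden @ W2 + b2)    # combine them into a prediction
        # Backpropagation: nudge every weight to shrink the error.
        d_out = (out - y) * out * (1 - out)
        d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
        W2 -= hidden.T @ d_out
        b2 -= d_out.sum(axis=0, keepdims=True)
        W1 -= X.T @ d_hidden
        b1 -= d_hidden.sum(axis=0, keepdims=True)

    print(out.round(2))  # should end up close to [[0], [1], [1], [0]]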

Fredrik Heintz, a professor at Linköping University, offers an insight into the relationship between CT and AI. While AI focuses on teaching machines to “think” through strategies like declarative programming and learning from examples, CT teaches us to solve problems more systematically, drawing inspiration from how computers operate. Put another way, in traditional programming we tell computers each step, but in AI we give them examples (that is, data) or define goals (that is, objectives), and they figure out the best approach themselves through training and prediction.
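A small illustration of that contrast (assuming the scikit-learn library is available; the tiny dataset is invented for the example): the first spam filter below is programmed rule by rule, while the second is handed labeled examples and works out its own rules through training.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Traditional programming: we spell out every rule ourselves.
    def spam_by_rules(subject: str) -> bool:
        return any(word in subject.lower() for word in ("winner", "free", "urgent"))

    # Machine learning: we supply examples (data) and a goal (classify
    # correctly), and the model infers its own rules during training.
    subjects = ["you are a winner", "free money now", "meeting at noon", "lunch tomorrow"]
    labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

    vectorizer = CountVectorizer()
    model = MultinomialNB().fit(vectorizer.fit_transform(subjects), labels)

    print(spam_by_rules("free prize inside"))                          # True
    print(model.predict(vectorizer.transform(["free prize inside"])))  # [1]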

Now we arrive at the third chapter: the era of AI agents. Built on large language models and other advanced algorithms, AI agents go beyond simple Q&A. They can plan tasks, make decisions, and carry out actions largely on their own. If we look at the office tasks many of us perform — some creative (though not often), others shaped by routine patterns, workflows and basic business sense — we can imagine an AI agent stepping in to handle the bulk of our daily grind. Picture a reliable coworker or personal secretary who seamlessly manages tedious tasks. These agents work like avatars — embodied conversational agents — but without actual physical bodies. They must be tireless, consistent and quick to adapt to our feedback, data updates and specific demands. Once such an agent gets the hang of a task, it should run with it autonomously, leaving us free to focus on creativity and decision-making.
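In schematic form, such an agent is simply a loop: plan the next step, act on it, observe the result, and adapt. The sketch below is a bare skeleton of that idea; every name in it (llm.next_step, the tools dictionary) is a hypothetical placeholder, not any vendor's actual API.

    # A bare plan-act-observe loop, the skeleton behind many AI agents.
    # All names here are hypothetical placeholders, not a real product's API.

    def run_agent(goal, llm, tools, max_steps=10):
        history = [f"Goal: {goal}"]
        for _ in range(max_steps):
            # Plan: ask the language model for the next action, given context.
            action, argument = llm.next_step(history)
            if action == "finish":
                return argument  # the agent judges the goal to be met
            # Act: run the chosen tool (calendar, email, search, ...).
            observation = tools[action](argument)
            # Observe and adapt: feed the outcome into the next decision.
            history.append(f"{action}({argument}) -> {observation}")
        return "Stopped after too many steps; a human should take over."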

Of course, this isn’t the first time technology has automated tasks that once consumed countless hours of human labor. Gutenberg’s printing press mechanized the spread of knowledge, and Henry Ford’s assembly lines revolutionized manufacturing. Today, AI is poised to automate forms of human thinking — whether that means composing music, synthesizing research or recommending strategies. Historically, every such seismic shift has wreaked havoc and sparked controversy. Will jobs vanish? Who stands to profit the most? And how will we ensure ethical standards or fair wealth distribution? Yet if history is our crystal ball, we eventually buy in and adapt. Despite the upheaval these transitions can bring, societies endure and often find new cultural, artistic and economic pathways — leading to unexpected opportunities.

If we can steer this transformation wisely, the steady stream of AI developments may feel less like chaos and more like a deliberate progression of human ingenuity. The next frontier may be one where technology serves not just as a tool but as a collaborator: the clone we wish for whenever we sigh, “I’m too busy; I wish I had a clone.” Such a partner could help us reach new heights of innovation, efficiency and a richer sense of what it means to be human. Of course, the ruthless irony in all of this is whether you can still prove your worth to your boss — and keep your job.

Lim Woong

Lim Woong is a professor at the Graduate School of Education at Yonsei University in Seoul. The views expressed here are the writer’s own. -- Ed.