Everyone is worried about Artificial Intelligence, particularly when it comes to jobs. From Hollywood writers to computer programmers, recent technological advances are raising concerns about what Generative AI will mean for the future of work, our society, and the wider world. Is there nothing machines will not be able to do, and will they permanently change our jobs?

Expert Jobs Will Be Automated, but Not Because of the Latest Generative AI

Artificial intelligence, machine learning, large language models – artistic interpretation. Image credit: Gerd Altmann via Pixabay, free license

We have spent a decade researching the impacts of AI. Ten years ago, we wrote a paper estimating that some 47% of US-based jobs could be automated in principle as AI and mobile robotics expanded the scope of tasks that computers can do.

Our estimates of how jobs would change were based on the premise that, while computers might eventually be able to do most tasks, humans would continue to hold the comparative advantage in three key domains: creativity, complex social interactions, and interaction with unstructured environments (such as your home).

However, it is important to acknowledge meaningful progress in these domains, with Large Language Models (LLMs), such as GPT-4, capable of producing human-like text responses to a very wide range of queries. In the age of Generative AI, a machine might even write your love letters.

First, if GPT-4 does write your love letters, your in-person dates will become even more important. The bottom line is that, as virtual social interactions are increasingly aided by algorithms, the premium on in-person interactions, which machines cannot replicate, will only grow.

SpaceX office – illustrative photo. Many jobs cannot be fully automated. Image credit: Steven Brown via Flickr, CC BY-NC 2.0

Second, although AI can produce a letter in the style of Shakespeare, it can do so only because Shakespeare’s works already exist and are available for an AI to train on.

AI is generally good at tasks and jobs which have clear data and a clear goal, such as maximizing the score in a video game, or the similarity to the language of Shakespeare. But if you want to create something genuinely new, rather than rehashing existing ideas, what should you optimize? Answering the question of the true goal is where much human creativity resides.

Third, as we noted in our 2013 paper, there are many jobs that can be automated, but Generative AI – a subfield of the broader field of AI – is not yet an automation technology. It needs prompting from a human, and it needs a human to select, fact-check, and edit the output.

Finally, Generative AI generates content that mirrors the quality of its training data – garbage in, garbage out. These algorithms require training on expansive datasets, such as large segments of the internet, as opposed to smaller, refined datasets curated by experts.

Consequently, LLMs are inclined to create text that aligns with the average, rather than the extraordinary, portions of the internet. Average input yields average output.

What does this mean for the future of employment and the jobs market? For one thing, the latest generation of AI will continue to require human involvement. What is more, workers with less specialized skills stand to gain disproportionately, as they can now generate content that matches the ‘average’ benchmark.

Could the hurdles outlined above be overcome soon, paving the way for widespread automation of creative and social tasks and jobs? In the absence of a major breakthrough, we think it unlikely.

First, the data already ingested by LLMs likely comprises a considerable fraction of the internet, making it unlikely that training data can be significantly expanded to power further progress. Furthermore, there are legitimate grounds to expect a surge of inferior AI-generated content on the web, progressively degrading its quality as a source of training data.

Second, while we have become accustomed to Moore’s Law – the observation that the number of transistors in an integrated circuit doubles approximately every two years – many anticipate this trend will lose momentum, owing to physical constraints, around 2025.

Third, energy is estimated to have accounted for a large fraction of GPT-4’s $100 million training cost – even before the price of energy went up. With the climate challenge looming large, there are questions over whether this approach can continue.

What is needed, in other words, is AI that is capable of learning from smaller, curated datasets, drawing upon expert samples, rather than the average population. But when such innovation will come is notoriously hard to predict. What we can do is create better incentives for data-saving innovation.

Consider this: around the turn of the 20th century, there was a genuine contest – would electric vehicles or the combustion engine prevail in the burgeoning car industry? At first, both contenders were on par, but massive oil discoveries tipped the balance in favor of the combustion engine.

Now imagine that we had levied a tax on oil back then: we might have shifted the balance in favor of the electric car, sparing us plenty of carbon emissions. In a similar fashion, a tax on data would create incentives for innovation that makes AI less data-intensive.

Going forward, as we have argued elsewhere, many jobs will be automated, but not because of the latest wave of Generative AI. In the absence of major breakthroughs, we expect the bottlenecks we outlined in our 2013 paper to continue to constrain automation possibilities for the foreseeable future.

Written By Professor Carl-Benedikt Frey, Dieter Schwarz Associate Professor of AI & Work, Oxford Internet Institute & Director, Future of Work Programme, Oxford Martin School, and

Professor Michael Osborne, Professor of Machine Learning, Department of Engineering Science and co-Director, Oxford Martin AI Governance Initiative.

Source: University of Oxford

