Will we lose jobs to Artificial Intelligence? Are such fears well founded? IIT Madras professor explains – The Indian Express

Posted: April 27, 2023 at 2:53 pm

(A Lesson from IIT is a weekly column by an IIT faculty member on learning, science and technology on campus and beyond. The column will appear every Friday.)

Sutanu Chakraborty

The quest for building machines that think and act like us has propelled significant advances in Artificial Intelligence (AI). While we now have systems that do restricted tasks very well, the vision of Artificial General Intelligence (AGI), where machines can seamlessly learn and do anything that a human does, continues to elude us. But ChatGPT is around and it seems magical, right? Does it have AGI? Not quite.

For the uninitiated, ChatGPT is powered by technology that belongs to the family of Large Language Models (LLMs). A language model can tell us that "a cat is sitting on the mat" is better English than "a cat is sitting in the mat". If given the last few words of a sentence, a language model can also predict which word is most likely to come next. You can think of a language model as a black box with a lot of numbers, called parameters. These parameters implicitly capture diverse aspects of language such as grammar, word usage and even world knowledge ("I like noodles with sauce" is more likely than "I like noodles with pizza", for instance).
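To make the idea of scoring sentences and predicting the next word concrete, here is a deliberately tiny sketch: a word-count (bigram) model over a made-up corpus, nothing like ChatGPT's actual architecture, that already shows both behaviours described above.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the web-scale text that real models learn from.
corpus = (
    "the cat is sitting on the mat . "
    "the dog is sitting on the rug . "
    "a cat sat on the mat . "
    "i like noodles with sauce ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def probability(sentence):
    """Rough score: product of P(next word | previous word) across the sentence."""
    words = sentence.lower().split()
    p = 1.0
    for prev, nxt in zip(words, words[1:]):
        total = sum(follows[prev].values())
        p *= follows[prev][nxt] / total if total else 0.0
    return p

def predict_next(prev_word):
    """Most likely word to come next, given the last word seen."""
    counts = follows[prev_word]
    return counts.most_common(1)[0][0] if counts else None

print(probability("cat is sitting on the mat"))  # higher score
print(probability("cat is sitting in the mat"))  # lower score (zero here)
print(predict_next("sitting"))                   # 'on'
```

Real LLMs replace these raw counts with billions of learned parameters, but the job, assigning probabilities to word sequences, is the same in spirit.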

Any sentence that you type in at the ChatGPT prompt is converted into a set of numbers that interact with the parameters of the language model to finally yield another set of numbers, which are rendered as output text. Large Language Models have on the order of billions of parameters that are learnt from really large volumes of data. An example is all of the textual content that can be scraped from the entire web.
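The text-to-numbers-and-back pipeline can be pictured with the toy sketch below; the word-level vocabulary and the hard-coded "prediction" are illustrative assumptions, since real systems use learned sub-word tokenisers and billions of parameters in between.

```python
# Toy illustration (not ChatGPT's real tokenizer): text becomes numbers,
# the model maps numbers to numbers, and the output numbers become text again.

vocabulary = ["<unk>", "a", "cat", "is", "sitting", "on", "the", "mat", "rug"]
word_to_id = {word: i for i, word in enumerate(vocabulary)}

def encode(text):
    """Convert a sentence into a list of integer ids the model can work with."""
    return [word_to_id.get(w, 0) for w in text.lower().split()]

def decode(ids):
    """Convert the model's output ids back into readable text."""
    return " ".join(vocabulary[i] for i in ids)

prompt_ids = encode("a cat is sitting on the")
print(prompt_ids)  # [1, 2, 3, 4, 5, 6]

# In a real LLM, the learned parameters would turn these ids into a probability
# distribution over the next id; here we simply hard-code a plausible choice.
next_id = word_to_id["mat"]
print(decode(prompt_ids + [next_id]))  # "a cat is sitting on the mat"
```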

As part of training ChatGPT, it was also ensured that the system learns from human feedback: it was rewarded for doing its job well and penalised otherwise. The end result is impressive: this new tribe of AI technologies is undoubtedly disruptive in more ways than we could have imagined. ChatGPT excels at many jobs humans traditionally take pride in: writing poems, code, online web content and so on. Do many of us then end up losing our jobs? Are such fears well founded?

First things first, LLMs, at their very core, are not as smart as they appear to be. Consider the case of two kindergarten kids arguing over whether the movie Titanic is romantic or tragic. Neither of them has experienced either tragedy or romance. The debate is literally a war of words, based purely on what they heard their parents talk about. As they grow up, they realise that both of them were right after all: Titanic is tragic and romantic at the same time. In a way not very different from the kindergarten kids, LLMs spit out words. However, the meanings of those words are not grounded in experience.

We must, therefore, not lose sight of the fact that LLMs lack a robust theory of the world. We should not be surprised if a six-year-old beats ChatGPT in tasks that demand common-sense reasoning and basic logical inference. To quote the noted linguist Noam Chomsky, LLMs are incapable of distinguishing the possible from the impossible. Consequently, they have the propensity to fabricate things and generate factually incorrect or biased responses that are not meant for serious professional consumption.

In the context of software jobs, LLMs can write functions or boilerplate code given a well-defined goal, but may not be able to factor vaguely specified, high-level business goals into the components that need to be designed. They may also not be able to analyse how these components should interact, prescribe how best to leverage the competencies of the existing workforce to get the whole job executed in a given timeframe, or suggest ways of recovering from aberrations in case plans do not get executed as expected.
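To illustrate the contrast, the kind of narrowly specified goal that an LLM handles comfortably looks like the hypothetical task below; deciding whether such a function is even the right component for a vaguely stated business objective remains the human part of the job.

```python
# A narrowly specified goal of the kind an LLM can readily turn into code
# (the task itself is hypothetical, chosen here purely for illustration):
# "Write a function that returns the n largest values in a list, biggest first."

import heapq

def n_largest(values, n):
    """Return the n largest numbers in `values`, in descending order."""
    if n <= 0:
        return []
    return heapq.nlargest(n, values)

print(n_largest([3, 1, 7, 7, 2, 9], 3))  # [9, 7, 7]
```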

Edsger Dijkstra, a stalwart in the field of computing, famously observed that computer science should be called computing science, for the same reason that surgery is not called knife science. In the context of programming, ChatGPT can help us code faster and thereby get our tools ready, but we cannot use those tools effectively unless we have a good grasp of the pathology of the problem we have set out to solve.

The post-LLM age will trigger the shift from a tool-centric approach to a problem-centric one. Stuart Russell, a leading AI expert, observes that despite all advances, AI systems need to be explicitly provided with an objective. As humans, not only are we aware of all we need to do to get a job done, we carry strong normative assessments of all that should not be done.

We need people to figure out what to do. Machines can help us with the how, as long as people are at the wheel. And in this new age, those with a good mix of the wisdom to make the best use of the technological resources at hand and strong interpersonal skills will be much sought after.

History is testimony to the fact that technological innovation initially displaces workers but creates fresh avenues for employment in the long run. A recent study by economist David Autor and others reveals that more than half of workers today are employed in occupations that did not exist in 1940. Amidst growing concerns of layoffs in major software industries in the near future, Geoffrey Hinton, one of the pioneers of the deep learning revolution that led to the creation of LLMs, opines that we could alternatively retain the same workforce and target achieving a lot more by leveraging the leap in productivity.

Over time, we are likely to see a flurry of new jobs that do not exist today. In my childhood, I would fancy winning quiz competitions by memorising facts. With Google around, such faculties are no longer held in high esteem. The yardstick of competence has evolved: today, students are assessed on the basis of their critical thinking, creativity and argumentative skills instead. As technology evolves, we will have to adapt to newer ways of re-evaluating ourselves.

In this age of fierce competition, it is important to remind ourselves that each of us is uniquely gifted. Career choices need to be made carefully so that they align with one's natural instincts and are not merely driven by societal pressures. This will make sure we enjoy the job we do; at the very least, it can surely set us apart from ChatGPT and its future incarnations.

A story goes that Albert Einstein's chauffeur, having heard Einstein lecture so many times over, felt confident he could do Einstein's job. The legendary physicist offered the chauffeur an opportunity to lecture, put on the chauffeur's attire and occupied one of the rear benches. The chauffeur did a fabulous job and skilfully answered a few questions as well. However, when a rather esoteric question on anti-matter that seemingly digressed from the main theme came up, the chauffeur replied, "Sir, this is so simple, I'll let my chauffeur seated at the back answer it on my behalf."

Like the chauffeur, LLMs are exposed to content that is very much a product of human thought, but they cannot substitute for an expert who has first-hand experience of the process by which such content came into being. On the other hand, a single human being's range of expertise is minuscule compared to the wide expanse of content that fuels ChatGPT. The future is about exploring interesting ways in which machines can complement and augment our abilities, not substitute them. The new generation will adapt a lot faster to this change, since they would not carry the baggage of how things were done in the past; this seamless coevolution of humans with technology would be, for them, the norm.

We must embrace the new age with the readiness not only to do things differently but also to do different things.

(The writer is a professor at the department of Computer Science and Engineering at IIT Madras. He is part of the Artificial Intelligence and Databases (AIDB) Lab.)
