The cognitive gap caused by machines is set to widen
Makarand R Paranjape | 07 Jul, 2023
(Illustration: Saurabh Singh)
IN MY PREVIOUS column, I showed just how revolutionary artificial intelligence (AI) already is when it comes to blurring the distinction not only between pseudo and real cognition but also between human and alien intelligence. In the field of pedagogy, especially in the liberal arts, AI already poses unique and far-reaching challenges. Much of our effort as educators has been to train students to think and write clearly. But if AI can do that faster and better than humans, will the latter, paradoxically, come to be defined by being ungrammatical, muddle-headed, and incoherent?
The crisis is of epidemic proportions, with many postgraduate students unable to string together five sentences cogently, let alone produce longer, well-researched, and solidly substantiated work. Those who complain of the unfair advantage of the English-educated in India fail to realise that many in this so-called category would be unable to read a play by Shakespeare or even a novel by Charles Dickens. Without calling them neo-literate or barely literate, one would have to accept that their language skills are severely limited.
Unfortunately, this is equally true of the mothertongue-wallas. It is not that they are automatically better equipped to think, write, or speak clearly in their native tongues. The same problem of linguistic incompetence persists across languages. This becomes especially poignant when we come, rather dimly, to recognise that deep cognition may be among the few crucial characteristics of a real human being. But with most human beings incapable of it, where does that leave us?
Those who don’t understand or who underestimate the capabilities of AI take heart from its dismissal by the likes of the prominent linguist Noam Chomsky. Chomsky has scornfully pooh-poohed large language models (LLMs) as lacking in basic intelligence, and he is sceptical about AI’s capacity to truly understand human language. In multiple interviews and articles, he has argued that LLMs simply mimic patterns without understanding the meaning behind them. While he acknowledges the success of neural networks in pattern-recognition tasks, he asserts that this does not equate to human-level comprehension; instead, he claims that LLMs fundamentally lack the intelligence necessary to engage with language at a level akin to humans. True. But the issue, as I have argued earlier, is different: it will become increasingly hard to tell the difference.
AI has exhibited an unprecedented ability to generate coherent, relevant content much faster than humans. By drawing on extensive databases, AI text generators construct intricate responses based on linguistic patterns acquired through machine learning. These breakthroughs not only streamline information production but also enhance accuracy and stylistic consistency. While it is valid to question whether current AI models can grasp the intricacies of language at a human level, dismissing their potential as “superficial and immature” overlooks appreciable advances in the field. AI technologies have already been applied to tasks such as translation, content summarisation, and even the generation of academic essays that are often better than what most of us can produce.
I STILL REMEMBER the very first class I taught: Rhet 105 at the University of Illinois at Urbana-Champaign, where I had just enrolled as a graduate student in the fall of 1980. Rhetoric, the art of speaking and writing clearly, was considered mandatory for all students of the university. Even engineers. My course was actually subtitled “Rhetoric for Engineers.” It fell to my lot as a newly hired teaching assistant (TA). I was not yet 20; many of my students were older. The course was called “the pits.” You cut your pedagogic milk teeth in the pits before you graduated to teaching introductory literature courses.
My students, though not hostile, were resistant and resigned to suffering foreign TAs, whom they usually considered below par. Initially, I, too, felt rather unsure of myself. I discovered, however, that there was a longer history of English in India than in the families of many of my students, whose forebears had come to the US within the last 50 years from non-English speaking parts of the world.
By the end of that semester, it was clear to me that however frightening the prospect of teaching rhetoric to overworked and indifferent engineering students, it was something they badly needed. Most couldn’t write intelligently and intelligibly, let alone develop a well-researched and substantiated essay that conveyed their own, if not original, arguments. What is more, by grading their assignments week after week and guiding them to finish their longer term paper at the end of the semester, I myself learnt not just how to write, but also how to read and think.
Returning to India, I found that plagiarism was rampant, even in universities such as Jawaharlal Nehru University. The idea that you had to acknowledge other people’s ideas was alien; copying exact sentences and passing them off as your own was not even considered unethical. Right through high school and their undergraduate years, students routinely cut and pasted, mistakenly and conveniently believing that words, sentences, ideas, and thoughts were common property. It was all just data in their minds; there was little difference between information and knowledge.
In my research methodology classes, I often asked, “Would you take money from your roommate’s wallet or purse in their absence?” Most students said, “Of course not.” “But suppose,” I countered, “you had to, in an emergency. What would you do?” They usually replied, “We would ask permission or inform them of the reasons for doing so.” “Precisely. And, I hope, pay back later?” I asked. Almost everyone agreed. “That is what you need to do when you borrow not only exact words and sentences but ideas and arguments too. Just as you aren’t thieves, you shouldn’t be plagiarists.”
Nothing is totally original, I told my students, but acknowledging and engaging with the ideas of your predecessors is what research is all about. Catching instances of plagiarism and teaching students how to think and write clearly is much of what a liberal arts education is about. In a nutshell, creativity and critical thinking. But now, all that has changed. The rapid advancement of AI technologies, which has sparked concerns and debates in various fields, poses special challenges to educators and students. As AI-based tools such as automatic essay generators improve in efficiency and quality, teachers face new challenges in ensuring academic integrity and assessing student work.
THE MOST IMMEDIATE concern arising from AI’s influence on education is the potential proliferation of automated plagiarism. Historically, educators have relied on specialised software to detect instances of plagiarism in student writing. However, sophisticated AI algorithms now enable anyone to generate unique essays instantaneously, limiting the effectiveness of traditional plagiarism detection. Worse, even a halfway clever student doesn’t have to plagiarise at all. An AI writer will produce just as good an essay, references included, as a human being can. Perhaps even better. For those who go in for the paid programmes, the choices are legion.
Just as mugging up tables became obsolete once we had calculators, thinking, writing, and creating unique essays, poems, short stories and the like will also become outdated. To return to Chomsky’s critique of AI: what happens if AI does a better job of exposing its own limitations than Chomsky himself can imagine? But there’s the rub. Imagination, feeling, sentience, inspiration, and ultimately, consciousness. Machines don’t have these now. But LLMs can simulate much of this too, even without experiencing it. Probably sooner than we think, and better.
The cognitive divide will become much vaster and more dangerous than class, race, gender, national, or economic divisions. Even if AI does not destroy the human race, as some fear and predict, smart machines serving a very small section of smarter humans are likely to control the rest of us. Will that trigger not only a fundamental alteration in our existing social and power relations but also, more dismally, the end of humanity as we know it?