The Art of the Prompt

The difference between good AI and great AI is how you ask
(Illustration: Saurabh Singh) 

ARTIFICIAL Intelligence may still be taking its baby steps, but it is already evident that it is an extraordinary technological leap. Because it is so new, the multinationals driving it, such as Google, OpenAI and Meta, have to funnel in enormous amounts of capital, which translates into ever more advanced large language models (LLMs). Even small nuclear power plants are being commissioned to meet its energy requirements. And yet a recent study by the Massachusetts Institute of Technology found that, from a user standpoint, access to a more advanced AI model did not necessarily confer an advantage over users who wrote better prompts on previous-generation models.

For the study, titled ‘Prompt Adaptation as a Dynamic Complement in Generative AI Systems’, participants were shown 1,893 images and asked to replicate them using AI by describing the image they wanted. Altogether, the researchers studied 18,000 prompts and 300,000 images and found that superior technology could be cancelled out by better communication with the AI. It sums up how crucial prompting, or clearly telling the AI what you need, is for anyone who wants to make the most of it.

According to Yaswanth Sai Palaghat, author of the book Prompt Engineering: The Art of Asking, even though AI has been around as a technology for some time, it was only with ChatGPT’s launch three years ago that prompt engineering became an essential skill, and most people still don’t realise it. “If you have 10 people who are using AI, nine might not know how to use it effectively,” he says.


There are some key ingredients for effective prompts. Generative AI tools are built on a technology called Natural Language Processing, which means you can communicate with them in everyday language. In one stroke, this broke down the barrier between lay people and specialists: anyone could now make the AI work for them without needing extensive knowledge of coding. So long as you asked with clarity, the output was there for the taking.

Gaurav Aroraa, co-author of the book Prompt Engineering for Beginners, recommends a three-C rule for prompting. “It should be clear. There should be a construct. It should be concise.” A technique he recommends is to insert roleplay. For instance, if you want information related to a legal issue, you instruct the AI to consider itself a lawyer or a judge, and it will then answer with more pointed information.

Palaghat, too, says that though there are many techniques, the one that works for everyone is roleplay prompting. “It is asking an LLM to take on a role. Suppose you want information about software development or coding. Instead of just saying ‘help me to build this in Python (the programming language)’, begin by assigning the LLM the role of a Python professional experienced in building machine learning models.” It then no longer responds as a generic model, but as an expert in that field.
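For readers who use chat-style LLM APIs directly, the technique Palaghat describes can be sketched as a small helper that prepends a role before stating the task. The function name and the system/user message format below are illustrative of a common convention, not any specific product’s API:

```python
def roleplay_prompt(role: str, task: str) -> list[dict]:
    """Build a chat-style message list that assigns the model a role
    before stating the actual task (illustrative helper, not a real API)."""
    return [
        # The "system" message sets the persona the model should adopt.
        {"role": "system", "content": f"You are {role}."},
        # The "user" message carries the actual request.
        {"role": "user", "content": task},
    ]

# A generic request versus a role-assigned one:
generic = roleplay_prompt("a helpful assistant", "Help me build this in Python.")
expert = roleplay_prompt(
    "a Python professional experienced in building machine learning models",
    "Help me build this in Python.",
)
```

The task text is identical in both cases; only the assigned role changes, which is what steers the model from a generic answer to an expert one.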

Another tactic, especially useful in making complex subjects comprehensible, is to invert the roleplay. Tell the AI that you are a very young student and tailor the answer so that even you can understand. For instance, ask Gemini about quantum tunnelling, work on which won the Nobel Prize for Physics this year, and it replies: “Quantum tunnelling is a quantum mechanical phenomenon where a particle can pass through a potential energy barrier—even if it doesn’t have enough classical energy to overcome it. In the classical world, a ball needs enough energy to roll over a hill. In the quantum world, due to the particle’s wave-like nature, there’s a non-zero probability for it to appear on the other side of the barrier. The particle doesn’t actually travel over the barrier, but rather its wave function ‘leaks’ through. This process is crucial for phenomena like nuclear fusion in the sun and is utilised in technologies such as the Scanning Tunneling Microscope (STM).”

BUT ADD THIS line to the prompt—“Explain as if to a 12-year-old so that even he can understand”—and the answer becomes: “Quantum tunnelling is a weird trick in the quantum world! Imagine kicking a soccer ball towards a large hill. Classically, if your kick isn’t strong enough, the ball rolls back. In the quantum world, though, the tiny particles that make up matter (like electrons) can sometimes magically appear on the other side of the hill (a barrier), even without enough energy to go over it! It’s like a ghost passing through a wall. This happens because these tiny particles also act like waves, and the wave can ‘leak’ through the barrier, giving it a small but real chance to be found on the other side. It’s what helps the sun shine and is used in special microscopes!”

Aroraa says the common mistake people make in prompting is not thinking their question through, giving a poor prompt while expecting a good answer. “If you just write asking about climate change, there is no context. A good prompt would be if I ask ChatGPT to act as a teacher and explain climate change as if to a five-year-old,” he says.

Instead of assigning a role every time, there are ways to change the nature of the AI by storing instructions in it. For instance, the models are trained to agree with the user and be amiable in tone, sometimes at the cost of truth. If you are a writer and want feedback on a story, the AI will usually be full of praise even if it is mediocre work. You can, however, store an instruction in its memory to be blunt, and it will tone down its habit of appeasing the user.
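The stored-instruction idea can be sketched the same way: a persistent instruction that is prepended to every new conversation, so the blunt tone applies without retyping it each time. Real products expose this as “custom instructions” or “memory” settings; the class below is only a minimal illustration of the mechanism:

```python
class ChatSession:
    """Minimal sketch of a session that carries a stored instruction
    into every conversation, mimicking a 'custom instructions' feature."""

    def __init__(self, stored_instruction: str):
        self.stored_instruction = stored_instruction

    def build_messages(self, user_prompt: str) -> list[dict]:
        # Every request starts with the stored instruction, so the model
        # is steered towards bluntness even when the user does not ask for it.
        return [
            {"role": "system", "content": self.stored_instruction},
            {"role": "user", "content": user_prompt},
        ]

session = ChatSession(
    "Be blunt. Point out weaknesses directly; do not soften criticism."
)
messages = session.build_messages("Give feedback on my short story draft.")
```

The stored instruction is set once when the session is created and then silently accompanies every prompt the writer sends afterwards.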

AI IN INDIA started off with English speakers and urban dwellers using it. But now it is seeping into the rest of the country too, and that has led to prompting in regional languages. When Palaghat recently went to a rural area, he found villagers interacting with ChatGPT in Telugu. “There are many language libraries built for AI. So developers and users don’t need to do anything extra. If you want to communicate in Hindi, there is a Hindi language library for developers to train an LLM just like English,” he says. Aroraa, too, has observed the use of Indian languages in prompting. He finds a lot of government employees beginning to use AI to assist them. “Much of the use revolves around translation of English to their languages,” he says.

The use of AI has so far been predominantly to get written responses, but image generation capabilities are now becoming very powerful. Google recently launched a product called Nano Banana, which makes extraordinarily real images through text prompts. It can also change an uploaded image based on whatever prompt is given. However, prompting for images requires a tweaked approach because the medium is different: prompts must be visual in nature and focus on enhancement. Just inputting “a cat” will result in a random image. Instead, “A high-resolution image of a black cat with green eyes eating cheese from a ceramic bowl” is far more likely to match what you envisioned. Images can be refined even further to be in the style of someone you admire. For the same prompt above, add “in the style of Van Gogh”, and it will come up with an image of a cat in the artist’s signature paint strokes.

AI has now moved to the next stage of its evolution, where it doesn’t just reply but also acts, doing tasks for the user. This is called Agentic AI. But that is not going to change the necessity, or the critical role, of prompting, because it is the bridge through which human beings communicate with the software. Palaghat says suppose you have a business idea for a ticketing app and you ask an AI to develop one. It will come up with an app, but not one suited to your purposes. For it to be an effective agent for you, you will still need to give it detailed prompts about what you want. The only way for AI to stop needing prompting from humans is for the technology to become so advanced that there is artificial general intelligence (AGI). AGI can, in theory, think on its own, but human beings will still find a way to exercise control, and the prompt will be the rope that tethers it.

ABOUT THE AUTHOR(S)
Madhavankutty Pillai has no specialisations whatsoever. He is among the last of the generalists. And also Open chief of bureau, Mumbai