Prompt crafting: the art of simplicity

Talking with AI models such as ChatGPT or Claude seems easy. You would think the conversations would be similar to ones between two people, discussing a topic or giving instructions about a task to be done. Unfortunately, that isn't always the case.

Credits: DALL-E

Sometimes, the output we seek requires complex processing or multiple steps. Communicating the “rules” and “boundaries” of what we need to the AI model can be tricky, and it often takes far more time than anticipated. To help steer you toward success, here are some tips and guidelines I usually give my engineers when they first jump into the fray of prompt engineering.

"Keep things simple"

I can't stress this basic but very useful piece of advice enough. With the power they have, AI models can process so much information in so little time that it's stunning (even frightening). Our first reflex is to assume that everything we write is understood exactly the way we intend, with the specific meaning it has in our mind at that moment. Unfortunately, just like in normal conversations, when you communicate your thoughts to someone else you have to be clear and precise, and pick the words that best translate the idea you want to express. I'm sure you can remember countless situations where you (or someone speaking to you) tried to explain something, went back and forth, and in the end the listener walked away with the wrong idea of what was meant.

I often see this mistake made by junior prompt engineers, especially when they iterate on a task they want accomplished. They create a basic prompt, then add or update a chunk, then another, and another, up to the point where some parts conflict with others. These conflicts can be subtle and easy to miss.

To prevent this from happening, I prefer to approach prompting the same way I tackle a web or software project where a new feature needs to be built. I start by listing the basic needs to be met: something like a bullet list of priorities that must be addressed for it to work. After making sure each point is distinct and unique, I try to convert them into instructions using as few words as possible. This means I have to choose the words that pull the most weight toward the idea I'm trying to describe. If some words are too light, I usually drop them and move on. The trick is to say everything that needs to be said without shortening it so much that meaning is lost. Here is an example:

Suppose you are using a conversational AI model to power a custom assistant in your product, and you need to give the user the right feedback based on the provided data:

I will provide you with data in JSON format about the latest 10 movies out in theaters in the US with a summary of each movie, the 10 most viewed critics and the top 10 comments made by users. I want you to provide a list of recommendations about those movies. For each recommendation I want you to write a reason why it was a good movie and why the user should see it. Also if you consider the movie to be bad, write why it was a bad movie and why the user should see it. I need you to write a recommendation based on what the critics said and based on the comments provided with the data but try to be spoiler-free as we don't want the user to know the content of the movies before watching them. [DATA, STRUCTURE: -summary(paragraph), -critics(list), -comments(list)]
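For context, here is a minimal sketch of what the [DATA, STRUCTURE: ...] placeholder might expand to. The field names follow the structure hint in the prompt; the movie, critics, and comments are invented purely for illustration.

```python
# Hypothetical example of the data payload described by
# [DATA, STRUCTURE: -summary(paragraph), -critics(list), -comments(list)].
# Field names follow the structure hint; all values are made up.
movies_payload = [
    {
        "title": "Example Movie",
        "summary": "A short, spoiler-light paragraph describing the premise.",
        "critics": [
            "Critic A: a tense, well-paced thriller.",
            "Critic B: strong performances, uneven ending.",
        ],
        "comments": [
            "Loved the soundtrack!",
            "Felt a bit too long for me.",
        ],
    },
    # ...up to 10 movies
]
```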

Looking back at that prompt, it is very repetitive and can be confusing. It asks the AI model to write why the movie was good but also why it was bad. Since this is a simple example, in most cases the AI model should be smart enough to give a proper recommendation anyway, but this kind of messy structure opens the door to unexpected results and diminishes the quality of the product we are trying to build. If we apply the strategy I talked about, let's make a list of what's important here:

The input: JSON data about the latest movies in US theaters, with a summary, critics, and user comments for each.
The verdict: is the movie good or bad?
The reason: a short explanation of the verdict.
The constraint: stay spoiler-free.

With these ideas in mind, let's rewrite the prompt into something simpler:

I give you JSON data about latest movies in US theaters. Review and output these information about each movie:
Is it a good or bad movie?
In a few sentences, explain why
Very important to be spoiler-free.

[DATA, STRUCTURE: -summary(paragraph), -critics(list), -comments(list)]
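To make the setup concrete, here is a rough sketch of how this prompt and the data could be sent to a chat model. I'm assuming the OpenAI Python SDK purely for illustration; any chat-capable client would do, and the model name is just a placeholder.

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai).
# The model name is a placeholder; swap in whichever chat model you use.
import json
from openai import OpenAI

prompt = (
    "I give you JSON data about latest movies in US theaters. "
    "Review and output these information about each movie:\n"
    "Is it a good or bad movie?\n"
    "In a few sentences, explain why\n"
    "Very important to be spoiler-free."
)

# A tiny stand-in for the real payload (see the sketch earlier in the article).
movies_payload = [
    {"title": "Example Movie", "summary": "A short premise.", "critics": [], "comments": []}
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": prompt},
        {"role": "user", "content": json.dumps(movies_payload)},
    ],
)
print(response.choices[0].message.content)
```

Keeping the instructions in the system message and the data in the user message also makes it easy to swap the data on every request without touching the prompt itself.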

Comparing the two versions, the new prompt is much shorter, and each word carries more weight in the meaning of the task to be done. Even if it sounds a bit odd when read aloud, the task is much clearer now, and the prompt contains fewer tokens, making it less costly to run at large scale.
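If you want to check the token savings for yourself, a quick sketch like the one below works. I'm assuming the tiktoken library and its cl100k_base encoding here; the exact counts depend on which tokenizer your model actually uses.

```python
# A rough sketch comparing the token footprint of the two prompts.
# Assumes the tiktoken library (pip install tiktoken); exact counts
# depend on your model's tokenizer.
import tiktoken

verbose_prompt = (
    "I will provide you with data in JSON format about the latest 10 movies out in "
    "theaters in the US with a summary of each movie, the 10 most viewed critics and "
    "the top 10 comments made by users. I want you to provide a list of recommendations "
    "about those movies. For each recommendation I want you to write a reason why it was "
    "a good movie and why the user should see it. Also if you consider the movie to be "
    "bad, write why it was a bad movie and why the user should see it. I need you to "
    "write a recommendation based on what the critics said and based on the comments "
    "provided with the data but try to be spoiler-free as we don't want the user to know "
    "the content of the movies before watching them."
)
concise_prompt = (
    "I give you JSON data about latest movies in US theaters. "
    "Review and output these information about each movie:\n"
    "Is it a good or bad movie?\n"
    "In a few sentences, explain why\n"
    "Very important to be spoiler-free."
)

encoding = tiktoken.get_encoding("cl100k_base")  # encoding used by many recent chat models
print("verbose:", len(encoding.encode(verbose_prompt)), "tokens")
print("concise:", len(encoding.encode(concise_prompt)), "tokens")
```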

Closing thoughts

This kind of exercise seems obvious, but we (myself included, from time to time) tend to get lost the more a prompt gets iterated on over time, especially with larger prompts that have to accomplish a complex task with many rules and boundaries the AI model must not cross. Taking a step back and returning to the basics helps us achieve greatness.

The possibilities are endless, but remember: creativity is the key.

Disclaimer: No AI models were used in the writing of this article. The text content was purely written by hand by its author. Human generated content still has its place in the world and must continue to live on. Only the image was generated using an AI model.
