Hallucinations, the curse of the modern age
Access to information has never been easier than it is today. With the internet, search engines, social media and now artificial intelligence, almost anything can be found in a couple of seconds.
With the help of artificial intelligence, you don't even have to perform the search yourself anymore. You only need to ask, like you would ask an acquaintance, about any subject you need info on. This ease of use comes at a cost: hallucinations. Hallucinations happen when an AI model gives an answer built on thin air. The model uses the knowledge it acquired during training to craft an answer it deems plausible, unfortunately filled with falsehoods, and presents it as fact. What's more disturbing is that the answer is delivered so convincingly that you're led to believe it's true, and that is the real danger of hallucinations.
Unfortunately, hallucinations can't be avoided entirely. At some point, while working with an AI model, you will run into them. If you look at the major models on the market, most hallucinate at a rate between 3% and 10% when summarizing documents. In other words, for a model like ChatGPT, roughly 3 out of 100 answers contain false information. Keep in mind that rates can differ for other kinds of tasks.
In the case of AI-powered applications, hallucinations can harm the product you are trying to build. Imagine you are building a system where users can get information about the latest movies in theaters, but instead of providing real data, the AI model answers with non-existent movies and describes them as currently playing. This can lead to confusion, frustration and even legal repercussions. While hallucinations are unavoidable, there are techniques and tools to help you reduce them to lower rates. While not perfect, they can greatly improve the quality of your offering.
One of the tricks I used in multiple projects I worked on is to dynamically inject relevant information directly into the prompt before sending it to the AI model for processing. This technique is often called Retrieval-Augmented Generation (RAG). Its major downside is getting the data to inject: more often than not, you need a data bank stored in a way that can be searched based on the user's input. Vector database engines such as Pinecone tend to do a very good job at that kind of task.
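To give a rough idea of what that retrieval piece could look like, here is a minimal sketch of a Pinecone lookup. None of this comes from a specific project of mine: the "movies" index name, the embedding model and the metadata fields are assumptions made for the sake of the example.

```python
# Minimal sketch: looking up relevant movie records in a Pinecone index.
# The "movies" index, the embedding model and the metadata fields are
# assumptions made for this example, not a definitive implementation.
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()               # reads OPENAI_API_KEY from the environment
pc = Pinecone(api_key="YOUR_API_KEY")  # placeholder credentials
index = pc.Index("movies")             # hypothetical index of movie descriptions

def search_movies(user_question: str, top_k: int = 3) -> list[dict]:
    # Turn the user's question into an embedding vector.
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=user_question,
    ).data[0].embedding

    # Fetch the closest movie records, metadata included.
    results = index.query(vector=embedding, top_k=top_k, include_metadata=True)
    return [match.metadata for match in results.matches]
```

The important part is that the search works on the meaning of the question rather than on exact keywords, which is exactly what the embedding vector provides.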
For a more practical explanation, let's look at the example below. Imagine a user interacting with a movie assistant, trying to get information about a specific movie in theaters:
My daughter told me about the latest children's cartoon that came out this month. The one about a samurai trying to become a ninja. Is it suited for an 8-year-old girl?
The system would then fetch data about the cartoon in question, parse the results, and inject them into the prompt alongside the user's question. The AI model then generates an answer based on the provided data. The main advantage of this technique is that it forces the model to rely on precise, known information when generating the answer, reducing the risk of hallucinations at the cost of maintaining and querying a database.
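Continuing the sketch from earlier (it reuses the search_movies helper and the OpenAI client defined there), the generation step could look like the following. The prompt wording and the model name are illustrative assumptions, not a recommendation.

```python
# Minimal sketch of the generation step: inject the retrieved movie data
# into the prompt so the model answers from known facts rather than memory.
# Reuses search_movies and openai_client from the previous sketch.
def answer_with_context(user_question: str) -> str:
    movies = search_movies(user_question)  # retrieval step shown earlier

    # Flatten the retrieved records into a plain-text block for the prompt.
    context = "\n".join(
        f"- {m.get('title', 'unknown')}: {m.get('description', '')} "
        f"(age rating: {m.get('age_rating', 'n/a')})"
        for m in movies
    )

    completion = openai_client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[
            {
                "role": "system",
                "content": "You are a movie assistant. Answer only from the movie "
                           "data below. If the answer is not in the data, say you "
                           "don't know.\n\nMovie data:\n" + context,
            },
            {"role": "user", "content": user_question},
        ],
    )
    return completion.choices[0].message.content
```

Telling the model to admit when the answer is not in the provided data is a cheap but effective way to discourage it from improvising.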
There are also tools on the market specifically tailored to help you implement guardrails around what goes in and what comes out while processing a user request. They not only filter out unacceptable user inputs but also check that the AI model is providing acceptable answers. One such tool is NeMo Guardrails, created by NVIDIA. It is an open-source toolkit designed to add programmable rails around your AI-powered app. It can preprocess user prompts to filter out harmful or off-topic inputs, and it can also catch hallucinations and model answers that could harm the user or the app's brand. Some programming knowledge is required to implement these kinds of tools.
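To give an idea of what wiring in such a tool involves, here is a minimal sketch based on the toolkit's documented quickstart pattern. The contents of the ./config folder (a config.yml with the model settings plus Colang files describing the rails) are assumed and not shown here; refer to the NeMo Guardrails documentation for the exact format.

```python
# Minimal sketch of wrapping an LLM with NeMo Guardrails. It assumes a
# ./config folder holding a config.yml (model settings) and Colang files
# that define the input and output rails.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")  # load the rail definitions
rails = LLMRails(config)                    # wrap the configured model with the rails

# The user message passes through the input rails, and the model's answer
# through the output rails, before anything reaches the caller.
response = rails.generate(messages=[
    {"role": "user", "content": "Which children's movies are playing this week?"}
])
print(response["content"])
```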
Closing thoughts
Hallucinations are, in my opinion, the curse of the modern age. More and more people are relying on artificial intelligence to accomplish their daily tasks. Unfortunately, by doing so, they frequently have to face hallucinations, sometimes without noticing. The falsehoods provided are so convincing that you start to believe them. Techniques and tools can mitigate the risk, but it remains, whether we like it or not. In the end, always fact-check what's given to you, no matter how legitimate it seems.
The possibilities are endless, but remember: creativity is the key.
Disclaimer: No AI models were used in the writing of this article. The text was written entirely by hand by its author. Human-generated content still has its place in the world and must continue to live on. Only the image was generated using an AI model.