Minimize LLM Hallucinations with Pydantic Validators

In our previous post, we introduced Pydantic as a tool to steer language models.

This post shifts focus to how we can leverage Pydantic's validation mechanism to minimize hallucinations. We'll explain how validation works and explore how incorporating context into validators can enrich language model results.

By the end of this article, you'll have seen several examples of how Pydantic can minimize hallucinations and give you more confidence in the model's output.

But before we do that, let's go over some validation basics.
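
As a quick, hedged sketch (the model and the rule here are purely illustrative), this is what Pydantic validation looks like in its simplest form: a `field_validator` runs whenever the field is set, and raising a `ValueError` surfaces as a `ValidationError` with a readable message.

```python
from pydantic import BaseModel, ValidationError, field_validator


class UserDetail(BaseModel):
    name: str
    age: int

    @field_validator("name")
    @classmethod
    def name_must_contain_space(cls, v: str) -> str:
        # Reject single-token names; the message below is carried
        # into the resulting ValidationError.
        if " " not in v:
            raise ValueError("name must contain a first and last name")
        return v


try:
    UserDetail(name="Jason", age=30)
except ValidationError as e:
    print(e)
    # 1 validation error for UserDetail
    # name
    #   Value error, name must contain a first and last name ...
```

That error message matters: because it's machine-readable, it can later be handed back to a language model as feedback on what to fix.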

Steering Large Language Models with Pydantic

In the last year, there's been a big leap in how we use large language models (LLMs), especially in how we prompt them to perform specific tasks. People are not just building chatbots; they're also using these models to sort information, improve their applications, and create synthetic data for training smaller, task-specific models.

What is Prompt Engineering?

Prompt engineering is a technique for directing LLMs like ChatGPT. It doesn't change the model itself; instead, it tweaks how we phrase our questions and instructions so that the responses are more accurate and helpful. It's like finding the best way to ask something in order to get the answer you need. There's a detailed article about it here.

While some have resorted to threatening human life to generate structured data, we have found that Pydantic is even more effective.
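
To make that concrete, here's a minimal sketch (the class and field descriptions are illustrative): a Pydantic model doubles as a JSON Schema, so describing the structure you want is as simple as writing a class and handing its schema to the model via a prompt or a function-calling API.

```python
import json

from pydantic import BaseModel, Field


class UserDetail(BaseModel):
    """Details extracted about a single user."""

    name: str = Field(description="Full name of the user")
    age: int = Field(description="Age in years")


# The JSON Schema derived from the class can be embedded in a prompt
# or passed to an API's function/tool-calling interface.
print(json.dumps(UserDetail.model_json_schema(), indent=2))
```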

In this post, we will discuss validating structured outputs from language models using Pydantic and OpenAI, and show you how to write reliable code around them. We'll also introduce a new library called instructor that simplifies this process and adds features that leverage validation to improve the quality of your outputs.
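
As a preview, here is a hedged sketch of instructor in action. It assumes a recent instructor release (with instructor.from_openai) and uses an illustrative model name; the key idea is that response_model tells the client what structure to return, and max_retries lets failed validations be sent back to the LLM so it can correct itself.

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, field_validator

# Patch the OpenAI client so chat completions can return Pydantic models.
client = instructor.from_openai(OpenAI())


class UserDetail(BaseModel):
    name: str
    age: int

    @field_validator("name")
    @classmethod
    def name_must_contain_space(cls, v: str) -> str:
        if " " not in v:
            raise ValueError("name must contain a first and last name")
        return v


user = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    response_model=UserDetail,  # output is parsed and validated against this model
    max_retries=2,  # validation errors are fed back to the LLM for another attempt
    messages=[{"role": "user", "content": "Extract: Jason Liu is 30 years old"}],
)
print(user)  # UserDetail(name='Jason Liu', age=30)
```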