Build a GPT-Powered Translator with LangChain
Everyone is talking about prompt engineering and how to write effective prompts. A prompt is the input to a model. Many people think of it as the input line of ChatGPT, but prompt engineering is much more than that. In general, specificity and simplicity lead to better results when writing prompts.
You can write prompts interactively in ChatGPT, or you can write them programmatically when building LLM applications. For the latter, you can use the framework LangChain, which provides several functions for working with prompts and prompt templates.
The focus of this article is the use of prompt templates in LangChain. If you are not familiar with LangChain, we recommend our introductory article on LangChain.
In the following article, we'll show you how to use a chat prompt template to implement translator functionality. Let's start!
What is Prompt Templating?
An LLM takes text as input, and that input is what we call a prompt. Usually, it is not just a hard-coded string but a combination of templates, examples, and user input. A prompt template is a reproducible way to create such a prompt and parameterize a model. It contains a text string that accepts several parameters from the user.
In general, a prompt template consists of:
Instruction: A specific task to be performed by the model.
Context: Additional information so that the model can respond better, for example, a set of few-shot examples.
Input: A question that we ask the model.
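To make these three parts concrete, here is a small sketch that assembles such a prompt by hand (the sentiment task and the example reviews are made up for illustration):
# Instruction: the task the model should perform
instruction = "Classify the sentiment of the customer review as positive or negative."

# Context: a few-shot example that shows the model the expected format
context = 'Review: "It broke after two days." -> negative'

# Input: the question we actually ask the model
user_input = 'Review: "I use it every day and love it." ->'

prompt = f"{instruction}\n\n{context}\n{user_input}"
print(prompt)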
The prompt has a massive influence on the output. So, it’s very important to write good prompts. Now, let’s look at a simple prompt template.
from langchain import PromptTemplate

template = """You are a consultant for new companies.
What is a good name for a company that does the following: {human_prompt}?"""

# Create a reusable prompt template from the template string
prompt = PromptTemplate.from_template(template)

# Fill in the placeholder with the user's input
prompt.format(human_prompt="We build a translator app based on Large Language Models")
We want to create a prompt template for an AI assistant that generates names for new companies based on their product portfolio. First, we define the template with the placeholder human_prompt for the user's input. Then, we create a PromptTemplate with the function from_template(...). Finally, we format the prompt by filling in the placeholder. In the next step, we could use the prompt in an LLM.
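For example, here is a minimal sketch of that step (it assumes your OpenAI API key is already set, which we cover in the setup below):
# Pass the formatted prompt to an LLM and print the suggested company names
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.7)
print(llm(prompt.format(human_prompt="We build a translator app based on Large Language Models")))
That was a very simple prompt template. Now, let us jump into prompt templating for a translator.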
Build a Translator with Prompt Templates
Before we start, we have to set up our environment.
Setup
First of all, you must install Python, conda and pip. In addition, you need a terminal to set up the environment. You can use the following commands to create a virtual environment:
Create a conda environment:
conda create -n langchain python=3.9.12
Activate the environment:
conda activate langchain
Great! Now, we can install all required dependencies. For this, you can use the following command:
pip install langchain openai
Next, you must set your OpenAI API key. If you don’t have a key, please follow the instructions in our previous article. Then, you can set your API key with the following command:
# macOS and Linux
export OPENAI_API_KEY=[Insert your API key here.]

# Windows
set OPENAI_API_KEY=[Insert your API key here.]
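Alternatively, you can set the key from within Python, for example at the top of your script:
# Alternative: set the API key from within Python (replace the placeholder with your key)
import os
os.environ["OPENAI_API_KEY"] = "<your-api-key>"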
Whichever way you choose, paste your API key at the marked point and delete the placeholder brackets as well. Okay, let's jump into the implementation!
Implementation
First, we need to import all relevant libraries.
from langchain.prompts import (
    ChatPromptTemplate,
    PromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain import LLMChain
from langchain.chat_models import ChatOpenAI
Next, we define the prompt templates. In this article, we use a chat model, which is why we define chat prompt templates rather than a normal prompt template as in the example above. Chat models use language models under the hood, but they expose a slightly different interface: instead of a text-in, text-out API, they take chat messages as inputs and return chat messages as outputs. These messages are not raw strings like the ones you pass into an LLM; each message has a specific role, such as a system message or a human message.
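To illustrate, here is roughly what such a message list looks like (a quick sketch):
# Chat models consume a list of typed messages instead of a single string
from langchain.schema import HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are a helpful assistant that translates English to German."),
    HumanMessage(content="I love programming."),
]
For our translator, we can use a system message prompt template to define what our model should do. The model follows the instructions in system messages more closely.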
# option 1
system_template = PromptTemplate(
    input_variables=["input_language", "output_language"],
    template="You are a helpful assistant that translates {input_language} to {output_language}.",
)
system_message_prompt = SystemMessagePromptTemplate(prompt=system_template)

# option 2
system_template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(system_template)
You can define a system message prompt template in two ways:
Option 1: You create a normal prompt template using the PromptTemplate class. For this, you define your input variables and your template. After that, you create a system message prompt using the SystemMessagePromptTemplate class.
Option 2: If you don't want to specify the input variables manually, you can use the from_template() class method. LangChain automatically parses the input variables from the template string.
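You can verify that both options render the same message with a quick check (a sketch):
# Render the system message with concrete languages
msg = system_message_prompt.format(input_language="English", output_language="German")
print(msg.content)
# output: You are a helpful assistant that translates English to German.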
Both options do the same thing, so you can decide for yourself which one to use. Next, we define the template for the human prompt.
# option 1
human_template = PromptTemplate(
    input_variables=["text"],
    template="{text}",
)
human_message_prompt = HumanMessagePromptTemplate(prompt=human_template)

# option 2
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
Again, there are two options: in the first, we define the input variables directly; in the second, LangChain parses them automatically. The only difference from the system message prompt code is that we now use the HumanMessagePromptTemplate class.
In the next step, we connect the two templates. For this, we use the ChatPromptTemplate class.
# option 1
chat_prompt = ChatPromptTemplate(messages=[system_message_prompt, human_message_prompt])

# option 2
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
Again, there are two options: in the first, we instantiate the class directly; in the second, we use the from_messages() class method. Both produce the same chat prompt.
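Before we wire the prompt into a model, we can inspect the rendered messages (a quick sketch with sample values):
# Render the chat prompt with concrete values to see the final messages
rendered = chat_prompt.format_messages(
    input_language="German",
    output_language="Croatian",
    text="Ich beschäftige mich gerne mit Finanzthemen!",
)
for message in rendered:
    print(type(message).__name__, "->", message.content)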
Next, we define our chat model.
chat = ChatOpenAI(
    temperature=0,
    model="gpt-3.5-turbo-0613",
)
chain = LLMChain(llm=chat, prompt=chat_prompt)
For this, we use the OpenAI model gpt-3.5-turbo-0613 and set the temperature to 0 so that the output is as deterministic as possible. Then, we create a chain from our chat prompt and the model. Great, we have defined everything we need to run our translator. Let's test it with some sample data.
input_lang = "German"
output_lang = "Croatian"
text_to_translate = "Ich beschäftige mich gerne mit Finanzthemen!"
result_ai = chain.run(input_language=input_lang,
output_language=output_lang,
text=text_to_translate)
print(result_ai)
# output: Volim se baviti financijskim temama!
First, we create three variables that we can configure; if you want to integrate this code into a web application, you can connect these variables to UI elements. Then, we pass the variables to chain.run(...) and get the translation from German to Croatian. Congratulations, you have built the backend functionality of a simple translator app!
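If you want to integrate this into a larger application, you could wrap the chain in a small helper function; here is a sketch (the translate function is our own naming, not part of LangChain):
def translate(text: str, source_language: str, target_language: str) -> str:
    """Translate text from the source language into the target language."""
    return chain.run(
        input_language=source_language,
        output_language=target_language,
        text=text,
    )

# For example, connect this function to the UI elements of a web app
print(translate("Ich beschäftige mich gerne mit Finanzthemen!", "German", "Croatian"))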
Conclusion
In this article, we introduced you to prompt templating in LangChain and used it, together with the OpenAI API, to implement translator functionality. We also showed you different options for defining prompt templates in LangChain. The result is a translator that can translate text from one language to another.
Thanks so much for reading. Have a great day!