LangChain Code Journey From Basics, Part 2: Prompt Templates

 Prompt Templates:

Prompt templates are predefined, parameterized blocks of text that give the prompts you send to a generative AI model a consistent format. They let you structure prompts once and then slot user inputs into that standard structure. For example, if you frequently ask the AI to debug code, you can design a prompt template that always formats the request the same way.
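Before looking at LangChain's API, the underlying idea can be sketched in plain Python: a template is just a parameterized string, and filling it in is substitution. This is only an analogy, not LangChain's API; the template text and helper name are illustrative.

```python
# A reusable template with named placeholders, analogous to a prompt template.
DEBUG_TEMPLATE = (
    "You are a developer who finds errors in Python code. "
    "The code should implement: {topic}.\n"
    "Here is the code: {code}"
)

def build_prompt(topic: str, code: str) -> str:
    """Fill the placeholders with user-supplied values."""
    return DEBUG_TEMPLATE.format(topic=topic, code=code)

prompt = build_prompt("sum of two numbers", "print(a*b)")
print(prompt)
```

Every call produces the same structure with different details filled in, which is exactly what a prompt template provides.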

Python
from langchain.prompts import ChatPromptTemplate

# Each tuple is (role, template string); {topic} and {code} are placeholders.
messages = [
    ("system", "You are a developer who finds errors in Python code. The code should implement: {topic}."),
    ("human", "Here is the code: {code}"),
]
prompt_template = ChatPromptTemplate.from_messages(messages)

# invoke() fills the placeholders with the supplied values.
prompt = prompt_template.invoke({"topic": "sum of two numbers", "code": "print(a*b)"})

print(prompt)

Explanation

  1. Importing Modules: We import ChatPromptTemplate, which builds a template from a list of role/message pairs.

  2. Defining the Template: The messages list contains a system message that sets the AI’s role (e.g., a developer who debugs code) and a human message where the user input (the code to be debugged) is passed.

  3. Creating the Prompt Template: ChatPromptTemplate.from_messages(messages) generates the template based on the predefined messages.

  4. Invoking the Template: By calling prompt_template.invoke() with specific values for {topic} and {code}, the placeholders in the template are replaced with actual data.

This approach simplifies the process of generating structured prompts, ensuring that the AI receives well-formatted input every time.


Combining Prompt Templates with an LLM:

Here's how you can combine a prompt template with an LLM (large language model) in LangChain, similar to what we discussed earlier, but now using a prompt template instead of typing prompts directly:
Python
from dotenv import load_dotenv
from langchain.prompts import ChatPromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI

# Load the API key (e.g. GOOGLE_API_KEY) from a .env file.
load_dotenv()

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")

messages = [
    ("system", "You are a developer who finds errors in Python code. The code should implement: {topic}."),
    ("human", "Here is the code: {code}"),
]
prompt_template = ChatPromptTemplate.from_messages(messages)
prompt = prompt_template.invoke({"topic": "sum of two numbers", "code": "print(a*b)"})

# Invoke the model once and reuse the response.
response = llm.invoke(prompt)
print(response)          # the full response object, including metadata
print("\n\nContent:\n")
print(response.content)  # just the generated text

Explanation

  1. Define the Prompt Template: We define a prompt template using placeholders like {topic} and {code}. This template ensures a consistent format every time you need to debug Python code.

  2. Invoke the Prompt Template: The prompt_template.invoke() function replaces the placeholders with actual values (e.g., "sum of two numbers" and 'print(a*b)').

  3. Use the Prompt with LLM: The generated prompt is then passed to the LLM (in this case, using the Gemini model) via llm.invoke(prompt).

  4. Output the Result: The model's full response object is printed first, followed by its .content attribute, which holds just the generated text.

Key Benefits

  • Reusability: Using prompt templates makes your code reusable and reduces the risk of errors from inconsistent prompt formatting.
  • Flexibility: Easily swap out different topics or code snippets without changing the overall prompt structure.

This method streamlines the process of generating structured prompts and seamlessly integrates with LLMs for efficient interaction.
