Prompt Templates:
Prompt templates are predefined blocks of text that give prompts sent to a generative AI model a fixed format. They let you structure prompts consistently, making it easy to slot user inputs into a standard template. For example, if you frequently ask the AI to debug code, you can design a prompt template that always formats the request the same way.
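Before bringing in LangChain, the core idea can be sketched with plain Python string formatting — a fixed template with named blanks that user input fills in (the template text below is made up for illustration):

```python
# A template fixes the wording; user input fills the {placeholders}.
DEBUG_TEMPLATE = (
    "You are a developer who finds errors in Python code "
    "for the given logic topic: {topic}.\n"
    "Here is the code: {code}"
)

prompt = DEBUG_TEMPLATE.format(topic="sum of two numbers", code="print(a*b)")
print(prompt)
```

LangChain's prompt templates build on this same substitution idea, adding chat roles and integration with models.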
```python
from langchain.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage

messages = [
    ("system", "You are a developer who finds errors in Python code for the given logic topic: {topic}."),
    ("human", "Here is the code: {code}"),
]

prompt_template = ChatPromptTemplate.from_messages(messages)
prompt = prompt_template.invoke({"topic": "sum of two numbers", "code": "print(a*b)"})
print(prompt)
```
Explanation
- Importing Modules: We import `ChatPromptTemplate` to create a custom template. (`HumanMessage` is also imported, but this example defines the messages as plain `(role, content)` tuples, so it is not strictly needed here.)
- Defining the Template: The `messages` list contains a system message that sets the AI's role (e.g., a developer who debugs code) and a human message where the user input (the code to be debugged) is passed.
- Creating the Prompt Template: `ChatPromptTemplate.from_messages(messages)` generates the template from the predefined messages.
- Invoking the Template: Calling `prompt_template.invoke()` with specific values for `{topic}` and `{code}` replaces the placeholders with actual data.
This approach simplifies the process of generating structured prompts, ensuring that the AI receives well-formatted input every time.
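To make the mechanics concrete, here is a rough stdlib-only analogue of what a chat prompt template does — a simplified sketch for intuition, not LangChain's actual implementation (the class name `SimpleChatPromptTemplate` is made up for illustration):

```python
class SimpleChatPromptTemplate:
    """Toy analogue of ChatPromptTemplate: stores (role, template) pairs
    and fills in their placeholders when invoked."""

    def __init__(self, messages):
        self.messages = messages

    @classmethod
    def from_messages(cls, messages):
        return cls(messages)

    def invoke(self, values):
        # Substitute every {placeholder} in every message template.
        return [(role, tmpl.format(**values)) for role, tmpl in self.messages]


template = SimpleChatPromptTemplate.from_messages([
    ("system", "You are a developer who finds errors in Python code for the given logic topic: {topic}."),
    ("human", "Here is the code: {code}"),
])
print(template.invoke({"topic": "sum of two numbers", "code": "print(a*b)"}))
```

The real `ChatPromptTemplate` returns a richer prompt-value object rather than a plain list, but the role-plus-placeholder structure is the same idea.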
Combining Prompt Templates with an LLM:
```python
from dotenv import load_dotenv
from langchain.prompts import ChatPromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI

load_dotenv()  # load environment variables (e.g., the Google API key) from a .env file

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")

messages = [
    ("system", "You are a developer who finds errors in Python code for the given logic topic: {topic}."),
    ("human", "Here is the code: {code}"),
]

prompt_template = ChatPromptTemplate.from_messages(messages)
prompt = prompt_template.invoke({"topic": "sum of two numbers", "code": "print(a*b)"})

response = llm.invoke(prompt)  # call the model once and reuse the result
print(response)
print("\n\nContent:\n\n")
print(response.content)
```
Explanation
- Define the Prompt Template: We define a prompt template with placeholders like `{topic}` and `{code}`. This template ensures a consistent format every time you need to debug Python code.
- Invoke the Prompt Template: `prompt_template.invoke()` replaces the placeholders with actual values (e.g., `"sum of two numbers"` and `'print(a*b)'`).
- Use the Prompt with the LLM: The generated prompt is then passed to the LLM (in this case, the Gemini model) via `llm.invoke(prompt)`.
- Output the Result: The model's full response object and its text content are printed out.
Key Benefits
- Reusability: Using prompt templates makes your code reusable and reduces the risk of errors from inconsistent prompt formatting.
- Flexibility: Easily swap out different topics or code snippets without changing the overall prompt structure.
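To see the reusability benefit concretely, here is a small stdlib-only sketch that reuses one template across several requests (the template string and the `(topic, code)` pairs are hypothetical; in practice they would come from users):

```python
DEBUG_TEMPLATE = "Find the error in this {topic} code: {code}"

# Hypothetical user requests — the template stays fixed, only the inputs change.
requests = [
    ("sum of two numbers", "print(a*b)"),
    ("list reversal", "items.reverse"),
]

prompts = [DEBUG_TEMPLATE.format(topic=t, code=c) for t, c in requests]
for p in prompts:
    print(p)
```

Each formatted prompt could then be sent to the LLM exactly as shown in the earlier example, without any change to the template itself.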
This method streamlines the process of generating structured prompts and seamlessly integrates with LLMs for efficient interaction.