Chains
Chains in LangChain automate processes by connecting different components, such as prompt templates, language models (LLMs), and output parsers, into a pipeline. Instead of manually handling each step—creating a prompt, passing it to the LLM, extracting the content, and displaying it—chains streamline this workflow.
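Before looking at the full app, here is a minimal sketch of what a chain replaces: the manual prompt-format, model-call, and parse steps versus the same three components composed with the `|` operator. The question string is only an illustration, and an OpenAI API key is assumed to be set in the environment.

```python
from langchain_community.chat_models import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([("human", "Question:{question}")])
llm = ChatOpenAI(model="gpt-3.5-turbo")
parser = StrOutputParser()

# Manual handling: format the prompt, call the model, extract the text
messages = prompt.invoke({"question": "What is LangChain?"})
response = llm.invoke(messages)
print(parser.invoke(response))

# Chained: the same three steps composed into a single pipeline
chain = prompt | llm | parser
print(chain.invoke({"question": "What is LangChain?"}))
```

The Streamlit example below wires this same pattern into a small web app.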
```python
from langchain_community.chat_models import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
import streamlit as st
import os
from dotenv import load_dotenv

load_dotenv()
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")
# os.environ["LANGCHAIN_API_KEY"] = os.getenv("LANGCHAIN_API_KEY")
# os.environ["LANGCHAIN_TRACING_V2"] = "true"

prompt = ChatPromptTemplate.from_messages([
    ("system", "Please assist the user based on his needs and never go beyond 30 words with your answer."),
    ("human", "Question:{question}")
])

st.title("DemoChat")
input_text = st.text_input("Ask based on your need")

llm = ChatOpenAI(model="gpt-3.5-turbo")
output_parser = StrOutputParser()

# Compose the chain: prompt -> LLM -> output parser
chain = prompt | llm | output_parser

if input_text:
    st.write(chain.invoke({"question": input_text}))
```
Explanation
- Importing Modules: We import the necessary components, including `ChatOpenAI` for the LLM, `ChatPromptTemplate` for creating prompt templates, `StrOutputParser` to parse output, and Streamlit for the web interface.
- Environment Setup: We load environment variables and set the OpenAI API key.
- Prompt Template: A prompt template is created, ensuring that the AI's responses are concise (within 30 words). The template also includes placeholders like `{question}` for dynamic input.
- Streamlit Interface: `st.title("DemoChat")` sets the title of the web page, and `st.text_input("Ask based on your need")` creates a text input box where users can enter their queries.
- LLM and Output Parser: The GPT-3.5 Turbo model is initialized, along with the `StrOutputParser`, which automatically extracts the relevant content from the LLM's response.
- Creating the Chain: We connect the prompt, LLM, and output parser using the pipe (`|`) operator. This chain automates the process: when a user inputs a query, the chain processes it through the prompt, passes it to the LLM, and then extracts and displays the response.
- Triggering the Chain: If the user enters a query, the chain is invoked, and the response is displayed on the Streamlit interface using `st.write()`.
Benefits
- Automation: Chains simplify the workflow by automating the sequence of actions, reducing the need for manual coding at each step.
- User Interface: Streamlit provides a simple way to build a web interface, allowing users to interact with the LLM without needing to know the underlying code.
This approach demonstrates how chains can be effectively used to create a streamlined, interactive application with minimal code.
Extending the Output:
We can extend the chain with our own processing steps by wrapping plain Python functions in `RunnableLambda`. This allows us to include additional processing steps in the chain, making it more flexible and powerful.

```python
from langchain_community.chat_models import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnableLambda
from langchain_core.output_parsers import StrOutputParser
import streamlit as st
import os
from dotenv import load_dotenv

load_dotenv()
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")
os.environ["LANGCHAIN_API_KEY"] = os.getenv("LANGCHAIN_API_KEY")
os.environ["LANGCHAIN_TRACING_V2"] = "true"

prompt = ChatPromptTemplate.from_messages([
    ("system", "Please assist the user based on his needs and never go beyond 30 words with your answer."),
    ("human", "Question:{question}")
])

# Custom post-processing functions
def cat(x):
    # Surround the LLM output with asterisks and hash symbols
    return "*" * 10 + x + "#" * 10

def dfg(x):
    # Prefix the output with a custom label
    return "This is the output of GPT-3.5 Turbo\n\n" + x

# Wrap the functions so they can be composed into the chain
o1 = RunnableLambda(cat)
o2 = RunnableLambda(dfg)

st.title("DemoChat")
input_text = st.text_input("Ask based on your need")

llm = ChatOpenAI(model="gpt-3.5-turbo")
output_parser = StrOutputParser()

# The chain now ends with two extra post-processing steps
chain = prompt | llm | output_parser | o1 | o2

if input_text:
    st.write(chain.invoke({"question": input_text}))
```
Explanation
- Prompt Template: We define a prompt template with a system message and a placeholder for the user's question.
- Custom Functions: `cat(x)` adds asterisks and hash symbols around the LLM's output, and `dfg(x)` adds a custom prefix to the output. Both functions are wrapped in `RunnableLambda`, which allows them to be added to the chain (a standalone sketch of `RunnableLambda` follows this list).
- Streamlit Interface: `st.title("DemoChat")` sets the title of the web page, and `st.text_input("Ask based on your need")` provides an input box for user queries.
- LLM and Output Parser: The GPT-3.5 Turbo model is initialized, along with the `StrOutputParser`, which extracts the relevant content from the LLM's response.
- Extended Chain: The chain now includes the prompt, LLM, output parser, and the two custom functions. This chain processes the user's input, generates a response, and then modifies that response using the custom functions before displaying it.
- Dynamic Processing: The chain is invoked whenever the user inputs a question. The input is processed through the chain, and the final output is displayed on the Streamlit interface.
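To see what `RunnableLambda` does in isolation, here is a minimal sketch that composes two plain Python functions without any LLM; the function names and strings are illustrative only.

```python
from langchain.schema.runnable import RunnableLambda

# Wrap ordinary Python functions so they can be piped like any other Runnable
shout = RunnableLambda(lambda text: text.upper())
exclaim = RunnableLambda(lambda text: text + "!!!")

# A wrapped function can be invoked on its own...
print(shout.invoke("hello chains"))       # HELLO CHAINS

# ...or composed with the | operator, exactly like prompt | llm | output_parser
pipeline = shout | exclaim
print(pipeline.invoke("hello chains"))    # HELLO CHAINS!!!
```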
Real-World Application
While this example uses simple functions for demonstration, the concept extends to more complex tasks. For instance, you could chain together multiple LLMs to generate code, verify it, or create a feedback loop between different models. This allows for sophisticated workflows, enabling you to build more advanced AI-driven applications.
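As a rough sketch of that idea, the chain below uses one pass of a model to draft code and a second pass to review it. The prompts, model choice, and helper names here are assumptions for illustration, not part of the examples above.

```python
from langchain_community.chat_models import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnableLambda
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-3.5-turbo")  # illustrative; any chat model could be used

# Stage 1: draft code for a topic
draft_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a Python developer. Write code for the given topic."),
    ("human", "{topic}"),
])

# Stage 2: review the drafted code
review_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a code reviewer. Point out bugs and suggest fixes."),
    ("human", "Review this code:\n{code}"),
])

draft_chain = draft_prompt | llm | StrOutputParser()

# The drafted code (a string) is repackaged into the {code} slot of the review prompt
review_chain = (
    draft_chain
    | RunnableLambda(lambda code: {"code": code})
    | review_prompt
    | llm
    | StrOutputParser()
)

print(review_chain.invoke({"topic": "binary search over a sorted list"}))
```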
Parallel Chaining:
Parallel chaining is a powerful concept that allows you to execute multiple tasks simultaneously and then combine their results for further processing. In the example below, we demonstrate how to run two different LLMs in parallel, generate Python code based on a given prompt, and then use another LLM to select the best code.
```python
from langchain_community.chat_models import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnableLambda, RunnableParallel
from langchain_anthropic import ChatAnthropic
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.output_parsers import StrOutputParser
import os
from dotenv import load_dotenv

load_dotenv()

# Three LLMs: Gemini and GPT-3.5 Turbo generate code, Claude judges the results
model = ChatAnthropic(model="claude-3-opus-20240229")
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
llm2 = ChatOpenAI(model="gpt-3.5-turbo")

messages = [
    ("system", "You are a developer who helps to build Python code for a given topic."),
    ("human", "Help me with the code {topic}"),
]
messages1 = [
    ("system", "You are a developer who helps to identify the right and best code given two codes."),
    ("human", "Help me with the code {code}"),
]

def combine(goog, chg):
    # Label the two generated snippets so the judging model can refer to them
    return f"Code 1:\n{goog}\n\nCode 2:\n{chg}"

prompt_template = ChatPromptTemplate.from_messages(messages)
prompt_template1 = ChatPromptTemplate.from_messages(messages1)

# Each branch sends the same prompt to a different model and parses the reply
g_branch_chain = llm | StrOutputParser()
c_branch_chain = llm2 | StrOutputParser()

chain = (
    prompt_template
    | RunnableParallel(branches={"goog": g_branch_chain, "chg": c_branch_chain})
    | RunnableLambda(lambda x: combine(x["branches"]["goog"], x["branches"]["chg"]))
    | prompt_template1
    | model
    | StrOutputParser()
)

print(chain.invoke({"topic": "Code to print 3 line height triangle inside a 5 radius circle with all stars"}))
```
Explanation
Initialization:
- We start by importing the necessary modules and initializing three different LLM models: Claude (via ChatAnthropic), Gemini (via ChatGoogleGenerativeAI), and GPT-3.5 Turbo (via ChatOpenAI).
Prompt Templates:
- The first prompt template asks the two generator LLMs (Gemini and GPT-3.5 Turbo) to produce Python code based on a given topic.
- The second prompt template asks Claude to evaluate the two generated codes and determine which one is better.
Parallel Execution:
- `RunnableParallel` is used to execute both the Gemini and GPT-3.5 Turbo models in parallel. This means both LLMs work on generating code simultaneously (a minimal standalone sketch of `RunnableParallel` follows this explanation).
Combining Results:
- A custom function `combine` is used to format the outputs from both LLMs, labeling them as "Code 1" and "Code 2."
Evaluation:
- The combined output is passed to Claude, which analyzes both pieces of code and determines the better one. The final result is parsed and displayed.
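To isolate the fan-out behaviour, here is a minimal sketch of `RunnableParallel` that uses plain functions instead of LLMs; the branch names are illustrative only.

```python
from langchain.schema.runnable import RunnableLambda, RunnableParallel

# Two branches that both receive the same input
upper_branch = RunnableLambda(lambda text: text.upper())
length_branch = RunnableLambda(lambda text: len(text))

# RunnableParallel runs every branch on the same input and returns a dict of results
fan_out = RunnableParallel(upper=upper_branch, length=length_branch)

print(fan_out.invoke("parallel chains"))
# {'upper': 'PARALLEL CHAINS', 'length': 15}
```

In the code-comparison example above, this dict of branch outputs is what the `combine` lambda unpacks before handing the formatted result to the judging model.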
Key Takeaways
- Parallel Processing: This approach allows you to run multiple models simultaneously, saving time and enabling more complex workflows.
- Flexible Integration: You can chain together different LLMs and custom functions, creating dynamic and adaptable pipelines.
- Real-World Applications: In practice, this could be used to generate, compare, and refine code, text, or other outputs across multiple models, ensuring the best possible result.
This example showcases the power of parallel chaining, enabling more sophisticated AI-driven applications by leveraging multiple LLMs in tandem.
Branching Chains:
Branching chains in LangChain allow you to execute different workflows based on specific conditions. This approach lets you create more dynamic and responsive AI-driven applications by defining multiple paths that can be taken based on the outcomes of previous steps.
In the example below, we use branching to decide whether to appreciate or correct a piece of code generated by an LLM. If the code is correct, one branch sends an appreciation message. If the code is wrong, another branch triggers a process to fix the code.
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnableBranch
from langchain_anthropic import ChatAnthropic
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.output_parsers import StrOutputParser
import os
from dotenv import load_dotenv

load_dotenv()

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
model = ChatAnthropic(model="claude-3-opus-20240229")

# Prompts for each stage: generate, test, fix, appreciate, apologize
messages1 = [
    ("system", "You are a developer who helps to build Python code for a given topic."),
    ("human", "Help me with the code {topic}"),
]
messages2 = [
    ("system", "You are a tester who helps to determine whether the code is correct. If not, mention the word 'Wrong' and give the reason in 2 lines along with the code. If it is correct, mention the word 'Correct'."),
    ("human", "Help me with the code {code}"),
]
messages3 = [
    ("system", "You are a developer who helps to fix the code given the reason and the code."),
    ("human", "Help me fix this {code}"),
]
messages4 = [
    ("system", "You are a developer satisfied with the answer from your junior."),
    ("human", "Appreciate him with the code feedback {code}"),
]
messages5 = [
    ("system", "Say sorry to the user for the code."),
    ("human", "Say sorry to the user for the code {code}"),
]

m1 = ChatPromptTemplate.from_messages(messages1)
m2 = ChatPromptTemplate.from_messages(messages2)
m3 = ChatPromptTemplate.from_messages(messages3)
m4 = ChatPromptTemplate.from_messages(messages4)
m5 = ChatPromptTemplate.from_messages(messages5)

# Route on the tester's verdict. 'Wrong' is checked first, since a failing
# review may also contain the word 'correct'; the last runnable is the default.
branches = RunnableBranch(
    (
        lambda x: "wrong" in x.lower(),
        m3 | model | StrOutputParser()
    ),
    (
        lambda x: "correct" in x.lower(),
        m4 | model | StrOutputParser()
    ),
    m5 | model | StrOutputParser()
)

# Tester chain: reviews the generated code and issues a 'Correct'/'Wrong' verdict
cc = m2 | model | StrOutputParser()

# Full pipeline: generate -> test -> branch on the verdict
chain = m1 | llm | StrOutputParser() | cc | branches

print(chain.invoke({"topic": "Code to print 3 line height triangle inside a 5 radius circle with all stars"}))
```
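To make the routing behaviour easier to see on its own, here is a minimal sketch of `RunnableBranch` with plain functions standing in for the LLM sub-chains; the verdict strings mirror the tester's 'Correct'/'Wrong' convention above, but the handlers are illustrative only.

```python
from langchain.schema.runnable import RunnableLambda, RunnableBranch

# Each pair is (condition, runnable); the final argument is the default branch
router = RunnableBranch(
    (lambda verdict: "wrong" in verdict.lower(),
     RunnableLambda(lambda v: "Routing to the fix-the-code chain")),
    (lambda verdict: "correct" in verdict.lower(),
     RunnableLambda(lambda v: "Routing to the appreciation chain")),
    RunnableLambda(lambda v: "Routing to the apology chain"),
)

print(router.invoke("Correct, the code runs as expected."))  # appreciation branch
print(router.invoke("Wrong: the loop never terminates."))    # fix branch
print(router.invoke("I am not sure about this one."))        # default branch
```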