Basic Calling:
from langchain_community.chat_models import ChatOpenAI
from dotenv import load_dotenv

# Load the OpenAI API key from the .env file
load_dotenv()

# Create the chat model instance
llm = ChatOpenAI(model="gpt-3.5-turbo")

# Ask the model a single question
res = llm.invoke("If the sum of 2 numbers is even, will their product be even or odd?")

print("\n")
print("FULL ANSWER IS: \n")
print(res)          # full response object, including metadata
print("\n\n")
print(res.content)  # just the generated answer
Explanation
Let's break down what's happening in the code:
- Importing the Model: We import ChatOpenAI from the langchain_community package, which gives access to ChatGPT models like GPT-3.5 Turbo.
- Loading Environment Variables: The OpenAI API key, required to use models like GPT-3.5 or GPT-4, is stored in an environment file (e.g., .env). We load it using load_dotenv().
- Creating the Model Instance: An instance of the LLM is created using ChatOpenAI, specifying gpt-3.5-turbo as the model.
- Invoking the Model: We pass a prompt to the model, which in this case asks whether the product of two numbers whose sum is even will be even or odd.
- Printing the Full Response: The first print statement outputs the entire response object, which includes the content along with additional metadata like the model name and token usage.
- Extracting the Content: To retrieve just the answer, we access the content attribute using res.content, which returns only the relevant part of the response.
Understanding the Output
When invoked, the model returns a response object containing:
- Content: The generated answer.
- Tokens: Information about token usage.
- Model Details: Metadata like the model name.
- System Information: Details like the reason for stopping and system fingerprints.
- Log ID: A unique identifier for the prompt.
Typically, only the content is needed, which keeps the output focused on the answer itself.
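If you do need pieces of that metadata, the response object exposes them as attributes. Below is a minimal sketch; the attribute and field names are assumed from recent langchain-core/langchain-openai releases and may differ slightly in your installed version.

# Assumes `res` is the response returned by llm.invoke(...) above
print(res.content)            # the generated answer
print(res.response_metadata)  # provider metadata, e.g. token usage, model name, finish reason
print(res.id)                 # unique identifier for this generation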
Now let's move on to the conversation model:
Prompting a Conversation with LangChain
In this approach, rather than prompting a single question, we use a conversation model by employing different message types: SystemMessage, HumanMessage, and AIMessage.
- SystemMessage: This sets up the role or instructions for the AI, like telling it that it's acting as a software developer or providing guidance for a task.
- HumanMessage: This represents the question or input from the user, essentially what you want to ask the model.
- AIMessage: This is the response the AI, such as ChatGPT, generates to the input it receives.
Using these different types of messages can create a more dynamic conversation instead of just asking a single question. Below is a code snippet that demonstrates how to prompt a conversation using this technique.
from langchain_community.chat_models import ChatOpenAI
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from dotenv import load_dotenv

load_dotenv()

llm = ChatOpenAI(model="gpt-3.5-turbo")

# First turn: a system instruction plus the user's question
msg = [
    SystemMessage(content="Solve the following math trick questions"),
    HumanMessage(content="If the sum of 2 numbers is even, will their product be even or odd?"),
]

res = llm.invoke(msg)
print("\n")
print("FULL ANSWER IS: \n")
print(res)
print("\n\n")
print(res.content)

# Second turn: include the earlier AI answer so the model has context for the follow-up
msg = [
    SystemMessage(content="Solve the following math trick questions"),
    HumanMessage(content="If the sum of 2 numbers is even, will their product be even or odd?"),
    AIMessage(content="If the sum of two numbers is even, then their product can be either even or odd. For example, consider the numbers 2 and 4. Their sum is 6 (even) and their product is 8 (even). But, if we consider the numbers 3 and 5, their sum is 8 (even) and their product is 15 (odd). So, the product of two numbers whose sum is even can be either even or odd."),
    HumanMessage(content="Then what could be the result of division and subtraction? Answer me in one line"),
]

res = llm.invoke(msg)
print("\n")
print("FULL ANSWER IS: \n")
print(res)
print("\n\n")
print(res.content)
Alternate Models Like Claude and Gemini:
There are several alternatives to ChatGPT, such as Claude and Gemini. These AI models can perform similar tasks, but they differ in pricing and accessibility. For instance, Claude and ChatGPT are paid models, while Gemini offers a free tier that can be accessed with a simple API key. Claude does provide a small $5 credit for new users to explore its features, so you can choose the AI model that best fits your needs. Among the advanced options, ChatGPT stands out, but you are free to experiment with others. Below is example code demonstrating how to use these alternative models.
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage, AIMessage
from dotenv import load_dotenv

load_dotenv()

# The same conversation is sent to both models for comparison
msg = [
    SystemMessage(content="Solve the following math trick questions"),
    HumanMessage(content="If the sum of 2 numbers is even, will their product be even or odd?"),
    AIMessage(content="If the sum of two numbers is even, then their product can be either even or odd. For example, consider the numbers 2 and 4. Their sum is 6 (even) and their product is 8 (even). But, if we consider the numbers 3 and 5, their sum is 8 (even) and their product is 15 (odd). So, the product of two numbers whose sum is even can be either even or odd."),
    HumanMessage(content="Then what could be the result of division and subtraction? Answer me in one line"),
]

# Claude (Anthropic)
model = ChatAnthropic(model="claude-3-opus-20240229")
result = model.invoke(msg)
print(f"Answer from Anthropic: {result.content}")

# Gemini (Google)
model = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
result = model.invoke(msg)
print(f"Answer from Google: {result.content}")
Explanation
In the above code:
- Importing Required Modules: We start by importing the necessary modules. The ChatGoogleGenerativeAI class lets us use the Gemini models, and ChatAnthropic gives access to Claude. These imports set up our ability to use these alternative AI models.
- Loading Environment Variables: We load the environment variables using load_dotenv() to ensure the API keys and other configuration are in place (a small check for these keys is sketched after this list).
- Creating a Conversation: As in the previous example, we build a conversation from SystemMessage, HumanMessage, and AIMessage to simulate a dialogue with the model.
- Using the Claude Model: We initialize the Claude model using ChatAnthropic with the version claude-3-opus-20240229. The conversation is then passed to the model, and the result is printed.
- Using the Gemini Model: Next, we switch to the Gemini model by initializing ChatGoogleGenerativeAI with the version gemini-1.5-flash. The same conversation is passed to this model, and its response is also printed.
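Since all three integrations read their keys from environment variables, a quick way to confirm your .env file is being picked up is to check that the expected variables are set. Below is a minimal sketch; the variable names are the ones each provider's LangChain integration conventionally reads, so adjust them if your setup differs.

import os
from dotenv import load_dotenv

load_dotenv()

# Conventional key names for the OpenAI, Anthropic, and Google Gemini integrations
for key in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GOOGLE_API_KEY"):
    print(f"{key}: {'set' if os.getenv(key) else 'MISSING'}")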
Key Points
- Model Selection: You can choose between various AI models depending on your needs and budget. Claude and ChatGPT are paid options, while Gemini offers a free tier.
- Cost Considerations: Each model charges differently for the tokens it processes, so it's essential to choose one that aligns with your budget (a small sketch for inspecting token counts follows below).
- Flexibility: The code demonstrates how easy it is to switch between different models, allowing you to experiment with various AI tools and find the best fit for your use case.
This setup allows you to explore different AI models and compare their responses to the same conversation, helping you decide which model works best for your specific needs.
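Because pricing differences come down to token counts, it can be handy to print the usage reported with each response while comparing models. Below is a minimal sketch; the usage_metadata attribute is assumed from recent langchain-core releases and may be None if your provider or version does not report it.

from dotenv import load_dotenv
from langchain_anthropic import ChatAnthropic
from langchain_google_genai import ChatGoogleGenerativeAI

load_dotenv()

question = "If the sum of 2 numbers is even, will their product be even or odd?"

for model in (ChatAnthropic(model="claude-3-opus-20240229"),
              ChatGoogleGenerativeAI(model="gemini-1.5-flash")):
    result = model.invoke(question)
    # usage_metadata holds input/output/total token counts when the provider reports them
    print(type(model).__name__, result.usage_metadata)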
Creating a Dynamic Memory Component with a List:
In this approach, we'll maintain context in a conversation by storing interaction history in a list. This method is simple but has limitations, as it involves appending each question and response to a list, which is then passed along with each new query to the model.
For example, if you initially ask the model to generate Python code for summing two numbers, the response and the prompt are stored in the list. When you ask a follow-up question, like "What if one number is 4 and the other is 5? What would the result be?", the entire conversation history is included in the input to the model. This helps generate contextually relevant responses.
However, this approach can be inefficient: every model has a limit on how many tokens it can accept in a single request, and re-sending the full conversation history with each query makes the input grow steadily, eventually hitting that limit and driving up cost. Below is the code demonstrating how to implement this using the Gemini model, a free option that avoids the costs associated with the growing token count.
from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.messages import HumanMessage, SystemMessage, AIMessage

load_dotenv()

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")

# The conversation history lives in a plain Python list
Memory = []
Memory.append(SystemMessage(content="You are a helpful AI assistant."))

while True:
    query = input("You: ")
    if query.lower() == "exit":
        break
    Memory.append(HumanMessage(content=query))
    result = llm.invoke(Memory)  # the whole history is sent every time
    response = result.content
    Memory.append(AIMessage(content=response))
    print(f"AI: {response}")

print("*********Memory*********")
print(Memory)
Explanation
- Initialization: We create an empty Memory list to store the conversation, starting with a SystemMessage that defines the AI's role.
- Interactive Loop: The loop prompts the user for input until "exit" is typed. Each input is added to Memory as a HumanMessage, and the model's response is added as an AIMessage.
- Token Limitation: Because the full history is re-sent with every request, this approach becomes inefficient as the conversation grows and will eventually hit the model's token limit, making it impractical for longer interactions.
This method allows simple, memory-like conversation management, though it's not optimal for lengthy dialogues.
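One common way to keep the list approach workable a little longer is to cap how much history is re-sent, for example by keeping the system message plus only the most recent messages. Below is a minimal sketch of that idea; the window size of 10 messages is an arbitrary choice for illustration, not something taken from the original code.

from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.messages import HumanMessage, SystemMessage, AIMessage

load_dotenv()

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
MAX_HISTORY = 10  # arbitrary cap on how many recent messages are re-sent

Memory = [SystemMessage(content="You are a helpful AI assistant.")]

while True:
    query = input("You: ")
    if query.lower() == "exit":
        break
    Memory.append(HumanMessage(content=query))
    # Always keep the system message, then only the last MAX_HISTORY messages
    trimmed = [Memory[0]] + Memory[1:][-MAX_HISTORY:]
    result = llm.invoke(trimmed)
    Memory.append(AIMessage(content=result.content))
    print(f"AI: {result.content}")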
Storing Conversations in the Cloud:
To persist chat history beyond a single run, we can store it in Google Firestore using FirestoreChatMessageHistory, as the following code demonstrates.
from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_google_firestore import FirestoreChatMessageHistory
from google.cloud import firestore

load_dotenv()

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")

PROJECT_ID = "deepamanlangchain"
SESSION_ID = "session_1"
COLLECTION_NAME = "Memory"

# Connect to Firestore and bind the chat history to a session and collection
client = firestore.Client(project=PROJECT_ID)
Memory = FirestoreChatMessageHistory(
    session_id=SESSION_ID,
    collection=COLLECTION_NAME,
    client=client,
)

print("Current Chat History:", Memory.messages)

while True:
    human_input = input("User: ")
    if human_input.lower() == "exit":
        break
    Memory.add_user_message(human_input)
    ai_response = llm.invoke(Memory.messages)
    Memory.add_ai_message(ai_response.content)
    print(f"AI: {ai_response.content}")
Explanation
- Importing Modules: We import the necessary modules, including ChatGoogleGenerativeAI for the Gemini model and FirestoreChatMessageHistory for storing chat history in Firestore.
- Setting Up Parameters: PROJECT_ID is your Firebase project ID, which can be found in the Firebase Console. SESSION_ID is a unique identifier for each user session, and COLLECTION_NAME is the Firestore collection where the chat history will be stored.
- Initializing the Firestore Client: We create the client using the firestore.Client class with the PROJECT_ID. FirestoreChatMessageHistory is then initialized with session_id, collection, and client, which sets up the session in Firestore to store the chat history.
- Interactive Chat Loop: Each user input is stored in Firestore as a human message via Memory.add_user_message, and the model's reply is stored as an AI message via Memory.add_ai_message. The loop continues until the user types "exit."
- Firestore Storage: All conversations are stored in Firestore under the specified session and collection, giving persistent storage that can be retrieved and continued in future sessions.
This approach not only stores chat history securely in the cloud but also ensures that conversations are accessible across devices and sessions, leveraging Firestore's real-time syncing and scalability. This method is efficient for applications where persistent and synchronized data storage is essential.
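Because the history is keyed by session, reopening the same SESSION_ID in a later run picks up where the conversation left off. Below is a minimal sketch of reading back a stored session; it reuses the identifiers from the example above, and the message .type and .content attributes are assumed from langchain-core's message classes.

from langchain_google_firestore import FirestoreChatMessageHistory
from google.cloud import firestore

PROJECT_ID = "deepamanlangchain"
SESSION_ID = "session_1"
COLLECTION_NAME = "Memory"

client = firestore.Client(project=PROJECT_ID)
history = FirestoreChatMessageHistory(
    session_id=SESSION_ID,
    collection=COLLECTION_NAME,
    client=client,
)

# Replay what was stored in earlier runs of this session
for message in history.messages:
    print(f"{message.type}: {message.content}")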