
Example v>=1.0.4

Getting Started

Step 1: Importing the Library

To begin, import the necessary modules and set up the API key.

from chatformers.chatbot import Chatbot
import os
from openai import OpenAI

Set your GROQ_API_KEY environment variable:

os.environ["GROQ_API_KEY"] = "<API_KEY>"
GROQ_API_KEY = os.getenv("GROQ_API_KEY")
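
If you prefer not to set the key inside the script, a common alternative (using the third-party python-dotenv package, which is not part of Chatformers) is to load it from a .env file:

# Assumes python-dotenv is installed (pip install python-dotenv)
# and a .env file in the working directory containing: GROQ_API_KEY=<API_KEY>
from dotenv import load_dotenv

load_dotenv()  # reads .env into the process environment
GROQ_API_KEY = os.getenv("GROQ_API_KEY")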

Step 2: Initializing the OpenAI Client

Configure the OpenAI client to communicate with the GROQ LLM service.

groq_base_url = "https://api.groq.com/openai/v1"
client = OpenAI(base_url=groq_base_url, api_key=GROQ_API_KEY)
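
Before wiring the client into Chatformers, you can optionally verify the key and endpoint with a one-off request. This is a plain OpenAI-SDK call against the GROQ endpoint, not a Chatformers API:

# Optional sanity check: send a single completion request to GROQ.
test = client.chat.completions.create(
    model="llama-3.1-8b-instant",
    messages=[{"role": "user", "content": "Say hi in one word."}],
)
print(test.choices[0].message.content)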

Step 3: Setting Up the Chatbot Character

You can configure the chatbot's character with specific attributes.

character_data = {
    "name": "Julia",
    "description": "You are on an online chatting website, chatting with strangers."
}
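
character_data is an ordinary dictionary whose key-value pairs are integrated into the system prompt, so you are free to add fields of your own. The extra key below is illustrative, not a fixed schema:

# Hypothetical richer character; any keys you add are merged into the system prompt.
character_data = {
    "name": "Julia",
    "description": "You are on an online chatting website, chatting with strangers.",
    "personality": "friendly, curious, concise",
}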

Step 4: Configuring Chatformers

Chatformers uses mem0 under the hood, so the configuration follows the mem0 format: a vector store (Chroma here), an embedder (Ollama's nomic-embed-text), and an LLM (GROQ). See https://docs.mem0.ai/overview for all options.

config = {
    "vector_store": {
        "provider": "chroma",
        "config": {
            "collection_name": "test",
            "path": "db"
        }
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest"
        }
    },
    "llm": {
        "provider": "groq",
        "config": {
            "model": "llama-3.1-8b-instant",
            "temperature": 0.1,
            "max_tokens": 4000
        }
    }
}
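
Because the configuration follows mem0's format, providers can be swapped. For instance, to run the LLM fully locally you could point the llm block at Ollama instead of GROQ (the complete code below shows the same alternative commented out). The model name here is an assumption; use whatever model you have pulled in Ollama:

# Alternative llm block: a local model served by Ollama instead of GROQ.
config["llm"] = {
    "provider": "ollama",
    "config": {
        "model": "llama3.1",  # assumes this model is available in Ollama
        "temperature": 0.1,
        "max_tokens": 4000,
    },
}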

Step 5: Creating the Chatbot Instance

Initialize the chatbot with the OpenAI client, model name, character configuration, and memory configuration.

chatbot = Chatbot(
    llm_client=client,
    model_name="llama-3.1-8b-instant",
    character_data=character_data,
    config=config
)

Usage Example

Basic Chat Interaction

Below is an example of a chatbot conversation where the user asks the bot a question and receives a response based on previous chats.

# Define user ID and conversation history
user_id = "Sam-Julia"
message_history = [
    {"role": "user", "content": "where r u from?"},
    {"role": "assistant", "content": "I am from CA, USA"}
]

# User's current question
query = "what is my name?"

# Get response from the chatbot
response = chatbot.chat(query=query, message_history=message_history, user_id=user_id, print_stream=True)
print("Assistant: ", response)

Output

The chatbot responds based on previous conversations stored in memory and any additional context provided by the user.

Assistant: Your name is Sam!
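
Because message_history is an ordinary list of OpenAI-format messages that you manage yourself, it is straightforward to build an interactive loop on top of chatbot.chat. A minimal sketch (the loop itself is ours, not a Chatformers API):

# Minimal chat loop: append each turn so the buffer memory stays current.
message_history = []
while True:
    query = input("You: ")
    if query.lower() in {"exit", "quit"}:
        break
    response = chatbot.chat(query=query, message_history=message_history,
                            user_id=user_id, print_stream=True)
    print("Assistant:", response)
    message_history.append({"role": "user", "content": query})
    message_history.append({"role": "assistant", "content": response})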

Optional Features

Adding Memories

Chatformers lets you embed memories directly into the vector database, making future interactions more contextual.

memory_messages = [
    {"role": "user", "content": "My name is Sam, what about you?"},
    {"role": "assistant", "content": "Hello Sam! I'm Julia."}
]
chatbot.add_memories(memory_messages, user_id=user_id)

Retrieving Memories

You can retrieve the memories associated with a specific user to understand the context better.

memories = chatbot.get_memories(user_id=user_id)
for memory in memories:
    print(memory)

You can also query for related memories based on specific prompts.

related_memories = chatbot.related_memory(user_id=user_id, query="yes I am Sam? what is your name")
print(related_memories)
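
Because memories are embedded into a vector store (see the embedder block in the configuration), the lookup is semantic rather than keyword-based, so a paraphrased query should still match. The query below is a hypothetical rewording:

# Hypothetical paraphrased lookup; retrieval matches meaning, not exact wording.
related_memories = chatbot.related_memory(user_id=user_id,
                                          query="do you remember who I am?")
print(related_memories)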

Complete Code

from chatformers.chatbot import Chatbot
import os
from openai import OpenAI

os.environ["GROQ_API_KEY"] = "<API_KEY>"
GROQ_API_KEY = os.getenv("GROQ_API_KEY")
groq_base_url = "https://api.groq.com/openai/v1"

# Unique ID for conversation between Sam (User) and Julia (Chatbot)
user_id = "Sam-Julia"

# Name of the model you want to use
model_name = "llama-3.1-8b-instant"

# Initialize the OpenAI client with the API key and base URL. We are using an LLM from GROQ here; this client is required to converse with the LLM.
client = OpenAI(base_url=groq_base_url,
                api_key=GROQ_API_KEY,
                )

# You can give your chatbot a character: pass a dictionary with key-value pairs of your choice, which will be integrated into the system prompt, or leave it as an empty dictionary
character_data = {"name": "Julia",
                  "description": "You are on an online chatting website, chatting with strangers."}

# Configuration: Chatformers uses mem0 for memory and LLM management, so refer to https://docs.mem0.ai/overview for the full set of options
# Example: https://docs.mem0.ai/examples/mem0-with-ollama
# These settings are used for embedding the chats and handling memory creation automatically
config = {
    "vector_store": {
        "provider": "chroma",
        "config": {
            "collection_name": "test",
            "path": "db",
        }
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest"
        }
    },
    "llm": {
        "provider": "groq",
        "config": {
            "model": model_name,
            "temperature": 0.1,
            "max_tokens": 4000,
        }
    },
    # "llm": {
    #     "provider": "ollama",
    #     "config": {
    #         "model": model_name,
    #         "temperature": 0.1,
    #         "max_tokens": 4000,
    #     }
    # },
}

# Initialize Chatbot with LLM client, model name, character data, and configuration
chatbot = Chatbot(llm_client=client, model_name=model_name, character_data=character_data, config=config)

# Optional: to add memories into the vector database at any point, uncomment these lines
# memory_messages = [
#     {"role": "user", "content": "My name is Sam, what about you?"},
#     {"role": "assistant", "content": "Hello Sam! I'm Julia."}
# ]
# chatbot.add_memories(memory_messages, user_id=user_id)

# query is the current question you want the LLM to answer
query = "what is my name?"

# message_history is a list of messages in OpenAI format. This can serve as your conversation buffer window memory; you can manage it yourself
message_history = [{"role": "user", "content": "where r u from?"},
                   {"role": "assistant", "content": "I am from CA, USA"}]
response = chatbot.chat(query=query, message_history=message_history, user_id=user_id,
                        print_stream=True)

# Final response from the LLM, based on message_history, any memories you added, and previous chats associated with user_id
print("Assistant: ", response)

# Optional: uncomment these lines to list all memories of a user
# memories = chatbot.get_memories(user_id=user_id)
# for m in memories:
#     print(m)
# print("================================================================")
# related_memories = chatbot.related_memory(user_id=user_id,
#                                           query="yes I am Sam? what is your name")
# print(related_memories)

Chatformers uses mem0 for memory management. Refer to the mem0 documentation (https://docs.mem0.ai) for more details.