This project implements a simple chatbot using Streamlit and the Groq API. The bot takes user input and responds with messages generated by the Groq chat completion API.

Requirements:
- Python 3.7 or higher
- Streamlit
- Groq API key
By default, the code uses the llama3 70b model. If you want to use a different model, change it here in the code:

```python
chat_completion = client.chat.completions.create(
    messages=conversation,
    model="llama3-70b-8192",
)
```
- Clone the repository:

  ```bash
  git clone https://github.com/NSTHEHACKER/chatbot_with-groq.git
  cd chatbot_with-groq
  ```
- Create a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
  ```
- Install dependencies:

  ```bash
  pip install streamlit groq
  ```
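If you prefer a pinned dependency file, the two packages can also be listed in a `requirements.txt` (this file is not part of the repository; treat it as a sketch) and installed with `pip install -r requirements.txt`:

```text
streamlit
groq
```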
- Set up your Groq API key: replace `"your_groq_api_key_here"` in the code with your actual Groq API key.
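Hardcoding the key works for local testing, but a common alternative is to read it from an environment variable so the key never lands in version control. A minimal sketch, assuming the variable is named `GROQ_API_KEY` (the helper below is hypothetical, not part of the project):

```python
import os

def load_groq_key():
    # Read the API key from the GROQ_API_KEY environment variable.
    # Raise early instead of silently passing an empty key to the client.
    key = os.environ.get("GROQ_API_KEY")
    if not key:
        raise RuntimeError("Set the GROQ_API_KEY environment variable first")
    return key

# The client would then be created with: Groq(api_key=load_groq_key())
```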
- Run the Streamlit app:

  ```bash
  streamlit run app.py
  ```
- Interact with the chatbot:
  - Open the URL provided by Streamlit in your web browser.
  - Type a message in the input box and press Enter.
  - The chatbot will respond with a message generated by the Groq API.
The main components of the code are:
- Imports: Import the necessary libraries.

  ```python
  import streamlit as st
  from groq import Groq
  ```
- Initialize Groq Client: Set up the Groq client with the provided API key.

  ```python
  client = Groq(api_key="your_api_key")
  ```
- Initialize Conversation: Create an initial conversation list.

  ```python
  conversation = [
      {
          "role": "user",
          "content": "",
      }
  ]
  ```
- Streamlit App Title: Set the title of the Streamlit app.

  ```python
  st.title("Chat Bot by NS")
  ```
- Session State Initialization: Initialize the session state to store messages.

  ```python
  if "messages" not in st.session_state:
      st.session_state.messages = []
  ```
- Display Messages: Loop through the messages in the session state and display them.

  ```python
  for message in st.session_state.messages:
      with st.chat_message(message["role"]):
          st.markdown(message["content"])
  ```
- User Input: Create an input box for user messages.

  ```python
  userinput = st.chat_input("Type something")
  ```
- Handle User Input: If there is user input, display it and get the assistant's response.

  ```python
  if userinput:
      with st.chat_message("user"):
          st.markdown(userinput)
      st.session_state.messages.append({"role": "user", "content": userinput})
      conversation.append({"role": "user", "content": userinput})

      chat_completion = client.chat.completions.create(
          messages=conversation,
          model="llama3-70b-8192",
      )

      if chat_completion.choices:
          assistant_message = chat_completion.choices[0].message.content
          with st.chat_message("assistant"):
              st.markdown(assistant_message)
          st.session_state.messages.append({"role": "assistant", "content": assistant_message})
      else:
          st.warning("Assistant did not provide a response.")
  ```
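Note that Streamlit re-runs the whole script on every interaction, so the module-level `conversation` list is rebuilt each turn and only the newest user message reaches the API. One way to send the full chat history instead is to rebuild the payload from the messages already stored in session state; `build_conversation` below is a hypothetical helper sketching that idea, not code from this repository:

```python
def build_conversation(session_messages, user_input):
    # Rebuild the full chat history from the persisted session-state messages,
    # then append the newest user message so the model sees all prior turns.
    history = [{"role": m["role"], "content": m["content"]} for m in session_messages]
    history.append({"role": "user", "content": user_input})
    return history

# In the app, the API call would then become something like:
#   client.chat.completions.create(
#       messages=build_conversation(st.session_state.messages, userinput),
#       model="llama3-70b-8192",
#   )
```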
This project is licensed under the MIT License. See the LICENSE file for details.
- Thanks to Streamlit for providing an easy-to-use framework for building web apps.
- Thanks to Groq for their powerful API for generating chat responses.