Create a Simple ChatBot with Gemini Pro API and Streamlit in Only 30 Lines

2 min read · Jan 11, 2024

In this post, I will show how to create a simple chatbot with the Gemini Pro API and Streamlit.


1. Install packages

First, we need to install the required packages. Please make sure your Python version is 3.9+.

pip install streamlit google-generativeai

2. Write a script

import os
import streamlit as st
import google.generativeai as genai

# Read the API key from the environment and configure the client
gemini_api_key = os.environ['GOOGLE_GEMINI_KEY']
genai.configure(api_key=gemini_api_key)
model = genai.GenerativeModel('gemini-pro')

# Keep one chat session per browser session
if "chat" not in st.session_state:
    st.session_state.chat = model.start_chat(history=[])

st.title('Gemini Pro Test')

# Gemini calls the bot's role 'model'; Streamlit expects 'assistant'
def role_to_streamlit(role: str) -> str:
    if role == 'model':
        return 'assistant'
    return role

# Replay the chat history on every rerun
for message in st.session_state.chat.history:
    with st.chat_message(role_to_streamlit(message.role)):
        st.markdown(message.parts[0].text)

if prompt := st.chat_input("I possess a well of knowledge. What would you like to know?"):
    st.chat_message("user").markdown(prompt)
    try:
        response = st.session_state.chat.send_message(prompt)
        with st.chat_message("assistant"):
            st.markdown(response.text)
    except Exception as e:
        st.error(f'An error occurred: {e}')
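The script reads the API key from the GOOGLE_GEMINI_KEY environment variable, so set it before launching the app. A minimal sketch, assuming a bash-like shell (the key value is a placeholder):

```shell
# Set the API key for the current shell session (placeholder value)
export GOOGLE_GEMINI_KEY="your-api-key-here"
```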

Now we can try the chatbot.

streamlit run <your-script>.py

3. (Optional) Push the code to a Hugging Face Space

Go to Hugging Face and create a new Space.

Please make sure you select Streamlit when you create the new Space.

After creating the Space, you will need to clone its repository.

Then add the script you created to your local repository. A Hugging Face Space has Streamlit installed for you already, but it won’t install any other packages. We need to tell the Space which packages our application requires, and for that we create a requirements.txt.

But it’s only one line, since os is a built-in module and Streamlit is already installed by Hugging Face. We only need to install google-generativeai.
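Based on the imports in the script above, the entire requirements.txt is:

```
google-generativeai
```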



Then you just need to push your changes to the Hugging Face Space repo. The build may take a few minutes; after that, you can access your app.
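The push itself is ordinary git. A sketch of the commands, run from inside the cloned Space repository (the file name app.py is an assumption; use whatever you named your script):

```shell
# Stage the app script and the requirements file (app.py is an assumed name)
git add app.py requirements.txt
# Commit the change
git commit -m "Add Gemini chatbot"
# Push to the Space repo; Hugging Face rebuilds the app after the push
git push
```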



