The ultimate guide to integrating ChatGPT, Telegram bot, and Python
In the age of information and technology, fast and reliable access to online tools is essential. However, we sometimes run into obstacles such as the saturation of popular websites. This was the case for ChatGPT, an artificial-intelligence text generation tool that was often inaccessible due to high traffic. To work around this, I decided to use the OpenAI API keys and, at first, send my queries from my Windows terminal. Later I needed to make queries from anywhere, and I did not always have my terminal at hand, so I had the idea of integrating it into a Telegram bot I could consult at any time. In this post, I will share how I did it and which technologies I used along the way.
List of technologies to use:
- Python: Python is a programming language widely used in web applications, software development, data science, and machine learning.
- Flask: Flask is a “micro” framework written in Python, designed to make developing web applications easier.
- PythonAnywhere: PythonAnywhere is a hosting service dedicated to Python-based applications. It provides a very friendly environment, bash access through the browser, database management, virtual environments, and many other advantages, and it is free for low-traffic applications.
- Telegram bots: Telegram bots are third-party applications that run inside the messaging app. You do not need to install them or do anything special to use them; they are integrated in such a way that you interact with them as if they were a real person.
PythonAnywhere
- The first step is to access https://www.pythonanywhere.com/ and create an account on the portal.
- Then go to the Web tab and, on the left side, click the link (Add a new web app).
- In the popup that appears, configure your work environment: define the application type (Flask) and the Python version (the version you have installed in your local environment).
- Go to the Consoles option and create a new Bash console.
- Open the Bash console you just created.
Set up your virtual environment:
Inside the Bash console, run the following commands:
# check the Python version
python --version
# install pip (on PythonAnywhere it is usually preinstalled)
apt-get install python-pip
# install virtualenv (also usually preinstalled)
sudo apt install python3-virtualenv
# create a new virtual environment
python3 -m venv venv
Once our virtual environment has been created, we activate it with the following command:
source venv/bin/activate
Your prompt should now show the environment name as a prefix, for example (venv).
Within our environment we install the following packages:
pip install requests
pip install openai
pip install flask
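If you prefer, you can also keep these dependencies in a requirements file and install them in one step; a minimal sketch (the file name requirements.txt is just the usual convention, not something this setup requires):
# requirements.txt
requests
openai
flask
# install everything listed in the file
pip install -r requirements.txt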
We create our app:
Now go back to PythonAnywhere and open the Web tab:
In the Code section, open the file flask_app.py.
Enter the following code and save:
import requests
import openai
from flask import Flask, request, Response

app = Flask(__name__)

keys = {
    'api': 'Your OPENAI API key',
    'org': 'Your OPENAI Organization Key',
    'telegramtoken': 'Your telegram token'
}

telegram_token = keys['telegramtoken']
url = f'https://api.telegram.org/bot{telegram_token}/'

openai.api_key = keys['api']
openai.organization = keys['org']


def parse_message(message):
    # extract the chat id and the text from the Telegram update
    user_id = message['message']['chat']['id']
    text = message['message']['text']
    return user_id, text


def send_message(user_id, text):
    # Telegram's sendMessage method expects the chat id in the 'chat_id' field
    payload = {
        'chat_id': user_id,
        'text': text
    }
    response = requests.post(url + 'sendMessage', json=payload)
    return response


conversation = [{"role": "system", "content": "You are a helpful assistant."}]


def chatgpt_response_turbo(text):
    try:
        conversation.append({"role": "user", "content": text})
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=conversation,
            temperature=2,
            max_tokens=250,
            top_p=0.9
        )
        chat_response = response['choices'][0]['message']['content']
        conversation.append({"role": "assistant", "content": chat_response})
        return chat_response
    except Exception as e:
        return str(e)


@app.route('/', methods=['GET', 'POST'])
def index():
    # we check the use of the POST method
    if request.method == 'POST':
        # we receive the Telegram message
        message = request.get_json()
        # we assign the values returned by the function
        user_id, text = parse_message(message)
        # we get GPT's response
        data = chatgpt_response_turbo(text)
        # we send the answer to the user
        send_message(user_id, data)
        return Response('OK!', status=200)
    return 'Server online'


if __name__ == '__main__':
    app.run(debug=True, port=8002)
These values can be obtained from the following links:
OPENAI API key and OPENAI Organization Key:
- Create your account at https://platform.openai.com/signup.
- Go to your account name.
- Look for the option View API keys.
- Create a new secret key.
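Hardcoding the keys in flask_app.py is fine for a quick test, but you may prefer to read them from environment variables instead; a minimal sketch, assuming you export hypothetical variables named OPENAI_API_KEY, OPENAI_ORG and TELEGRAM_TOKEN in your environment:
import os

# read the secrets from the environment instead of hardcoding them in the file
keys = {
    'api': os.environ['OPENAI_API_KEY'],
    'org': os.environ['OPENAI_ORG'],
    'telegramtoken': os.environ['TELEGRAM_TOKEN']
}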
Telegram token
- Look for the BotFather bot at https://t.me/BotFather and press Start in the menu.
- Select /newbot; you will be prompted to enter the bot name.
- You will then be prompted for the bot’s username; follow the prompts.
- Once the process is complete, BotFather will send you the bot link and token.
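To confirm that the token works before wiring everything up, you can call the Bot API’s getMe method, which returns basic information about your bot; a quick sketch (replace the placeholder token with your own):
import requests

telegram_token = 'Your telegram token'
# getMe returns the bot's id, first name and username when the token is valid
response = requests.get(f'https://api.telegram.org/bot{telegram_token}/getMe')
print(response.json())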
Functions:
- parse_message: This function receives the JSON update sent by Telegram; from it we extract the “user_id” (so we know whom to send the reply to) and the “text”, which is the message sent by the user.
- send_message: This function receives the ID of the user to whom we are going to send the message and the reply text.
- chatgpt_response_turbo: This function receives the text of the query and generates the answer. The configured parameters are the following:
- model: Specifies the language model to be used. In this case, the “gpt-3.5-turbo” model is being used.
- messages: A list of messages representing the conversation so far. The model uses this information to generate a coherent and relevant response.
- temperature: Controls the randomness of the generated response. A higher value produces more diverse and creative responses, while a lower value produces more conservative and predictable responses.
- max_tokens: Specifies the maximum number of tokens (roughly word fragments) that the generated response can have.
- top_p: Controls the diversity of the generated response via nucleus sampling: the model considers only the most probable tokens whose cumulative probability adds up to this value.
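To make parse_message concrete, here is a simplified example of the kind of update JSON that Telegram posts to the webhook (most fields are omitted and the values are invented for illustration), assuming the parse_message function defined above is in scope:
# simplified Telegram update with made-up values
update = {
    'update_id': 123456789,
    'message': {
        'message_id': 1,
        'chat': {'id': 987654321, 'type': 'private'},
        'text': 'Hello bot!'
    }
}

user_id, text = parse_message(update)
# user_id -> 987654321, text -> 'Hello bot!'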
Virtualenv Config
- Now go back to PythonAnywhere -> Web and find the Virtualenv section. There you must enter the path to the virtual environment you created; if you created it in your home directory as shown above, it is:
/home/{your username}/venv
- Return to the top and reload the web app so the changes take effect.
PythonAnywhere and Telegram
Now our application is ready; we just need to connect it to the Telegram API.
To connect the Telegram API with your host on PythonAnywhere, you need the following:
- Your Telegram bot token (example: 57785662:AHYQ3smCZ4hgjgh1QzMc_lp3D8ugYOPpT-R)
- Your hostname (in my case http://leninelio.pythonanywhere.com/)
With these two values, enter the following in your browser:
- https://api.telegram.org/botYOURBOTTOKEN/setWebhook?url=YOURHOSTNAME
- Example: https://api.telegram.org/bot57785662:AHYQ3smCZ4hgjgh1QzMc_lp3D8ugYOPpT-R/setWebhook?url=http://leninelio.pythonanywhere.com
This will return a JSON response confirming the connection.
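If you prefer to register the webhook from code instead of the browser, the same calls can be made with requests; a small sketch (setWebhook registers your URL and getWebhookInfo lets you confirm it afterwards; the hostname below is a placeholder):
import requests

telegram_token = 'Your telegram token'
hostname = 'https://yourusername.pythonanywhere.com/'

# register the webhook so Telegram forwards new messages to the Flask app
print(requests.get(f'https://api.telegram.org/bot{telegram_token}/setWebhook', params={'url': hostname}).json())
# check that the webhook URL was stored correctly
print(requests.get(f'https://api.telegram.org/bot{telegram_token}/getWebhookInfo').json())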
Thank you
- You can try my bot in this link: https://t.me/le_respuestas_bot
- If you have any questions or suggestions, do not hesitate to comment.
- I share with you some screenshots of the bot.