Understanding Function Tool Calls in GenAI: Building Smarter Chatbots
In the world of Generative AI (GenAI), one of the most exciting capabilities is the ability to integrate with external systems and tools. Take, for instance, the use case of an e-commerce chatbot: we might want to give the chatbot the ability to fetch product details, order status, or user details while interacting with the user.
This tutorial shows you how to implement function tool calls for a chatbot in Python using Azure OpenAI. We'll explore the concept and its practical implementation.
What Are Function Tool Calls?
Function tool calls enable large language models (LLMs) to interact with external systems by invoking predefined functions. These functions act as bridges between the AI model and external services, databases, or APIs, allowing the chatbot to:
Fetch specific data, such as user details or product information.
Perform actions, like updating an order status or fetching transactional data.
Make decisions based on real-time information.
This mechanism transforms the chatbot from a passive responder into an interactive, task-oriented assistant.
Why Use Function Tool Calls in E-Commerce?
In e-commerce, customers often require detailed information about products, orders, or their accounts. Let’s break down a few scenarios where function tool calls are indispensable:
Fetching Product Details: Customers may ask, "What are the dimensions of product X?" A tool call fetches the relevant details from a product database.
Order Status Queries: A customer inquires, "When will my order be delivered?" The chatbot retrieves order information in real-time.
User-Specific Queries: A returning customer might say, "Show me my past orders." With function tool calls, the bot ensures the response is personalized and secure.
By integrating these functionalities, the chatbot becomes a reliable virtual assistant rather than a generic query engine.
Database
Before we dive into the implementation of the function tool calls themselves, we first need to look at the database that we'll be using.
CREATE TABLE users (
user_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
username TEXT NOT NULL UNIQUE,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE products (
product_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
product_name TEXT NOT NULL,
product_description TEXT,
thumbnail_url TEXT,
dimension TEXT,
weight DECIMAL,
price DECIMAL(10,2) NOT NULL,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE orders (
order_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES users(user_id),
ordered_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
delivery_status TEXT NOT NULL,
delivery_time TIMESTAMP WITH TIME ZONE
);
CREATE TABLE order_items (
order_id UUID REFERENCES orders(order_id),
product_id UUID REFERENCES products(product_id),
quantity INTEGER NOT NULL,
price_at_time DECIMAL(10,2) NOT NULL,
PRIMARY KEY (order_id, product_id)
);
This database schema is intentionally simple and contains only a few tables for users, products, and orders. In a real-world scenario you would likely have a much more complex schema with many more tables and relationships, but for the sake of this tutorial we'll keep it simple.
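If you want to follow along, you can seed the tables with a few rows. The user and order IDs below match the ones that appear in the conversation excerpt at the end of this post; the product row is made up for illustration:
INSERT INTO users (user_id, username) VALUES
    ('123e4567-e89b-12d3-a456-426614174005', 'sarah.wilson@example.com');
INSERT INTO products (product_id, product_name, product_description, dimension, weight, price) VALUES
    ('123e4567-e89b-12d3-a456-426614174101', 'Desk Lamp', 'Adjustable LED desk lamp', '40x15x15 cm', 1.2, 39.90);
INSERT INTO orders (order_id, user_id, delivery_status) VALUES
    ('123e4567-e89b-12d3-a456-426614174206', '123e4567-e89b-12d3-a456-426614174005', 'processing');
INSERT INTO order_items (order_id, product_id, quantity, price_at_time) VALUES
    ('123e4567-e89b-12d3-a456-426614174206', '123e4567-e89b-12d3-a456-426614174101', 2, 39.90);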
Anatomy of a Function Tool Call
Now that we have our database schema, we can look at how to implement function tool calls. Let's break down the components of a tool call in our e-commerce chatbot implementation:
Function Registration
Each function is registered with the LLM alongside its name, description, and parameters. Here’s an example for fetching all orders of a user:
def get_orders_by_user_id_function_tool():
return {
"type": "function",
"function": {
"name": "get_orders_by_user_id",
"description": "Get all orders of a user by the user ID",
"parameters": {
"type": "object",
"properties": {
"user_id": {
"type": "string",
"description": "The ID of the user",
},
},
"required": ["order_id", "user_id"],
},
},
}
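The other tools used later in this post follow the same pattern. For example, a registration for get_user_by_username (referenced in the main function below) would plausibly look like this; the exact description strings are our assumption:
def get_user_by_username_function_tool():
    return {
        "type": "function",
        "function": {
            "name": "get_user_by_username",
            "description": "Get a user by their username",
            "parameters": {
                "type": "object",
                "properties": {
                    "username": {
                        "type": "string",
                        "description": "The username of the user",
                    },
                },
                "required": ["username"],
            },
        },
    }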
Invocation and Execution
When the user query requires external data, the LLM invokes the appropriate function. For example, when a user asks for their order history, the model identifies the relevant function and supplies the necessary parameters.
Here’s how invocation works in Python:
functions = [
get_orders_by_user_id_function_tool(),
]
response = client.chat.completions.create(
model=model,
messages=messages,
tools=functions,
)
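By default the model decides on its own whether to answer directly or to call one of the supplied tools. Recent versions of the chat completions API also accept a tool_choice parameter if you want to make this explicit:
response = client.chat.completions.create(
    model=model,
    messages=messages,
    tools=functions,
    tool_choice="auto",  # default: the model decides; "required" would force a tool call
)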
Function Implementation
The actual logic for the function executes the query and processes the data. For example, fetching orders of a user by the user ID:
from datetime import datetime
from typing import List, Optional

import psycopg2

class Order:
def __init__(
self,
order_id: str,
user_id: str,
delivery_status: str,
ordered_at: datetime,
delivery_time: Optional[datetime],
):
self.order_id = order_id
self.user_id = user_id
self.delivery_status = delivery_status
self.ordered_at = ordered_at
self.delivery_time = delivery_time
def get_orders_by_user_id(cursor: psycopg2.extensions.cursor, user_id: str) -> List[Order]:
cursor.execute(
"SELECT order_id, user_id, delivery_status, ordered_at, delivery_time FROM orders WHERE user_id = %s",
(user_id,),
)
orders = cursor.fetchall()
return [
Order(
order_id=order[0],
user_id=order[1],
delivery_status=order[2],
ordered_at=order[3],
delivery_time=order[4],
)
for order in orders
]
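Note that the tool result is later passed to json.dumps() (see the response handling below), and plain Python objects like Order are not JSON-serializable out of the box. One way to handle this, sketched here with a hypothetical helper, is to convert each object to primitives before returning it from the routing layer:
def order_to_dict(order: Order) -> dict:
    # Hypothetical helper: convert an Order into JSON-serializable primitives.
    # UUIDs and datetimes returned by psycopg2 are not handled by json.dumps by default.
    return {
        "order_id": str(order.order_id),
        "user_id": str(order.user_id),
        "delivery_status": order.delivery_status,
        "ordered_at": order.ordered_at.isoformat(),
        "delivery_time": order.delivery_time.isoformat() if order.delivery_time else None,
    }
You could then return [order_to_dict(o) for o in get_orders_by_user_id(...)] from the routing layer, or have the query functions return dictionaries directly.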
Response Handling
Once the function is executed, the chatbot integrates the retrieved data into its reply.
# Check if there are tool calls
if response.choices[0].message.tool_calls:
tool_calls = response.choices[0].message.tool_calls
for tool_call in tool_calls:
        result = route_function_call(cursor, tool_call)
messages.append(
{
"role": "tool",
"tool_call_id": tool_call.id,
"name": tool_call.function.name,
"content": json.dumps(result),
}
)
Example: Fetching Order Details
Imagine a customer asks, "Tell me more about my last order." Here’s how the process unfolds:
User Query
{"role": "user", "content": "Tell me more about my last order."}
Function Call: The LLM determines that the query requires information about the last order and invokes the get_orders_by_user_id function.
Database Interaction: The function implementation fetches all orders of the user from the database.
Response Construction: The LLM uses the function's output to generate a user-friendly reply:
{"role": "assistant", "content": "Your latest order, placed on January 18, 2024, is currently in processing. Here are the details of the items included in this order:..."}
Main Function: Tying It All Together
To demonstrate how all the components work together, here are the route_function_call() and main() functions for our chatbot. Make sure to set up a PostgreSQL database with the schema above and the required environment variables in a .env file like this (the variable names are taken from the code below; the values are placeholders):
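AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com/
AZURE_OPENAI_API_KEY=<your-api-key>
AZURE_OPENAI_MODEL=<your-deployment-name>
AZURE_OPENAI_API_VERSION=<api-version, e.g. 2024-02-01>
POSTGRES_HOST=localhost
POSTGRES_DB=<your-database>
POSTGRES_USER=<your-user>
POSTGRES_PASSWORD=<your-password>
With the environment variables and database in place, here is the full program: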
import os
import openai
import dotenv
import json
import psycopg2
from typing import List
class ChatResponse:
def __init__(self, response_text: str, tokens_used: int):
self.response_text = response_text
self.tokens_used = tokens_used
def get_chat_response(
cursor: psycopg2.extensions.cursor,
client,
model: str,
messages: List[dict],
functions: List[dict],
) -> ChatResponse:
response = client.chat.completions.create(
model=model,
messages=messages,
tools=functions,
)
# Check if there are tool calls
if response.choices[0].message.tool_calls:
tool_calls = response.choices[0].message.tool_calls
# Add assistant's response with tool calls to messages
messages.append(response.choices[0].message.model_dump())
# Process all tool calls and add their responses
for tool_call in tool_calls:
            result = route_function_call(cursor, tool_call)
messages.append(
{
"role": "tool",
"tool_call_id": tool_call.id,
"name": tool_call.function.name,
"content": json.dumps(result),
}
)
# Recursive call with updated messages
        return get_chat_response(cursor, client, model, messages, functions)
response_content = response.choices[0].message.content
return ChatResponse(
response_text=response_content,
tokens_used=response.usage.total_tokens,
)
def route_function_call(cursor: psycopg2.extensions.cursor, tool_call) -> dict:
function_name = tool_call.function.name
function_args = json.loads(tool_call.function.arguments)
if function_name == "get_user_by_username":
return get_user_by_username(cursor, function_args["username"])
if function_name == "get_all_products":
return get_all_products(cursor)
if function_name == "get_product_by_id":
return get_product_by_id(cursor, function_args["product_id"])
if function_name == "get_orders_by_user_id":
return get_orders_by_user_id(cursor, function_args["user_id"])
if function_name == "get_order_items_by_order_id":
return get_order_items_by_order_id(
cursor, function_args["order_id"], function_args["user_id"]
)
raise ValueError(f"Unknown function: {function_name}")
def main():
dotenv.load_dotenv()
endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
api_key = os.getenv("AZURE_OPENAI_API_KEY")
model = os.getenv("AZURE_OPENAI_MODEL")
api_version = os.getenv("AZURE_OPENAI_API_VERSION")
conn = psycopg2.connect(
host=os.getenv("POSTGRES_HOST"),
database=os.getenv("POSTGRES_DB"),
user=os.getenv("POSTGRES_USER"),
password=os.getenv("POSTGRES_PASSWORD"),
)
cursor = conn.cursor()
client = openai.AzureOpenAI(
azure_endpoint=endpoint, api_key=api_key, api_version=api_version
)
messages = [
{
"role": "system",
"content": "You are an assistant for an e-commerce application. Respond to user queries by performing relevant operations.",
},
]
    functions = [
        get_user_by_username_function_tool(),
        get_all_products_function_tool(),
        get_product_by_id_function_tool(),
        get_orders_by_user_id_function_tool(),
        get_order_items_by_order_id_function_tool(),
    ]
while True:
user_input = input("User: ")
messages.append({"role": "user", "content": user_input})
response = get_chat_response(cursor, client, model, messages, functions)
print(f"Assistant: {response.response_text}")
if __name__ == "__main__":
main()
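Before we walk through get_chat_response in more detail, a quick aside on route_function_call: as the number of tools grows, the chain of if statements can become unwieldy. A dispatch table is one possible refactor; this is a sketch, not the implementation used above:
# Map each tool name to a handler once, then look it up at call time.
FUNCTION_HANDLERS = {
    "get_user_by_username": lambda cur, args: get_user_by_username(cur, args["username"]),
    "get_all_products": lambda cur, args: get_all_products(cur),
    "get_product_by_id": lambda cur, args: get_product_by_id(cur, args["product_id"]),
    "get_orders_by_user_id": lambda cur, args: get_orders_by_user_id(cur, args["user_id"]),
    "get_order_items_by_order_id": lambda cur, args: get_order_items_by_order_id(
        cur, args["order_id"], args["user_id"]
    ),
}

def route_function_call(cursor: psycopg2.extensions.cursor, tool_call) -> dict:
    function_name = tool_call.function.name
    function_args = json.loads(tool_call.function.arguments)
    handler = FUNCTION_HANDLERS.get(function_name)
    if handler is None:
        raise ValueError(f"Unknown function: {function_name}")
    return handler(cursor, function_args)
The behavior is identical; the dictionary lookup simply replaces the if-chain.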
Implementation of get_chat_response
The get_chat_response function plays a critical role in integrating the LLM with the registered functions and handling user queries.
Initialization: Sends the conversation, including the latest user query, to the Azure OpenAI API along with the registered functions.
Tool Call Handling: Checks whether the model's response includes tool calls and routes each one to its implementation via route_function_call().
Recursive Execution: After appending the tool results to the message history, the function calls itself so the model can incorporate the new data.
Final Response: Once the model answers without requesting further tool calls, the function returns a cohesive response to the user.
This implementation ensures that the chatbot can handle complex queries and retrieve relevant data from the database while autonomously deciding which functions to call, in which order to call them, and how to integrate the results into the conversation.
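One practical caveat: because get_chat_response calls itself whenever the model emits tool calls, a model that keeps requesting tools could recurse indefinitely. A simple guard is to cap the number of tool-call rounds; here is a minimal sketch as an iterative variant, assuming a cap of five rounds (not part of the code above):
def get_chat_response_with_limit(
    cursor: psycopg2.extensions.cursor,
    client,
    model: str,
    messages: List[dict],
    functions: List[dict],
    max_rounds: int = 5,  # assumed cap; tune to your use case
) -> ChatResponse:
    for _ in range(max_rounds):
        response = client.chat.completions.create(
            model=model, messages=messages, tools=functions
        )
        message = response.choices[0].message
        if not message.tool_calls:
            return ChatResponse(
                response_text=message.content,
                tokens_used=response.usage.total_tokens,
            )
        # Record the assistant's tool calls, then answer each of them.
        messages.append(message.model_dump())
        for tool_call in message.tool_calls:
            result = route_function_call(cursor, tool_call)
            messages.append(
                {
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "name": tool_call.function.name,
                    "content": json.dumps(result),
                }
            )
    raise RuntimeError("Exceeded maximum number of tool-call rounds")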
One aspect that we didn't cover here is the security of the database access. Imagine a user asks the following question:
{"role": "user", "content": "Show me all orders of user 'john_doe'"}
The chatbot would then need to fetch all orders of the user john_doe from the database. Let's quickly bring back the get_orders_by_user_id function:
def get_orders_by_user_id(cursor: psycopg2.extensions.cursor, user_id: str) -> List[Order]:
cursor.execute(
"SELECT order_id, user_id, delivery_status, ordered_at, delivery_time FROM orders WHERE user_id = %s",
(user_id,),
)
orders = cursor.fetchall()
return [
Order(
order_id=order[0],
user_id=order[1],
delivery_status=order[2],
ordered_at=order[3],
delivery_time=order[4],
)
for order in orders
]
If you look at the function signature, you'll see that it takes a user_id as an argument, meaning the LLM decides which user's orders to fetch. If database access is not secured, this could expose sensitive user data to unauthorized users.
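Before diving into full authentication and row-level security (covered in the follow-up posts), a pragmatic first step is to never let the model choose the user: resolve the user ID from the authenticated session on the server and ignore whatever the model supplies. A minimal sketch, assuming a hypothetical session_user_id provided by your authentication layer:
def route_function_call_secure(
    cursor: psycopg2.extensions.cursor,
    tool_call,
    session_user_id: str,  # hypothetical: resolved by your authentication layer
) -> dict:
    function_name = tool_call.function.name
    if function_name == "get_orders_by_user_id":
        # Ignore any user_id the model supplied; pin the query to the session.
        return get_orders_by_user_id(cursor, session_user_id)
    if function_name == "get_order_items_by_order_id":
        function_args = json.loads(tool_call.function.arguments)
        return get_order_items_by_order_id(
            cursor, function_args["order_id"], session_user_id
        )
    # Tools that are not user-scoped can use the original routing.
    return route_function_call(cursor, tool_call)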
In the second blog post of this series, we demonstrate how user authentication can be implemented with Zitadel and OAuth 2.0.
In the third blog post of this series, we'll explore how to secure database access with PostgreSQL's row-level security, ensuring reliable protection for user data.
Conclusion
Function tool calls empower chatbots to go beyond simple conversational abilities, transforming them into interactive agents capable of real-world tasks.
Agentic AI and Use Cases
Agentic AI refers to advanced autonomous agents that can interact with diverse data sources, systems, and devices to make intelligent, data-driven decisions and execute complex workflows without human intervention. By connecting previously siloed systems and leveraging real-time data, agentic AI enables organizations to operate more efficiently, personalize user experiences, and optimize decision-making. The key advantage of agentic AI is its ability to automate processes end-to-end, seamlessly bridging gaps between different media and systems.
Use cases for agentic AI include:
AI-powered virtual assistants that handle customer inquiries in real time
AI agents that assess the validity of insurance claims, collect relevant information from various databases, and communicate empathetically with customers
Real-time monitoring and optimization of operations using IoT data
Route and operations optimization in transportation networks
Generation of documents, reports, and other correspondence based on data from various sources
Conversation Excerpt
If you are interested in a more detailed look at the interaction between the LLM and the tool calls, check out the following conversation. Note how the LLM chooses the right functions to call and how it integrates the results into the conversation and subsequent tool calls until the user query is answered.
The excerpt below is not complete, but it demonstrates the interaction between the LLM and the tool calls:
User Query
[
{
"role": "system",
"content": "You are an assistant for an e-commerce application. Respond to user queries by performing relevant operations."
},
{
"role": "user",
"content": "Summarize my latest order."
}
]
Assistant Response
{
"content": null,
"refusal": null,
"role": "assistant",
"audio": null,
"function_call": null,
"tool_calls": [
{
"id": "call_zkNcCIGEyNR2bUO73lxcFXZb",
"function": {
"arguments": "{\"username\":\"sarah.wilson@example.com\"}",
"name": "get_user_by_username"
},
"type": "function"
}
]
}
First Tool Call
tool_calls=[{'function_name': 'get_user_by_username', 'arguments': '{"username":"sarah.wilson@example.com"}'}]
LLM Invocation with Tool Call Result
[
{
"role": "system",
"content": "You are an assistant for an e-commerce application. Respond to user queries by performing relevant operations."
},
{
"role": "user",
"content": "Summarize my latest order."
},
{
"content": None,
"refusal": None,
"role": "assistant",
"audio": None,
"function_call": None,
"tool_calls": [
{
"id": "call_zkNcCIGEyNR2bUO73lxcFXZb",
"function": {
"arguments": "{\"username\": \"sarah.wilson@example.com\"}",
"name": "get_user_by_username"
},
"type": "function"
}
]
},
{
"role": "tool",
"tool_call_id": "call_zkNcCIGEyNR2bUO73lxcFXZb",
"name": "get_user_by_username",
"content": "{\"user_id\": \"123e4567-e89b-12d3-a456-426614174005\", \"username\": \"sarah.wilson@example.com\"}"
}
]
Assistant Response
{
"content": null,
"refusal": null,
"role": "assistant",
"audio": null,
"function_call": null,
"tool_calls": [
{
"id": "call_4zxUxeiY9aZwTcn1sGjSuszO",
"function": {
"arguments": "{\"user_id\":\"123e4567-e89b-12d3-a456-426614174005\"}",
"name": "get_orders_by_user_id"
},
"type": "function"
}
]
}
Second Tool Call
tool_calls=[{'function_name': 'get_orders_by_user_id', 'arguments': '{"user_id":"123e4567-e89b-12d3-a456-426614174005"}'}]
LLM Invocation with Tool Call Result
[
{
"role": "system",
"content": "You are an assistant for an e-commerce application. Respond to user queries by performing relevant operations."
},
{
"role": "user",
"content": "Summarize my latest order."
},
{
"content": None,
"refusal": None,
"role": "assistant",
"audio": None,
"function_call": None,
"tool_calls": [
{
"id": "call_zkNcCIGEyNR2bUO73lxcFXZb",
"function": {
"arguments": "{\"username\": \"sarah.wilson@example.com\"}",
"name": "get_user_by_username"
},
"type": "function"
}
]
},
{
"role": "tool",
"tool_call_id": "call_zkNcCIGEyNR2bUO73lxcFXZb",
"name": "get_user_by_username",
"content": "{\"user_id\": \"123e4567-e89b-12d3-a456-426614174005\", \"username\": \"sarah.wilson@example.com\"}"
},
{
"content": None,
"refusal": None,
"role": "assistant",
"audio": None,
"function_call": None,
"tool_calls": [
{
"id": "call_4zxUxeiY9aZwTcn1sGjSuszO",
"function": {
"arguments": "{\"user_id\": \"123e4567-e89b-12d3-a456-426614174005\"}",
"name": "get_orders_by_user_id"
},
"type": "function"
}
]
},
{
"role": "tool",
"tool_call_id": "call_4zxUxeiY9aZwTcn1sGjSuszO",
"name": "get_orders_by_user_id",
"content": "[{\"order_id\": \"123e4567-e89b-12d3-a456-426614174206\",\"user_id\": \"123e4567-e89b-12d3-a456-426614174005\", \"delivery_status\": \"processing\", \"ordered_at\": \"2024-01-18T13:20:00+00:00\"},{\"order_id\": \"123e4567-e89b-12d3-a456-426614174207\", \"user_id\": \"123e4567-e89b-12d3-a456-426614174005\", \"delivery_status\": \"delivered\", \"ordered_at\": \"2024-01-14T15:45:00+00:00\", \"delivery_time\": \"2024-01-14T15:45:00+00:00\"}]"
}
]
Disclaimer
This blog post is a simplified example and does not cover all aspects that have to be considered when deploying an LLM-powered application to production.