Conversations
Conversations are the foundation of interactive AI experiences in ChatBotKit, providing a structured way to manage ongoing dialogues between users and AI bots. Each conversation maintains its own context, history, and state, allowing for natural, context-aware interactions that can span multiple messages and sessions.
A conversation serves as a container for messages, maintaining the dialogue history and configuration that determines how the AI responds. Conversations can be associated with bots, contacts, tasks, and spaces, providing flexible organization and management capabilities for different use cases.
Creating Conversations
Creating a conversation initializes a new interactive session with specific configuration options that control the AI's behavior. You can create a conversation by referencing an existing bot (which provides the backstory, model, and other settings) or by providing the configuration directly in the request.
To create a conversation, send a POST request to the conversation creation endpoint with your desired configuration:
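The snippets in this section are illustrative sketches in TypeScript (Node 18+, using the built-in fetch). The base URL, the endpoint paths, and the CHATBOTKIT_API_TOKEN environment variable are assumptions to verify against the API reference, and all IDs shown are hypothetical placeholders.

```typescript
// Minimal sketch: create a conversation that inherits its configuration from a bot.
// Base URL, endpoint path, and token variable are assumptions.
const token = process.env.CHATBOTKIT_API_TOKEN

const response = await fetch('https://api.chatbotkit.com/v1/conversation/create', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${token}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    name: 'Support chat',
    description: 'Conversation for the website support widget',
    botId: 'bot_abc123', // hypothetical bot ID
  }),
})

const { id } = await response.json()
console.log('created conversation:', id)
```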
When creating a conversation, you can specify several key parameters:
- name: A descriptive name for the conversation (optional)
- description: Additional context about the conversation's purpose (optional)
- botId: Reference to an existing bot that provides configuration (optional)
- contactId: Link to a contact record for tracking user interactions (optional)
- taskId: Associate the conversation with a specific task (optional)
- spaceId: Organize the conversation within a space (optional)
- messages: Include initial messages to start the conversation (optional)
Configuration Options
If you don't reference a bot, you can provide configuration directly:
- backstory: Instructions that define the AI's personality and behavior
- model: The language model to use (e.g., "gpt-4", "claude-3-5-sonnet")
- datasetId: Reference to a dataset for knowledge retrieval
- skillsetId: Reference to a skillset for extended capabilities
- privacy: Enable privacy mode to prevent data retention
- moderation: Enable content moderation for safety
Including Initial Messages
You can initialize a conversation with messages by including a messages array:
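For example, a creation request that seeds the dialogue might look like the sketch below; the message type values (user and bot) are assumptions based on common role conventions and should be checked against the API reference.

```typescript
// Sketch: create a conversation seeded with an initial exchange.
const res = await fetch('https://api.chatbotkit.com/v1/conversation/create', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.CHATBOTKIT_API_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    botId: 'bot_abc123', // hypothetical bot ID
    messages: [
      { type: 'user', text: 'Hi, I need help with my order.' },
      { type: 'bot', text: 'Of course! Could you share your order number?' },
    ],
  }),
})
```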
The API will return the created conversation ID and any processed messages, allowing you to immediately continue the interaction.
Important Notes:
- Conversations inherit configuration from their associated bot if a botId is provided, but you can override specific settings by providing them directly
- Each conversation maintains its own message history and context
- Conversations can be organized using contacts, tasks, and spaces for different tracking and filtering needs
- Privacy mode prevents message content from being stored, useful for sensitive conversations
Listing Conversations
Retrieving a list of conversations allows you to access and manage all conversations associated with your account. The list endpoint provides powerful filtering, pagination, and ordering capabilities to help you find and organize conversations efficiently.
To list conversations, send a GET request to the list endpoint:
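A minimal sketch, under the same assumptions as the earlier examples:

```typescript
// Sketch: list conversations for the authenticated account.
const res = await fetch('https://api.chatbotkit.com/v1/conversation/list', {
  headers: { Authorization: `Bearer ${process.env.CHATBOTKIT_API_TOKEN}` },
})

const { items } = await res.json()

for (const conversation of items) {
  console.log(conversation.id, conversation.name)
}
```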
This returns all conversations for the authenticated user, ordered by creation date (most recent first) by default.
Pagination and Ordering
The list endpoint supports cursor-based pagination for efficient retrieval of large conversation sets:
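For instance, fetching the next page with a previously returned cursor might look like this sketch (parameter names follow the list below; all values are hypothetical):

```typescript
// Sketch: request the next page of results using the cursor from the previous response.
const params = new URLSearchParams({
  take: '25',
  order: 'desc',
  cursor: 'cur_xyz789', // hypothetical cursor from the previous page
})

const res = await fetch(
  `https://api.chatbotkit.com/v1/conversation/list?${params}`,
  { headers: { Authorization: `Bearer ${process.env.CHATBOTKIT_API_TOKEN}` } }
)
```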
Available parameters for pagination and ordering include:
- cursor: A pagination cursor from the previous response to fetch the next page of results
- take: Number of conversations to retrieve (default and maximum depend on your plan)
- order: Sort order, either "asc" (ascending) or "desc" (descending) by creation date
The response includes an items array containing conversation objects, each with their ID, name, description, configuration, and timestamps. If there are more results available, the response will include a cursor for fetching the next page.
Filtering by Relationships
You can filter conversations by their associated resources using query parameters:
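For example, to narrow the list to conversations linked to a particular bot and contact (IDs are hypothetical):

```typescript
// Sketch: filter the conversation list by related resources.
const params = new URLSearchParams({
  botId: 'bot_abc123',
  contactId: 'contact_def456',
})

const res = await fetch(
  `https://api.chatbotkit.com/v1/conversation/list?${params}`,
  { headers: { Authorization: `Bearer ${process.env.CHATBOTKIT_API_TOKEN}` } }
)
```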
Supported filter parameters include:
- botId: Filter by associated bot
- contactId: Filter by associated contact
- taskId: Filter by associated task
Filtering by Metadata
Conversations with custom metadata can be filtered using meta queries. This allows you to organize and retrieve conversations based on your own custom fields and values.
Response Format
Each conversation in the response includes:
- Core identifiers (id)
- Basic information (name, description)
- Resource relationships (botId, contactId, taskId, spaceId, datasetId, skillsetId)
- Configuration (backstory, model, privacy, moderation)
- Metadata (meta)
- Timestamps (createdAt, updatedAt)
Best Practices:
- Use pagination for large conversation sets to improve performance
- Apply filters to narrow results when searching for specific conversations
- Consider the order parameter based on your use case (recent conversations vs. oldest first)
- Store cursors for efficient navigation through paginated results
Fetching a Conversation
Retrieving a specific conversation provides access to its complete configuration, including all settings, relationships, and metadata. This is useful when you need to inspect a conversation's current state, verify its configuration, or retrieve details for display or modification.
To fetch a conversation, send a GET request with the conversation ID:
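A minimal sketch, with the same assumed base URL and authentication as the earlier examples:

```typescript
// Sketch: fetch a single conversation by ID.
const conversationId = 'conv_123abc' // hypothetical ID

const res = await fetch(
  `https://api.chatbotkit.com/v1/conversation/${conversationId}/fetch`,
  { headers: { Authorization: `Bearer ${process.env.CHATBOTKIT_API_TOKEN}` } }
)

const conversation = await res.json()
console.log(conversation.name, conversation.botId, conversation.meta)
```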
Replace {conversationId} with the actual ID of the conversation you want to
retrieve. The conversation ID is returned when you create a conversation or
can be obtained from the list endpoint.
Response Details
The response includes the complete conversation object with all configuration and relationship information, including references to associated resources, conversation settings, and metadata.
Use Cases
Fetching a conversation is commonly used to:
- Verify the current configuration before sending messages
- Display conversation details in a user interface
- Retrieve the conversation state for analytics or monitoring
- Check which bot, dataset, or skillset is associated
- Access custom metadata for application-specific logic
Security Note: You can only fetch conversations that belong to your account. Attempting to access another user's conversation will result in an authorization error.
Updating a Conversation
Modifying a conversation allows you to change its configuration, update relationships, or adjust settings after creation. This is useful for adapting the conversation's behavior, correcting information, or changing associations as your application's needs evolve.
To update a conversation, send a POST request with the conversation ID and the fields you want to modify:
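A minimal sketch, assuming the same base URL and endpoint naming as the other examples:

```typescript
// Sketch: update selected fields of a conversation; omitted fields are left unchanged.
const conversationId = 'conv_123abc' // hypothetical ID

await fetch(
  `https://api.chatbotkit.com/v1/conversation/${conversationId}/update`,
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.CHATBOTKIT_API_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      name: 'Renamed conversation',
      description: 'Updated after onboarding',
    }),
  }
)
```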
Replace {conversationId} with the actual ID of the conversation you want to
update. You only need to include the fields you want to change; all other
fields will remain unchanged.
Updateable Fields
You can update the following conversation properties:
Basic Information:
- name: Change the conversation's display name
- description: Update the conversation's description
Relationships:
- botId: Change the associated bot (null to remove association)
- contactId: Change the associated contact (null to remove)
- taskId: Change the associated task (null to remove)
- spaceId: Change the associated space (null to remove)
- datasetId: Change the dataset for knowledge retrieval (null to remove)
- skillsetId: Change the skillset for capabilities (null to remove)
Configuration:
- backstory: Modify the AI's instructions and behavior
- model: Switch to a different language model
- privacy: Enable or disable privacy mode
- moderation: Enable or disable content moderation
Metadata:
- meta: Update or add custom metadata fields
Example: Changing AI Behavior
You can modify the conversation's backstory to change how the AI responds:
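Reusing the update request sketched above, only the body changes (the backstory text is illustrative):

```typescript
// Sketch: only the backstory is sent; all other settings remain as they were.
const body = {
  backstory:
    'You are a concise support assistant. Answer in short paragraphs and ' +
    'always ask for an order number before troubleshooting.',
}
```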
Example: Switching Models
To use a different language model for better performance or cost optimization:
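Again only the request body differs (the model identifier is illustrative):

```typescript
// Sketch: switch the conversation to a different language model.
const body = { model: 'gpt-4' }
```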
Example: Updating Relationships
Associate the conversation with a different bot or dataset:
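As a sketch, with hypothetical IDs:

```typescript
// Sketch: point the conversation at a different bot and dataset.
// Passing null instead removes an association entirely.
const body = {
  botId: 'bot_new789',
  datasetId: 'dataset_kb001',
}
```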
Metadata Management
The update operation intelligently merges metadata. If you provide a meta object, it will merge with existing metadata rather than replacing it entirely, preserving fields you don't explicitly update.
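For example, if a conversation already carries meta of { plan: 'pro', region: 'eu' }, the sketch below adds a field without discarding the existing ones:

```typescript
// Sketch: after this update the merged meta is
// { plan: 'pro', region: 'eu', priority: 'high' } -- existing keys are preserved.
const body = {
  meta: { priority: 'high' },
}
```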
Important Considerations:
- Updating a conversation does not affect its existing message history
- Configuration changes apply to future messages in the conversation
- Changing the model or backstory will change how the AI responds going forward
- Updates are applied immediately and affect the next interaction
- You can only update conversations that belong to your account
Deleting a Conversation
Deleting a conversation permanently removes it along with all associated messages and data. This operation is irreversible and should be used carefully, typically for cleanup, privacy compliance, or when a conversation is no longer needed.
To delete a conversation, send a POST request with the conversation ID:
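A minimal sketch, using the same assumed endpoint naming:

```typescript
// Sketch: permanently delete a conversation.
const conversationId = 'conv_123abc' // hypothetical ID

const res = await fetch(
  `https://api.chatbotkit.com/v1/conversation/${conversationId}/delete`,
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.CHATBOTKIT_API_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({}), // empty JSON object, as noted above
  }
)
```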
Replace {conversationId} with the actual ID of the conversation you want to
delete. The request body should be an empty JSON object.
What Gets Deleted
When you delete a conversation, the following data is permanently removed:
- The conversation record itself
- All messages within the conversation
- Any associated metadata and configuration
- Message history and context
- File attachments and other associated data
- Related usage statistics for that conversation
Response
Upon successful deletion, the API returns the ID of the deleted conversation:
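Continuing the deletion sketch above:

```typescript
// Sketch: the response body carries the deleted conversation's ID,
// e.g. { "id": "conv_123abc" } (ID value is hypothetical).
const { id } = await res.json()
console.log('deleted conversation:', id)
```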
This confirms which conversation was deleted and can be used for logging or auditing purposes.
Data Relationships
Deleting a conversation does not affect:
- The bot referenced by the conversation (if any)
- The contact associated with the conversation (if any)
- The task linked to the conversation (if any)
- Any datasets or skillsets referenced by the conversation
- Other conversations or resources in your account
Only the conversation itself and its direct contents (messages) are removed.
Use Cases
Common scenarios for deleting conversations include:
- Privacy Compliance: Removing user data upon request (GDPR, CCPA)
- Cleanup: Removing test or obsolete conversations
- Data Management: Pruning old conversations to manage storage
- Error Correction: Removing conversations created by mistake
- User-Initiated Deletion: Allowing users to delete their conversation history
Bulk Deletion
To delete multiple conversations, you'll need to call the delete endpoint for each conversation individually. Consider implementing rate limiting and error handling when performing bulk deletions to avoid overwhelming the API.
Warning: This operation is permanent and cannot be undone. Ensure you have proper authorization checks and confirmation flows in your application before allowing conversation deletion. Consider implementing a soft-delete pattern in your application if you need the ability to recover deleted conversations.
Security Note: You can only delete conversations that belong to your account. Attempting to delete another user's conversation will result in an authorization error.
Sending Messages to a Conversation
The send endpoint allows you to send a user message to a conversation and add it to the conversation history. The message is processed and events may be generated, but this endpoint does not produce an AI response. To receive the AI's response, you need to call the receive route separately. This design provides flexibility in controlling conversation flow and separating message sending from response generation.
To send a message to a conversation, use a POST request with streaming support:
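The sketch below sends a message and reads the JSONL event stream; the endpoint path and the Accept header value are assumptions.

```typescript
// Sketch: send a user message and consume the streamed JSONL events.
const conversationId = 'conv_123abc' // hypothetical ID

const res = await fetch(
  `https://api.chatbotkit.com/v1/conversation/${conversationId}/send`,
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.CHATBOTKIT_API_TOKEN}`,
      'Content-Type': 'application/json',
      Accept: 'application/jsonl',
    },
    body: JSON.stringify({ text: 'What are your opening hours?' }),
  }
)

// Each non-empty line in the response body is a separate JSON event.
const reader = res.body!.getReader()
const decoder = new TextDecoder()
let buffer = ''

while (true) {
  const { done, value } = await reader.read()
  if (done) break
  buffer += decoder.decode(value, { stream: true })

  let newline: number
  while ((newline = buffer.indexOf('\n')) !== -1) {
    const line = buffer.slice(0, newline).trim()
    buffer = buffer.slice(newline + 1)
    if (line) console.log(JSON.parse(line))
  }
}
```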
Replace {conversationId} with the actual ID of the conversation. The text
field is required and contains the user's message.
How Send Works
The send endpoint adds your message to the conversation and processes it, which may generate events based on the message content, but it does not produce an AI response on its own. To receive a message from the AI agent, call the receive route separately. This separation gives you finer control over conversation flow by decoupling message sending from response generation.
The response is delivered as a stream of JSON lines (JSONL), where each line represents an event related to message processing.
Advanced Features
The send endpoint supports several advanced features for enhanced functionality:
Function Calling:
You can enable the AI to call functions during the conversation:
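The exact shape of the function declaration isn't reproduced here; the sketch below shows one plausible form, and the field names (functions, name, description, parameters) are assumptions to check against the API reference.

```typescript
// Sketch only: declare a function the AI may choose to call. Field names are assumptions.
const body = {
  text: 'What is the weather in Paris?',
  functions: [
    {
      name: 'getWeather',
      description: 'Look up the current weather for a given city',
      parameters: {
        type: 'object',
        properties: { city: { type: 'string' } },
        required: ['city'],
      },
    },
  ],
}
```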
When the AI determines a function call is appropriate, it will include function call information in the streaming response. The result object is used to return the function execution results.
Extensions (Trusted Sessions Only):
For trusted API sessions, you can temporarily extend the conversation's capabilities:
- extensions.backstory: Add additional instructions for this message only
- extensions.datasets: Provide inline dataset records for context
- extensions.skillsets: Add temporary abilities for this interaction
- extensions.features: Enable specific features for this message
Response Structure
The final result event includes the ID of the created message and usage statistics for the operation.
Message Flow
When you send a message:
- Your message is added to the conversation history
- The message is processed and events may be generated
- The message ID is returned in the result event
- No AI response is generated (use the receive route to get the AI response)
- The conversation is ready for further interactions
Best Practices
- Handle Streaming Properly: Implement proper streaming parsing in your client to handle JSONL responses
- Handle Errors Gracefully: Watch for error events in the stream and display appropriate messages
- Respect Rate Limits: Be aware of message and token rate limits for your account
Important Notes:
- The conversation maintains full message history for context
- The send operation adds your message to the conversation but does not generate an AI response
- To receive an AI response, call the receive route after sending
- Token usage is tracked and counted against your account limits
- Streaming responses can be interrupted if the connection is lost
Complete Conversation Interaction
The complete endpoint provides a full round-trip conversation interaction, sending a user message and receiving the AI's complete response through a streaming connection. Unlike the send endpoint, which only sends the message, complete handles both sending and receiving in a single operation, making it ideal for traditional request-response chat patterns.
To complete a conversation interaction, send a POST request. The API supports
both streaming and non-streaming responses. For streaming, include the
Accept: application/jsonl header; otherwise, the response defaults to
non-streaming JSON:
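A sketch of a streaming round trip follows (the endpoint path is an assumption; drop the Accept header for a plain JSON response):

```typescript
// Sketch: send a message and stream the AI's response in a single call.
const conversationId = 'conv_123abc' // hypothetical ID

const res = await fetch(
  `https://api.chatbotkit.com/v1/conversation/${conversationId}/complete`,
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.CHATBOTKIT_API_TOKEN}`,
      'Content-Type': 'application/json',
      Accept: 'application/jsonl', // omit for a single non-streaming JSON response
    },
    body: JSON.stringify({ text: 'Summarize my last order, please.' }),
  }
)

// Parse the JSONL stream line by line, as in the send example above.
```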
Replace {conversationId} with the actual conversation ID. The text field
contains the user's message and is required.
How Complete Works
The complete endpoint orchestrates a full conversation turn:
- Send Phase: Your message is added to the conversation and processed
- Processing: The AI analyzes the message with full conversation context
- Receive Phase: The AI generates and streams its response
- Result: Both messages are saved to the conversation history
This two-phase approach (send, then receive) ensures that both the user's message and the AI's response are properly recorded and contribute to the ongoing conversation context.
Streaming Response Events
The complete endpoint delivers a stream of events as JSONL (JSON Lines), with three main event types:
send_result Event:
Emitted after the user's message is processed, containing:
- id: The ID of the user's message
- text: The user's message text
- entities: Extracted entities from the user's message
- usage: Token usage for processing the user's message
receive_result Event:
Emitted after the AI's response is complete, containing:
- id: The ID of the AI's response message
- text: The AI's complete response text
- usage: Cumulative token usage for the entire interaction
Streaming Tokens:
Between send_result and receive_result, the AI's response is streamed as individual tokens (word pieces), allowing you to display the response incrementally as it's generated.
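Reusing the line-by-line reader from the send example, event handling can be dispatched as in the sketch below; the event envelope (a type field plus event-specific data) is an assumption to verify against the actual stream.

```typescript
// Sketch only: react to the streamed event kinds. The envelope shape is an assumption.
function handleEvent(event: { type: string; data: Record<string, any> }) {
  switch (event.type) {
    case 'send_result':
      console.log('user message id:', event.data.id)
      break
    case 'receive_result':
      console.log('AI message id:', event.data.id, 'usage:', event.data.usage)
      break
    default:
      // Streamed tokens arrive between the two result events;
      // append them to the UI incrementally as they come in.
      break
  }
}
```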
Advanced Features
The complete endpoint supports advanced features for enhanced functionality:
Function Calling:
Enable the AI to call functions during the interaction:
Extensions (Trusted Sessions Only):
For API sessions with trusted status, you can extend conversation capabilities for a single interaction:
- extensions.backstory: Additional instructions for this specific interaction
- extensions.datasets: Inline dataset records to provide context
- extensions.skillsets: Temporary abilities for this message
- extensions.features: Enable specific features for this interaction
When to Use Complete vs Send
Use Complete When:
- You want a traditional request-response chat pattern
- You need both messages saved in a single operation
- You want separated send and receive events in the stream
- Your application requires explicit confirmation of both phases
Use Send When:
- You only need to send a message without waiting for a response
- You're implementing a fire-and-forget pattern
- You have a different mechanism for receiving responses
Error Handling
The complete endpoint includes comprehensive error handling. If an error occurs during either the send or receive phase, an error event will be included in the stream with details about what went wrong. Your client should handle these error events gracefully and provide appropriate feedback to users.
Performance Considerations
- The complete operation can take up to 800 seconds for long-running generations
- Token streaming provides immediate feedback while generation continues
- Both send and receive phases count toward token usage limits
- Rate limits apply to both message count and token usage
Best Practices:
- Implement proper JSONL streaming parsing in your client
- Handle all three event types (send_result, receive_result, and tokens)
- Display tokens incrementally for better user experience
- Watch for error events and handle them appropriately
- Store message IDs for reference and conversation management
- Monitor usage data to track conversation costs