Ratings
Ratings provide a comprehensive system for capturing and managing feedback across your conversational AI platform. Unlike simple upvote/downvote operations, ratings offer flexible numerical values and detailed reasoning that help you analyze performance patterns, identify improvement opportunities, and make data-driven optimization decisions.
The rating system enables you to track feedback at multiple levels: individual messages, entire conversations, specific bots, and even contact interactions. Each rating includes a numerical value for quantitative analysis and an optional reason field for capturing qualitative insights about what worked well or needs improvement.
Listing Ratings
Retrieve a paginated list of all ratings associated with your account, enabling comprehensive analysis of feedback patterns across your conversational AI implementations. The list operation supports advanced filtering to help you focus on specific aspects of your rating data.
This endpoint returns all ratings you've created, ordered by creation date (most recent first by default). Each rating includes complete context about what was rated, including associated contact, bot, conversation, and message identifiers.
Filtering by Resource
Focus your analysis by filtering ratings for specific resources using query parameters. You can filter by contact, bot, conversation, or message to analyze feedback for particular interactions:
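As an illustration, the query string for such a filtered request can be assembled like this. The parameter names follow the identifiers used throughout this guide; the base URL is a placeholder, so treat the exact endpoint as an assumption:

```python
from urllib.parse import urlencode

# Hypothetical base URL; substitute your platform's actual ratings endpoint.
BASE_URL = "https://api.example.com/v1/ratings"

def list_ratings_url(**filters):
    """Build a list URL filtered by resource identifiers such as botId."""
    query = urlencode({k: v for k, v in filters.items() if v is not None})
    return f"{BASE_URL}?{query}" if query else BASE_URL

# All ratings for one bot within one conversation:
url = list_ratings_url(botId="bot_123", conversationId="conv_456")
```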
The filtering system supports multiple criteria simultaneously, allowing you to create precise queries like "all negative ratings for a specific bot" or "all ratings from a particular contact during a conversation." This flexibility enables targeted analysis of feedback patterns and helps identify specific areas requiring attention.
Pagination and Ordering
Manage large rating datasets efficiently using cursor-based pagination:
The take parameter controls how many ratings to retrieve per request (useful for performance when dealing with thousands of ratings), while the cursor parameter enables efficient pagination through large result sets. Use the order parameter to control sort direction (asc or desc).
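A cursor loop can be sketched as follows. Here fetch_page is a stand-in for the real HTTP call and merely simulates a dataset of ten ratings; the take/cursor semantics mirror the parameters described above:

```python
# Cursor pagination sketch; fetch_page stands in for the real HTTP call
# and simulates a dataset of ten ratings.
def fetch_page(take, cursor=None):
    data = [f"rtg_{i}" for i in range(10)]
    start = int(cursor or 0)
    next_cursor = str(start + take) if start + take < len(data) else None
    return {"items": data[start:start + take], "cursor": next_cursor}

def all_ratings(take=3):
    """Walk every page by following the cursor until it is exhausted."""
    cursor = None
    while True:
        page = fetch_page(take=take, cursor=cursor)
        yield from page["items"]
        cursor = page["cursor"]
        if cursor is None:
            break
```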
Metadata Filtering
Enhance your rating organization by using the metadata filtering system to tag and categorize ratings according to your specific needs. Metadata provides flexible key-value storage for custom attributes, enabling sophisticated analysis and reporting:
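Once a batch of ratings has been retrieved, a simple client-side filter over the meta field looks like this (the sample records are illustrative):

```python
# Illustrative ratings with custom metadata attached via the meta field.
ratings = [
    {"id": "rtg_1", "value": 100, "meta": {"category": "technical"}},
    {"id": "rtg_2", "value": -100, "meta": {"category": "usability"}},
]

def by_meta(items, **criteria):
    """Keep only ratings whose meta matches every given key-value pair."""
    return [r for r in items
            if all(r.get("meta", {}).get(k) == v for k, v in criteria.items())]
```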
Common metadata use cases include categorizing rating types (technical, usability, content quality), tracking rating sources (automated testing, user feedback, internal review), and associating ratings with specific feature areas or business metrics.
Warning: Rating data accumulates over time and can become substantial. Use filtering and pagination effectively to maintain query performance and avoid retrieving unnecessary data. Consider implementing date range filters through metadata when analyzing time-specific feedback patterns.
Creating Ratings
Creating a rating captures structured feedback about bot interactions, conversations, or specific messages, enabling comprehensive performance tracking and quality analysis across your conversational AI platform. Rather than a binary thumbs-up/thumbs-down, the creation endpoint accepts flexible numerical scores with optional qualitative context.
To create a new rating, send a POST request with the rating value and associated resource identifiers. At minimum, you must provide a numerical value representing the rating score. Optionally link the rating to specific contacts, bots, conversations, or messages for granular feedback tracking:
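A sketch of assembling such a request body, using the field names described below (value, reason, and the resource ID fields):

```python
import json

def create_rating_payload(value, reason=None, **resource_ids):
    """Assemble the creation body; only value is required."""
    body = {"value": value}
    if reason is not None:
        body["reason"] = reason
    body.update({k: v for k, v in resource_ids.items() if v is not None})
    return json.dumps(body)

# Rate a specific message positively, with qualitative context:
payload = create_rating_payload(100, reason="Accurate, concise answer",
                                messageId="msg_abc", botId="bot_123")
```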
Understanding Rating Values
The value field accepts any numerical value, providing flexibility for different rating scales and methodologies. Common patterns include:
- Binary feedback: Use -100 (negative) and 100 (positive) for simple good/bad ratings
- Five-star equivalent: Use -100, -50, 0, 50, 100 for five-point scales
- NPS-style: Use values from -100 to 100 for Net Promoter Score tracking
- Custom metrics: Define your own scale matching internal quality standards
The numerical approach enables sophisticated analytics including trend analysis, average performance calculation, and statistical quality tracking that would be difficult with categorical feedback alone.
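For example, a five-star UI can be mapped onto the five-point pattern above with simple arithmetic (a hypothetical client-side helper, not part of the API):

```python
def stars_to_value(stars):
    """Map a 1-5 star rating onto the -100..100 five-point scale."""
    if not 1 <= stars <= 5:
        raise ValueError("stars must be between 1 and 5")
    return (stars - 3) * 50  # 1 -> -100, 3 -> 0, 5 -> 100
```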
Resource Association
Link ratings to specific platform resources for targeted feedback analysis:
- contactId: Associate with a specific contact to track satisfaction at the user level
- botId: Link to a bot for overall bot performance metrics
- conversationId: Connect to a conversation for session-level quality tracking
- messageId: Tie to a specific message for precise response evaluation
You can link to multiple resources simultaneously (e.g., both a conversation and a specific message within it) to enable multi-dimensional analysis of feedback patterns.
Providing Context with Reasons
The optional reason field captures qualitative context explaining the numerical rating. This text field helps you understand the "why" behind feedback scores, enabling meaningful improvements to your conversational AI:
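For instance, a negative rating paired with a concrete, actionable reason might look like this (the identifier is a placeholder):

```python
# Illustrative request body pairing a low value with a concrete reason.
rating_body = {
    "value": -100,
    "reason": "Bot quoted last year's pricing instead of the current plan",
    "messageId": "msg_abc",  # placeholder identifier
}
```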
Detailed reasons transform numerical ratings into actionable insights, helping you identify specific improvement areas, recurring issues, and patterns that require attention. Consider establishing reason categorization standards across your team for consistent feedback analysis.
Metadata and Organization
Use the meta field to attach custom attributes for sophisticated rating organization and analysis. Common metadata patterns include:
- Category tags: {"category": "accuracy", "subcategory": "factual"}
- Source tracking: {"source": "automated_test", "testId": "acc_001"}
- Business context: {"department": "support", "priority": "high"}
- Time-based context: {"businessHours": true, "peakTime": false}
Metadata provides the flexibility to implement custom rating taxonomies matching your organization's specific quality metrics and reporting needs.
Response:
The API returns the newly created rating's unique identifier upon successful creation. Store this ID if you need to update or reference the rating later.
Best Practices:
- Consistency: Establish clear rating value conventions across your organization for meaningful comparative analysis
- Context: Always provide reasons for extreme ratings (very positive or very negative) to capture actionable insights
- Timeliness: Create ratings promptly after interactions while context is fresh and accurate
- Association: Link ratings to the most specific resource available (e.g., messageId rather than just conversationId) for precise feedback tracking
- Automation: Consider implementing automated rating creation for quality assurance testing and continuous performance monitoring
Important Considerations:
Rating data accumulates over time and becomes a valuable analytics asset. Plan your rating strategy carefully, establishing clear conventions for value scales, reason formats, and metadata structure before widespread implementation. Consistent rating patterns enable meaningful trend analysis and performance comparisons across bots, time periods, and use cases.
Fetching a Rating
Retrieve detailed information about a specific rating using its unique identifier. This operation returns all rating data including the numerical value, associated reason, linked resources, and metadata, enabling detailed review and analysis of individual feedback records.
To fetch a rating, send a GET request with the rating ID:
Replace {ratingId} with your rating's unique identifier (format: rtg_abc123xyz).
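A helper for building the fetch URL, assuming the resource lives under a /ratings path (the exact route depends on your deployment and is an assumption here):

```python
def fetch_rating_url(rating_id):
    # Hypothetical path layout based on the {ratingId} placeholder above.
    return f"https://api.example.com/v1/ratings/{rating_id}"
```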
Response Structure
The endpoint returns comprehensive rating information including all contextual data needed to understand and analyze the feedback:
Understanding Rating Context
The fetched rating includes several fields providing context about what was rated and why:
- value: The numerical rating score using your chosen scale
- reason: Optional qualitative explanation for the rating
- Resource links: IDs connecting the rating to specific contacts, bots, conversations, or messages
- name/description: Optional human-readable labels for rating organization
- meta: Custom attributes for flexible categorization and analysis
- timestamps: Creation and last update times for tracking rating history
All resource ID fields (contactId, botId, conversationId, messageId) may be null if the rating wasn't explicitly linked to those resources during creation.
Use Cases for Fetching Ratings
Retrieving individual ratings supports several analytical workflows:
- Quality review: Examine detailed feedback including reasons and context for understanding specific quality issues
- Performance audits: Review ratings associated with particular bots, conversations, or time periods
- Follow-up actions: Access rating details to inform improvement initiatives or customer outreach
- Reporting: Pull specific rating data for inclusion in dashboards, reports, or presentations
- Debugging: Investigate rating-related issues or verify rating data accuracy
Integration with Analytics
Use fetch operations in combination with list operations for comprehensive analytics workflows. First, query ratings list with filters to identify ratings of interest, then fetch individual ratings to access complete details for deeper analysis:
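The two steps can be sketched with stubbed API calls (list_ratings and fetch_rating stand in for real HTTP requests over illustrative data):

```python
# Stubbed data standing in for API responses.
SUMMARIES = [{"id": "rtg_1", "value": -100}, {"id": "rtg_2", "value": 100}]
DETAILS = {"rtg_1": {"id": "rtg_1", "value": -100, "reason": "Wrong answer"}}

def list_ratings(max_value=None):
    """Step 1: filtered list returning summary records only."""
    return [r for r in SUMMARIES if max_value is None or r["value"] <= max_value]

def fetch_rating(rating_id):
    """Step 2: fetch the complete record for one rating of interest."""
    return DETAILS.get(rating_id)

negative = list_ratings(max_value=0)
full = [fetch_rating(r["id"]) for r in negative]
```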
This two-step pattern enables efficient data access, retrieving summary information for many ratings while accessing full details only when needed.
Important: Rating fetch requires proper authorization. You can only retrieve ratings that belong to your user account. Attempting to fetch ratings owned by other users will result in an authorization error.
Updating a Rating
Modify an existing rating to reflect changed assessments, add additional context, or update resource associations. The update operation provides flexibility to revise ratings as situations evolve, new information becomes available, or initial assessments require refinement.
To update a rating, send a POST request with the rating ID and updated fields:
Replace {ratingId} with your rating's unique identifier. All fields are optional—include only the properties you want to modify.
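The request shape can be sketched as a URL plus a partial body; the URL pattern is an assumption, and the body carries only the fields being changed:

```python
def update_rating_request(rating_id, **fields):
    """Build the hypothetical update request: URL plus a partial body."""
    url = f"https://api.example.com/v1/ratings/{rating_id}"
    body = dict(fields)  # only the properties being changed
    return url, body

url, body = update_rating_request("rtg_abc123xyz", value=50,
                                  reason="Revised after QA review")
```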
When to Update Ratings
Several scenarios justify rating updates rather than creating new ratings:
- Quality review refinement: Adjusting ratings after human review or quality assurance processes identify different perspectives
- Context changes: Updating when additional information becomes available that changes the assessment
- Reason elaboration: Adding more detailed explanations or context to existing ratings
- Resource association: Linking ratings to additional resources (e.g., adding a botId after initial message-only rating)
- Metadata enrichment: Adding categorization, tags, or other metadata after initial creation
- Error correction: Fixing mistakes in original rating values or associations
Updates preserve the original rating ID and creation timestamp while updating the updatedAt field, maintaining a clear audit trail of when changes occurred.
Updating Rating Values
Change the numerical rating score to reflect revised assessments:
When updating values, consider including updated reasons that explain both the new rating and why it changed from the previous value. This creates valuable context for anyone reviewing rating history and helps maintain confidence in your feedback data quality.
Managing Resource Associations
Update which resources a rating is associated with, useful when initial context was incomplete or when reorganizing feedback structure:
You can add new associations (providing IDs for previously null fields), change existing associations (replacing IDs with new ones), or remove associations by setting fields to null. This flexibility enables rating reorganization as your understanding of feedback context evolves.
Enriching with Metadata
Add or update custom metadata to enhance rating organization and analysis capabilities. The update operation merges new metadata with existing data, preserving untouched fields:
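The documented merge semantics can be modeled as a dictionary merge: keys present in the update overwrite matching keys, and everything else is preserved:

```python
def merge_meta(existing, updates):
    """Sketch of the documented merge semantics for the meta field."""
    merged = dict(existing)
    merged.update(updates)  # updated keys win; untouched keys survive
    return merged

current = {"category": "accuracy", "reviewed": False}
merged = merge_meta(current, {"reviewed": True, "reviewer": "qa_team"})
```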
Metadata updates enable progressive enrichment of ratings over time as they move through review workflows, get categorized into reporting structures, or accumulate additional context from various analysis processes.
Partial Updates
The update endpoint supports partial updates—you only need to include fields you want to change. Omitted fields retain their existing values:
This partial update approach enables targeted modifications without requiring you to re-specify unchanged data, reducing update complexity and minimizing the risk of unintentional modifications.
Response:
The API returns the updated rating's ID confirming successful modification. Use the fetch endpoint to retrieve the complete updated rating data if needed.
Best Practices:
- Document changes: When significantly modifying ratings, update the reason field to explain what changed and why
- Preserve history: Consider storing update context in metadata rather than replacing reason text entirely
- Avoid excessive updates: Frequent rating changes can indicate unclear rating criteria—establish clear standards to minimize post-creation revisions
- Audit trails: Use metadata to track update history, reviewers, and approval workflows
Important: Rating updates affect analytics and reporting. Consider how rating modifications impact historical analysis, trend tracking, and performance metrics. In some cases, creating a new rating with updated values may be preferable to modifying existing ratings, especially when maintaining temporal accuracy is critical for your analysis needs.
Deleting a Rating
Permanently remove a rating from your account when it's no longer needed, was created in error, or requires removal for data management purposes. The delete operation irreversibly removes all rating data including the value, reason, resource associations, and metadata.
To delete a rating, send a POST request with the rating ID:
Replace {ratingId} with your rating's unique identifier. The operation requires an empty JSON body but must include the Content-Type header.
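The request shape, sketched with a hypothetical route (consult your API reference for the real path):

```python
def delete_rating_request(rating_id):
    """Deletion call shape: empty JSON body with an explicit Content-Type."""
    url = f"https://api.example.com/v1/ratings/{rating_id}/delete"  # hypothetical route
    headers = {"Content-Type": "application/json"}
    body = "{}"  # empty JSON body, as required
    return url, headers, body

url, headers, body = delete_rating_request("rtg_abc123xyz")
```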
When to Delete Ratings
Rating deletion is appropriate in specific scenarios:
- Erroneous creation: Removing ratings created by mistake or with incorrect data
- Test data cleanup: Removing ratings created during testing or development
- Data privacy compliance: Fulfilling data deletion requests or privacy regulations
- Duplicate removal: Cleaning up accidentally duplicated rating entries
- Invalid feedback: Removing ratings that don't meet your quality standards or were created under unusual circumstances
Consider carefully whether deletion is necessary versus updating the rating or marking it as invalid through metadata. Deletion removes historical data that might have analytical value even if initially assessed incorrectly.
Alternative to Deletion: Deactivation
Instead of deleting ratings, consider using metadata to mark them as inactive or invalid while preserving the historical record:
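A soft-delete sketch: the rating is flagged through meta and then excluded client-side. Field names such as active are conventions you would define yourself, not API requirements:

```python
# Flag instead of delete; these meta keys are your own convention.
deactivation_meta = {"active": False, "deactivatedReason": "duplicate entry"}

def active_only(ratings):
    """Exclude ratings explicitly marked inactive in their metadata."""
    return [r for r in ratings if r.get("meta", {}).get("active", True)]

sample = [
    {"id": "rtg_1", "meta": deactivation_meta},
    {"id": "rtg_2", "meta": {}},
]
```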
This approach maintains data integrity for historical analysis while allowing you to filter out invalid ratings from active reports and analytics. Your list operations can exclude deactivated ratings using metadata filters, achieving similar practical results to deletion while preserving the complete historical record.
Impact on Analytics
Deleting ratings affects historical analytics and performance metrics:
- Aggregate calculations: Average ratings, total counts, and distribution metrics change immediately upon deletion
- Trend analysis: Historical trends and time-series data lose data points
- Performance tracking: Bot or conversation performance metrics are recalculated without the deleted rating
- Reporting accuracy: Existing reports or dashboards referencing deleted ratings become incomplete
If ratings contribute to published reports, shared dashboards, or compliance documentation, consider the downstream impact of deletion on those systems. In regulated environments or audit scenarios, marking ratings as invalid through metadata may be preferable to permanent deletion.
Deletion Scope and Permanence
Rating deletion is immediate and permanent:
- No recovery: Deleted ratings cannot be restored. There is no "undelete" operation
- Complete removal: All data associated with the rating (value, reason, associations, metadata) is permanently deleted
- No cascade effects: Deleting a rating doesn't affect the associated resources (bots, conversations, messages, contacts) which remain unchanged
Ensure you have backups or exports of important rating data before deletion if there's any possibility you might need the information later.
Bulk Deletion Workflow
For deleting multiple ratings, combine list operations with individual deletions:
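The loop can be sketched as follows; delete_rating stands in for the real API call and simulates one failure to show the independent error handling:

```python
# Bulk deletion sketch; delete_rating stands in for the real API call.
def delete_rating(rating_id):
    if rating_id == "rtg_missing":
        raise KeyError(rating_id)  # simulate a not-found / authorization failure
    return rating_id

def bulk_delete(rating_ids):
    """Delete each rating independently, collecting successes and failures."""
    deleted, failed = [], []
    for rid in rating_ids:
        try:
            deleted.append(delete_rating(rid))
        except Exception:
            failed.append(rid)  # one failure must not abort the batch
    return deleted, failed
```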
Implement bulk deletion carefully with proper error handling, as each deletion is independent and some may succeed while others fail due to authorization or existence issues.
Response:
The API returns the deleted rating's ID confirming successful removal. After receiving this response, the rating no longer exists and cannot be fetched, updated, or referenced.
Best Practices:
- Verify before deletion: Fetch the rating first to confirm you're deleting the correct record
- Export important data: If ratings have analytical value, export before deletion for archival purposes
- Consider alternatives: Evaluate whether deactivation through metadata is more appropriate than permanent deletion
- Batch carefully: When deleting multiple ratings, implement proper error handling and logging
- Document deletion rationale: Maintain logs of why ratings were deleted for audit and compliance purposes
Warning: Rating deletion is irreversible. Once deleted, the rating data is permanently lost and cannot be recovered. Ensure you have proper backups or exports of any rating data that might be needed for future analysis, compliance, or audit purposes before performing deletion operations.
Exporting Ratings
Export your rating data in bulk for comprehensive analysis, reporting, or archival purposes. The export operation provides access to your complete rating history with the same powerful filtering capabilities available in the list operation, but optimized for large-scale data retrieval.
Exports include all rating fields such as value, reason, timestamps, and associated resource identifiers (contact, bot, conversation, message). The operation returns data in a format suitable for import into spreadsheet applications, business intelligence tools, or custom analytics platforms.
Filtering Export Data
Apply the same filtering capabilities available in the list operation to control which ratings are included in your export. This enables targeted analysis such as exporting all negative ratings for a specific time period, all ratings for a particular bot, or ratings matching specific metadata criteria:
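As a post-processing sketch, exported rating records can be written to CSV for spreadsheet or BI consumption (the field names match those described in this guide; sample data is illustrative):

```python
import csv
import io

def export_csv(ratings, fields=("id", "value", "reason")):
    """Serialize fetched ratings to CSV; fields outside the list are ignored."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(fields), extrasaction="ignore")
    writer.writeheader()
    writer.writerows(ratings)
    return out.getvalue()

csv_text = export_csv([
    {"id": "rtg_1", "value": 100, "reason": "Helpful", "meta": {"month": "06"}},
    {"id": "rtg_2", "value": -50, "reason": "Slow"},
])
```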
Common export scenarios include generating monthly feedback reports, analyzing rating trends over time, identifying patterns in negative feedback, comparing performance across different bots, and creating compliance or audit documentation.
Metadata in Exports
Metadata fields are included in exports and can be used for filtering, enabling rich categorization and analysis of exported data. Structure your metadata consistently to facilitate automated processing and reporting of exported rating data:
Consider using metadata flags like exported, processed, or reviewed to track which ratings have been included in previous exports or analysis cycles. This helps maintain data integrity and prevents duplicate processing in recurring export workflows.
Performance Note: Export operations may take longer than list operations when retrieving large volumes of rating data. For optimal performance, use filtering parameters to limit exports to specific time periods or resources rather than exporting entire rating histories unnecessarily.