
User Safety with the New Moderation Notification Feature

A significant update to our moderation features that provides real-time notifications for abusive language, indications of self-harm, and other concerning messages within conversations.

We are proud to announce a significant update to our moderation features that provides real-time notifications for abusive language, indications of self-harm, and other concerning messages within conversations. This enhancement is part of our ongoing commitment to ensuring a safe and supportive environment for all users. With the activation of this monitoring service, ChatBotKit takes a proactive stance in identifying and alerting stakeholders about potential harm, thereby facilitating timely interventions.

In the digital age, the safety and well-being of users are paramount. Recognizing the critical role that timely information plays in safeguarding users, ChatBotKit has developed a system that automatically detects harmful content within conversations and sends immediate notifications to the appropriate parties. This feature is designed to assist in the early detection of situations that may pose a risk to individuals, especially those vulnerable to self-harm, by providing an additional layer of oversight and support.

Key Benefits of the New Notification Feature:

  • Early Detection: Timely alerts about harmful content enable quick response to potentially dangerous situations, helping to prevent escalation.
  • Enhanced Safety: By monitoring for abusive language and signs of self-harm, ChatBotKit helps create a safer environment for users to interact and communicate.
  • Support for Moderators: This feature acts as a valuable tool for moderators and administrators, aiding in the efficient management of community standards and user safety.

How It Works:

Once activated, the moderation feature continuously monitors messages exchanged in the conversation. If the system detects content that falls within predefined criteria for harmful behavior or language, it automatically triggers a notification to the designated contacts. This immediate alert includes relevant details to assist in evaluating the situation and deciding on the appropriate course of action.
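To illustrate how such an alert might be consumed on the receiving side, here is a minimal sketch of a webhook-style handler written in TypeScript with Express. The endpoint path, payload shape, and the notifyModerators helper are illustrative assumptions for this example only and do not represent ChatBotKit's documented API; consult the documentation for the actual notification format.

```typescript
// Hypothetical receiver for moderation alerts.
// Field names and the endpoint path are assumptions for illustration.
import express from 'express'

interface ModerationAlert {
  conversationId: string
  messageId: string
  category: 'abusive-language' | 'self-harm' | 'other'
  excerpt: string
  detectedAt: string
}

const app = express()
app.use(express.json())

app.post('/webhooks/moderation-alert', (req, res) => {
  const alert = req.body as ModerationAlert

  // Forward the alert to the designated contacts so they can evaluate
  // the situation and decide on the appropriate course of action.
  notifyModerators(
    `Moderation alert (${alert.category}) in conversation ` +
      `${alert.conversationId}: "${alert.excerpt}" at ${alert.detectedAt}`
  )

  // Acknowledge receipt so the sender does not retry delivery.
  res.sendStatus(200)
})

// Placeholder escalation logic; replace with email, chat, or paging.
function notifyModerators(message: string): void {
  console.log(message)
}

app.listen(3000)
```

In practice, the handler would typically route alerts by category, for example paging an on-call moderator for self-harm indications while queueing abusive-language flags for routine review.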

At ChatBotKit, we believe in leveraging technology to not only enhance user experiences but also to contribute to a safer digital ecosystem. The introduction of real-time notifications for harmful content is a reflection of our dedication to user welfare and our commitment to providing tools that support the health and safety of communities.

The new notification feature for moderation is now available, marking a crucial step forward in ChatBotKit’s efforts to promote digital well-being. This update underscores our pledge to empower users and administrators with the resources they need to maintain safe and respectful online spaces.

For more information about ChatBotKit and the enhanced moderation feature, please visit our documentation.