What are we launching?
We've introduced new moderation features in CometChat SDKs and UI Kits, giving developers out-of-the-box tools for handling inappropriate or harmful content without building moderation systems from scratch. With shadow blocking, messages can be blocked silently without the sender knowing, and developers can also choose to display in the chat UI that a message has been blocked.
With this release, developers can:
Block harmful messages silently with shadow blocking.
Show users when their messages are blocked (and explain why).
Ensure moderation works consistently across platforms and message types.
Why does this matter to me, as a developer?
Building chat is one thing; keeping it safe and transparent at scale is another. Without built-in moderation, developers often need to:
Write custom logic to filter harmful content.
Maintain moderation queues and review dashboards.
Manage inconsistent user experiences across platforms.
These tasks are time-consuming and error-prone. The new moderation UI Kits and SDK features handle these concerns for you so that you can focus on your app's unique logic instead of rebuilding standard moderation workflows.

Blocked Message Notifications
When a message violates community guidelines (e.g., spam, offensive language, disallowed media), the system blocks it.
The sender receives a clear notification that includes the reason, for example:
"This message was blocked due to harmful content."
This prevents confusion for users and reduces support tickets like "Why didn't my message send?"
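To make this concrete, here is a minimal sketch of how a blocked-message notice might be rendered in a React app. The BlockedMessageInfo shape and the component are illustrative assumptions, not the actual UI Kit API; the UI Kit ships its own ready-made component for this.

```tsx
import * as React from "react";

// Assumed shape of the blocked-message details; the real CometChat
// payload may use different field names.
interface BlockedMessageInfo {
  messageId: string;
  reason: string; // e.g. "harmful content", "spam"
}

// Renders the notice the sender sees in place of the blocked message.
function BlockedMessageNotice({ info }: { info: BlockedMessageInfo }) {
  return <div role="alert">This message was blocked due to {info.reason}.</div>;
}
```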
Cross-Platform Support
Moderation UI Kits are available across:
Web: React
Mobile: iOS, Android, Flutter
This ensures the same moderation flow works regardless of the client.
Multi-Message Type Support
Moderation isn't limited to text. The system can evaluate and block:
Text messages (e.g., profanity, hate speech)
Images (e.g., inappropriate pictures)
Videos
Custom messages (structured payloads defined by your app)
This ensures all communication channels in your app are covered.
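As a rough illustration of what "all message types go through the same check" means, the sketch below models the four categories as one union type feeding a single moderation call. The type names, field names, and endpoint are assumptions for illustration; the actual evaluation happens in CometChat's moderation service, not in your client code.

```ts
// Illustrative union of the message categories the service can evaluate.
// Field names are assumptions, not the actual CometChat message shapes.
type ModeratedMessage =
  | { kind: "text"; text: string }
  | { kind: "image"; url: string }
  | { kind: "video"; url: string }
  | { kind: "custom"; payload: Record<string, unknown> };

interface Verdict {
  blocked: boolean;
  reason?: string;
}

// Every category flows through the same entry point, so text, media,
// and custom payloads are all covered by one moderation pipeline.
async function moderate(message: ModeratedMessage): Promise<Verdict> {
  // Placeholder endpoint; in the real product the decision is made
  // server-side by the AI moderation service.
  const response = await fetch("/api/moderation/evaluate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(message),
  });
  return (await response.json()) as Verdict;
}
```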
What's interesting about the tech?
At its core, moderation relies on two components:
An AI moderation service that analyzes content and determines whether it should be blocked.
A UI Kit integration that provides ready-made components for blocked-message notifications and reporting workflows.
Rather than requiring you to write manual regex filters or integrate third-party moderation APIs directly, CometChat SDKs handle the decision-making and expose callbacks/events for moderation actions.
For example, when a message is blocked:
The SDK emits an event with details (blockedMessage, reason).
The UI Kit component listens to this event and renders the appropriate UI.
This separation means you can customize the experience if needed, but the defaults work out of the box.
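A sketch of that flow, assuming a hypothetical event shape and listener registration; the real SDK method and event names may differ, so treat this as the shape of the pattern, not an API reference.

```ts
// Assumed event shape based on the fields mentioned above
// (blockedMessage, reason); the real payload may differ.
interface MessageBlockedEvent {
  blockedMessage: { id: string; type: string };
  reason: string;
}

// A custom handler: the UI Kit's default component does the equivalent
// of this automatically, rendering the notice for you.
function handleMessageBlocked(event: MessageBlockedEvent): void {
  console.log(`Message ${event.blockedMessage.id} blocked: ${event.reason}`);
}

// Registration is SDK-specific; something like the following, with the
// actual method name taken from the Moderation Guide:
// sdk.addModerationListener({ onMessageBlocked: handleMessageBlocked });
```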
Example Scenarios
Group chat with profanity filter
A user sends "You are [expletive]!"
The message is blocked.
Sender sees: "This message was blocked due to offensive language."
Other users never see the message.
Image sharing in a dating app
A user uploads an inappropriate image.
The image is blocked.
Sender is notified with "This image violates content guidelines."
Moderators can review the flagged content later.
User-driven reporting in an education app
A student receives spam messages.
They click "Report" with reason "Spam."
Moderator dashboard shows the report for action.
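The reporting flow in that last scenario could look like the following sketch. The report shape and endpoint are assumptions for illustration; in a real integration you would use the SDK's own reporting method described in the Moderation Guide.

```ts
// Hypothetical report payload; field names are illustrative.
interface MessageReport {
  messageId: string;
  reporterUid: string;
  reason: "Spam" | "Harassment" | "Other";
}

// Submits the report so it appears on the moderator dashboard.
async function submitReport(report: MessageReport): Promise<void> {
  // Placeholder endpoint standing in for the SDK's reporting call.
  await fetch("/api/moderation/reports", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
}

// Example: a student reports a spam message.
void submitReport({
  messageId: "msg-123",
  reporterUid: "student-42",
  reason: "Spam",
});
```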
Reference Links
CometChat Documentation – Moderation Guide
Nivedita Bharathy
Product Marketing Specialist, CometChat
