CometChat allows you to integrate your own moderation logic using a Custom API. With this feature, you can define a webhook URL where CometChat will send messages for moderation along with relevant context from the conversation.

Integration

Step 1: Configure Custom API Settings

  1. Log in to the CometChat Dashboard
  2. Navigate to Moderation Settings
    • Go to Moderation → Settings in the left-hand menu.
  3. Open Custom API Settings Tab
    • Click on the Custom API tab within the Moderation Settings.
  4. Fill in the Custom API Configuration
    • Webhook URL
      • Enter the endpoint URL where CometChat will send messages for moderation.
    • Authentication (Optional)
      • Enable Basic Authentication to secure your webhook endpoint.
      • Provide a username and password that CometChat will include in the Authorization header.
    • Set Action on API Error
      • Define how the system should respond if the Custom API is unavailable:
        • Allow message – Messages are delivered even if moderation fails.
        • Block message – Messages are blocked when moderation is unavailable.
    • Set Context Window
      • Specify the number of previous messages in a conversation to include for context (0-10).
  5. Click Save Settings

Step 2: Create a Moderation Rule

  1. Navigate to Moderation → Rules.
  2. Click “Create New Rule”.
  3. Select Custom API as the moderation type.
  4. Choose the rule type:
    • Text Contains – For text message moderation
    • Image Contains – For image message moderation
  5. Configure the action to take when content is flagged (block, flag for review, etc.).
  6. Save the rule.

Webhook Request

Headers

When CometChat calls your webhook, it includes the following headers:
Header        | Description
Content-Type  | application/json
Authorization | Basic auth credentials (if configured)
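
If you enabled Basic Authentication in Step 1, your endpoint should verify these credentials before running any moderation logic. Below is a minimal sketch of an Express middleware that does this; the WEBHOOK_USER and WEBHOOK_PASS environment variables are placeholders for the username and password you configured in the dashboard.

// Sketch: verify the Basic auth credentials CometChat includes in the
// Authorization header. WEBHOOK_USER / WEBHOOK_PASS are assumed environment
// variables holding the values configured in Moderation → Settings → Custom API.
function verifyBasicAuth(req, res, next) {
  const header = req.headers.authorization || '';
  const [scheme, encoded] = header.split(' ');

  if (scheme !== 'Basic' || !encoded) {
    return res.status(401).json({ error: 'Missing Basic auth credentials' });
  }

  const decoded = Buffer.from(encoded, 'base64').toString('utf8');
  const separator = decoded.indexOf(':');
  const user = decoded.slice(0, separator);
  const pass = decoded.slice(separator + 1);

  if (user !== process.env.WEBHOOK_USER || pass !== process.env.WEBHOOK_PASS) {
    return res.status(401).json({ error: 'Invalid credentials' });
  }

  next(); // Credentials match, continue to the moderation handler.
}

You can then register it on the route, for example app.post('/moderate', verifyBasicAuth, handler).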

Payload

The payload includes:
  • The latest message (the one just sent) — provided in full detail (entire message object)
  • The previous messages — provided as plain text only, for context (based on the context window setting)
{
  "contextMessages": [
    {
      "cometchat-uid-1": "Hello there!"
    },
    {
      "cometchat-uid-2": "Hey, how are you?"
    },
    {
      "cometchat-uid-1": "Let's team up."
    },
    {
      "cometchat-uid-2": {
        "id": "30431",
        "muid": "_r49ocm6oj",
        "conversationId": "cometchat-uid-1_user_cometchat-uid-2",
        "sender": "cometchat-uid-1",
        "receiverType": "user",
        "receiver": "cometchat-uid-2",
        "category": "message",
        "type": "text",
        "data": {
          "text": "ok",
          "resource": "WEB-4_0_10-04aecbad-8354-4fc8-98df-d0119e1a9539-1747717193939",
          "entities": {
            "sender": {
              "entity": {
                "uid": "cometchat-uid-1",
                "name": "Andrew Joseph",
                "avatar": "https://data-us.cometchat-staging.com/assets/images/avatars/andrewjoseph.png",
                "status": "available",
                "role": "default",
                "lastActiveAt": 1747717203
              },
              "entityType": "user"
            },
            "receiver": {
              "entity": {
                "uid": "cometchat-uid-2",
                "name": "George Alan",
                "avatar": "https://data-us.cometchat-staging.com/assets/images/avatars/georgealan.png",
                "status": "offline",
                "role": "default",
                "lastActiveAt": 1721138868,
                "conversationId": "cometchat-uid-1_user_cometchat-uid-2"
              },
              "entityType": "user"
            }
          },
          "moderation": {
            "status": "pending"
          }
        },
        "sentAt": 1747717214,
        "updatedAt": 1747717214
      }
    }
  ]
}
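
In other words, every entry in contextMessages is keyed by a user ID: the earlier entries map that ID to plain message text, while the last entry maps it to the full message object. Here is a hedged sketch of splitting the two, assuming the payload shape shown above (the splitPayload helper name is ours, not part of the CometChat payload):

// Sketch: separate the plain-text context from the full latest-message object.
function splitPayload(contextMessages) {
  const latestEntry = contextMessages[contextMessages.length - 1];
  const latestUid = Object.keys(latestEntry)[0];
  const latestMessage = latestEntry[latestUid]; // full message object

  // Earlier entries are { uid: "text" } pairs and serve only as context.
  const context = contextMessages.slice(0, -1).map((entry) => {
    const uid = Object.keys(entry)[0];
    return { uid, text: entry[uid] };
  });

  return { context, latestMessage };
}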

Webhook Response

Your webhook must return a JSON response indicating the moderation decision.

When content violates rules

{
  "isMatchingCondition": true,
  "confidence": 0.95,
  "reason": "Contains hate speech"
}

When content is safe

{
  "isMatchingCondition": false,
  "confidence": 0.98,
  "reason": ""
}

Response Fields

Field               | Type    | Description
isMatchingCondition | boolean | true if the message violates the rule, false if safe
confidence          | number  | Confidence score of the decision (0.0 - 1.0)
reason              | string  | Reason for flagging (can be empty for safe content)
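
To keep responses consistent with this contract, a small builder can help. This is only a convenience sketch; the buildModerationResponse name and the clamping of confidence are our own choices, not something CometChat requires.

// Sketch: build a response object matching the fields described above.
function buildModerationResponse(isMatchingCondition, confidence, reason = '') {
  return {
    isMatchingCondition,
    // Clamp to the documented 0.0 - 1.0 range as a defensive measure.
    confidence: Math.min(1, Math.max(0, confidence)),
    // An empty reason is fine for safe content.
    reason: isMatchingCondition ? reason : ''
  };
}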

Example Webhook Implementation

Here’s a simple Node.js/Express example:
const express = require('express');
const app = express();

app.use(express.json());

app.post('/moderate', (req, res) => {
  const { contextMessages } = req.body;
  
  // Get the latest message (last item in array)
  const latestEntry = contextMessages[contextMessages.length - 1];
  const senderId = Object.keys(latestEntry)[0];
  const messageData = latestEntry[senderId];
  
  // Extract text content
  const text = typeof messageData === 'string' 
    ? messageData 
    : messageData.data?.text || '';
  
  // Your moderation logic here
  const isViolation = containsBadContent(text);
  
  res.json({
    isMatchingCondition: isViolation,
    confidence: 0.95,
    reason: isViolation ? 'Content policy violation' : ''
  });
});

function containsBadContent(text) {
  // Implement your moderation logic
  // Could call OpenAI Moderation API, Perspective API, etc.
  return false;
}

app.listen(3000);
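
The containsBadContent stub is where you would plug in a real classifier. As one hedged sketch, it could delegate the decision to the OpenAI Moderation API (this assumes Node 18+ for the global fetch and an OPENAI_API_KEY environment variable; check OpenAI's current documentation for the exact request and response shape):

// Sketch: delegate the decision to the OpenAI Moderation API.
async function containsBadContent(text) {
  const response = await fetch('https://api.openai.com/v1/moderations', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`
    },
    body: JSON.stringify({ input: text })
  });

  const result = await response.json();
  // results[0].flagged is true when any moderation category is triggered.
  return result.results?.[0]?.flagged ?? false;
}

Because this version is asynchronous, the route handler would need to be declared async and call await containsBadContent(text).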