
What you’ll build

  • An AG2 ConversableAgent with callable tools
  • The same agent connected to CometChat (Agent ID + Deployment URL)
  • A customized chat experience using UI Kit Builder
  • An export to React UI Kit code or Chat Widget for integration

Prerequisites

  • A CometChat account and an app: Create App
  • Python 3.10+ and pip
  • Environment variables:
    • OPENAI_API_KEY (required for AG2 LLM calls)
    • WEATHER_API_KEY (optional — enables real weather data in the sample tool)
  • Git (to clone the example project)

Step 1 - Create your CometChat app

1. Create or open an app

Sign in at app.cometchat.com. Create a new app or open an existing one.

2. Copy credentials

Note your App ID, Region, and Auth Key (needed if you export the Chat Widget later).

Step 2 - Connect your AG2 Agent

Navigate to AI Agent → Get Started and then AI Agents → Add Agent.
1. Choose provider

Select AG2 (AutoGen).

2. Basic details

Provide:
  • Name and optional Icon
  • (Optional) Greeting and Introductory Message
  • (Optional) Suggested messages such as “What’s the weather in Austin?”

3. AG2 configuration

Paste the following from your deployment:
  • Agent ID — for the sample project, use weather.
  • Deployment URL — the HTTPS endpoint that proxies to /agent on your server.

4. Save & enable

Click Save, then ensure the agent’s toggle is ON in the AI Agents list.
Tip: Keep your Deployment URL stable (e.g., https://your-domain.tld/agent). Update server logic without changing the URL to avoid reconfiguration.

Step 3 - Define Frontend Actions (Optional)

1. Add an action

Go to AI Agent → Actions and click Add to create a frontend action your agent can call (e.g., “Open Product,” “Start Demo,” “Book Slot”).

2. Define fields

Include:
  • Display Name — Shown to users (e.g., “Open Product Page”).
  • Execution Text — How the agent describes running it.
  • Name — A unique, code-friendly key (e.g., open_product).
  • Description — What the tool does and when to use it.
  • Parameters — JSON Schema describing inputs (the agent will fill these).

3. Validate inputs (schema)

Example parameters JSON:
{
  "type": "object",
  "required": ["location"],
  "properties": {
    "location": {
      "type": "string",
      "description": "City, zip code, or coordinates"
    }
  }
}
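
Whether you validate on the server before forwarding a tool call or in the client before executing it, the arguments should satisfy this schema. Below is a minimal Python sketch using the jsonschema package; the constant and function names are illustrative, not part of the sample project.

# Illustrative sketch: validate tool-call arguments against the schema above.
# Requires: pip install jsonschema
from jsonschema import ValidationError, validate

PARAMETERS_SCHEMA = {
    "type": "object",
    "required": ["location"],
    "properties": {
        "location": {
            "type": "string",
            "description": "City, zip code, or coordinates",
        }
    },
}

def validate_tool_args(args: dict) -> bool:
    """Return True if the agent-supplied arguments satisfy the schema."""
    try:
        validate(instance=args, schema=PARAMETERS_SCHEMA)
        return True
    except ValidationError:
        return False
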
4. Handle in your UI

Listen for tool calls at runtime and execute them client-side (for example, route changes, opening dashboards, or highlighting UI elements).

Step 4 - Customize in UI Kit Builder

1. Open variant

From AI Agents, click the variant (or Get Started) to enter UI Kit Builder.

2. Customize & Deploy

Select Customize and Deploy.

3. Adjust settings

Adjust the theme, layout, and features, and make sure the AG2 agent is attached to the variant.

4. Preview

Use the live preview to validate responses and tool triggers.

Step 5 - Export & Integrate

Choose how you’ll ship the experience (Widget or React UI Kit export).
The AG2 agent from Step 2 is included automatically in exported variants—no extra code needed for basic conversations.
1. Decide delivery mode

Pick the Chat Widget (fastest) or export the React UI Kit for code-level customization.

2. Widget path

Open UI Kit Builder → Get Embedded Code and copy the embed script and credentials.

3. React UI Kit path

Export the variant as code (UI Kit) if you need deep theming or custom logic.

4. Verify agent inclusion

Run a preview; the AG2 agent should appear without extra configuration.

Step 6 - Run Your AG2 Agent (Reference)

  1. git clone https://github.com/cometchat/ai-agent-ag2-examples.git
  2. cd ai-agent-ag2-examples/ag2-cometchat-agent
  3. python -m venv .venv && source .venv/bin/activate (or .venv\Scripts\activate on Windows)
  4. pip install -r requirements.txt

Create a .env file:

OPENAI_API_KEY=sk-your-key
# Optional: enables live data from OpenWeatherMap
WEATHER_API_KEY=your-openweather-key

Without WEATHER_API_KEY, the tool still returns stubbed error messages that the agent can surface gracefully.
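
For reference, a weather tool with this fallback behavior might look like the sketch below. It assumes the OpenWeatherMap current-weather endpoint and the requests package; the function is illustrative, not the sample repo’s exact code.

# Illustrative sketch of the weather tool with an optional-API-key fallback.
import os
from typing import Any, Dict

import requests

def get_weather(location: str) -> Dict[str, Any]:
    """Return current weather for a location, or a stubbed error without an API key."""
    api_key = os.getenv("WEATHER_API_KEY")
    if not api_key:
        # No key configured: return a stub the agent can explain to the user.
        return {"error": "WEATHER_API_KEY is not set; live weather data is unavailable."}
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": location, "appid": api_key, "units": "metric"},
        timeout=10,
    )
    if resp.status_code != 200:
        return {"error": f"Weather lookup failed with status {resp.status_code}."}
    data = resp.json()
    return {
        "location": location,
        "description": data["weather"][0]["description"],
        "temperature_c": data["main"]["temp"],
    }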

# agent.py (excerpt)
from typing import Annotated, Any, Dict

from autogen import ConversableAgent, LLMConfig
llm_config = LLMConfig(
    config_list=[{
        "api_type": "openai",
        "model": "gpt-4o-mini",
        "api_key": self.openai_api_key,
    }],
    temperature=0.7,
)

self.agent = ConversableAgent(
    name="WeatherAssistant",
    system_message="You are a helpful assistant that can provide weather information...",
    llm_config=llm_config,
    human_input_mode="NEVER",
)

@self.agent.register_for_llm(description="Get the current weather for a location")
def get_weather(location: Annotated[str, "City name, zip code, or coordinates"]) -> Dict[str, Any]:
    # Delegate to the class helper that calls the weather API (or a stub without a key).
    return self.get_weather(location)

The agent streams Server-Sent Events (SSE) with tool call telemetry and message chunks so CometChat can render partial replies in real time.
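
As a rough sketch, an endpoint like that can be built with FastAPI’s StreamingResponse. Everything below is illustrative rather than the sample repo’s exact server.py, and run_agent is a placeholder for the code that drives the ConversableAgent.

# Illustrative sketch of an SSE endpoint (not the sample repo’s exact server.py).
import json

from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

app = FastAPI()

async def run_agent(payload: dict):
    # Placeholder: the real implementation would drive the ConversableAgent and
    # yield message chunks plus tool-call telemetry as they are produced.
    yield {"role": "assistant", "content": "(streamed reply would appear here)"}

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.post("/agent")
async def agent_endpoint(request: Request) -> StreamingResponse:
    payload = await request.json()

    async def event_stream():
        async for event in run_agent(payload):
            # Each SSE event is a "data:" line followed by a blank line.
            yield f"data: {json.dumps(event)}\n\n"

    return StreamingResponse(event_stream(), media_type="text/event-stream")

With a server like this in place, run it locally and exercise the endpoint: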

  1. uvicorn server:app --reload --host 0.0.0.0 --port 8000
  2. Verify health: curl http://localhost:8000/health
  3. Trigger a message (SSE response):
curl -N -X POST http://localhost:8000/agent \
  -H "Content-Type: application/json" \
  -d '{
    "thread_id": "demo-thread",
    "run_id": "demo-run",
    "messages": [
      { "role": "user", "content": "What is the weather in Austin?" }
    ],
    "tools": []
  }'

Use a tunneling tool (ngrok, Cloudflare Tunnel, etc.) to create the public Deployment URL CometChat needs.

  • Configure logging, rate limiting, and auth (API key/JWT) on the /agent route; a minimal auth sketch follows this list.
  • Store secrets in server-side env vars only; never expose them in client code.
  • Namespace tool calls and sanitize user input before hitting external APIs.
  • Scale the FastAPI app behind your preferred hosting (Render, Fly.io, Vercel functions, etc.).
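
For example, a simple shared-secret check can be attached to the /agent route with a FastAPI dependency. The sketch below is illustrative: AGENT_API_KEY and the X-Api-Key header are assumptions, not part of the sample project, and whatever calls your Deployment URL would need to send the matching header.

# Illustrative sketch: shared-secret auth on the /agent route.
import os

from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

def require_api_key(x_api_key: str = Header(default="")) -> None:
    # AGENT_API_KEY is an assumed server-side secret, configured like OPENAI_API_KEY.
    expected = os.getenv("AGENT_API_KEY", "")
    if not expected or x_api_key != expected:
        raise HTTPException(status_code=401, detail="Invalid or missing API key")

@app.post("/agent", dependencies=[Depends(require_api_key)])
async def agent_endpoint(payload: dict) -> dict:
    # ...existing agent logic / SSE streaming goes here...
    return {"status": "accepted"}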

Test your setup

1. Enable the agent

In AI Agents, ensure your AG2 agent shows Enabled.

2. Preview in UI Kit Builder

Open UI Kit Builder and start a preview session.

3. Validate conversation

Send a message; confirm the agent responds.

4. Test actions

Trigger a Frontend Action and verify your UI handles the tool call.

Troubleshooting

  • Verify your Deployment URL is publicly reachable (no VPN/firewall).
  • Check server logs for 4xx/5xx errors or missing API keys.
  • Confirm the Action’s Name matches the tool name emitted by AG2.
  • Ensure your agent registers tools via register_for_llm and proxies execution.
  • Use authKey only for development. For production, implement a secure token flow for user login.