API Documentation
Analyze user-generated content with advanced AI moderation. Detect harmful content across multiple categories with high accuracy and low latency.
Introduction
The Moderation API is a REST API that helps you automatically detect and filter harmful content in user-generated text. Built on REST principles, the API enforces HTTPS for every request to ensure data security, integrity, and privacy.
All requests use the following base URL:
https://api.sereinmod.com

99.5% Accuracy
High precision AI models trained on millions of examples
Low Latency
Fast response times for real-time moderation
Easy Integration
Simple REST API with comprehensive documentation
Quick Start
Get up and running with the Moderation API in just a few minutes. Follow these simple steps:
Create a Project
Sign up for an account and create a new project in your dashboard. Each project has its own moderation settings and API keys.
Generate an API Key
Navigate to your project settings and create a new API key. You can create both live and test keys. Keep your API key secure and never expose it in client-side code.
Make Your First Request
Use the code example below to moderate your first piece of content. Replace mod_xxxxxxxxx with your actual API key.
curl -X POST https://api.sereinmod.com/v1/moderate/text \
  -H "x-api-key: mod_xxxxxxxxx" \
  -H "Content-Type: application/json" \
  -d '{"content": "Hello, world!"}'

Handle the Response
The API returns a final_decision ("approved" or "denied") and detailed probability scores for each moderation category. Use these scores to make decisions in your application.
Authentication
All API requests require authentication using an API key. Include your API key in the x-api-key header with every request.
x-api-key: mod_xxxxxxxxx

Security Best Practice
Never expose your API key in client-side code or public repositories, and never share it publicly. Always use environment variables or a secure secret management system.
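Following the practice above, here is a minimal sketch of loading the key from the environment and masking it before it appears in any log output. The environment variable name MODERATION_API_KEY and the masking helper are illustrative, not part of the API.

```javascript
// Read the API key from an environment variable rather than hard-coding it.
// MODERATION_API_KEY is an example name; use whatever your deployment's
// secret management exposes.
const apiKey = process.env.MODERATION_API_KEY || '';

// When logging, mask the key so it never appears in full.
function maskApiKey(key) {
  if (key.length <= 8) return '****';
  // Keep the prefix and last 4 characters, mask the middle.
  return key.slice(0, 4) + '*'.repeat(key.length - 8) + key.slice(-4);
}

console.log('Using key:', maskApiKey(apiKey || 'mod_xxxxxxxxx'));
```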
POST /v1/moderate/text
Send text content to be evaluated using your project's moderation settings. Returns detailed probability scores for multiple harmful content categories and a final decision.
Request Headers
Request Body
{
  "content": "string",   // Required: Text to moderate (max 10,000 characters)
  "meta_data": {         // Optional: Additional metadata to store with the log
    "user_id": "123",
    "session_id": "abc",
    "ip_address": "192.168.1.1"
  }
}

The meta_data field allows you to attach custom information to moderation logs for tracking and analytics purposes.
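A small helper (illustrative, not part of any SDK) that builds and validates this request body, enforcing the documented 10,000-character limit before the request is sent:

```javascript
// Build and validate a request body for POST /v1/moderate/text.
// The 10,000-character limit is the API's documented maximum.
function buildModerationRequest(content, metaData) {
  if (typeof content !== 'string' || content.length === 0) {
    throw new Error('content must be a non-empty string');
  }
  if (content.length > 10000) {
    throw new Error('content exceeds the 10,000 character limit');
  }
  const body = { content };
  if (metaData) {
    body.meta_data = metaData; // optional tracking info, e.g. user_id
  }
  return body;
}
```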
Response
{
  "final_decision": "denied",
  "model_output": {
    "toxic": 0.87,
    "severe_toxic": 0.92,
    "obscene": 0.78,
    "threat": 0.03,
    "insult": 0.45,
    "identity_hate": 0.65
  }
}

final_decision: Either "approved" or "denied" based on your project's thresholds
model_output: Probability scores (0.0 to 1.0) for each moderation category
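A small helper (illustrative, not part of any SDK) that summarizes a response of the shape shown above, reporting whether the content was denied and which category scored highest:

```javascript
// Inspect a moderation response: was it denied, and which category scored highest?
function summarizeResponse(response) {
  const scores = response.model_output;
  let topCategory = null;
  let topScore = -1;
  for (const [category, score] of Object.entries(scores)) {
    if (score > topScore) {
      topScore = score;
      topCategory = category;
    }
  }
  return {
    denied: response.final_decision === 'denied',
    topCategory,
    topScore,
  };
}
```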
Code Examples
fetch('https://api.sereinmod.com/v1/moderate/text', {
  method: 'POST',
  headers: {
    'x-api-key': 'mod_xxxxxxxxx',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    content: 'Hello, this is a test message!'
  })
})
  .then(response => response.json())
  .then(data => console.log(data))
  .catch(error => console.error('Error:', error));

Moderation Categories
The API evaluates content across six distinct categories. Each category returns a probability score between 0.0 and 1.0, where higher scores indicate a greater likelihood of harmful content.
Toxic
Default: 0.7
General toxic language, including rude, disrespectful, or unreasonable comments.
Severe Toxic
Default: 0.8
Extremely toxic content that is highly offensive and harmful.
Obscene
Default: 0.6
Content containing profanity, vulgarity, or sexually explicit language.
Threat
Default: 0.75
Content that threatens physical or psychological harm to individuals or groups.
Insult
Default: 0.5
Content that insults, mocks, or belittles individuals or groups.
Identity Hate
Default: 0.65
Content that attacks or expresses hatred towards groups based on identity (race, religion, gender, etc.).
Understanding Thresholds
Thresholds determine when content is flagged. For each enabled category, if the probability score meets or exceeds the threshold, the content is denied.
How Decisions Are Made
- The API evaluates content across all enabled categories
- Each category receives a probability score (0.0 to 1.0)
- If any enabled category score ≥ its threshold, content is denied
- If all enabled category scores < their thresholds, content is approved
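The rule above can be sketched as a local function. This is purely illustrative: the server applies your project's actual settings, and the default thresholds here simply mirror those listed in the Moderation Categories section.

```javascript
// Default thresholds, mirroring the Moderation Categories section.
const defaultThresholds = {
  toxic: 0.7,
  severe_toxic: 0.8,
  obscene: 0.6,
  threat: 0.75,
  insult: 0.5,
  identity_hate: 0.65,
};

// Content is denied if any enabled category's score meets or exceeds
// its threshold; otherwise it is approved.
function decide(scores, thresholds = defaultThresholds) {
  for (const [category, threshold] of Object.entries(thresholds)) {
    if (category in scores && scores[category] >= threshold) {
      return 'denied';
    }
  }
  return 'approved';
}
```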
Customizing Thresholds
You can customize thresholds and enable/disable categories in your project settings. Lower thresholds are more strict (flag more content), while higher thresholds are more lenient.
Webhooks
Webhooks allow you to receive real-time notifications when moderation events occur. Instead of polling the API, you can configure webhooks to automatically notify your server when content is approved, denied, or when decisions are changed.
Real-time Notifications
Get instant notifications when moderation events occur, without needing to poll the API.
Secure & Verified
All webhooks are signed with HMAC-SHA256 signatures to ensure authenticity and prevent tampering.
Automatic Retries
Failed webhook deliveries are automatically retried up to 3 times with exponential backoff.
Delivery History
Track all webhook deliveries, including success/failure status and response codes.
Webhook Events
The following events can be configured to trigger webhooks. You can enable or disable specific events based on your needs.
content.approved
Triggered when content passes moderation checks and is approved.
Payload Structure
{
  "event_type": "content.approved",
  "log_id": "uuid-here",
  "decision": "approved",
  "content": "The original content text",
  "model_output": {
    "toxic": 0.2,
    "severe_toxic": 0.1,
    "obscene": 0.15,
    "threat": 0.05,
    "insult": 0.1,
    "identity_hate": 0.08
  },
  "metadata": {},
  "timestamp": "2025-01-15T10:30:00Z"
}

content.denied
Triggered when content fails moderation checks and is denied.
Payload Structure
{
  "event_type": "content.denied",
  "log_id": "uuid-here",
  "decision": "denied",
  "content": "The original content text",
  "model_output": {
    "toxic": 0.87,
    "severe_toxic": 0.92,
    "obscene": 0.78,
    "threat": 0.03,
    "insult": 0.45,
    "identity_hate": 0.65
  },
  "metadata": {},
  "timestamp": "2025-01-15T10:30:00Z"
}

decision.changed
Triggered when a moderation decision is manually changed in the dashboard.
Payload Structure
{
  "event_type": "decision.changed",
  "log_id": "uuid-here",
  "old_decision": "denied",
  "new_decision": "approved",
  "content": "The original content text",
  "timestamp": "2025-01-15T10:30:00Z"
}

Setting Up Webhooks
Configure webhooks for your project through the dashboard. You'll need to provide a URL endpoint that can receive POST requests and select which events you want to receive.
Step 1: Create a Webhook Endpoint
Create an HTTP endpoint in your application that can receive POST requests. The endpoint should return a 2xx status code to acknowledge receipt.
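As a sketch, the routing logic for the three event types can live in a pure function that your endpoint calls after signature verification. The function name and its return strings are illustrative placeholders for your own business logic.

```javascript
// Route a verified webhook event to the appropriate action.
// Returns a short description of the action; a real handler would
// update your own data store instead.
function handleWebhookEvent(event) {
  switch (event.event_type) {
    case 'content.approved':
      return `publish log ${event.log_id}`;
    case 'content.denied':
      return `hide log ${event.log_id}`;
    case 'decision.changed':
      return `sync log ${event.log_id}: ${event.old_decision} -> ${event.new_decision}`;
    default:
      // Unknown events: still acknowledge with a 2xx so deliveries aren't retried.
      return `ignored ${event.event_type}`;
  }
}
```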
Step 2: Configure Webhook Settings in Dashboard
Navigate to your project settings in the dashboard to configure webhooks:
- Go to your project dashboard and select the project you want to configure
- Navigate to the Settings tab
- Scroll to the Webhooks section
- Enter your webhook URL (must be HTTPS in production)
- Select which events you want to receive (content.approved, content.denied, decision.changed)
- Toggle the webhook to active
- Save your settings
Note: A webhook secret is automatically generated when you configure your webhook URL. Copy and store this secret securely; you'll need it to verify webhook signatures.
Security Requirements
- Webhook URLs must use HTTPS in production (HTTP allowed for localhost in development)
- Always verify webhook signatures before processing events using the secret from your dashboard
- Keep your webhook secret secure and never expose it in client-side code
Webhook Signature Verification
All webhook requests include an X-Webhook-Signature header containing an HMAC-SHA256 signature. Always verify this signature to ensure the request is authentic.
Signature Format
The signature is computed using HMAC-SHA256 with your webhook secret and the JSON payload (sorted keys, no whitespace).
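Since the signature is computed over the payload with recursively sorted keys and no whitespace, verification should serialize the parsed payload the same way. A sketch of such a stable serializer (my own helper, not part of the API) is:

```javascript
// Serialize a value with recursively sorted object keys and no whitespace,
// matching the canonical form described above.
function stableStringify(value) {
  if (Array.isArray(value)) {
    return '[' + value.map(stableStringify).join(',') + ']';
  }
  if (value !== null && typeof value === 'object') {
    const keys = Object.keys(value).sort();
    return '{' + keys
      .map(k => JSON.stringify(k) + ':' + stableStringify(value[k]))
      .join(',') + '}';
  }
  // Primitives (strings, numbers, booleans, null) serialize as plain JSON.
  return JSON.stringify(value);
}
```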
Verification Examples
const crypto = require('crypto');

function verifyWebhookSignature(payload, signature, secret) {
  // Note: the signature is computed over the payload serialized with sorted
  // keys and no whitespace. JSON.stringify preserves the parsed key order,
  // so re-serialize canonically (or verify against the raw request body) if
  // your payloads contain nested objects.
  const payloadString = JSON.stringify(payload);
  const hmac = crypto.createHmac('sha256', secret);
  hmac.update(payloadString);
  const expectedSignature = 'sha256=' + hmac.digest('hex');
  // timingSafeEqual throws if the buffers differ in length, so guard first.
  const sigBuffer = Buffer.from(signature || '');
  const expectedBuffer = Buffer.from(expectedSignature);
  if (sigBuffer.length !== expectedBuffer.length) {
    return false;
  }
  return crypto.timingSafeEqual(sigBuffer, expectedBuffer);
}

// In your webhook endpoint
app.post('/webhook', express.json(), (req, res) => {
  const signature = req.headers['x-webhook-signature'];
  const secret = process.env.WEBHOOK_SECRET;
  if (!verifyWebhookSignature(req.body, signature, secret)) {
    return res.status(401).send('Invalid signature');
  }
  // Process webhook
  console.log('Webhook received:', req.body);
  res.status(200).send('OK');
});

Webhook Management
Webhook configuration is managed through the dashboard. Navigate to your project settings to configure webhooks, view delivery history, and test your webhook endpoint. The API endpoints below are available for programmatic access but require web authentication.
Recommended: Use the Dashboard
For most users, we recommend configuring webhooks through the dashboard UI. It provides a user-friendly interface for managing webhook settings, viewing delivery history, and testing endpoints.
Get webhook configuration for a project.
Requires web authentication (not API key).
Get webhook delivery history (last 50 deliveries).
Requires web authentication (not API key). Available in dashboard.
Send a test webhook event to verify your endpoint is working correctly.
Requires web authentication (not API key). Available in dashboard.
Rate Limits
API requests are rate limited to ensure fair usage and optimal performance for all users. Rate limit information is included in response headers.
Free Tier
100
requests per day
Pro Tier
90,000
requests per month
Rate Limit Headers
Every response includes rate limit information in headers:
X-RateLimit-Remaining: 95
X-RateLimit-Limit: 100

When you exceed your limit, you'll receive a 429 status code.
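A minimal sketch of an exponential-backoff delay for retrying after a 429 response. The base delay and cap are illustrative choices, not values specified by the API.

```javascript
// Exponential backoff: 1s, 2s, 4s, 8s, ... capped at 30s.
function backoffDelayMs(attempt, baseMs = 1000, capMs = 30000) {
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```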
Error Handling
The API uses standard HTTP status codes and returns error details in the response body. Always check the status code and handle errors appropriately in your application.
Error Response Format
{
  "detail": "Error message describing what went wrong"
}

Best Practices:
- Always check the HTTP status code before processing responses
- Implement exponential backoff for 429 (rate limit) errors
- Log error details for debugging but don't expose sensitive information to users
- Handle network errors and timeouts gracefully
Example Error Handling
async function moderateContent(content) {
  try {
    const response = await fetch('https://api.sereinmod.com/v1/moderate/text', {
      method: 'POST',
      headers: {
        'x-api-key': 'mod_xxxxxxxxx',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ content })
    });
    if (!response.ok) {
      const error = await response.json();
      if (response.status === 429) {
        // Rate limit exceeded: implement retry with backoff
        throw new Error('Rate limit exceeded. Please try again later.');
      }
      throw new Error(error.detail || 'Moderation request failed');
    }
    return await response.json();
  } catch (error) {
    console.error('Moderation error:', error);
    throw error;
  }
}

HTTP Response Codes
The API uses standard HTTP status codes to indicate the success or failure of requests.
Best Practices
Security
- Never expose API keys in client-side code or public repositories
- Use environment variables or secure secret management systems
- Rotate API keys regularly, especially if compromised
- Always verify webhook signatures before processing events
- Use HTTPS for all API requests and webhook endpoints
Performance
- Implement request caching for repeated content when appropriate
- Use async/await or promises to avoid blocking your application
- Batch moderation requests when possible to reduce API calls
- Monitor your rate limit usage to avoid hitting limits
- Implement exponential backoff for retries
Integration
- Test with the playground endpoint before going live
- Use metadata fields to track moderation decisions in your system
- Set up webhooks for real-time notifications instead of polling
- Monitor webhook delivery status and handle failures appropriately
- Log moderation decisions for audit and analytics purposes
- Customize thresholds based on your content and audience
Error Handling
- Always check HTTP status codes before processing responses
- Implement proper error handling and user-friendly error messages
- Handle rate limit errors (429) with exponential backoff
- Set appropriate timeouts for API requests
- Have fallback behavior when the API is unavailable