Moderations
Classify text for potentially harmful content using the content moderation API. Requests are proxied to OpenAI's moderation endpoint with automatic BYOK key resolution.
- POST `/v1/moderations`
- Auth: `Authorization: Bearer <token>`
- Drop-in compatible with the OpenAI Moderations API.
Quick start
```shell
curl -sS https://api.xantly.com/v1/moderations \
  -H "Authorization: Bearer $XANTLY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "I love programming and building useful tools."
  }'
```
Request body
| Field | Type | Required | Description |
|---|---|---|---|
| `input` | string \| array<string> | Yes | Text to classify. Pass a string for a single input, or an array for batch classification. |
| `model` | string | No | Moderation model to use (e.g. `"omni-moderation-latest"`). Defaults to the latest available model. |
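The array form of `input` can be sketched as a request payload. This is an illustrative fragment only; the field names follow the table above, and the second input string is a placeholder:

```python
import json

# Batch form: `input` is an array; the API returns one result per
# element, in the same order as the inputs.
payload = {
    "input": [
        "I love programming and building useful tools.",
        "Another message to classify in the same request.",
    ],
    "model": "omni-moderation-latest",  # optional; defaults to latest
}
print(json.dumps(payload, indent=2))
```

Batching avoids one round trip per string when moderating many messages at once.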
Response body
The response follows the OpenAI moderation response format:
```json
{
  "id": "modr-abc123",
  "model": "omni-moderation-latest",
  "results": [
    {
      "flagged": false,
      "categories": {
        "sexual": false,
        "hate": false,
        "harassment": false,
        "self-harm": false,
        "sexual/minors": false,
        "hate/threatening": false,
        "violence/graphic": false,
        "self-harm/intent": false,
        "self-harm/instructions": false,
        "harassment/threatening": false,
        "violence": false
      },
      "category_scores": {
        "sexual": 0.000012,
        "hate": 0.000003,
        "harassment": 0.000008,
        "self-harm": 0.000001,
        "sexual/minors": 0.000001,
        "hate/threatening": 0.000001,
        "violence/graphic": 0.000002,
        "self-harm/intent": 0.000001,
        "self-harm/instructions": 0.000001,
        "harassment/threatening": 0.000002,
        "violence": 0.000004
      }
    }
  ]
}
```
| Field | Type | Description |
|---|---|---|
| `id` | string | Moderation request identifier. |
| `model` | string | Model used for classification. |
| `results` | array | One result per input string. |
| `results[].flagged` | boolean | `true` if any category is flagged. |
| `results[].categories` | object | Boolean flags per content category. |
| `results[].category_scores` | object | Confidence scores per category (0.0–1.0). |
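A small helper can collapse one result into just the categories that tripped. This is a sketch over the raw response shape above (plain dicts, as returned by the HTTP API); `flagged_categories` and the sample data are illustrative, not part of the API:

```python
def flagged_categories(result: dict) -> dict:
    """Return {category: score} for every category the API flagged."""
    return {
        cat: result["category_scores"][cat]
        for cat, is_flagged in result["categories"].items()
        if is_flagged
    }

# Hypothetical result with one flagged category.
sample = {
    "flagged": True,
    "categories": {"violence": True, "hate": False},
    "category_scores": {"violence": 0.91, "hate": 0.0001},
}
print(flagged_categories(sample))  # {'violence': 0.91}
```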
Code examples
```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["XANTLY_API_KEY"],
    base_url="https://api.xantly.com/v1",
)

response = client.moderations.create(
    input="This is a test message for content moderation.",
)

result = response.results[0]
print(f"Flagged: {result.flagged}")

# The SDK returns pydantic models; dump them to dicts (with API-style
# aliases like "self-harm") before iterating.
categories = result.categories.model_dump(by_alias=True)
scores = result.category_scores.model_dump(by_alias=True)
for category, flagged in categories.items():
    if flagged:
        print(f"  {category}: {scores[category]:.4f}")
```
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.XANTLY_API_KEY,
  baseURL: "https://api.xantly.com/v1",
});

const response = await client.moderations.create({
  input: "This is a test message for content moderation.",
});

const result = response.results[0];
console.log(`Flagged: ${result.flagged}`);
```
BYOK support
The moderation endpoint automatically resolves your organization's BYOK OpenAI key. If no BYOK key is configured, the platform key is used as a fallback.
Errors
| HTTP | error.type | Typical trigger |
|---|---|---|
| 400 | `invalid_request_error` | Missing `input` field. |
| 401 | `authentication_error` | Missing or invalid Bearer token. |
| 500 | `provider_error` | No OpenAI API key configured, or upstream error. |
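A minimal client-side sketch of how these statuses might be dispatched. The `classify_error` helper and the `"unknown"` fallback are illustrative assumptions; only the three rows above are documented:

```python
def classify_error(status: int) -> str:
    """Map an HTTP status to the error.type values in the table above."""
    if status == 400:
        return "invalid_request_error"  # e.g. missing `input` field
    if status == 401:
        return "authentication_error"   # bad or missing Bearer token
    if status >= 500:
        return "provider_error"         # upstream or key-config issue
    return "unknown"                    # not documented for this endpoint
```

Retrying is only worthwhile for `provider_error`; 400 and 401 indicate a caller-side problem that a retry will not fix.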
Next steps
- Chat Completions — Main inference endpoint
- Models — List available models