Xantly
API Reference

Moderations

Classify text for potentially harmful content using the content moderation API. Requests are proxied to OpenAI's moderation endpoint with automatic BYOK key resolution.

  • POST /v1/moderations
  • Auth: Authorization: Bearer <token>
  • Drop-in compatible with the OpenAI Moderations API.

Quick start

curl -sS https://api.xantly.com/v1/moderations \
  -H "Authorization: Bearer $XANTLY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "I love programming and building useful tools."
  }'

Request body

Field   Type                    Required  Description
input   string | array<string>  Yes       Text to classify. Pass a string for a single input, or an array of strings for batch classification.
model   string                  No        Moderation model to use (e.g. "omni-moderation-latest"). Defaults to the latest available model.
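Because input accepts either a string or an array, the request body for single and batch classification differs only in that one field. A minimal sketch of building both payloads (the build_payload helper is illustrative, not part of the API):

```python
import json

def build_payload(text, model=None):
    """Build a moderation request body; `text` may be a str or a list of str."""
    payload = {"input": text}
    if model is not None:
        payload["model"] = model
    return payload

# Single input
single = build_payload("I love programming and building useful tools.")

# Batch input: the API returns one result per string, in order
batch = build_payload(
    ["first message", "second message"],
    model="omni-moderation-latest",
)

print(json.dumps(batch, indent=2))
```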

Response body

The response follows the OpenAI moderation response format:

{
  "id": "modr-abc123",
  "model": "omni-moderation-latest",
  "results": [
    {
      "flagged": false,
      "categories": {
        "sexual": false,
        "hate": false,
        "harassment": false,
        "self-harm": false,
        "sexual/minors": false,
        "hate/threatening": false,
        "violence/graphic": false,
        "self-harm/intent": false,
        "self-harm/instructions": false,
        "harassment/threatening": false,
        "violence": false
      },
      "category_scores": {
        "sexual": 0.000012,
        "hate": 0.000003,
        "harassment": 0.000008,
        "self-harm": 0.000001,
        "sexual/minors": 0.000001,
        "hate/threatening": 0.000001,
        "violence/graphic": 0.000002,
        "self-harm/intent": 0.000001,
        "self-harm/instructions": 0.000001,
        "harassment/threatening": 0.000002,
        "violence": 0.000004
      }
    }
  ]
}
Field                      Type     Description
id                         string   Moderation request identifier.
model                      string   Model used for classification.
results                    array    One result per input string, in the same order as the input.
results[].flagged          boolean  true if any category is flagged.
results[].categories       object   Boolean flags per content category.
results[].category_scores  object   Confidence scores per category (0.0–1.0).
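Reading a result comes down to checking flagged and, when it is true, collecting the offending categories with their scores. A minimal sketch against a trimmed version of the sample response above (the flagged_categories helper is illustrative):

```python
# Sample response trimmed to the fields used below; scores are made up
response = {
    "id": "modr-abc123",
    "model": "omni-moderation-latest",
    "results": [
        {
            "flagged": True,
            "categories": {"violence": True, "hate": False},
            "category_scores": {"violence": 0.91, "hate": 0.000003},
        }
    ],
}

def flagged_categories(result):
    """Return {category: score} for every flagged category in one result."""
    return {
        name: result["category_scores"][name]
        for name, flagged in result["categories"].items()
        if flagged
    }

for result in response["results"]:
    if result["flagged"]:
        print(flagged_categories(result))  # {'violence': 0.91}
```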

Code examples

Python

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["XANTLY_API_KEY"],
    base_url="https://api.xantly.com/v1",
)

response = client.moderations.create(
    input="This is a test message for content moderation.",
)

result = response.results[0]
print(f"Flagged: {result.flagged}")

# The SDK returns pydantic models, not dicts; dump them so categories
# and category_scores share plain string keys.
categories = result.categories.model_dump()
scores = result.category_scores.model_dump()
for category, flagged in categories.items():
    if flagged:
        print(f"  {category}: {scores[category]:.4f}")
JavaScript

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.XANTLY_API_KEY,
  baseURL: "https://api.xantly.com/v1",
});

const response = await client.moderations.create({
  input: "This is a test message for content moderation.",
});

const result = response.results[0];
console.log(`Flagged: ${result.flagged}`);

BYOK support

The moderation endpoint automatically resolves your organization's BYOK OpenAI key. If no BYOK key is configured, the platform key is used as a fallback.


Errors

HTTP  error.type             Typical trigger
400   invalid_request_error  Missing input field.
401   authentication_error   Missing or invalid Bearer token.
500   provider_error         No OpenAI API key configured, or upstream error.
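Error responses carry the type inside an error object, mirroring OpenAI's error envelope. A sketch of dispatching on it (treating only provider_error as retryable is an assumption here, not platform guidance):

```python
import json

# Example error body, mirroring the table above
body = '{"error": {"type": "authentication_error", "message": "Missing or invalid Bearer token."}}'

# Assumption: 500s may succeed on retry; 4xx errors will not.
RETRYABLE = {"provider_error"}

def classify_error(raw):
    """Return (error_type, retryable) for a JSON error body."""
    err = json.loads(raw).get("error", {})
    etype = err.get("type", "unknown")
    return etype, etype in RETRYABLE

print(classify_error(body))  # ('authentication_error', False)
```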
