Use the @mux/ai library to automatically moderate video content and detect inappropriate material
This guide demonstrates how to automatically screen video content for inappropriate material using AI. The @mux/ai library handles all the complexity of extracting thumbnails, analyzing them with moderation APIs, and returning actionable results. If content exceeds your defined thresholds for sexual or violent content, you can automatically remove access to protect your platform.
This approach provides an automated first line of defense against inappropriate content, helping you maintain content standards at scale without manual review of every upload.
Before starting this guide, make sure you have a Mux account with an API access token (a token ID and secret) and an API key for the moderation provider you plan to use (OpenAI or Hive). Then install the library:
npm install @mux/ai

Set your environment variables:
# Required
MUX_TOKEN_ID=your_mux_token_id
MUX_TOKEN_SECRET=your_mux_token_secret
# You only need the API key for the provider you're using
OPENAI_API_KEY=your_openai_api_key # OR
HIVE_API_KEY=your_hive_api_key

Then call getModerationScores with the ID of the asset you want to check:

import { getModerationScores } from "@mux/ai/workflows";
const result = await getModerationScores("your-mux-asset-id", {
  provider: "openai", // or "hive"
  thresholds: {
    sexual: 0.7, // Flag content with 70%+ confidence
    violence: 0.8 // Flag content with 80%+ confidence
  }
});
console.log(result.exceedsThreshold); // true if content flagged
console.log(result.maxScores.sexual); // Highest sexual content score
console.log(result.maxScores.violence); // Highest violence score

The function analyzes multiple thumbnails from your video and returns:
{
  "assetId": "your-asset-id",
  "exceedsThreshold": false,
  "maxScores": {
    "sexual": 0.12,
    "violence": 0.05
  },
  "thresholds": {
    "sexual": 0.7,
    "violence": 0.8
  },
  "thumbnailScores": [
    { "sexual": 0.12, "violence": 0.05, "error": false },
    { "sexual": 0.08, "violence": 0.03, "error": false }
    // ... more thumbnails
  ]
}
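Once you have a result, you can branch on it directly. Below is a minimal sketch of one way to act on these fields; flagForReview is a hypothetical helper standing in for whatever review queue or takedown flow your platform uses.

import { getModerationScores } from "@mux/ai/workflows";

const result = await getModerationScores("your-mux-asset-id", {
  thresholds: { sexual: 0.7, violence: 0.8 }
});

if (result.exceedsThreshold) {
  // At least one thumbnail crossed a threshold; surface the worst scores and queue the asset
  console.warn(
    `Asset ${result.assetId} flagged (sexual: ${result.maxScores.sexual}, violence: ${result.maxScores.violence})`
  );
  await flagForReview(result.assetId); // hypothetical: your own review/takedown logic
}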
@mux/ai supports two moderation providers:

OpenAI (default): omni-moderation-latest model - multi-modal moderation with vision support
Hive: visual content moderation API

// Using OpenAI (default)
const result = await getModerationScores("your-mux-asset-id", {
  provider: "openai"
});

// Using Hive
const result = await getModerationScores("your-mux-asset-id", {
  provider: "hive"
});

Thresholds use a 0-1 scale; content is flagged when its score meets or exceeds a threshold, so lower values mean stricter moderation:
const result = await getModerationScores("your-mux-asset-id", {
  thresholds: {
    sexual: 0.7, // Flag content with 70%+ confidence of sexual content
    violence: 0.8 // Flag content with 80%+ confidence of violence
  }
});

Adjust these based on your content policies and user base. Lower thresholds catch more content but may increase false positives.
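When tuning thresholds, the per-thumbnail scores are a useful guide to how close real uploads come to being flagged. Here is a small sketch that ranks frames by their highest category score, using only the thumbnailScores field shown in the response above:

import { getModerationScores } from "@mux/ai/workflows";

const result = await getModerationScores("your-mux-asset-id", {
  thresholds: { sexual: 0.7, violence: 0.8 }
});

// Rank thumbnails by their highest category score to see which frames came closest to a threshold
const ranked = result.thumbnailScores
  .filter((score) => !score.error)
  .sort((a, b) => Math.max(b.sexual, b.violence) - Math.max(a.sexual, a.violence));

console.table(ranked.slice(0, 5));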
To moderate videos automatically as they're uploaded, trigger the moderation check from the video.asset.ready webhook:
import { getModerationScores } from "@mux/ai/workflows";
import Mux from "@mux/mux-node";

const mux = new Mux(); // reads MUX_TOKEN_ID and MUX_TOKEN_SECRET from the environment

export async function handleWebhook(req, res) {
  const event = req.body;
  if (event.type === 'video.asset.ready') {
    const result = await getModerationScores(event.data.id, {
      thresholds: { sexual: 0.7, violence: 0.8 }
    });
    // Remove playback access if the content exceeds your thresholds
    if (result.exceedsThreshold && event.data.playback_ids?.length) {
      await mux.video.assets.deletePlaybackId(event.data.id, event.data.playback_ids[0].id);
    }
  }
  res.status(200).send('ok');
}

Under the hood, @mux/ai handles the heavy lifting: extracting thumbnails from your asset, sending each one to the moderation provider you chose, and aggregating the per-thumbnail scores into the maxScores and exceedsThreshold values shown above.
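If you're curious what that involves, here is a rough, simplified sketch of the same idea done by hand, assuming the asset has a public playback ID and calling OpenAI's omni-moderation-latest model directly. The timestamps, thumbnail count, and aggregation are illustrative assumptions, not the library's actual implementation.

import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Moderate a handful of thumbnails from a public Mux playback ID (timestamps are arbitrary here)
async function moderateThumbnails(playbackId, timestamps = [1, 5, 10]) {
  const scores = [];
  for (const time of timestamps) {
    const url = `https://image.mux.com/${playbackId}/thumbnail.jpg?time=${time}`;
    const response = await openai.moderations.create({
      model: "omni-moderation-latest",
      input: [{ type: "image_url", image_url: { url } }],
    });
    const { sexual, violence } = response.results[0].category_scores;
    scores.push({ time, sexual, violence });
  }
  // Keep the highest score seen for each category, similar to maxScores above
  return scores.reduce(
    (max, s) => ({
      sexual: Math.max(max.sexual, s.sexual),
      violence: Math.max(max.violence, s.violence),
    }),
    { sexual: 0, violence: 0 }
  );
}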