CSAM Detection - Classifier API

🚧 Deprecated Endpoint

We will soon stop offering separate endpoints for our Hash Matching and Classifier CSAM detection models. Moving forward, customers should use our combined endpoint.

Our classifier creates embeddings based on an image or video input and uses the embeddings to make predictions across three possible classes:

  1. pornography: Pornographic media that does not involve children
  2. csam: Child sexual abuse material
  3. other: Non-pornographic and non-CSAM media

For each class, the classifier returns a confidence score between 0 and 1, inclusive; the scores across the three classes sum to 1. The response object has the same general structure for both image and video inputs.
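As a rough illustration only, the Python sketch below posts an image and prints the three class scores. The endpoint URL, authentication header, and form field name are hypothetical placeholders, not taken from this documentation; consult the API reference for the actual request format.

import requests

# NOTE: the URL, header scheme, and form field below are assumptions for illustration.
API_URL = "https://api.example.com/csam/classifier"  # placeholder, not the real endpoint
API_KEY = "YOUR_API_KEY"

def classify_media(path: str) -> dict:
    """Submit an image or video and return the per-class confidence scores."""
    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Token {API_KEY}"},  # assumed auth scheme
            files={"media": f},                             # assumed field name
            timeout=30,
        )
    resp.raise_for_status()
    data = resp.json()
    # Scores are nested under file.classifierPrediction.csam_classifier,
    # matching the documented response structure shown below.
    return data["file"]["classifierPrediction"]["csam_classifier"]

scores = classify_media("example.jpg")
print(scores)  # e.g. {"pornography": 0.01, "csam": 0.98, "other": 0.01}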

This is an example of a pertinent response, in which the content is CSAM and is classified as such. Note the reasons property, which reflects that CSAM was flagged.

"file": {
    "fileType": "image",
    "reasons": ["csam"],
    "classifierPrediction": {
        "csam_classifier": {
            "pornography": 0.01,
            "csam": 0.98,
            "other": 0.01
        }
    }
},
"hashes": []

This is an example of a non-pertinent response, in which the content is not CSAM. Note that the reasons array is empty.

"file": {
    "fileType": "image",
    "reasons": [],
    "classifierPrediction": {
        "csam_classifier": {
            "pornography": 0.01,
            "csam": 0.01,
            "other": 0.98
        }
    }
},
"hashes": []