Our classifier creates embeddings from an image or video input and uses those embeddings to classify the media as CSAM or not. The response object has the same general structure for both image and video inputs.
This is an example of a pertinent response, wherein the content is CSAM and is classified as such. Note the reasons array, which contains "csam" to indicate that CSAM was flagged.
"file": {
"fileType": "image",
"reasons": ["csam"],
"classifierPrediction": {
"csam_classifier": {
"pornography": 0.01,
"csam": 0.98,
"other": 0.01
}
}
},
"hashes": []
This is an example of a non-pertinent response, wherein the content is not CSAM. Note that the reasons array is empty.
"file": {
"fileType": "image",
"reasons": [],
"classifierPrediction": {
"csam_classifier": {
"pornography": 0.01,
"csam": 0.01,
"other": 0.98
}
}
},
"hashes": []