CSAM Detection - Combined API
Overview
Hive's Combined CSAM Detection API runs two CSAM (Child Sexual Abuse Material) detection models, created in partnership with Thorn:
- Hash Matching: Capable of detecting known CSAM.
- Classifier: Capable of detecting novel CSAM.
Note: This API currently accepts only images as input. We plan to add video support in the near future.
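For illustration, here is a minimal sketch of submitting an image URL to the combined endpoint. The endpoint path, authentication header, and request fields shown are assumptions; consult the API reference for the exact request format.

```python
import requests

# Hypothetical endpoint path and field names -- check the API reference
# for the actual request format and authentication scheme.
API_URL = "https://api.thehive.ai/api/v2/task/sync"
API_KEY = "your-api-key"

def submit_image(image_url: str) -> dict:
    """Submit an image URL to the combined CSAM detection endpoint."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Token {API_KEY}"},
        data={"url": image_url},
    )
    response.raise_for_status()
    return response.json()

result = submit_image("https://example.com/image-to-check.jpg")
print(result)
```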
Hash Matching
To detect CSAM through hash matching, the model first hashes the input image and then matches that hash against an index of known CSAM. All matching images in the database are sent back along with a matchDistance value, which indicates the dissimilarity between the source and the target media.
If no match is found, we also send the image to the classifier.
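As a rough sketch, a client might filter hash-match results by matchDistance like this. The shape of the match objects and the threshold value are assumptions, not the documented schema.

```python
# Illustrative only: the structure of `matches` and its field names are assumptions.
def filter_matches(matches: list[dict], max_distance: float) -> list[dict]:
    """Keep only hash matches whose matchDistance is at or below a threshold.

    A lower matchDistance means the input is more similar to the known image.
    """
    return [m for m in matches if m.get("matchDistance", float("inf")) <= max_distance]

example_matches = [
    {"id": "abc123", "matchDistance": 0.02},
    {"id": "def456", "matchDistance": 0.41},
]
print(filter_matches(example_matches, max_distance=0.1))  # keeps only "abc123"
```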
Classifier
The classifier works by first creating an embedding of the media: a list of computer-generated scores between 0 and 1. After the embedding is created, we permanently delete the original media, then use the classifier to label the content as CSAM or not based solely on the embedding. This process ensures that we never store any CSAM.
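Below is a conceptual sketch of the embed-then-classify flow described above. This processing happens on Hive's side; the function names, embedding size, and scoring logic are illustrative stand-ins, not part of the API.

```python
import random

def embed(image_bytes: bytes) -> list[float]:
    """Stand-in for the embedding model: returns scores between 0 and 1."""
    rng = random.Random(len(image_bytes))       # deterministic toy example
    return [rng.random() for _ in range(128)]

def classify(embedding: list[float]) -> float:
    """Stand-in for the classifier: returns a CSAM likelihood score."""
    return sum(embedding) / len(embedding)      # toy aggregation

image_bytes = b"...raw image data..."
embedding = embed(image_bytes)   # 1. create the embedding
del image_bytes                  # 2. the original media is discarded
score = classify(embedding)      # 3. classify using only the embedding
print(f"classifier score: {score:.3f}")
```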
Response
The endpoint combines both models' results into a single response. To see an annotated example of an API response object for this model, visit its API reference page.
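As one possible way to consume the combined response, a client could check both model outputs before flagging content for review. The field names and threshold below are placeholders; the API reference page is the source of truth for the actual schema.

```python
def review_needed(response: dict) -> bool:
    """Flag content for review if either model signals possible CSAM.

    The field names ("hash_matches", "classifier_score") and the 0.9
    threshold are placeholders, not the documented response schema.
    """
    has_hash_match = bool(response.get("hash_matches"))
    classifier_score = response.get("classifier_score", 0.0)
    return has_hash_match or classifier_score >= 0.9

print(review_needed({"hash_matches": [], "classifier_score": 0.95}))  # True
```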