This model is available to Enterprise customers only. Contact us to enable access for your organization.
Overview
Hive's CSE Text Classifier API, created in partnership with Thorn, detects suspected child sexual exploitation (CSE) in English and Spanish text.
Each submitted text sequence is tokenized before being passed to the text classifier; the maximum input size is 1024 characters. The classifier then returns a score (between 0 and 1, inclusive) for each of the possible labels.
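Because the classifier accepts at most 1024 characters per sequence, longer text needs to be split client-side before submission. A minimal sketch in Python (the chunk_text helper is illustrative, not part of the API):

```python
MAX_INPUT_CHARS = 1024  # documented maximum input size per text sequence

def chunk_text(text: str, limit: int = MAX_INPUT_CHARS) -> list[str]:
    """Split a long string into sequences the classifier will accept."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]

# A 2500-character string becomes three sequences: 1024 + 1024 + 452 characters.
chunks = chunk_text("a" * 2500)
```

Each resulting chunk can then be submitted as its own task.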
There are seven possible labels: csa_discussion, child_access, csam, has_minor, self_generated_content, sextortion, and not_pertinent. For in-depth explanations of each label, please refer to the CSE Text Classifier API Guides page.
Example Responses
This is an example of a pertinent response, i.e., one in which at least one of the listed labels (other than not_pertinent) received a score above the internally set threshold. Note the non-empty csa_text_classifier array within the pertinent_labels section.
{
  "task_id": <TASK_ID>,
  "created_on": "2025-08-02T01:51:14.915Z",
  "moderated_on": "2025-08-02T01:51:15.143Z",
  "moderated_by": "classifier",
  "task_units": 1,
  "charge": 0.00001,
  "state": "finished",
  "status": [
    {
      "status": {
        "code": "0",
        "message": "SUCCESS"
      },
      "_version": 2,
      "response": {
        "output": [
          {
            "predictions": {
              "csam_classifier": {},
              "csa_text_classifier": {
                "csam": 0.9805574417114258,
                "has_minor": 0.8309484124183655,
                "sextortion": 0.002415776252746582,
                "self_generated_content": 0.012813866138458252,
                "not_pertinent": 0.007799208164215088,
                "child_access": 0.033461928367614746,
                "csa_discussion": 0.9864770174026488
              }
            },
            "hash_matches": {},
            "content": {
              "detection_steps": [
                "csa_text_classifier"
              ],
              "text": <TEXT_SEQUENCE>,
              "metadata": {}
            },
            "pertinent_labels": {
              "hash_match": {},
              "csam_classifier": {},
              "csa_text_classifier": [
                "csa_discussion",
                "csam",
                "has_minor"
              ]
            }
          }
        ]
      }
    }
  ]
}
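A client typically only needs the pertinent_labels section to decide whether a sequence was flagged. A hedged sketch, assuming the response has already been parsed into a Python dict (the `or []` guard also tolerates an empty value serialized as an object rather than an array):

```python
def collect_pertinent_labels(response: dict) -> list[str]:
    """Gather every csa_text_classifier label that crossed the internal threshold."""
    labels: list[str] = []
    for status_entry in response.get("status", []):
        for item in status_entry["response"]["output"]:
            # Empty when the sequence is non-pertinent.
            hits = item["pertinent_labels"].get("csa_text_classifier") or []
            labels.extend(hits)
    return labels

# For the pertinent example above this would yield
# ["csa_discussion", "csam", "has_minor"]; for a non-pertinent
# response it yields [].
```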
This is an example of a non-pertinent response. Note that the csa_text_classifier array within the pertinent_labels section is empty.
{
  "task_id": "4e13af50-6f38-11f0-8a18-ed9368d83a89",
  "created_on": "2025-08-02T00:33:26.725Z",
  "moderated_on": "2025-08-02T00:33:26.929Z",
  "moderated_by": "classifier",
  "task_units": 1,
  "charge": 0.00001,
  "state": "finished",
  "status": [
    {
      "status": {
        "code": "0",
        "message": "SUCCESS"
      },
      "_version": 2,
      "response": {
        "output": [
          {
            "predictions": {
              "csam_classifier": {},
              "csa_text_classifier": {
                "csam": 0.0008203387260437012,
                "has_minor": 0.000935971736907959,
                "sextortion": 0.0002201199531555176,
                "self_generated_content": 0.0005838871002197266,
                "not_pertinent": 0.9978225231170654,
                "child_access": 0.0004310011863708496,
                "csa_discussion": 0.0012388229370117188
              }
            },
            "hash_matches": {},
            "content": {
              "detection_steps": [
                "csa_text_classifier"
              ],
              "text": "testing a normal sentence",
              "metadata": {}
            },
            "pertinent_labels": {
              "hash_match": {},
              "csam_classifier": {},
              "csa_text_classifier": []
            }
          }
        ]
      }
    }
  ]
}
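The raw scores under predictions can also be inspected directly, for example to rank labels for logging or triage. A minimal sketch using the scores from the non-pertinent example above (how you act on the ranking is up to your application; the pertinence decision itself should come from pertinent_labels, since the threshold is set internally by Hive):

```python
def ranked_scores(predictions: dict[str, float]) -> list[tuple[str, float]]:
    """Sort csa_text_classifier label scores from highest to lowest."""
    return sorted(predictions.items(), key=lambda kv: kv[1], reverse=True)

scores = {
    "csam": 0.0008203387260437012,
    "has_minor": 0.000935971736907959,
    "sextortion": 0.0002201199531555176,
    "self_generated_content": 0.0005838871002197266,
    "not_pertinent": 0.9978225231170654,
    "child_access": 0.0004310011863708496,
    "csa_discussion": 0.0012388229370117188,
}
# For benign text, not_pertinent ranks first.
ranking = ranked_scores(scores)
```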