Manual Content Moderation
Manual Moderation Overview
In addition to its automated deep-learning solutions, Hive also offers direct programmatic access to its manual moderation workforce. The same workforce that enables Hive to build best-in-class deep-learning models can be used to complete more difficult tasks at a near-zero error rate.
Much like the automated classifications that Hive's models output, manual moderation can be used to classify images, text, and video that are presumed to be especially sensitive, malicious, or difficult. Each manual moderation API project classifies the input media into one of a set of predefined buckets. The current manual moderation offerings are listed below.
Note: manual moderation projects can only be accessed via the Asynchronous API Endpoint.
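Since manual moderation projects are reached through the Asynchronous API, a request typically carries the media to review plus a callback URL where the result is delivered. The sketch below assembles such a request; the endpoint path, `Token` auth scheme, and field names are assumptions for illustration, not Hive's documented API, so check your project dashboard for the real values.

```python
import json

# Assumed endpoint path -- replace with the one listed for your project.
ASYNC_ENDPOINT = "https://api.thehive.ai/api/v2/task/async"

def build_async_request(api_key: str, media_url: str, callback_url: str):
    """Assemble headers and body for an asynchronous moderation task.

    All header and field names here are hypothetical placeholders.
    """
    headers = {
        "Authorization": f"Token {api_key}",  # assumed auth scheme
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "url": media_url,              # media to be reviewed manually
        "callback_url": callback_url,  # where the result is POSTed back
    })
    return headers, body

headers, body = build_async_request(
    "MY_KEY", "https://example.com/img.jpg", "https://example.com/hook"
)
print(json.loads(body)["url"])
```

The request itself would then be sent with any HTTP client (e.g. `requests.post(ASYNC_ENDPOINT, headers=headers, data=body)`), and the classification arrives later at the callback URL rather than in the immediate response.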
Image / Video Manual Content Moderation
Manual Content Moderation can be deployed as an assist or addition to Hive's automated [Visual Content Moderation](https://docs.thehive.ai/docs/visual-content-moderation) solution. Manual Content Moderation is supported in the following visual moderation categories:
NSFW Head:
- nsfw - genitalia, sexual activity, nudity, buttocks, sex toys
- suggestive - shirtless men, underwear / swimwear, sexually suggestive poses without genitalia
- clean - none of the above, no sexual or suggestive content
Blood Head:
- very_bloody - gore, visible bleeding, self-cutting
- a_little_bloody - fresh cuts / scrapes, light bleeding
- clean - no blood; minor scabs, scars, acne, etc. are not considered blood
- other_blood - animated blood, fake blood, animal blood such as game dressing
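Each head above resolves to exactly one of its classes per piece of media. As a hypothetical illustration of consuming a result, the sketch below picks the selected class from a sample payload; the response structure (one score per class, grouped by head) is an assumption, and the real callback payload may differ.

```python
# Sample result shaped as {head: {class: score}} -- an assumed
# structure for illustration, not Hive's documented callback format.
sample_result = {
    "nsfw_head": {"nsfw": 0.0, "suggestive": 0.0, "clean": 1.0},
    "blood_head": {
        "very_bloody": 0.0,
        "a_little_bloody": 1.0,
        "clean": 0.0,
        "other_blood": 0.0,
    },
}

def top_class(head_scores: dict) -> str:
    """Return the class the human reviewer selected (highest score)."""
    return max(head_scores, key=head_scores.get)

labels = {head: top_class(scores) for head, scores in sample_result.items()}
print(labels)
```

For this sample, the media would be labeled `clean` on the NSFW head and `a_little_bloody` on the blood head.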
Custom Manual Moderation
Hive can also build custom manual moderation solutions based on customer needs. User verification and age verification, among others, are available upon request. Please contact [email protected] for more information about these custom solutions.