This page gives a more detailed overview of Hive's visual moderation classes related to violence, gore, and weapons. If you need more details on these classes after reading our main Visual Moderation page, look here. We'll enumerate, as clearly as possible, which types of subject matter are covered by each class in our models.
Because all platforms have different moderation requirements and risk sensitivities, we recommend that you consult these descriptions carefully as you decide which classes to build into your moderation logic. At the end of the day, it's up to you to decide which classes are important to monitor based on your content policies.
To determine which class(es) cover specific types of visual content, it may be helpful to search this page (Ctrl/Cmd + F) with terms for that subject matter (e.g., gun, injury) rather than looking for it in specific class descriptions.
Before looking at subject matter breakdowns for each class, it may be helpful to understand the following:
Hive's visual classifier is multi-headed. Each model head defines a group of categorizations we call classes. Each model head includes at least one positive class (e.g., yes_hot_dog) and a negative class (e.g., no_hot_dog). Scores returned for each class correlate with the model's certainty that the image meets our ground truth definition for the category. This page attempts to explain these ground truth definitions as clearly as possible.
The model makes classifications for each model head independently. In other words, if an image scores highly in multiple classes, the image meets our definitions for each class. Confidence scores from each model head are generated separately and are not correlated in and of themselves. It's easiest to think of this as asking multiple, narrower models (that may or may not overlap in scope) to each make a prediction on an image.
As a corollary, a high confidence score in a negative class (e.g., no_gun, no_knife) does not mean the image is clean in general. A negative class is simply the logical opposite of the positive classification: the subject matter captured by the positive classes in that model head is not present. For example, an image that scores 0.99 in no_gun can still score highly in very_bloody, human_corpse, or any other class trained to flag other subject matter. For this reason, each negative class description below includes a non-exhaustive list of subject matter that is not captured by the positive classes in that model head; this can help clarify what is and is not flagged by that head (e.g., borderline content, content captured by other classes).
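The independence of model heads can be illustrated with a short sketch. The score values, threshold, and flat dictionary shape below are illustrative assumptions for this example, not Hive's actual API response schema or recommended cutoffs:

```python
# Hypothetical per-class confidence scores for a single image.
# (Illustrative values only -- not a real API response.)
scores = {
    "gun_in_hand": 0.01,
    "gun_not_in_hand": 0.01,
    "animated_gun": 0.01,
    "no_gun": 0.97,         # no firearm present...
    "very_bloody": 0.95,    # ...but another head still flags the image
    "a_little_bloody": 0.03,
    "other_blood": 0.01,
    "no_blood": 0.01,
}

# Example cutoff; in practice you would tune a threshold per class
# to match your platform's risk tolerance.
THRESHOLD = 0.90

# Positive classes this hypothetical policy acts on. Each head is
# checked independently: a "clean" image must clear every head.
positive_classes = ["gun_in_hand", "gun_not_in_hand",
                    "animated_gun", "very_bloody"]

flagged = [c for c in positive_classes if scores.get(c, 0.0) >= THRESHOLD]
print(flagged)  # -> ['very_bloody']: flagged for gore despite 0.97 in no_gun
```

Note that the 0.97 score in no_gun says nothing about the blood head; each negative class only negates the positive classes of its own head.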
For some classes, the model classifies animations, drawings, diagrams, paintings, or other artwork in the same way as photographs or photorealistic images. We will call out which model heads this does or does not apply to in the detailed description.
This model head can be used to flag images depicting handguns, rifles, machine guns, etc. These classes are specific to firearms and do not capture other weapons or military activity more generally. Other types of guns that are not visually distinguishable from actual firearms – airsoft or paintball guns, model guns, etc. – will also be flagged.
Art, animations, drawings, and other representational depictions of guns (even photorealistic ones) are classified as animated_gun. This class can be used to distinguish real weapons from those depicted in video games and animated content.
gun_in_hand: the image is a photograph of a person holding or handling a gun. Captures:
- A person holding a firearm of any kind (pistol, rifle, etc.)
- A person handling or touching a firearm, even if holstered or mounted
gun_not_in_hand: the image is a photograph showing a gun that is not being held or handled. Captures:
- Firearms displayed on a stand or mount
- Firearms strapped to or slung over a person's back but otherwise not being touched or handled
- Firearms in a holster that is not being touched or handled
- Guns mounted to a vehicle or turret
animated_gun: the image is a drawing, animation, or diagram depicting a gun. Captures:
- Guns depicted in video games
- Guns depicted in cartoons and animations
- Guns depicted in diagrams or schematics
- Guns depicted in drawings and photorealistic art
no_gun: the image does not depict a gun. For clarity, the following subject matter would not be captured by other classes in this model head:
- Photographs or animations, drawings, etc. where no gun is visible
- Large weapons mounted on ships, tanks, and planes
- Rocket-propelled grenades and other handheld rocket launchers
- Cannons and artillery
- Explosives such as grenades, dynamite, and C4
- Non-realistic toy guns such as water guns, nerf guns, obvious cosplay guns, props, etc.
This model head can be used to flag images of blades such as knives, box cutters, machetes, swords, and other bladed weapons. This does not include scissors, saws, and other bladed tools that are not easily usable or intended to be used as weapons. This model head classifies knives used in cooking and other culinary contexts as a separate class that can be used to distinguish these contexts from weapons.
For this model head, animations, drawings, and other non-photographic depictions of knives and blades are not flagged. These images are classified as no_knife.
knife_in_hand: the image is a photograph showing a knife or blade being held or handled by a person (outside of culinary/agricultural settings). This applies to:
- Common knives (including plastic knives), machetes, box cutters, daggers, throwing knives or shuriken, exposed razor blades, swords, and bayonet blades
- Sheathed knives, swords, or other blades
- Bayonets attached to a gun that is being held or handled
knife_not_in_hand: the image is a photograph showing any of the above types of blades, but not being held or handled by a person. This applies to:
- Knives stored in knife blocks or drawers
- Sheathed knives and blades that are displayed or otherwise not being handled
- Unsheathed knives and other blades/bladed weapons being displayed
- Knives and other blades lodged into objects
culinary_knife_in_hand: the image shows a person holding or using a knife or blade to prepare or harvest food. This class is used to distinguish these uses from the other positive classes in this model head. Captures:
- Knives and blades being handled or used when processing or preparing food
- A person holding a knife in a kitchen with ingredients or cutting boards visible
- A person using a knife as a utensil when eating
- A butcher processing meat with knives and other bladed tools
- Blades being used to harvest plants, grains, vegetables in agricultural settings
no_knife: the image does not show a knife or blade or depicts an animated or illustrated knife. To be clear, the following subject matter is not captured by the other classes in the knife model head:
- Axes and hatchets, even if crafted as weapons
- Batons and nightsticks
- Saws, including chainsaws
- Shaving razors and razor heads
- Animated or illustrated knives and blade weapons
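One way to use the culinary class is to carve out an allowance for cooking and agricultural contexts while still flagging weapon-context knives. The function name, threshold, and action labels below are assumptions for this sketch, not part of Hive's API:

```python
def knife_action(scores, threshold=0.90):
    """Return a moderation action for the knife model head.

    `scores` maps class names to confidences; the threshold and
    action labels are illustrative, not prescribed values.
    """
    if scores.get("knife_in_hand", 0.0) >= threshold:
        return "review"  # blade held or handled outside culinary settings
    if scores.get("knife_not_in_hand", 0.0) >= threshold:
        return "review"  # displayed, stored, or lodged blades
    # culinary_knife_in_hand and no_knife are treated as benign here
    return "allow"

print(knife_action({"culinary_knife_in_hand": 0.98, "no_knife": 0.01}))  # -> allow
print(knife_action({"knife_in_hand": 0.95}))                             # -> review
```

A platform with stricter policies could just as easily route culinary_knife_in_hand to "review" as well; the point of the separate class is that the choice is yours.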
This model head can be used to flag images showing blood, wounds, active bleeding, and gore. Generally, these classes do not capture images of injuries where blood is not present. Blood depicted in art, animations, drawings, etc. is classified as other_blood.
very_bloody: the image is a photograph showing substantial amounts of blood, major wounds that are actively bleeding, or gore. This includes:
- Gunshot wounds
- Stab wounds
- Deep cuts
- Other major injuries resulting in visible bleeding: loss of limbs, fingers, etc., animal attacks/bites, and the like
- Profuse bloody noses
a_little_bloody: the image is a photograph showing minor amounts of blood or evidence of a major injury that has been treated or healed. This includes:
- Minor cuts, scrapes, and scratches
- Small amounts of blood on surfaces
- Stitches and scar tissue
other_blood: the image shows animal blood, blood in a medical or laboratory setting, or blood depicted in art, animations, or illustrations. This includes:
- Blood in test tubes, transfusion bags, dialysis machines, etc.
- Animal blood and injuries, including slaughter and butchering
- Depictions of blood or graphic injury in animations, art, illustrations, etc.
- Imitation blood used such that it is evidently fake (e.g., Halloween costumes or decorations)
- Liquids that could be blood, but the image lacks contextual evidence that this is from human injury
no_blood: the image does not show blood or imitation blood. For clarity, the following subject matter is not captured by the other classes in this model head:
- Bruises and contusions
- Accidents or injuries with no visible blood
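Because the three positive classes grade roughly from gore down to contextual blood, a simple severity mapping is a natural way to consume this head. The tier names and threshold below are assumptions for this sketch:

```python
def blood_severity(scores, threshold=0.90):
    """Map blood-head class scores to an illustrative severity tier."""
    if scores.get("very_bloody", 0.0) >= threshold:
        return "high"        # gore, major wounds, active bleeding
    if scores.get("a_little_bloody", 0.0) >= threshold:
        return "low"         # minor blood, treated/healed injuries
    if scores.get("other_blood", 0.0) >= threshold:
        return "contextual"  # medical, animal, animated, or imitation blood
    return "none"            # includes bruises and bloodless injuries

print(blood_severity({"very_bloody": 0.96}))   # -> high
print(blood_severity({"other_blood": 0.94}))   # -> contextual
```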
This model head can be used to flag images of nooses, hangings, and hanging bodies. Art, animations, and illustrations follow the same definitions for these classes as photographic images.
hanging: the image depicts corpses hanging from a rope or a person being hanged
noose: the image depicts a rope tied as a noose hanging from gallows, trees, or other objects
no_hanging_no_noose: the image does not depict a noose or hanging. To be clear, the following subject matter is not captured by the other classes in this model head:
- Corpses that are not hanging
- Ropes hanging that are not tied as a noose
- Nooses not hanging from an object (e.g., on a surface such as a table or floor)
- Bondage and hanging in BDSM situations not intended to cause death
- Knots, lassos, and regular ropes
This model head can be used to flag images of human corpses or images of bodies with enough contextual evidence to assume the person is dead. Even graphic injuries and accidents are not flagged if the victim is visibly alive (use the Blood model head instead).
The human_corpse class flags photographic images only. Art, animations, and illustrations of dead bodies are classified as animated_corpse.
human_corpse: the image is a photograph of a dead human body. This includes:
- Motionless bodies with evidence of potentially fatal injury
- Bodies that are clearly identifiable as dead based on color, lividity, decomposition, etc. even if no injuries are visible
- Bodies that are clearly identifiable as dead based on contextual factors (e.g., body in a casket, body in a morgue) even if no injuries are visible
- Autopsy photos
- Any of the above depicted by actors and/or makeup and effects in a movie or TV show
animated_corpse: the image is an animation or illustration of a dead human body. This includes:
- Deaths and corpses depicted in video games
- Bodies and death scenes depicted in cartoons, anime, etc.
no_corpse: the image does not explicitly show a dead human body. To be clear, the following subject matter would not be flagged by the other classes in the corpse model head:
- Graphic injuries without evidence that the victim is dead (use very_bloody to flag this instead)
- Body bags or caskets where no corpse is visible
- Urns and funerary receptacles
- Fully decomposed bones and skeletons
- Mummified or embalmed bodies displayed in museums or mausoleums
- Staged deaths such as in a theater production or renaissance fair (note: staged deaths with convincing fake blood will likely be flagged)
- A person sleeping (e.g., in a sleeping bag)
- A homeless person lying on a bench or the ground
- A person lying on the ground while under arrest
This model head flags images of people and animals that appear severely underweight, malnourished, or sickly. It is not sensitive to slim or skinny but otherwise healthy-looking people or animals. For these classes, art, illustrations, and animations follow the same definitions as photographic images.
yes_emaciated_body: the image depicts a person or animal that is so underweight they appear ill or severely malnourished. In general, this includes:
- A person with ribs, hip bones, arm bones, and/or facial bone structure clearly visible through the skin
- Starving animals with clearly visible rib cages or pelvic bones
no_emaciated_body: the image does not depict the above. Generally, the following will not be flagged by yes_emaciated_body:
- Skinny/underweight individuals who appear healthy
- Muscular individuals with low body fat
- Skeletons and corpses
- Animals that are naturally slim with short fur, such as greyhounds
This model head flags images of intentional self-inflicted injuries and other indicators of self-harm. Injuries sustained from accidents and other causes are ignored. Animations and illustrations follow the same definitions as photographic images for these classes.
yes_self_harm: the image depicts acts or evidence of self-harm or self-inflicted injuries. This includes:
- Images of someone cutting or burning themselves
- Self-inflicted cuts or burn scars (e.g., as evidenced by location, number, dimensions, direction, hesitation, etc.)
- A person pointing a gun to their own head or chest
- A person holding knives, razor blades, fire, or hot objects to their body
- Religious self-harm such as self-flagellation or self-immolation
no_self_harm: the image does not depict self-harm or self-inflicted injuries. To be clear, the following subject matter is not flagged by yes_self_harm:
- Corpses and hanging
- Graphic injuries with no evidence that they are self-inflicted
- Smoking and drug use
- BDSM and sexual torture
- People handling knives, razor blades, etc. without evidence of intent to self-harm
- Surgical wounds or scars
- Scar tissue or burn marks without additional evidence of self-harm