Deepfake Detection
Overview
Hive’s Deepfake Detection API identifies whether or not an image or video query is a deepfake. This model uses the same underlying technology as our Demographic API to locate faces within queries. It then performs a classification step on each face to determine whether or not those representations are deepfakes. The API response provides a confidence score for each classification.
Deepfakes, or videos in which deep learning is used to map one person’s appearance onto another’s, first gained media attention in 2017. Since then, they have grown in popularity — which in turn has inspired new ways of making them that are both more convincing and more accessible to those without experience in machine learning. This kind of realistic synthetic video content has enabled the creation of fake digital identities, political misinformation, and, most commonly, nonconsensual pornography. Identifying and removing them across online platforms is crucial to limit not only the significant harm they can cause to those who appear in them but also the misinformation, fraud, and digital sexual assault that they enable.
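As a quick illustration of the request flow described above, the snippet below is a minimal sketch of submitting an image query and reading back the parsed response. The endpoint URL, authentication header, and form field names are placeholders rather than documented values; see the API reference page for the exact request format.

```python
# Minimal sketch of submitting an image for deepfake detection.
# The endpoint, auth scheme, and field names are placeholders, not Hive's
# documented values; consult the API reference for the real request format.
import requests

API_URL = "https://api.example.com/deepfake_detection"  # placeholder endpoint
API_KEY = "your-api-key"                                 # placeholder credential

def submit_image(path: str) -> dict:
    """POST an image file and return the parsed JSON response."""
    with open(path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Token {API_KEY}"},  # placeholder auth header
            files={"media": f},                              # placeholder field name
            timeout=60,
        )
    response.raise_for_status()
    return response.json()

result = submit_image("query.jpg")
```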

Our Model
Deepfake Detection utilizes a visual detection model to locate and classify faces in any given query. Visual detection models localize an object of interest in an image by returning a box that bounds that object, as well as the type, or class, of that object. For each detection, a detector outputs a classification and confidence score that are independent of any other detections.
After an image or video is submitted to our Deepfake Detection API, Hive's backend splits any video content into frames and runs the model on each frame (an image input is treated as a video with a single frame). Any faces found by this visual detection model are then passed through an additional classification step to determine whether or not they are deepfakes.
A separate classification is made for each detected face. Because each face is detected and classified independently, this approach can distinguish real people from synthetic ones within the same query and indicate which part of a given input has been manipulated.
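Since every face carries its own classification and confidence score, downstream logic can act on each face independently. The sketch below shows one way that might look; the field names ("classes", "class", "score") and the class label "yes_deepfake" are illustrative assumptions, not the exact response schema.

```python
# Sketch of consuming per-face classifications downstream.
# Field names and class labels are assumptions for illustration only.
def flag_deepfake_faces(detections: list[dict], threshold: float = 0.9) -> list[dict]:
    """Return the detected faces whose deepfake confidence exceeds the threshold."""
    flagged = []
    for face in detections:
        # Each detection carries its own class scores, independent of other faces.
        scores = {c["class"]: c["score"] for c in face["classes"]}
        if scores.get("yes_deepfake", 0.0) >= threshold:
            flagged.append(face)
    return flagged
```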
Response
The output object in the Deepfake Detection API response lists each detected face, including:
The geometric description of the detected bounding box.
The predicted class for the detection.
The confidence score for the detection.
To see a full example of an API response object for this model, you can visit the API reference page.
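For orientation, the snippet below sketches the three per-face fields listed above as a Python dict. The key names and class label are illustrative assumptions; the authoritative schema is on the API reference page.

```python
# Illustrative (not authoritative) sketch of a single face detection.
example_detection = {
    "bounding_poly": {            # geometric description of the bounding box (assumed key name)
        "vertices": [
            {"x": 120, "y": 45},
            {"x": 310, "y": 45},
            {"x": 310, "y": 260},
            {"x": 120, "y": 260},
        ]
    },
    "class": "yes_deepfake",      # predicted class for the detection (assumed label)
    "score": 0.97,                # confidence score for the detection
}
```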
Supported Input Types
Image Formats:
gif
jpg
png
webp
Video Formats:
mp4
webm
avi
mkv
wmv
mov
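If you want to screen files before submission, a simple extension check against the formats listed above might look like the sketch below. This is only a convenience filter based on the lists in this section; the API itself validates the actual content.

```python
# Convenience check against the supported formats listed above,
# based on file extension only.
from pathlib import Path

IMAGE_FORMATS = {"gif", "jpg", "png", "webp"}
VIDEO_FORMATS = {"mp4", "webm", "avi", "mkv", "wmv", "mov"}

def is_supported(path: str) -> bool:
    """Return True if the file extension matches a supported image or video format."""
    ext = Path(path).suffix.lstrip(".").lower()
    return ext in IMAGE_FORMATS or ext in VIDEO_FORMATS

print(is_supported("clip.mov"))   # True
print(is_supported("scan.tiff"))  # False
```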