Hive Vision Language Model (VLM)
How to integrate with Hive's latest Vision Language Model.
About
Model key: hive/vision-language-model
The Hive Vision Language Model is trained on Hive's proprietary data, delivering leading performance with the speed and flexibility required for production vision tasks.
- Best-in-class moderation: flags sexual content, hate, drugs, and other moderation classes, even in nuanced edge cases across text and images.
- Deep multimodal comprehension: detects fine-grained objects, reads text, and understands spatial and semantic relationships to provide a rich understanding of input images.
- All-in-one task engine: generate captions, answer visual questions, run OCR, or analyze image characteristics, all through a single endpoint.
How to Get Started
Authentication is required to use these models. You'll need an API Key, which can be created in the left sidebar.
Follow these steps to generate your key:
- Click "API Keys" in the sidebar.
- Click "+" to create a new key scoped to your organization. The same key can be used with any "Playground available" model.
⚠️ Important: Keep your API Key secure. Do not expose it in client-side environments like browsers or mobile apps.

Click '+' to create a new API Key
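Once you have a key, avoid hard-coding it in your source. Below is a minimal sketch in Python, assuming the key is exported as an environment variable named HIVE_API_KEY (the variable name is illustrative, not required by Hive):

import os
from openai import OpenAI

# Read the secret from the environment so it never lands in source control.
# HIVE_API_KEY is an illustrative name; use whatever your deployment provides.
api_key = os.environ["HIVE_API_KEY"]

# Same base URL used throughout this page.
client = OpenAI(base_url="https://api.thehive.ai/api/v3/", api_key=api_key)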
Querying Hive Vision Language Model
Hive offers an OpenAI-compatible REST API for querying LLMs and multimodal LLMs. There are two ways to call it:
- Using the OpenAI SDK
- Directly invoking the REST API
When you call this API, the model successively generates new tokens until either the maximum number of output tokens is reached or the model's end-of-sequence (EOS) token is generated.
Note: Some fields, such as top_k, are supported via the REST API but are not supported by the OpenAI SDK.
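As one hedged illustration of a REST-only field, the sketch below sends top_k directly to the chat completions endpoint using Python's third-party requests library (an assumption; any HTTP client works). The endpoint, headers, and message shape mirror the cURL examples later on this page, and the prompt text is just a placeholder.

import requests

payload = {
    "model": "hive/vision-language-model",
    "max_tokens": 50,
    "top_k": 1,  # supported on the REST API, not exposed by the OpenAI SDK
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the main subject of this image."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://d24edro6ichpbm.thehive.ai/example-images/vlm-example-image.jpeg"
                    },
                },
            ],
        }
    ],
}

response = requests.post(
    "https://api.thehive.ai/api/v3/chat/completions",
    headers={"authorization": "Bearer <SECRET_KEY>", "Content-Type": "application/json"},
    json=payload,
)
print(response.json()["choices"][0]["message"]["content"])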
Performance Tips
Minimizing Latency
Hive VLM tokenizes each image into square "patches."
Images small enough to fit into a single patch (square images ≤ 633x633) cost only 256 input tokens.
Larger images are split into up to 6 patches (max 1,536 tokens), which increases latency and price.
For the fastest response, resize or center-crop your images to ≤ 633x633 before calling the API.
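For example, here is a minimal preprocessing sketch with Pillow (an assumption; any image library works) that keeps an image within a single 633x633 patch before it is uploaded:

from PIL import Image, ImageOps

# Resize and center-crop so the result fits one 633x633 patch.
img = Image.open("input.jpg")            # illustrative filename
if max(img.size) > 633:
    img = ImageOps.fit(img, (633, 633))  # scales, then center-crops to a square
img.save("input_small.jpg", quality=90)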
Maximizing OCR Accuracy
For OCR tasks, it is recommended to keep the patch count higher.
The more patches an image is split into (up to 6), the more accurately the VLM performs on text-on-image tasks.
For help achieving your specific use case, feel free to reach out to us!
Using the OpenAI SDK
from enum import Enum
from pydantic import BaseModel
from openai import OpenAI

# ── Client setup ────────────────────────────────────────────────────────────
client = OpenAI(
    base_url="https://api.thehive.ai/api/v3/",  # Hive's endpoint
    api_key="<YOUR-SECRET-KEY>",                # ← replace with your key
)

# ── 1 · Define enum + response schema ───────────────────────────────────────
class SubjectLabel(str, Enum):
    person = "person"
    animal = "animal"
    vehicle = "vehicle"
    food = "food"
    scenery = "scenery"

class ClassificationOutput(BaseModel):
    subject: SubjectLabel

# ── 2 · Call the VLM and parse the JSON directly into our schema ────────────
completion = client.beta.chat.completions.parse(
    model="hive/vision-language-model",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "Classify the **main subject** of this image as one of: "
                        "person, animal, vehicle, food, or scenery. "
                        "Return JSON only."
                    ),
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": (
                            "https://d24edro6ichpbm.thehive.ai/"
                            "example-images/vlm-example-image.jpeg"
                        )
                    },
                },
            ],
        }
    ],
    response_format=ClassificationOutput,  # schema enforced by Hive
    max_tokens=50,
)

# ── 3 · Typed result ready for use ──────────────────────────────────────────
result: ClassificationOutput = completion.choices[0].message.parsed
print(result.subject)  # e.g. SubjectLabel.scenery
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "https://api.thehive.ai/api/v3/", // Hive endpoint
  apiKey: "<YOUR-SECRET-KEY>",               // ← replace with your key
});

/* ── 1 · JSON-Schema that tells Hive what we expect back ──────────────── */
const classificationSchema = {
  type: "object",
  properties: {
    subject: {
      type: "string",
      enum: ["person", "animal", "vehicle", "food", "scenery"],
    },
  },
  required: ["subject"],
  additionalProperties: false,
};

/* ── 2 · Call the VLM & let Hive return JSON that fits the schema ─────── */
const completion = await openai.chat.completions.create({
  model: "hive/vision-language-model",
  messages: [
    {
      role: "user",
      content: [
        {
          type: "text",
          text:
            "Classify the **main subject** of this image as one of: " +
            "person, animal, vehicle, food, or scenery. " +
            "Return JSON only.",
        },
        {
          type: "image_url",
          image_url: {
            url:
              "https://d24edro6ichpbm.thehive.ai/" +
              "example-images/vlm-example-image.jpeg",
          },
        },
      ],
    },
  ],
  response_format: {
    type: "json_schema",
    json_schema: { schema: classificationSchema, strict: true }, // same shape as the REST examples below
  },
  max_tokens: 50,
});

/* ── 3 · Use the structured result ────────────────────────────────────── */
const { subject } = JSON.parse(completion.choices[0].message.content);
// -> e.g. "scenery"
console.log("Detected subject:", subject);
Directly invoking the REST API
The Hive Vision Language Model can be called via the REST API, and media can be sent as:
- an image URL
- a Base64-encoded data URL
cURL examples:
curl --location --request POST 'https://api.thehive.ai/api/v3/chat/completions' \
--header 'authorization: Bearer <SECRET_KEY>' \
--header 'Content-Type: application/json' \
--data-binary $'{
  "model": "hive/vision-language-model",
  "max_tokens": 50,
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "schema": {
        "type": "object",
        "properties": {
          "subject": {
            "type": "string",
            "enum": ["person", "animal", "vehicle", "food", "scenery"]
          }
        },
        "required": ["subject"],
        "additionalProperties": false
      },
      "strict": true
    }
  },
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Classify the main subject of this image as one of: person, animal, vehicle, food, or scenery. Return JSON only."
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://d24edro6ichpbm.thehive.ai/example-images/vlm-example-image.jpeg"
          }
        }
      ]
    }
  ]
}'
curl --location --request POST 'https://api.thehive.ai/api/v3/chat/completions' \
--header 'authorization: Bearer <SECRET_KEY>' \
--header 'Content-Type: application/json' \
--data-binary $'{
  "model": "hive/vision-language-model",
  "max_tokens": 50,
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "schema": {
        "type": "object",
        "properties": {
          "subject": {
            "type": "string",
            "enum": ["person", "animal", "vehicle", "food", "scenery"]
          }
        },
        "required": ["subject"],
        "additionalProperties": false
      },
      "strict": true
    }
  },
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Classify the main subject of this image as one of: person, animal, vehicle, food, or scenery. Return JSON only."
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "data:image/jpeg;base64,<BASE64_DATA>"
          }
        }
      ]
    }
  ]
}'
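For the Base64 variant, something like the following produces the value that replaces <BASE64_DATA> above (a sketch; the local filename is illustrative):

import base64

with open("photo.jpg", "rb") as f:                      # illustrative filename
    encoded = base64.b64encode(f.read()).decode("ascii")

data_url = f"data:image/jpeg;base64,{encoded}"          # goes in image_url.url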
After making a request, you'll receive a JSON response with the model's output text. Here's a sample output:
{
  "id": "1234567890-abcdefg",
  "object": "chat.completion",
  "model": "hive/vision-language-model",
  "created": 1749840139221,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "{ \"subject\": \"scenery\" }"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 1818,
    "completion_tokens": 11,
    "total_tokens": 1829
  }
}
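Because the structured answer arrives as a JSON string inside message.content, it needs a second parse. A small sketch, assuming body holds the decoded response shown above (for example, response.json() from a requests call):

import json

content = body["choices"][0]["message"]["content"]  # a JSON string, per the sample above
subject = json.loads(content)["subject"]
print(subject)  # "scenery"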
Parameters
Below are definitions of the relevant input and output fields. Some fields have default values that are applied if you do not set them yourself.
Input
Field | Type | Definition |
---|---|---|
messages | array of objects | Required. A structured array containing the conversation history. Each object includes a role and content. |
model | string | Required. The name of the model to call. |
role | string | The role of the participant in the conversation. Must be system, user, or assistant. |
content | string OR array of objects | Your content string. If array, each object must have a type and corresponding data, as shown in the examples above. |
text | string | Referenced inside content arrays, containing the text message to be sent. |
image_url | object | Contains the image URL or Base64-encoded string, inside the subfield url. |
response_format | object | response_format constrains the model response to follow the JSON Schema you define. Note: this setting may increase latency. |
max_tokens | int | Limits the number of tokens in the output. Range: [1 to 2048] Default: 512 |
temperature | float | Controls randomness in the output. Lower values make output more deterministic. A value of 0 means that VLM outputs are deterministic. Range: [0 to 1] Default: 0 |
top_p | float | Nucleus sampling parameter to limit the probability space of token selection. Range: [0 to 1] Default: 0.1 |
top_k | int | Limits token sampling to the top K most probable tokens. Default: 1 |
Output
Field | Type | Definition |
---|---|---|
id | string | The Task ID of the submitted task. |
model | string | The name of the model used. |
created | int | The timestamp (in epoch milliseconds) when the task was created. |
choices | array of objects | Contains the model's responses. Each object includes the index, message, and finish_reason. |
usage | object | Contains input/output token usage information for the request and response. |
Response Formats
Many moderation and general vision use cases need JSON-readable answers, not free-form text. The response_format field lets you tell the VLM exactly what JSON shape to return by embedding a JSON Schema in your request. The model then constrains its output against that schema, so you can parse the response with confidence.
"response_format": {
"type": "json_schema",
"json_schema": {
"schema": {
"type": "object",
"properties": {
"subject": {
"type": "string",
"enum": ["person", "animal", "vehicle", "food", "scenery"]
}
},
"required": ["subject"],
"additionalProperties": false
},
"strict": true
}
},
Typical response:
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "{\"subject\":\"scenery\"}"
      }
    }
  ],
  …
}
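If you prefer not to hand-write the schema, here is a sketch of generating it from the same Pydantic model used in the Python SDK example (assumes Pydantic v2; inspect the output before sending it, since generated schemas can include $defs/$ref indirection you may want to inline):

from enum import Enum
from pydantic import BaseModel

class SubjectLabel(str, Enum):
    person = "person"
    animal = "animal"
    vehicle = "vehicle"
    food = "food"
    scenery = "scenery"

class ClassificationOutput(BaseModel):
    subject: SubjectLabel

print(ClassificationOutput.model_json_schema())  # JSON Schema for the model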
Contact us if you have any questions about writing JSON response formats.
Common Errors
The VLM has a default starting rate limit of 1 request per second. You may see the error below if you submit requests faster than the rate limit.
To request a higher rate limit, please contact us!
{
  "status_code": 429,
  "message": "Too Many Requests"
}
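A simple retry-with-backoff sketch for the 429 case (the backoff schedule is illustrative, not a Hive recommendation):

import time
import requests

def post_with_retry(url, headers, payload, attempts=5):
    """POST, retrying with exponential backoff whenever the API returns 429."""
    for attempt in range(attempts):
        resp = requests.post(url, headers=headers, json=payload)
        if resp.status_code != 429:
            return resp
        time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... before retrying
    return resp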
A positive Organization Credit balance is required to continue using Hive Models. Once you run out of credits, requests will fail with the following error.
{
  "status_code": 405,
  "message": "Your Organization is currently paused. Please check your account balance, our terms and conditions, or contact [email protected] for more information."
}
Pricing
Hive VLM is priced per input and output token. Because latency also scales with tokens, smaller images → fewer patches → faster responses.
For image tokenization, the following logic is used:
- Start with the image's aspect ratio and area.
  - Example: a 1,024 × 1,024 picture has ratio = 1.0 and area = 1M px.
- Consider every logical way to slice the picture into tiles.
  - A "tile" is a square crop that the model turns into a fixed 256-token chunk.
  - We allow from 1 tile up to 6 tiles in total, so the grids considered are 1 × 1, 1 × 2, 2 × 1, 2 × 2, 1 × 3, 3 × 1, and so on.
- Rank those grids by two rules (in order):
  - Match the image's original aspect ratio. For example, a 3 × 2 grid (ratio = 1.5) fits an 800 × 600 image (ratio ≈ 1.33) better than a 2 × 2 grid (ratio = 1.0).
  - If two grids tie on ratio, pick the one that uses more tiles.
- Count how many tiles the chosen grid has, and multiply by 256 tokens per tile.
- Finally, add 260 more tokens to images with more than one patch.
Image resolution | Chosen grid | Tiles | Tokens |
---|---|---|---|
633x633 (fastest latency) | 1x1 | 1 | Only 260 tokens |
1024x1024 | 2x2 | 4 | (4x256) + 260 = 1284 tokens |
800x600 | 3x2 | 6 | (6x256) + 260 = 1796 tokens |