API Reference

Base URL: https://api.ovexa.ai/v1

All endpoints (except health) require authentication via the Authorization header:

Authorization: Bearer vpx_live_YOUR_API_KEY

Chat Completions

POST /v1/chat/completions

The primary endpoint. Fully compatible with the OpenAI Chat Completions API format.

Request Headers:

| Header | Required | Description |
|---|---|---|
| `Authorization` | Yes | `Bearer vpx_live_...` |
| `Content-Type` | Yes | `application/json` |
| `X-Show-Raw-Prompt` | No | Set to `true` to include the anonymized prompt in the response |

Request Body:

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, how are you?"}
  ],
  "temperature": 0.7,
  "max_tokens": 1000,
  "top_p": 1.0,
  "stream": false,
  "stop": null
}
```
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `model` | string | Yes | -- | Model identifier (e.g., `gpt-4o`, `anthropic/claude-4.6-sonnet`) |
| `messages` | array | Yes | -- | Array of message objects with `role` and `content` |
| `temperature` | float | No | 1.0 | Sampling temperature (0.0 to 2.0) |
| `max_tokens` | integer | No | Model default | Maximum tokens in the response |
| `top_p` | float | No | 1.0 | Nucleus sampling threshold |
| `stream` | boolean | No | false | Enable Server-Sent Events streaming |
| `stop` | string/array | No | null | Stop sequences |
| `frequency_penalty` | float | No | 0.0 | Frequency penalty (-2.0 to 2.0) |
| `presence_penalty` | float | No | 0.0 | Presence penalty (-2.0 to 2.0) |
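
As a minimal sketch of assembling a request to this endpoint (the helper name `build_chat_request` is ours; the header names and body fields come from the tables above):

```python
import json

OVEXA_CHAT_URL = "https://api.ovexa.ai/v1/chat/completions"

def build_chat_request(api_key, model, messages, **options):
    """Assemble headers and a JSON body for POST /v1/chat/completions."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {"model": model, "messages": messages, **options}
    return headers, json.dumps(body)

headers, payload = build_chat_request(
    "vpx_live_YOUR_API_KEY",
    "gpt-4o",
    [{"role": "user", "content": "Hello, how are you?"}],
    temperature=0.7,
    max_tokens=1000,
)
# Send with any HTTP client, e.g.:
# requests.post(OVEXA_CHAT_URL, headers=headers, data=payload)
```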

Response (200 OK):

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm doing well. How can I help you?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 15,
    "total_tokens": 40
  }
}
```

With `X-Show-Raw-Prompt: true`:

When the request carries the `X-Show-Raw-Prompt: true` header, the response body includes an additional `anonymization` field describing which PII was detected and how the prompt was anonymized before it reached the AI model.

Response (200 OK, with anonymization):

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The agreement concerns Jan Kowalski, PESEL 85031501234..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 150,
    "completion_tokens": 200,
    "total_tokens": 350
  },
  "anonymization": {
    "entities_found": 3,
    "entity_types": ["PERSON", "PESEL", "EMAIL_ADDRESS"],
    "anonymized_prompt": "Summarize the agreement for <PERSON_1>, PESEL <PESEL_1>, email <EMAIL_ADDRESS_1>",
    "raw_llm_response": "The agreement concerns <PERSON_1>, PESEL <PESEL_1>..."
  }
}
```
| Field | Type | Description |
|---|---|---|
| `anonymization.entities_found` | integer | Total number of PII entities detected and anonymized |
| `anonymization.entity_types` | array | List of unique PII types found (e.g., `PERSON`, `PESEL`, `NIP`) |
| `anonymization.anonymized_prompt` | string | The last user message as sent to the AI (with PII tags) |
| `anonymization.raw_llm_response` | string | The AI's raw response before de-anonymization (with PII tags) |

Without the `X-Show-Raw-Prompt` header, the `anonymization` field is `null`.
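
A small sketch of consuming this field on the client side (the helper name `pii_summary` is ours; the field names match the response shape documented above):

```python
def pii_summary(response):
    """Summarize the anonymization metadata of a chat completion response.

    Returns a short human-readable line, or a note when the field is null
    (i.e. the X-Show-Raw-Prompt header was not sent).
    """
    anon = response.get("anonymization")
    if not anon:
        return "anonymization details not requested"
    types = ", ".join(anon["entity_types"])
    return f"{anon['entities_found']} PII entities anonymized ({types})"

# Example using the response shape shown above:
resp = {
    "id": "chatcmpl-abc123",
    "anonymization": {
        "entities_found": 3,
        "entity_types": ["PERSON", "PESEL", "EMAIL_ADDRESS"],
    },
}
print(pii_summary(resp))  # 3 PII entities anonymized (PERSON, PESEL, EMAIL_ADDRESS)
```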

PydanticAI Integration

If you use PydanticAI with the OpenAI provider, create a subclass to capture the `anonymization` field:

```python
# Import path assumes a recent PydanticAI version; adjust if yours differs.
from pydantic_ai.models.openai import OpenAIChatModel

class AnonymizationAwareModel(OpenAIChatModel):
    def _process_provider_details(self, response):
        details = super()._process_provider_details(response) or {}
        anon = getattr(response, 'anonymization', None)
        if anon:
            details['anonymization'] = anon
        return details or None

# Usage:
model = AnonymizationAwareModel(request.model, provider=provider)
result = await agent.run(...)
anon = result.response.provider_details.get('anonymization')
```

Legacy: X-Anonymization-Info response header

For backward compatibility, the same data is also returned as a base64-encoded JSON in the X-Anonymization-Info response header. New integrations should use the anonymization response body field instead.
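
For legacy integrations, decoding the header is a base64-then-JSON round trip. A self-contained sketch (the function name is ours; in practice the encoded value would come from `response.headers["X-Anonymization-Info"]`):

```python
import base64
import json

def decode_anonymization_header(header_value):
    """Decode the legacy base64-encoded JSON X-Anonymization-Info header."""
    return json.loads(base64.b64decode(header_value))

# Round-trip demo with a sample payload:
sample = {"entities_found": 3, "entity_types": ["PERSON", "PESEL"]}
encoded = base64.b64encode(json.dumps(sample).encode()).decode()
assert decode_anonymization_header(encoded) == sample
```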

Error Responses:

| Status | Error Type | Description |
|---|---|---|
| 400 | `bad_request` | Invalid request body, missing required fields, or no provider key configured for the requested model's provider |
| 401 | `authentication_error` | Invalid or missing API key |
| 403 | `forbidden` | Insufficient permissions |
| 429 | `rate_limit_exceeded` | Too many requests |
| 500 | `internal_error` | Server error |
| 502 | `provider_error` | Upstream AI provider returned an error |
| 504 | `provider_timeout` | Upstream AI provider did not respond in time |

Error response format:

```json
{
  "error": {
    "type": "authentication_error",
    "message": "Invalid API key provided.",
    "code": 401
  }
}
```
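
Client code can map this envelope onto an exception. A minimal sketch under the error format above (the exception class and helper names are ours):

```python
class OvexaAPIError(Exception):
    """Raised when the API returns an error envelope."""
    def __init__(self, error_type, message, code):
        super().__init__(f"{error_type} ({code}): {message}")
        self.error_type = error_type
        self.code = code

def raise_for_error(status_code, body):
    """Raise OvexaAPIError for any non-2xx response carrying an error object."""
    if 200 <= status_code < 300:
        return
    err = body.get("error", {})
    raise OvexaAPIError(
        err.get("type", "unknown"),
        err.get("message", "Unknown error"),
        err.get("code", status_code),
    )
```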

Health

GET /v1/health

Check service status. No authentication required.

Response (200 OK):

```json
{
  "status": "ok",
  "version": "1.0.0"
}
```

API Keys

POST /v1/keys

Create a new Ovexa API key.

Request Body:

```json
{
  "name": "Production Backend",
  "permissions": ["chat:write", "usage:read"]
}
```

| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Human-readable name for the key |
| `permissions` | array | No | List of permissions. Default: all permissions |

Available permissions: chat:write, usage:read, keys:manage, settings:manage.
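
The permission list can be validated client-side before calling the endpoint; a small sketch (the helper name is ours, the permission strings are those listed above):

```python
VALID_PERMISSIONS = {"chat:write", "usage:read", "keys:manage", "settings:manage"}

def make_key_request(name, permissions=None):
    """Build a POST /v1/keys body, rejecting unknown permission strings.

    Omitting `permissions` grants all permissions (the API default).
    """
    body = {"name": name}
    if permissions is not None:
        unknown = set(permissions) - VALID_PERMISSIONS
        if unknown:
            raise ValueError(f"unknown permissions: {sorted(unknown)}")
        body["permissions"] = list(permissions)
    return body

make_key_request("Production Backend", ["chat:write", "usage:read"])
```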

Response (201 Created):

```json
{
  "id": "key_abc123",
  "name": "Production Backend",
  "api_key": "vpx_live_xxxxxxxxxxxxxxxxxxxxxxxx",
  "permissions": ["chat:write", "usage:read"],
  "created_at": "2025-01-15T10:30:00Z"
}
```

> **Warning:** The full API key is only returned once, at creation time. Store it securely.

GET /v1/keys

List all API keys for your account.

Response (200 OK):

```json
{
  "keys": [
    {
      "id": "key_abc123",
      "name": "Production Backend",
      "key_prefix": "vpx_live_xxxx...xxxx",
      "permissions": ["chat:write", "usage:read"],
      "created_at": "2025-01-15T10:30:00Z",
      "last_used_at": "2025-01-20T14:22:00Z"
    }
  ]
}
```

DELETE /v1/keys/{key_id}

Revoke an API key. Takes effect immediately.

Response (200 OK):

```json
{
  "id": "key_abc123",
  "deleted": true
}
```

Provider Keys

POST /v1/provider-keys

Add a provider API key. The key is encrypted with AES-256 before storage.

Request Body:

```json
{
  "provider": "openai",
  "api_key": "sk-proj-xxxxxxxxxxxxxxxx",
  "name": "OpenAI Production"
}
```

| Field | Type | Required | Description |
|---|---|---|---|
| `provider` | string | Yes | One of: `openai`, `anthropic`, `google`, `mistral`, `groq`, `deepseek`, `cohere`, `xai`, `perplexity` |
| `api_key` | string | Yes | The provider's API key |
| `name` | string | No | Human-readable name |

Response (201 Created):

```json
{
  "id": "pk_abc123",
  "provider": "openai",
  "name": "OpenAI Production",
  "key_suffix": "...x7fQ",
  "created_at": "2025-01-15T10:30:00Z"
}
```

GET /v1/provider-keys

List all provider keys. Keys are masked -- only the last 4 characters are shown.

Response (200 OK):

```json
{
  "provider_keys": [
    {
      "id": "pk_abc123",
      "provider": "openai",
      "name": "OpenAI Production",
      "key_suffix": "...x7fQ",
      "created_at": "2025-01-15T10:30:00Z"
    }
  ]
}
```

DELETE /v1/provider-keys/{pk_id}

Remove a provider key.

Response (200 OK):

```json
{
  "id": "pk_abc123",
  "deleted": true
}
```

Usage

GET /v1/usage/summary

Get aggregated usage statistics for your account.

Query Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| `start_date` | string | No | Start date (YYYY-MM-DD). Default: 30 days ago |
| `end_date` | string | No | End date (YYYY-MM-DD). Default: today |

Response (200 OK):

```json
{
  "period": {
    "start": "2025-01-01",
    "end": "2025-01-31"
  },
  "total_requests": 15234,
  "total_tokens": 4521890,
  "prompt_tokens": 2341200,
  "completion_tokens": 2180690,
  "estimated_cost_usd": 42.15,
  "by_model": {
    "gpt-4o": {
      "requests": 8500,
      "tokens": 2500000
    },
    "anthropic/claude-4.6-sonnet": {
      "requests": 4200,
      "tokens": 1500000
    }
  },
  "pii_detections": {
    "total": 3421,
    "by_type": {
      "names": 1200,
      "pesel": 890,
      "phone": 654,
      "email": 432,
      "nip": 245
    }
  }
}
```
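
Because the summary carries both total tokens and estimated cost, a blended rate can be derived from it; a small sketch (the function name is ours):

```python
def cost_per_million_tokens(summary):
    """Blended cost across all models, in USD per 1M tokens."""
    return summary["estimated_cost_usd"] / summary["total_tokens"] * 1_000_000

# Using the sample response above (42.15 USD over 4,521,890 tokens):
rate = cost_per_million_tokens({"estimated_cost_usd": 42.15,
                                "total_tokens": 4521890})
print(f"{rate:.2f} USD per 1M tokens")  # 9.32 USD per 1M tokens
```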

GET /v1/usage/daily

Get daily usage breakdown.

Query Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| `start_date` | string | No | Start date (YYYY-MM-DD). Default: 30 days ago |
| `end_date` | string | No | End date (YYYY-MM-DD). Default: today |

Response (200 OK):

```json
{
  "daily": [
    {
      "date": "2025-01-15",
      "requests": 523,
      "tokens": 156200,
      "prompt_tokens": 82100,
      "completion_tokens": 74100,
      "pii_detections": 112
    },
    {
      "date": "2025-01-16",
      "requests": 487,
      "tokens": 142800,
      "prompt_tokens": 75300,
      "completion_tokens": 67500,
      "pii_detections": 98
    }
  ]
}
```
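
The per-day rows can be rolled up client-side into the same totals the summary endpoint reports; a minimal sketch (the helper name is ours, the keys are those of the daily rows above):

```python
def totals_from_daily(daily):
    """Sum the per-day rows of GET /v1/usage/daily into overall totals."""
    keys = ("requests", "tokens", "prompt_tokens",
            "completion_tokens", "pii_detections")
    return {k: sum(day[k] for day in daily) for k in keys}

# Using the two sample days above:
daily = [
    {"date": "2025-01-15", "requests": 523, "tokens": 156200,
     "prompt_tokens": 82100, "completion_tokens": 74100, "pii_detections": 112},
    {"date": "2025-01-16", "requests": 487, "tokens": 142800,
     "prompt_tokens": 75300, "completion_tokens": 67500, "pii_detections": 98},
]
print(totals_from_daily(daily))
```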

PII Settings

GET /v1/settings/pii

Get current PII detection configuration.

Response (200 OK):

```json
{
  "enabled": true,
  "enabled_types": [
    "names", "pesel", "nip", "regon", "id_card", "passport",
    "phone", "email", "address", "postal_code", "date_of_birth",
    "iban", "credit_card"
  ],
  "art9_detection": true,
  "art9_action": "flag"
}
```

PATCH /v1/settings/pii

Update PII detection configuration.

Request Body:

```json
{
  "enabled": true,
  "enabled_types": ["pesel", "nip", "names", "phone", "email"],
  "art9_detection": true,
  "art9_action": "flag"
}
```

| Field | Type | Required | Description |
|---|---|---|---|
| `enabled` | boolean | No | Enable/disable PII detection globally |
| `enabled_types` | array | No | List of PII types to detect |
| `art9_detection` | boolean | No | Enable/disable Art. 9 detection |
| `art9_action` | string | No | `flag`, `block`, or `ignore` |

Response (200 OK):

```json
{
  "enabled": true,
  "enabled_types": ["pesel", "nip", "names", "phone", "email"],
  "art9_detection": true,
  "art9_action": "flag",
  "updated_at": "2025-01-15T10:30:00Z"
}
```
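
A PATCH body can be validated against the documented type and action lists before sending; a small sketch (the helper name is ours, the type and action strings come from this section):

```python
KNOWN_PII_TYPES = {
    "names", "pesel", "nip", "regon", "id_card", "passport",
    "phone", "email", "address", "postal_code", "date_of_birth",
    "iban", "credit_card",
}
ART9_ACTIONS = {"flag", "block", "ignore"}

def build_pii_patch(enabled=None, enabled_types=None,
                    art9_detection=None, art9_action=None):
    """Build a PATCH /v1/settings/pii body, validating enum-like fields.

    All fields are optional; only the ones passed are included.
    """
    body = {}
    if enabled is not None:
        body["enabled"] = enabled
    if enabled_types is not None:
        unknown = set(enabled_types) - KNOWN_PII_TYPES
        if unknown:
            raise ValueError(f"unknown PII types: {sorted(unknown)}")
        body["enabled_types"] = list(enabled_types)
    if art9_detection is not None:
        body["art9_detection"] = art9_detection
    if art9_action is not None:
        if art9_action not in ART9_ACTIONS:
            raise ValueError(f"art9_action must be one of {sorted(ART9_ACTIONS)}")
        body["art9_action"] = art9_action
    return body
```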

Subscription Plans

API access and rate limits depend on your subscription plan:

| Plan | Price | Limits | Models |
|---|---|---|---|
| Free | 0 PLN/mo | 50 req/day, 3000-character limit | `gpt-5.4-nano` + `gemini-flash-lite` only |
| Solo | 199 PLN/mo | Unlimited UI usage, BYOK | All models |
| Business | 499 PLN/mo | API access, 100k req/mo | All models |
| Enterprise | Custom | Custom | All models |
> **Info:** Only the Business and Enterprise plans include programmatic API access; the Free and Solo plans are limited to the Ovexa UI (chat playground). Model inference is exposed through a single API endpoint, `/v1/chat/completions`; the remaining endpoints manage keys, usage, and settings.

Rate Limit Headers

All authenticated responses include rate limit information:

| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests per minute |
| `X-RateLimit-Remaining` | Remaining requests in current window |
| `X-RateLimit-Reset` | Unix timestamp when the rate limit resets |
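
These headers are enough to implement simple client-side throttling; a minimal sketch (the helper names are ours, the header names are those in the table above):

```python
import time

def should_throttle(headers):
    """True when the current window's request budget is exhausted."""
    return int(headers.get("X-RateLimit-Remaining", "1")) <= 0

def seconds_until_reset(headers, now=None):
    """Seconds to wait until X-RateLimit-Reset (a Unix timestamp) passes."""
    reset = int(headers["X-RateLimit-Reset"])
    now = time.time() if now is None else now
    return max(0.0, reset - now)

# Example: if the budget is spent, sleep until the window resets:
# if should_throttle(resp.headers):
#     time.sleep(seconds_until_reset(resp.headers))
```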