JavaScript / TypeScript Integration

VaultProxy works with the OpenAI Node.js SDK, the native fetch API, or any HTTP client.

OpenAI Node.js SDK

This is the recommended approach. Install the SDK, then change just two settings: the base URL and the API key.

```bash
npm install openai
```

Basic Usage

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.vaultproxy.ai/v1",
  apiKey: "vpx_live_YOUR_API_KEY",
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is the capital of Poland?" },
  ],
  temperature: 0.7,
  max_tokens: 500,
});

console.log(response.choices[0].message.content);
```

Streaming

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.vaultproxy.ai/v1",
  apiKey: "vpx_live_YOUR_API_KEY",
});

const stream = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a short poem about Warsaw." }],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) {
    process.stdout.write(content);
  }
}
```

Using Different Providers

```typescript
// Anthropic
const response = await client.chat.completions.create({
  model: "anthropic/claude-4.6-sonnet",
  messages: [{ role: "user", content: "Hello from Claude!" }],
});

// Google Gemini
const response2 = await client.chat.completions.create({
  model: "google/gemini-2.5-pro",
  messages: [{ role: "user", content: "Hello from Gemini!" }],
});
```
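Routing to a provider is just a model-string convention (`provider/model`, while OpenAI models are addressed without a prefix in the examples above). If you want that convention typed, here is a small hypothetical helper — the provider names mirror the examples on this page and are not an exhaustive list:

```typescript
// Build a VaultProxy model string from a provider name and a model ID.
// Assumption: OpenAI models use a bare ID ("gpt-4o"), other providers
// use a "provider/model" prefix, as in the examples above.
type Provider = "openai" | "anthropic" | "google";

function modelId(provider: Provider, model: string): string {
  return provider === "openai" ? model : `${provider}/${model}`;
}
```

This keeps provider routing in one place, so a typo like `"antropic/..."` becomes a compile-time error instead of a runtime 404.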

Native Fetch API

For environments where you do not want to install extra dependencies:

```typescript
const VAULTPROXY_URL = "https://api.vaultproxy.ai/v1/chat/completions";
const API_KEY = "vpx_live_YOUR_API_KEY";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatCompletionResponse {
  id: string;
  object: string;
  created: number;
  model: string;
  choices: {
    index: number;
    message: ChatMessage;
    finish_reason: string;
  }[];
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}

async function chat(messages: ChatMessage[], model = "gpt-4o"): Promise<string> {
  const response = await fetch(VAULTPROXY_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model, messages }),
  });

  if (!response.ok) {
    throw new Error(`VaultProxy error: ${response.status} ${response.statusText}`);
  }

  const data: ChatCompletionResponse = await response.json();
  return data.choices[0].message.content;
}

// Usage
const answer = await chat([
  { role: "user", content: "Summarize: Jan Kowalski, PESEL 02271409862" },
]);
console.log(answer);
```
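The fetch example above waits for the complete response. If you request `stream: true` without the SDK, the body arrives as server-sent events that you must decode yourself. A minimal sketch, assuming VaultProxy emits OpenAI-style `data: {...}` lines terminated by `data: [DONE]` (the standard OpenAI streaming format):

```typescript
// Extract the content delta from one OpenAI-style SSE line,
// or return null for blank lines, [DONE], and malformed chunks.
function parseSSELine(line: string): string | null {
  const trimmed = line.trim();
  if (!trimmed.startsWith("data:")) return null; // ignore comments/blank lines
  const payload = trimmed.slice(5).trim();
  if (payload === "[DONE]") return null;         // end-of-stream sentinel
  try {
    const json = JSON.parse(payload);
    return json.choices?.[0]?.delta?.content ?? null;
  } catch {
    return null;                                 // skip malformed chunks
  }
}

// Stream a chat completion over fetch and yield content deltas.
async function* streamChat(
  url: string,
  apiKey: string,
  body: object,
): AsyncGenerator<string> {
  const response = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ ...body, stream: true }),
  });
  if (!response.ok || !response.body) {
    throw new Error(`VaultProxy error: ${response.status}`);
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";                  // keep any incomplete trailing line
    for (const line of lines) {
      const content = parseSSELine(line);
      if (content !== null) yield content;
    }
  }
}
```

Usage mirrors the SDK streaming loop: `for await (const delta of streamChat(VAULTPROXY_URL, API_KEY, { model: "gpt-4o", messages })) process.stdout.write(delta);`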

TypeScript Types

If you want full type safety without the OpenAI SDK:

```typescript
interface VaultProxyConfig {
  baseUrl: string;
  apiKey: string;
}

interface ChatRequest {
  model: string;
  messages: ChatMessage[];
  temperature?: number;
  max_tokens?: number;
  stream?: boolean;
}

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatChoice {
  index: number;
  message: ChatMessage;
  finish_reason: "stop" | "length" | "content_filter";
}

interface UsageInfo {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

interface ChatResponse {
  id: string;
  object: "chat.completion";
  created: number;
  model: string;
  choices: ChatChoice[];
  usage: UsageInfo;
}

class VaultProxyClient {
  private config: VaultProxyConfig;

  constructor(config: VaultProxyConfig) {
    this.config = config;
  }

  async chat(request: ChatRequest): Promise<ChatResponse> {
    const response = await fetch(`${this.config.baseUrl}/chat/completions`, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${this.config.apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(request),
    });

    if (!response.ok) {
      const error = await response.json().catch(() => ({}));
      throw new Error(`VaultProxy error ${response.status}: ${JSON.stringify(error)}`);
    }

    return response.json();
  }
}

// Usage
const vp = new VaultProxyClient({
  baseUrl: "https://api.vaultproxy.ai/v1",
  apiKey: "vpx_live_YOUR_API_KEY",
});

const result = await vp.chat({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
```

Error Handling

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.vaultproxy.ai/v1",
  apiKey: "vpx_live_YOUR_API_KEY",
});

try {
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  });
  console.log(response.choices[0].message.content);
} catch (error) {
  if (error instanceof OpenAI.AuthenticationError) {
    console.error("Invalid API key. Check your VaultProxy API key.");
  } else if (error instanceof OpenAI.RateLimitError) {
    console.error("Rate limit exceeded. Wait and retry.");
  } else if (error instanceof OpenAI.APIError) {
    console.error(`API error: ${error.status} - ${error.message}`);
  } else {
    console.error("Unexpected error:", error);
  }
}
```
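For rate limits and transient failures, "wait and retry" usually means exponential backoff. A minimal sketch of a generic retry wrapper — the attempt count and delays here are arbitrary illustrative defaults, not VaultProxy requirements:

```typescript
// Retry an async operation with exponential backoff.
// This version retries on any thrown error; in production you would
// likely restrict it to rate-limit (429) and transient 5xx errors.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts - 1) {
        // Wait 500 ms, 1000 ms, 2000 ms, ... between attempts.
        const delay = baseDelayMs * 2 ** attempt;
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  throw lastError; // all attempts exhausted
}
```

Wrap any call from the examples above, e.g. `await withRetry(() => client.chat.completions.create({ model: "gpt-4o", messages }))`.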

Next.js / Edge Runtime

VaultProxy works in Next.js API routes and edge functions:

```typescript
// app/api/chat/route.ts
import { NextResponse } from "next/server";
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.vaultproxy.ai/v1",
  apiKey: process.env.VAULTPROXY_API_KEY!,
});

export async function POST(request: Request) {
  const { message } = await request.json();

  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: message }],
  });

  return NextResponse.json({
    reply: response.choices[0].message.content,
  });
}
```
Tip: Store your VaultProxy API key in an environment variable (VAULTPROXY_API_KEY). Never hard-code API keys in client-side code.