Assisters API
Getting Started

Quickstart

Get up and running with Assisters API in under 5 minutes

TL;DR

  1. Sign up at assisters.dev and get an API key (it starts with ask_).
  2. Install the OpenAI SDK (pip install openai).
  3. Set base_url to https://api.assisters.dev/v1.
  4. Use the model assisters-chat-v1 for chat completions.

That's it: you're ready to build!

This guide will help you make your first API call in under 5 minutes.

Prerequisites

An Assisters account (sign up free)
An API key from your dashboard
Python 3.8+ or Node.js 18+ installed

Step 1: Get Your API Key

  1. Log in to your Assisters Dashboard
  2. Navigate to API Keys
  3. Click Create New Key
  4. Copy your key (it starts with ask_)

Keep your API key secure! Never expose it in client-side code or commit it to version control.

Step 2: Install the SDK

Since Assisters API is OpenAI-compatible, you can use the official OpenAI SDK:

# Python
pip install openai

# Node.js
npm install openai
# or
pnpm add openai

Step 3: Make Your First Request

Python

from openai import OpenAI

# Initialize the client with Assisters API
client = OpenAI(
    api_key="ask_your_api_key_here",
    base_url="https://api.assisters.dev/v1"
)

# Create a chat completion
response = client.chat.completions.create(
    model="assisters-chat-v1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"}
    ]
)

print(response.choices[0].message.content)
# Output: The capital of France is Paris.

Node.js

import OpenAI from 'openai';

// Initialize the client with Assisters API
const client = new OpenAI({
  apiKey: 'ask_your_api_key_here',
  baseURL: 'https://api.assisters.dev/v1'
});

// Create a chat completion
const response = await client.chat.completions.create({
  model: 'assisters-chat-v1',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is the capital of France?' }
  ]
});

console.log(response.choices[0].message.content);
// Output: The capital of France is Paris.

cURL

curl https://api.assisters.dev/v1/chat/completions \
  -H "Authorization: Bearer ask_your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "assisters-chat-v1",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'

Step 4: Use Streaming (Optional)

For real-time responses, enable streaming:

Python

from openai import OpenAI

client = OpenAI(
    api_key="ask_your_api_key_here",
    base_url="https://api.assisters.dev/v1"
)

# Stream the response
stream = client.chat.completions.create(
    model="assisters-chat-v1",
    messages=[
        {"role": "user", "content": "Write a haiku about coding"}
    ],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

Node.js

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'ask_your_api_key_here',
  baseURL: 'https://api.assisters.dev/v1'
});

// Stream the response
const stream = await client.chat.completions.create({
  model: 'assisters-chat-v1',
  messages: [
    { role: 'user', content: 'Write a haiku about coding' }
  ],
  stream: true
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || '';
  process.stdout.write(content);
}
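
If you also need the complete reply once streaming finishes, accumulate the deltas as they arrive. A minimal sketch, with a plain list of simulated deltas standing in for the real chunk.choices[0].delta.content values:

```python
def accumulate(deltas):
    """Join streamed content deltas into the complete reply text."""
    parts = []
    for delta in deltas:
        if delta:  # deltas can be None (e.g. in the final chunk)
            parts.append(delta)
    return "".join(parts)

# Simulated deltas; in real code you would append inside the for-loop above
simulated = ["Code ", "flows ", "like ", "water", None]
print(accumulate(simulated))  # Code flows like water
```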

Step 5: Try Other Endpoints

Create Embeddings

response = client.embeddings.create(
    model="assisters-embed-v1",
    input="The quick brown fox jumps over the lazy dog"
)

print(f"Embedding dimensions: {len(response.data[0].embedding)}")
# Output: Embedding dimensions: 1024
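
Embeddings are typically compared with cosine similarity, where values closer to 1 mean more semantically similar. A minimal, SDK-free sketch using toy 3-dimensional vectors in place of the real 1024-dimensional embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors for illustration; real response.data[0].embedding is 1024-d
print(cosine_similarity([0.1, 0.2, 0.3], [0.3, 0.2, 0.1]))
```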

Content Moderation

response = client.moderations.create(
    model="assisters-moderation-v1",
    input="Hello, how are you today?"
)

print(f"Flagged: {response.results[0].flagged}")
# Output: Flagged: False
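
A common pattern is to gate user input on the moderation verdict before passing it to a chat model. A sketch using a stand-in dataclass in place of the SDK's result objects:

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    """Stand-in for one entry of response.results from the moderation endpoint."""
    flagged: bool

def is_safe(results):
    """True when no result in the list is flagged."""
    return not any(r.flagged for r in results)

# In real code, pass response.results from client.moderations.create(...)
print(is_safe([ModerationResult(flagged=False)]))  # True
```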

Vision Analysis

response = client.chat.completions.create(
    model="assisters-vision-v1",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}}
        ]
    }]
)

print(response.choices[0].message.content)
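
If your image is local rather than hosted at a URL, a common OpenAI-compatible pattern is to base64-encode it into a data URL for the image_url field (assuming Assisters accepts data URLs; check the vision docs to confirm):

```python
import base64

def to_data_url(image_bytes, mime="image/jpeg"):
    """Encode raw image bytes as a data URL usable in an image_url field."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"

# With a real file you would use: open("photo.jpg", "rb").read()
print(to_data_url(b"\xff\xd8\xff"))
```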

Code Generation

response = client.chat.completions.create(
    model="assisters-code-v1",
    messages=[
        {"role": "user", "content": "Write a Python function to calculate fibonacci numbers"}
    ]
)

print(response.choices[0].message.content)
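
Chat models often wrap generated code in Markdown fences. A small helper for stripping them before saving the snippet to a file (a sketch, not part of the SDK):

```python
FENCE = "`" * 3  # a Markdown code fence: three backticks

def extract_code(reply):
    """Strip a single surrounding Markdown code fence from a reply, if present."""
    lines = reply.strip().splitlines()
    if lines and lines[0].startswith(FENCE):
        lines = lines[1:]                       # drop opening fence (with or without a language tag)
        if lines and lines[-1].startswith(FENCE):
            lines = lines[:-1]                  # drop closing fence
    return "\n".join(lines)

print(extract_code(FENCE + "python\nprint('hi')\n" + FENCE))  # print('hi')
```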

Understanding the Response

A typical chat completion response looks like this:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1706745600,
  "model": "assisters-chat-v1",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 8,
    "total_tokens": 33
  }
}
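
If you call the REST endpoint directly (as in the curl example), you parse these fields from JSON yourself. A minimal sketch using a trimmed copy of the response:

```python
import json

# Trimmed copy of the chat completion response shown above
raw = '{"choices": [{"index": 0, "message": {"role": "assistant", "content": "The capital of France is Paris."}, "finish_reason": "stop"}], "usage": {"total_tokens": 33}}'

data = json.loads(raw)
print(data["choices"][0]["message"]["content"])  # The capital of France is Paris.
```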

The usage field shows token consumption, which determines your billing. See token counting for details.
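
As a sketch of how usage maps to spend (the per-1K-token rates below are placeholders for illustration, not Assisters' actual prices; see the pricing page for real rates):

```python
def estimate_cost(usage, prompt_rate, completion_rate):
    """Estimate dollar cost from a usage dict; rates are per 1,000 tokens."""
    return (usage["prompt_tokens"] / 1000 * prompt_rate
            + usage["completion_tokens"] / 1000 * completion_rate)

usage = {"prompt_tokens": 25, "completion_tokens": 8, "total_tokens": 33}
# Placeholder rates, not real pricing:
print(f"${estimate_cost(usage, 0.50, 1.50):.4f}")  # $0.0245
```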

Environment Variables

For production, use environment variables instead of hardcoding your API key:

.env

ASSISTERS_API_KEY=ask_your_api_key_here

Python

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("ASSISTERS_API_KEY"),
    base_url="https://api.assisters.dev/v1"
)

Node.js

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.ASSISTERS_API_KEY,
  baseURL: 'https://api.assisters.dev/v1'
});
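
A small fail-fast helper (illustrative, not part of the SDK) makes a missing key surface as a clear startup error instead of an authentication failure mid-request:

```python
import os

def require_api_key(env_var="ASSISTERS_API_KEY"):
    """Return the API key from the environment, failing fast if unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; export it before running.")
    return key
```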

Next Steps

Troubleshooting