
SDKs Overview

Use Assisters API with your favorite programming language

TL;DR

Use the official OpenAI SDKs for Python and JavaScript, or community SDKs for Go, Ruby, .NET, and more. Just set base_url to https://api.assisters.dev/v1 and use your ask_ API key. No special SDK is needed; the API is 100% OpenAI-compatible.

Assisters API is OpenAI-compatible, so you can use the official OpenAI SDK with any language. Just change the base URL and API key.

Quick Start

Python

from openai import OpenAI

client = OpenAI(
    api_key="ask_your_api_key",
    base_url="https://api.assisters.dev/v1"
)

response = client.chat.completions.create(
    model="assisters-chat-v1",
    messages=[{"role": "user", "content": "Hello!"}]
)

JavaScript

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'ask_your_api_key',
  baseURL: 'https://api.assisters.dev/v1'
});

const response = await client.chat.completions.create({
  model: 'assisters-chat-v1',
  messages: [{ role: 'user', content: 'Hello!' }]
});

cURL

curl https://api.assisters.dev/v1/chat/completions \
  -H "Authorization: Bearer ask_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"model": "assisters-chat-v1", "messages": [{"role": "user", "content": "Hello!"}]}'

Supported Languages

SDKs implementing the OpenAI API specification are available for many languages:

| Language | Package | Status |
|----------|---------|--------|
| Python | openai | ✅ Official |
| Node.js | openai | ✅ Official |
| Go | sashabaranov/go-openai | Community |
| Ruby | ruby-openai | Community |
| PHP | openai-php/client | Community |
| Java | openai-java | Community |
| Rust | async-openai | Community |
| C# | OpenAI-DotNet | Community |

Community SDKs work with Assisters by changing the base URL, but they're not officially tested by us.

Configuration

Base URL

All requests should use:

https://api.assisters.dev/v1

Authentication

Use Bearer token authentication:

Authorization: Bearer ask_your_api_key
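Because authentication is a plain Bearer header over HTTPS, any HTTP client works even without an SDK. A stdlib-only sketch of building such a request by hand (the helper name is illustrative, not part of the API):

```python
import json
import urllib.request

def build_chat_request(api_key: str, message: str) -> urllib.request.Request:
    """Build an authenticated chat-completions request by hand."""
    body = {
        "model": "assisters-chat-v1",
        "messages": [{"role": "user", "content": message}],
    }
    return urllib.request.Request(
        "https://api.assisters.dev/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it is one call: urllib.request.urlopen(build_chat_request(...))
```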

Environment Variables

Recommended setup for any language:

export ASSISTERS_API_KEY="ask_your_api_key"
export ASSISTERS_BASE_URL="https://api.assisters.dev/v1"
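In Python, these variables can be read back with os.environ. A sketch (the helper name is illustrative) that falls back to the public endpoint when ASSISTERS_BASE_URL is unset:

```python
import os

def assisters_settings() -> dict:
    """Read Assisters credentials from the environment.

    Raises KeyError if ASSISTERS_API_KEY is missing; the base URL
    falls back to the public endpoint when unset.
    """
    return {
        "api_key": os.environ["ASSISTERS_API_KEY"],
        "base_url": os.environ.get(
            "ASSISTERS_BASE_URL", "https://api.assisters.dev/v1"
        ),
    }
```

The resulting dict can then be passed to any OpenAI-compatible client, e.g. OpenAI(**assisters_settings()).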

SDK Features

Streaming

All SDKs support streaming responses:

stream = client.chat.completions.create(
    model="assisters-chat-v1",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    # The final chunk's delta has no content, so guard against None
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

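Since each delta is only a fragment, a common pattern is to accumulate the chunks into the full reply. A minimal sketch, assuming chunks shaped like the SDK's (choices[0].delta.content):

```python
def collect_stream(stream) -> str:
    """Accumulate streamed deltas into the complete reply text."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk's delta is typically None
            parts.append(delta)
    return "".join(parts)
```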
Async Support

Python and JavaScript SDKs support async/await:

import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI(
    api_key="ask_...",
    base_url="https://api.assisters.dev/v1"
)

async def main():
    response = await client.chat.completions.create(
        model="assisters-chat-v1",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response.choices[0].message.content)

asyncio.run(main())
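async/await also makes it easy to fan out several requests at once. A sketch, where `ask` is any coroutine function you define that wraps a chat call and returns the reply text (the names here are illustrative):

```python
import asyncio

async def ask_many(ask, prompts):
    """Run one chat request per prompt concurrently and collect the replies."""
    return await asyncio.gather(*(ask(p) for p in prompts))
```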

Automatic Retries

The official SDKs include built-in retry logic:

from openai import OpenAI

client = OpenAI(
    api_key="ask_...",
    base_url="https://api.assisters.dev/v1",
    max_retries=3  # Automatic retries on errors
)
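Community SDKs without built-in retries can be wrapped in a small backoff loop. This is a generic sketch of the exponential-backoff-with-jitter pattern, not Assisters-specific behavior:

```python
import random
import time

def with_retries(call, max_retries=3, base_delay=0.5):
    """Invoke call(), retrying failures with exponential backoff plus jitter."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries; surface the last error
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

In production you would typically retry only transient errors (timeouts, 429s, 5xx) rather than every exception.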

Timeouts

Configure request timeouts:

client = OpenAI(
    api_key="ask_...",
    base_url="https://api.assisters.dev/v1",
    timeout=30.0  # 30 second timeout
)

Supported Features

| Feature | Status |
|---------|--------|
| Chat Completions | ✅ Full Support |
| Vision (images in chat) | ✅ Full Support |
| Embeddings | ✅ Full Support |
| Moderation | ✅ Full Support |
| Reranking | ✅ Full Support |
| Audio Transcription | ✅ Full Support |
| Text-to-Speech | ✅ Full Support |
| Image Generation | ✅ Full Support |
| Function Calling | Coming Soon |
| Fine-tuning | Contact Us |

Framework Integrations

LangChain

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="assisters-chat-v1",
    openai_api_key="ask_...",
    openai_api_base="https://api.assisters.dev/v1"
)

LlamaIndex

from llama_index.llms.openai import OpenAI

llm = OpenAI(
    model="assisters-chat-v1",
    api_key="ask_...",
    api_base="https://api.assisters.dev/v1"
)

Vercel AI SDK

import { createOpenAI } from '@ai-sdk/openai';

const assisters = createOpenAI({
  apiKey: 'ask_...',
  baseURL: 'https://api.assisters.dev/v1'
});

const result = await generateText({
  model: assisters('assisters-chat-v1'),
  prompt: 'Hello!'
});

Getting Help