Embedding Models

Text embedding models for semantic search and similarity

Create vector representations of text for semantic search, clustering, recommendations, and RAG applications with Assisters Embed, our multilingual embedding model.

Assisters Embed v1


Our state-of-the-art multilingual embedding model supporting 100+ languages with industry-leading performance.

| Specification | Value |
| --- | --- |
| Model ID | assisters-embed-v1 |
| Dimensions | 1024 |
| Max Tokens | 8,192 |
| Input Price | $0.01 / million tokens |
| Similarity Metric | Cosine |

Capabilities

  • Multilingual: Native support for 100+ languages
  • Long Context: Process up to 8,192 tokens per request
  • High Quality: State-of-the-art performance on MTEB benchmark
  • Cross-lingual: Match queries and documents across languages
  • Versatile: Optimized for search, clustering, and classification

Example Usage

from openai import OpenAI

client = OpenAI(
    base_url="https://api.assisters.dev/v1",
    api_key="your-api-key"
)

response = client.embeddings.create(
    model="assisters-embed-v1",
    input="The quick brown fox jumps over the lazy dog"
)

# Returns 1024-dimensional vector
print(f"Dimensions: {len(response.data[0].embedding)}")

Batch Embedding

# Embed multiple texts in one request for better throughput
texts = [
    "First document to embed",
    "Second document to embed",
    "Third document to embed"
]

response = client.embeddings.create(
    model="assisters-embed-v1",
    input=texts
)

# Access each embedding
for i, embedding in enumerate(response.data):
    print(f"Text {i}: {len(embedding.embedding)} dimensions")
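Once you have a batch of embeddings, ranking documents against a query is a dot product: since the model outputs normalized vectors, the dot product equals cosine similarity. A minimal sketch of that ranking step, using synthetic unit vectors in place of real API responses (the shapes match what the API returns; the values here are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_embed(n, dim=1024):
    """Synthetic stand-in for API embeddings: n unit-length vectors."""
    v = rng.normal(size=(n, dim))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

doc_vectors = fake_embed(3)      # one row per document
query_vector = fake_embed(1)[0]  # a single query vector

# For unit vectors, cosine similarity reduces to a dot product
scores = doc_vectors @ query_vector
ranking = np.argsort(scores)[::-1]
print("Best match: document", ranking[0])
```

In production, `doc_vectors` and `query_vector` would come from `response.data[i].embedding`; the ranking logic is unchanged.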

Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| input | string/array | required | Text(s) to embed |
| model | string | required | Model ID (assisters-embed-v1) |
| encoding_format | string | "float" | Output format: "float" or "base64" |
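With `encoding_format="base64"`, each embedding arrives as a base64 string rather than a JSON float array, which is smaller on the wire. Assuming the common OpenAI-compatible convention of little-endian float32 values, decoding takes a few lines; the payload below is synthetic so the round trip can be shown end to end:

```python
import base64
import struct

import numpy as np

# Build a synthetic payload the way an API would: little-endian float32
original = [0.1, -0.2, 0.3]
payload = base64.b64encode(struct.pack(f"<{len(original)}f", *original)).decode()

# Decode the base64 string back into a float array
decoded = np.frombuffer(base64.b64decode(payload), dtype="<f4")
print(decoded)  # approximately the original values, at float32 precision
```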


Best Practices

  • Batch Requests: Embed multiple texts in one request for better throughput
  • Cache Embeddings: Store embeddings to avoid recomputing for the same text
  • Normalize Vectors: Our model outputs normalized vectors; verify for your use case
  • Match Query/Doc Models: Always use the same model for queries and documents
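The caching practice above can be sketched as a thin wrapper that keys on a hash of the input text and only calls the API on a miss. The `embed_fn` parameter and the in-memory dict are illustrative assumptions; in production you would persist the cache (e.g. in Redis or your database):

```python
import hashlib

_cache: dict[str, list[float]] = {}

def embed_cached(text: str, embed_fn) -> list[float]:
    """Return a cached embedding, calling embed_fn only on a cache miss."""
    key = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = embed_fn(text)
    return _cache[key]

# Demonstrate with a stub in place of a real API call
calls = 0
def fake_embed(text):
    global calls
    calls += 1
    return [float(len(text))]

embed_cached("hello world", fake_embed)
embed_cached("hello world", fake_embed)
print(calls)  # the stub was invoked only once
```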

Vector Databases

Store and search embeddings efficiently:

| Database | Type | Features |
| --- | --- | --- |
| Pinecone | Managed | Fast, scalable, serverless |
| Weaviate | Self-hosted | Open-source, hybrid search |
| Qdrant | Self-hosted | Rust-based, efficient |
| Milvus | Self-hosted | Distributed, GPU support |
| pgvector | Extension | PostgreSQL integration |
| Supabase | Managed | PostgreSQL with pgvector built-in |
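Before adopting one of these databases, note that exact brute-force search in NumPy is often sufficient for small corpora (roughly up to the low hundreds of thousands of vectors). A minimal in-memory index sketch, with hypothetical class and method names:

```python
import numpy as np

class BruteForceIndex:
    """Exact cosine-similarity search over an in-memory matrix."""

    def __init__(self, dim: int):
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.ids: list[str] = []

    def add(self, doc_id: str, vector) -> None:
        v = np.asarray(vector, dtype=np.float32)
        v = v / np.linalg.norm(v)  # normalize so dot product = cosine
        self.vectors = np.vstack([self.vectors, v])
        self.ids.append(doc_id)

    def search(self, query, top_k: int = 3):
        q = np.asarray(query, dtype=np.float32)
        q = q / np.linalg.norm(q)
        scores = self.vectors @ q
        order = np.argsort(scores)[::-1][:top_k]
        return [(self.ids[i], float(scores[i])) for i in order]

idx = BruteForceIndex(dim=4)
idx.add("a", [1, 0, 0, 0])
idx.add("b", [0, 1, 0, 0])
print(idx.search([0.9, 0.1, 0, 0], top_k=1))  # top hit is "a"
```

A dedicated vector database becomes worthwhile once you need persistence, metadata filtering, or approximate search at larger scale.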