# Error Handling

Understanding and handling API errors
## TL;DR

Errors return JSON with `error.message`, `error.type`, and `error.code`. Common HTTP statuses: 401 (invalid key), 429 (rate limit), 500 (server error). Always check the `X-Request-ID` header for debugging. Implement exponential backoff for 429 and 5xx errors.
Learn how to handle errors from the Assisters API gracefully. All errors follow a consistent format to make debugging easier.
## Error Response Format
All API errors return a JSON response with this structure:
```json
{
  "error": {
    "message": "Human-readable error description",
    "type": "error_type",
    "code": "error_code",
    "param": "parameter_name"
  }
}
```

| Field | Description |
|---|---|
| `message` | Human-readable description of the error |
| `type` | Category of the error |
| `code` | Machine-readable error code |
| `param` | The parameter that caused the error (if applicable) |
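As a quick illustration, the fields above can be pulled out of a raw response body like this. This is a minimal sketch using only the standard library; `parse_api_error` is a hypothetical helper, not part of any SDK:

```python
import json

def parse_api_error(body: str) -> dict:
    """Extract the standard error fields from an API error response body."""
    err = json.loads(body).get("error", {})
    # "param" is only present for parameter-specific errors, so it defaults to None
    return {field: err.get(field) for field in ("message", "type", "code", "param")}

body = '{"error": {"message": "Invalid API key provided", "type": "authentication_error", "code": "invalid_api_key"}}'
print(parse_api_error(body)["code"])  # invalid_api_key
```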
## HTTP Status Codes
| Status | Description |
|---|---|
| 200 | Success |
| 400 | Bad Request - Invalid parameters |
| 401 | Unauthorized - Invalid or missing API key |
| 403 | Forbidden - Valid key but no access |
| 404 | Not Found - Resource doesn't exist |
| 429 | Too Many Requests - Rate limit exceeded |
| 500 | Internal Server Error - Server-side issue |
| 503 | Service Unavailable - Temporarily overloaded |
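When deciding how to react to a status code, a common pattern is to retry only the transient statuses (429, 500, 503) and fail fast on the rest, honoring a `Retry-After` header when one is present. A minimal sketch — the `is_retryable` and `retry_delay` helpers are illustrative, not part of the API:

```python
RETRYABLE_STATUSES = {429, 500, 503}

def is_retryable(status: int) -> bool:
    """Transient errors are worth retrying; other client errors (4xx) are not."""
    return status in RETRYABLE_STATUSES

def retry_delay(status: int, headers: dict, attempt: int) -> float:
    """Honor a Retry-After header when present, else fall back to exponential backoff."""
    retry_after = headers.get("Retry-After")
    if status == 429 and retry_after is not None:
        return float(retry_after)
    return 2 ** attempt

print(is_retryable(429))                          # True
print(is_retryable(401))                          # False
print(retry_delay(429, {"Retry-After": "5"}, 0))  # 5.0
```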
## Error Types
### invalid_request_error
The request was malformed or missing required parameters.
```json
{
  "error": {
    "message": "Invalid value for 'model': 'unknown-model' is not a valid model",
    "type": "invalid_request_error",
    "code": "invalid_model",
    "param": "model"
  }
}
```

Common causes:

- Missing required parameters
- Invalid parameter values
- Malformed JSON body
- Unsupported model name
### authentication_error
Issues with your API key.
```json
{
  "error": {
    "message": "Invalid API key provided",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
```

Common causes:

- API key is missing
- API key is malformed
- API key has been revoked
### rate_limit_error
You've exceeded your rate limits.
```json
{
  "error": {
    "message": "Rate limit exceeded. Please retry after 5 seconds.",
    "type": "rate_limit_error",
    "code": "rate_limit_exceeded"
  }
}
```

Additional headers:

```http
Retry-After: 5
X-RateLimit-Limit-RPM: 100
X-RateLimit-Remaining-RPM: 0
```

### tokens_exhausted_error
You've used all your monthly tokens.
```json
{
  "error": {
    "message": "Monthly token limit exceeded. Upgrade your plan or wait for reset.",
    "type": "tokens_exhausted_error",
    "code": "tokens_exhausted",
    "upgrade_url": "https://assisters.dev/pricing"
  }
}
```

### insufficient_funds_error
Your wallet balance is too low (for pay-as-you-go).
```json
{
  "error": {
    "message": "Insufficient wallet balance. Please top up your account.",
    "type": "insufficient_funds_error",
    "code": "insufficient_funds",
    "topup_url": "https://assisters.dev/dashboard/billing"
  }
}
```

### content_policy_error
Content was blocked by moderation.
```json
{
  "error": {
    "message": "Your request was rejected due to content policy violation",
    "type": "content_policy_error",
    "code": "content_filtered"
  }
}
```

### server_error
An issue on our side.
```json
{
  "error": {
    "message": "An internal error occurred. Please try again.",
    "type": "server_error",
    "code": "internal_error"
  }
}
```

## Error Codes Reference
| Code | Status | Description |
|---|---|---|
| `invalid_api_key` | 401 | API key is invalid or missing |
| `api_key_revoked` | 401 | API key has been revoked |
| `origin_not_allowed` | 403 | Request origin not in allowed domains |
| `invalid_model` | 400 | Specified model doesn't exist |
| `invalid_request` | 400 | Request body is malformed |
| `missing_parameter` | 400 | Required parameter is missing |
| `invalid_parameter` | 400 | Parameter has invalid value |
| `context_length_exceeded` | 400 | Input exceeds model's context limit |
| `rate_limit_exceeded` | 429 | RPM or TPM limit exceeded |
| `tokens_exhausted` | 429 | Monthly token limit reached |
| `insufficient_funds` | 402 | Wallet balance too low |
| `content_filtered` | 400 | Content blocked by moderation |
| `prompt_injection_detected` | 400 | Potential prompt injection blocked |
| `model_overloaded` | 503 | Model is temporarily overloaded |
| `internal_error` | 500 | Server-side error |
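The reference table above can be collapsed into a small lookup when you want to turn machine-readable codes into user-facing guidance, in line with the advice below to never expose raw error messages to end users. A sketch — the messages here are examples, not fixed SDK strings:

```python
USER_MESSAGES = {
    "invalid_api_key": "Check that your API key is set correctly.",
    "rate_limit_exceeded": "Too many requests; please slow down and retry.",
    "tokens_exhausted": "Monthly token limit reached; upgrade or wait for reset.",
    "insufficient_funds": "Wallet balance too low; please top up.",
    "context_length_exceeded": "Input too long for this model; shorten your prompt.",
}

def user_message(code: str) -> str:
    # Fall back to a generic message rather than leaking internal error details
    return USER_MESSAGES.get(code, "Something went wrong. Please try again.")

print(user_message("tokens_exhausted"))
print(user_message("internal_error"))
```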
## Handling Errors in Code
```python
from openai import OpenAI, APIError, RateLimitError, AuthenticationError

client = OpenAI(
    api_key="ask_your_api_key",
    base_url="https://api.assisters.dev/v1"
)

try:
    response = client.chat.completions.create(
        model="assisters-chat-v1",
        messages=[{"role": "user", "content": "Hello"}]
    )
except AuthenticationError as e:
    print(f"Authentication failed: {e}")
    # Check your API key
except RateLimitError as e:
    print(f"Rate limited: {e}")
    # Wait and retry
except APIError as e:
    print(f"API error: {e}")
    # Handle other errors
```

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'ask_your_api_key',
  baseURL: 'https://api.assisters.dev/v1'
});

try {
  const response = await client.chat.completions.create({
    model: 'assisters-chat-v1',
    messages: [{ role: 'user', content: 'Hello' }]
  });
} catch (error) {
  if (error instanceof OpenAI.AuthenticationError) {
    console.log('Authentication failed:', error.message);
  } else if (error instanceof OpenAI.RateLimitError) {
    console.log('Rate limited:', error.message);
  } else if (error instanceof OpenAI.APIError) {
    console.log('API error:', error.message);
  }
}
```

## Retry Strategy
Implement exponential backoff for transient errors:
```python
import time
import random

from openai import APIError, RateLimitError


def make_request_with_retry(func, max_retries=3):
    for attempt in range(max_retries):
        try:
            return func()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # Exponential backoff with jitter
            wait_time = (2 ** attempt) + random.uniform(0, 1)
            print(f"Rate limited. Retrying in {wait_time:.2f}s...")
            time.sleep(wait_time)
        except APIError as e:
            if e.status_code >= 500 and attempt < max_retries - 1:
                # Retry server errors
                wait_time = (2 ** attempt) + random.uniform(0, 1)
                time.sleep(wait_time)
            else:
                raise
```

## Best Practices
- **Implement Retries**: Use exponential backoff for rate limits and server errors.
- **Log Request IDs**: Save the `X-Request-ID` header for debugging with support.
- **Handle All Error Types**: Don't just catch generic exceptions; handle specific error types.
- **Show User-Friendly Messages**: Don't expose raw error messages to end users.
## Getting Help

If you encounter persistent errors:

- Check the status page for outages
- Review your dashboard for usage limits
- Include the `X-Request-ID` when contacting support
- Join our Discord for community help

### Contact Support

Email us with your request ID for faster resolution.