OrkaJS

Configuration

Configure Orka JS defaults, adapters, memory, observability, and prompt management.

Full Configuration

Here's a complete example showing all available configuration options:


Import Methods

Standard Import

import { createOrka, OpenAIAdapter, PineconeAdapter } from 'orkajs';

Optimized Import

Recommended
import { createOrka } from '@orka-js/core';
import { OpenAIAdapter } from '@orka-js/openai';
import { PineconeAdapter } from '@orka-js/pinecone';

Enables tree-shaking for minimal bundle size in production.

import { createOrka, OpenAIAdapter, PineconeAdapter } from 'orkajs';
 
const orka = createOrka({
  // Required: LLM provider
  llm: new OpenAIAdapter({
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o-mini',
    embeddingModel: 'text-embedding-3-small',
  }),
 
  // Required: Vector database
  vectorDB: new PineconeAdapter({
    apiKey: process.env.PINECONE_API_KEY!,
    indexHost: 'https://your-index.svc.pinecone.io',
  }),
 
  // Optional: Default parameters
  defaults: {
    chunkSize: 1000,
    chunkOverlap: 200,
    topK: 5,
    temperature: 0.7,
    maxTokens: 1024,
  },
 
  // Optional: Memory configuration
  memory: {
    maxMessages: 50,
    strategy: 'sliding_window',
  },
 
  // Optional: Observability
  observability: {
    logLevel: 'info',
    hooks: [{
      onTraceEnd: (trace) => console.log(`Done: ${trace.totalLatencyMs}ms`),
    }],
  },
});
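With the instance configured, later calls reuse these defaults. A hypothetical usage sketch (`ask` appears later in this guide; the exact call shape shown here is an assumption):

```typescript
// Hypothetical call shape — uses the defaults set above unless overridden.
const answer = await orka.ask({
  question: 'What does the configuration cover?',
  topK: 10, // per-call override of the configured default (5)
});
```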

1. LLM Provider (Required)

The LLM adapter handles text generation and embeddings. Orka JS supports multiple providers:

OpenAI

GPT-4o, GPT-4o-mini, o1

import { OpenAIAdapter } from 'orkajs';
 
llm: new OpenAIAdapter({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o-mini',
  embeddingModel: 'text-embedding-3-small',
})
Get API Key

Anthropic

Claude 3.5 Sonnet, Opus

import { AnthropicAdapter } from 'orkajs';
 
llm: new AnthropicAdapter({
  apiKey: process.env.ANTHROPIC_API_KEY!,
  model: 'claude-3-5-sonnet-latest',
})
Get API Key

Mistral AI

Mistral Large, Pixtral

import { MistralAdapter } from 'orkajs';
 
llm: new MistralAdapter({
  apiKey: process.env.MISTRAL_API_KEY!,
  model: 'mistral-large-latest',
})
Get API Key

Ollama (Local)

Llama 3.2, Mistral, Gemma

import { OllamaAdapter } from 'orkajs';
 
llm: new OllamaAdapter({
  baseURL: 'http://localhost:11434',
  model: 'llama3.2',
})
Install Ollama (runs locally, no API key required)

2. Vector Database (Required)

The vector database stores document embeddings for semantic search. Choose based on your deployment needs:

MemoryVectorAdapter

Best for: Development, testing, prototyping

import { MemoryVectorAdapter } from 'orkajs';
 
vectorDB: new MemoryVectorAdapter()

⚠️ Data is lost on restart
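To make the trade-off concrete, here is a rough sketch of what an in-memory store does internally (an illustration only, not MemoryVectorAdapter's actual implementation): entries live in a plain array, so they vanish when the process exits.

```typescript
// Illustrative in-memory vector store: upsert by id, cosine-similarity top-k.
type Entry = { id: string; vector: number[]; text: string };

class TinyMemoryStore {
  private entries: Entry[] = []; // held only in process memory

  upsert(entry: Entry): void {
    // Replace any existing entry with the same id, then append
    this.entries = this.entries.filter(e => e.id !== entry.id).concat(entry);
  }

  query(vector: number[], topK: number): Entry[] {
    const cosine = (a: number[], b: number[]): number => {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] ** 2;
        nb += b[i] ** 2;
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
    };
    return [...this.entries]
      .sort((x, y) => cosine(y.vector, vector) - cosine(x.vector, vector))
      .slice(0, topK);
  }
}
```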

PineconeAdapter

Best for: Production, cloud-native, scalable

import { PineconeAdapter } from 'orkajs';
 
vectorDB: new PineconeAdapter({
  apiKey: process.env.PINECONE_API_KEY!,
  indexHost: 'https://your-index.svc.pinecone.io',
})
→ Get Started with Pinecone

QdrantAdapter

Best for: Self-hosted, on-premise, Docker

import { QdrantAdapter } from 'orkajs';
 
vectorDB: new QdrantAdapter({
  url: 'http://localhost:6333',
  apiKey: process.env.QDRANT_API_KEY,
})
→ Qdrant Documentation

ChromaAdapter

Best for: Local development, embedded use

import { ChromaAdapter } from 'orkajs';
 
vectorDB: new ChromaAdapter({
  url: 'http://localhost:8000',
})
→ Chroma Documentation

3. Default Parameters (Optional)

Set default values for chunking, retrieval, and generation. These can be overridden per-call.
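To illustrate how chunkSize and chunkOverlap interact, here is a simple character-based splitter (an illustrative sketch; Orka JS's actual chunking logic may differ):

```typescript
// Split text into chunks of `chunkSize` characters, where each chunk
// repeats the last `chunkOverlap` characters of the previous one so
// context is not lost at chunk boundaries.
function chunkText(text: string, chunkSize = 1000, chunkOverlap = 200): string[] {
  if (chunkOverlap >= chunkSize) {
    throw new Error('chunkOverlap must be smaller than chunkSize');
  }
  const step = chunkSize - chunkOverlap; // how far each new chunk advances
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```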

4. Memory Configuration (Optional)

Configure conversation memory to maintain context across multiple interactions. The sliding_window strategy keeps the most recent messages.
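A sliding window can be sketched in a few lines (an illustration of the strategy, not the library's internal code):

```typescript
type Message = { role: 'user' | 'assistant'; content: string };

// Keep only the `maxMessages` most recent messages; older ones are dropped.
function applySlidingWindow(history: Message[], maxMessages: number): Message[] {
  return history.slice(-maxMessages);
}
```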

5. Observability (Optional)

Monitor your application with logging and trace hooks. Use hooks to integrate with external monitoring tools like DataDog, New Relic, or custom analytics.
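The hook mechanism can be sketched as a wrapper that measures latency and notifies every registered hook (simplified to synchronous code for illustration; the library's real traces wrap async LLM calls):

```typescript
interface Trace { totalLatencyMs: number }
interface Hook { onTraceEnd?: (trace: Trace) => void }

// Run `fn`, then fire each hook's onTraceEnd with the measured latency —
// even if `fn` throws, so failures are still observed.
function withTracing<T>(hooks: Hook[], fn: () => T): T {
  const start = Date.now();
  try {
    return fn();
  } finally {
    const trace: Trace = { totalLatencyMs: Date.now() - start };
    for (const hook of hooks) hook.onTraceEnd?.(trace);
  }
}
```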

Configuration Reference

| Property | Type | Default | Description |
|--------------|--------|---------|--------------------------|
| chunkSize | number | 1000 | Characters per chunk |
| chunkOverlap | number | 200 | Overlap between chunks |
| topK | number | 5 | Results for RAG queries |
| temperature | number | 0.7 | LLM creativity (0 to 1) |
| maxTokens | number | 1024 | Max tokens per response |

💡 Pro Tip

All default values can be overridden per-call. For example, orka.ask({ topK: 10 }) overrides the default topK for that specific query.