Configuration
Configure Orka JS defaults, adapters, memory, observability, and prompt management.
Full Configuration
Here's a complete example showing all available configuration options:
Import Methods
Standard Import
```typescript
import { createOrka, OpenAIAdapter, PineconeAdapter } from 'orkajs';
```

Optimized Import (Recommended)

```typescript
import { createOrka } from '@orka-js/core';
import { OpenAIAdapter, PineconeAdapter } from '@orka-js/openai';
// Or:
// import { PineconeAdapter } from '@orka-js/pinecone';
```

Enables tree-shaking for a minimal bundle size in production.
```typescript
import { createOrka, OpenAIAdapter, PineconeAdapter } from 'orkajs';

const orka = createOrka({
  // Required: LLM provider
  llm: new OpenAIAdapter({
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o-mini',
    embeddingModel: 'text-embedding-3-small',
  }),

  // Required: Vector database
  vectorDB: new PineconeAdapter({
    apiKey: process.env.PINECONE_API_KEY!,
    indexHost: 'https://your-index.svc.pinecone.io',
  }),

  // Optional: Default parameters
  defaults: {
    chunkSize: 1000,
    chunkOverlap: 200,
    topK: 5,
    temperature: 0.7,
    maxTokens: 1024,
  },

  // Optional: Memory configuration
  memory: {
    maxMessages: 50,
    strategy: 'sliding_window',
  },

  // Optional: Observability
  observability: {
    logLevel: 'info',
    hooks: [{
      onTraceEnd: (trace) => console.log(`Done: ${trace.totalLatencyMs}ms`),
    }],
  },
});
```

1. LLM Provider (Required)
The LLM adapter handles text generation and embeddings. Orka JS supports multiple providers:
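The exact adapter contract is defined by orkajs; as a rough standalone sketch of the idea (the interface and method names here are assumptions, not the library's actual API), an adapter pairs a text-generation call with an embedding call, which also makes it easy to stub out in tests:

```typescript
// Hypothetical shape of an LLM adapter: one method for text
// generation, one for embeddings. Names are illustrative only.
interface LLMAdapterLike {
  generate(prompt: string): Promise<string>;
  embed(text: string): Promise<number[]>;
}

// A stub adapter useful for tests: echoes the prompt and returns a
// fixed-length pseudo-embedding derived from character codes.
class StubAdapter implements LLMAdapterLike {
  async generate(prompt: string): Promise<string> {
    return `echo: ${prompt}`;
  }
  async embed(text: string): Promise<number[]> {
    const dims = 4;
    const vec = new Array(dims).fill(0);
    for (let i = 0; i < text.length; i++) {
      vec[i % dims] += text.charCodeAt(i);
    }
    return vec;
  }
}

const stub = new StubAdapter();
stub.generate('hi').then((out) => console.log(out)); // echo: hi
```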
OpenAI
GPT-4o, GPT-4o-mini, o1
```typescript
import { OpenAIAdapter } from 'orkajs';

llm: new OpenAIAdapter({
  model: 'gpt-4o-mini',
  embeddingModel: 'text-embedding-3-small',
})
```

Anthropic
Claude 3.5 Sonnet, Opus
```typescript
import { AnthropicAdapter } from 'orkajs';

llm: new AnthropicAdapter({
  model: 'claude-3-5-sonnet-latest',
})
```

Mistral AI
Mistral Large, Pixtral
```typescript
import { MistralAdapter } from 'orkajs';

llm: new MistralAdapter({
  model: 'mistral-large-latest',
})
```

Ollama (Local)
Llama 3.2, Mistral, Gemma
```typescript
import { OllamaAdapter } from 'orkajs';

llm: new OllamaAdapter({
  baseURL: 'http://localhost:11434',
  model: 'llama3.2',
})
```

2. Vector Database (Required)
The vector database stores document embeddings for semantic search. Choose based on your deployment needs:
MemoryVectorAdapter
Best for: Development, testing, prototyping
```typescript
import { MemoryVectorAdapter } from 'orkajs';

vectorDB: new MemoryVectorAdapter()
```

⚠️ Data is lost on restart.
PineconeAdapter
Best for: Production, cloud-native, scalable
```typescript
import { PineconeAdapter } from 'orkajs';

vectorDB: new PineconeAdapter({
  apiKey: process.env.PINECONE_API_KEY!,
  indexHost: 'https://your-index.svc.pinecone.io',
})
```

QdrantAdapter
Best for: Self-hosted, on-premise, Docker
```typescript
import { QdrantAdapter } from 'orkajs';

vectorDB: new QdrantAdapter({
  url: 'http://localhost:6333',
  apiKey: process.env.QDRANT_API_KEY,
})
```

ChromaAdapter
Best for: Local development, embedded use
```typescript
import { ChromaAdapter } from 'orkajs';

vectorDB: new ChromaAdapter({
  url: 'http://localhost:8000',
})
```

3. Default Parameters (Optional)
Set default values for chunking, retrieval, and generation. These can be overridden per-call.
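To make the chunking defaults concrete, here is a standalone sketch (not orkajs internals) of how `chunkSize` and `chunkOverlap` interact: each chunk starts `chunkSize − chunkOverlap` characters after the previous one, so consecutive chunks share `chunkOverlap` characters of context.

```typescript
// Standalone illustration of chunkSize / chunkOverlap semantics:
// consecutive chunks share `overlap` characters of context.
function chunkText(text: string, chunkSize: number, overlap: number): string[] {
  const step = chunkSize - overlap;
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

const chunks = chunkText('abcdefghij', 4, 2);
console.log(chunks); // [ 'abcd', 'cdef', 'efgh', 'ghij' ]
```

The overlap is what lets a sentence that straddles a chunk boundary still appear whole in at least one chunk, at the cost of storing some text twice.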
4. Memory Configuration (Optional)
Configure conversation memory to maintain context across multiple interactions. The `sliding_window` strategy keeps the most recent messages, discarding the oldest once `maxMessages` is reached.
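As a rough standalone sketch of the sliding-window idea (illustrative code, not the library's implementation): the history is capped at `maxMessages`, and each new message past the cap evicts the oldest one.

```typescript
interface Message { role: 'user' | 'assistant'; content: string }

// Minimal sliding-window memory: keeps only the most recent
// `maxMessages` entries, discarding the oldest on overflow.
class SlidingWindowMemory {
  private messages: Message[] = [];
  constructor(private maxMessages: number) {}

  add(msg: Message): void {
    this.messages.push(msg);
    if (this.messages.length > this.maxMessages) {
      this.messages.shift(); // drop the oldest message
    }
  }

  history(): Message[] {
    return [...this.messages];
  }
}

const memory = new SlidingWindowMemory(2);
memory.add({ role: 'user', content: 'first' });
memory.add({ role: 'assistant', content: 'second' });
memory.add({ role: 'user', content: 'third' });
console.log(memory.history().map((m) => m.content)); // [ 'second', 'third' ]
```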
5. Observability (Optional)
Monitor your application with logging and trace hooks. Use hooks to integrate with external monitoring tools like DataDog, New Relic, or custom analytics.
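As a sketch of what a custom hook might do with traces (the trace shape below is based only on the `totalLatencyMs` field shown in the config example; everything else is an assumption), a hook object can collect latencies and report a summary, standing in for a real DataDog or New Relic exporter:

```typescript
// Trace type based on the field used in the config example above;
// real traces likely carry more data.
interface Trace { totalLatencyMs: number }

// A hook that accumulates latencies and reports a simple summary,
// standing in for an external monitoring exporter.
class LatencyReporter {
  private samples: number[] = [];

  onTraceEnd(trace: Trace): void {
    this.samples.push(trace.totalLatencyMs);
  }

  summary(): { count: number; avgMs: number } {
    const count = this.samples.length;
    const avgMs =
      count === 0 ? 0 : this.samples.reduce((a, b) => a + b, 0) / count;
    return { count, avgMs };
  }
}

const reporter = new LatencyReporter();
reporter.onTraceEnd({ totalLatencyMs: 100 });
reporter.onTraceEnd({ totalLatencyMs: 300 });
console.log(reporter.summary()); // { count: 2, avgMs: 200 }
```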
Configuration Reference
| Property | Type | Default | Description |
|---|---|---|---|
| `chunkSize` | number | 1000 | Characters per chunk |
| `chunkOverlap` | number | 200 | Characters shared between adjacent chunks |
| `topK` | number | 5 | Number of results retrieved for RAG queries |
| `temperature` | number | 0.7 | Sampling temperature; higher values give more varied output (0 to 1) |
| `maxTokens` | number | 1024 | Maximum tokens per response |
💡 Pro Tip
All default values can be overridden per-call. For example, `orka.ask({ topK: 10 })` overrides the default `topK` for that specific query.