
LlamaIndex Adapter

SeiznRetriever extends LlamaIndex's BaseRetriever, so it drops straight into a query_engine, chat_engine, or custom pipeline.

01 60-Second Overview

As a native LlamaIndex retriever, it integrates seamlessly into your existing workflow, with support for hybrid search, reranking, and streaming.

  • Native integration with LlamaIndex query engines
  • Supports streaming responses out of the box
  • Compatible with LlamaIndex node postprocessors
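The contract behind these bullets is small: a retriever takes a query string and returns scored nodes. A toy sketch of that shape in plain TypeScript (illustrative only — real LlamaIndex retrievers are async and return richer node objects; SeiznRetriever fills the same role with vector search over a hosted dataset):

```typescript
// The minimal shape a LlamaIndex-style retriever traffics in:
// a piece of text plus a relevance score in [0, 1].
interface NodeWithScore {
  text: string;
  score: number;
}

// Toy retriever that ranks documents by keyword overlap with the query.
// A real retriever would embed the query and search a vector index instead.
class ToyRetriever {
  constructor(private docs: string[], private topK = 2) {}

  retrieve(query: string): NodeWithScore[] {
    const terms = new Set(query.toLowerCase().split(/\s+/));
    return this.docs
      .map((text) => {
        const words = text.toLowerCase().split(/\s+/);
        const hits = words.filter((w) => terms.has(w)).length;
        return { text, score: hits / words.length };
      })
      .sort((a, b) => b.score - a.score)
      .slice(0, this.topK);
  }
}
```

Anything that fulfills this contract — including SeiznRetriever — can be handed to a query engine unchanged.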

02 Installation

Install the Seizn SDK alongside LlamaIndex.

```bash
# TypeScript / JavaScript
npm install seizn llamaindex

# Python
pip install seizn llama-index
```

03 5-Minute Example

Build a query engine with SeiznRetriever for your RAG application.

TypeScript

```typescript
import { SeiznRetriever } from 'seizn/llamaindex';
import { OpenAI, RetrieverQueryEngine } from 'llamaindex';

// Initialize the Seizn retriever
const retriever = new SeiznRetriever({
  apiKey: process.env.SEIZN_API_KEY,
  dataset: 'my-docs',
  topK: 5,
  threshold: 0.7,
});

// Create a query engine with the retriever
const llm = new OpenAI({ model: 'gpt-4' });
const queryEngine = new RetrieverQueryEngine(retriever, llm);

// Query your documents
const response = await queryEngine.query(
  'How do I configure rate limiting?'
);

console.log(response.response);
// Access the trace for debugging
console.log('Trace:', response.metadata?.seiznTrace);
```
Python

```python
import os

from seizn.llamaindex import SeiznRetriever
from llama_index.llms.openai import OpenAI
from llama_index.core.query_engine import RetrieverQueryEngine

# Initialize the Seizn retriever
retriever = SeiznRetriever(
    api_key=os.environ["SEIZN_API_KEY"],
    dataset="my-docs",
    top_k=5,
    threshold=0.7,
)

# Create a query engine with the retriever
llm = OpenAI(model="gpt-4")
query_engine = RetrieverQueryEngine.from_args(
    retriever=retriever,
    llm=llm,
)

# Query your documents
response = query_engine.query(
    "How do I configure rate limiting?"
)

print(response.response)
# Access the trace for debugging
print("Trace:", response.metadata.get("seizn_trace"))
```
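The topK/top_k and threshold parameters in the examples above typically interact as "filter, then cap": discard nodes scoring below the cutoff, then keep at most topK of the rest. A minimal sketch of those assumed semantics (the SDK's exact behavior may differ):

```typescript
interface Scored {
  text: string;
  score: number;
}

// Assumed topK + threshold semantics: drop anything below the score
// cutoff, sort the survivors by score, keep at most topK of them.
function selectNodes(nodes: Scored[], topK: number, threshold: number): Scored[] {
  return nodes
    .filter((n) => n.score >= threshold)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```

Under these semantics a high threshold can return fewer than topK nodes, which is usually what you want for precision-sensitive RAG answers.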

04 Production Tips

Streaming Responses

Enable streaming for better user experience with long responses.

```typescript
const queryEngine = new RetrieverQueryEngine(retriever, llm);

// Enable streaming response
const stream = await queryEngine.query(
  'Explain the authentication flow',
  { streaming: true }
);

for await (const chunk of stream) {
  process.stdout.write(chunk.response);
}
```

Hybrid Search

Combine vector and keyword search for better recall.

```typescript
const retriever = new SeiznRetriever({
  apiKey: process.env.SEIZN_API_KEY,
  dataset: 'my-docs',
  searchMode: 'hybrid', // vector + keyword
  hybridAlpha: 0.7,     // 70% vector, 30% keyword
});
```
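The hybridAlpha weight above can be read as a convex combination of the two score lists. A sketch of that assumed fusion (both scores normalized to [0, 1]; the SDK's actual fusion formula is not documented here):

```typescript
// Convex combination of normalized vector and keyword scores.
// alpha = 0.7 gives vector similarity 70% of the weight.
function fuseScores(
  vectorScore: number,
  keywordScore: number,
  alpha: number
): number {
  return alpha * vectorScore + (1 - alpha) * keywordScore;
}

// A document that matches keywords strongly but embeds poorly
// still surfaces, just with a dampened score:
const fused = fuseScores(0.2, 0.9, 0.7);
console.log(fused.toFixed(2)); // "0.41"
```

Raising alpha toward 1.0 makes ranking behave like pure vector search; lowering it favors exact keyword matches.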

Node Postprocessors

Chain postprocessors for advanced filtering and reranking.

```typescript
import { SimilarityPostprocessor, KeywordNodePostprocessor } from 'llamaindex';

const queryEngine = new RetrieverQueryEngine(retriever, llm, {
  nodePostprocessors: [
    new SimilarityPostprocessor({ similarityCutoff: 0.7 }),
    new KeywordNodePostprocessor({
      requiredKeywords: ['authentication'],
      excludeKeywords: ['deprecated'],
    }),
  ],
});
```

05 Troubleshooting

| Error | Cause | Solution |
| --- | --- | --- |
| `SEIZN_AUTH_ERROR` | Invalid or missing API key | Check the `SEIZN_API_KEY` environment variable |
| `SEIZN_DATASET_NOT_FOUND` | Dataset name not found | Verify the dataset exists in the dashboard |
| Low relevance scores | Query-document mismatch | Try hybrid search or adjust `threshold` |
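In application code, this table can become actionable error handling. A minimal sketch — the error codes come from the table above, but the lookup helper itself is illustrative, not part of the SDK:

```typescript
// Map Seizn error codes from the troubleshooting table to
// remediation hints suitable for logs or operator alerts.
const REMEDIES: Record<string, string> = {
  SEIZN_AUTH_ERROR: 'Check the SEIZN_API_KEY environment variable.',
  SEIZN_DATASET_NOT_FOUND: 'Verify the dataset exists in the dashboard.',
};

function remediation(code: string): string {
  return REMEDIES[code] ?? `Unrecognized error code: ${code}`;
}

console.log(remediation('SEIZN_AUTH_ERROR'));
```

Wrapping retriever calls in a try/catch that logs `remediation(code)` turns opaque failures into next steps during incident triage.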