Drop-in retriever for LangChain RAG pipelines with full tracing support.
SeiznRetriever is a drop-in replacement for any LangChain retriever. It provides vector search with built-in tracing, caching, and reranking.
Install the Seizn SDK alongside LangChain.
```bash
# TypeScript / JavaScript
npm install seizn @langchain/core

# Python
pip install seizn langchain
```

Create a RAG chain with SeiznRetriever in just a few lines of code.
```typescript
import { SeiznRetriever } from 'seizn/langchain';
import { ChatOpenAI } from '@langchain/openai';
import { createRetrievalChain } from 'langchain/chains/retrieval';
import { createStuffDocumentsChain } from 'langchain/chains/combine_documents';
import { ChatPromptTemplate } from '@langchain/core/prompts';

// Initialize the Seizn retriever
const retriever = new SeiznRetriever({
  apiKey: process.env.SEIZN_API_KEY,
  dataset: 'my-docs',
  topK: 5,
  threshold: 0.7,
});

// Create a RAG chain
const llm = new ChatOpenAI({ model: 'gpt-4' });
const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based on the following context:
{context}
Question: {input}
`);

const documentChain = await createStuffDocumentsChain({ llm, prompt });
const retrievalChain = await createRetrievalChain({
  combineDocsChain: documentChain,
  retriever,
});

// Run the chain
const response = await retrievalChain.invoke({
  input: 'How do I configure rate limiting?',
});

console.log(response.answer);

// Trace ID available for debugging
console.log('Trace:', response.seiznTrace);
```

```python
import os
from seizn.langchain import SeiznRetriever
from langchain_openai import ChatOpenAI
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate

# Initialize the Seizn retriever
retriever = SeiznRetriever(
    api_key=os.environ["SEIZN_API_KEY"],
    dataset="my-docs",
    top_k=5,
    threshold=0.7,
)

# Create a RAG chain
llm = ChatOpenAI(model="gpt-4")
prompt = ChatPromptTemplate.from_template("""
Answer the question based on the following context:
{context}
Question: {input}
""")

document_chain = create_stuff_documents_chain(llm, prompt)
retrieval_chain = create_retrieval_chain(retriever, document_chain)

# Run the chain
response = retrieval_chain.invoke({
    "input": "How do I configure rate limiting?"
})

print(response["answer"])

# Trace ID available for debugging
print("Trace:", response.get("seizn_trace"))
```
Reduce latency and costs by caching repeated queries.

```typescript
const retriever = new SeiznRetriever({
  apiKey: process.env.SEIZN_API_KEY,
  dataset: 'my-docs',
  cache: {
    enabled: true,
    ttl: 3600, // 1 hour
  },
});
```
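To see the cache in action, issue the same query twice and compare timings. A quick sketch, assuming cache entries are keyed on the query string:

```typescript
console.time('cold');
await retriever.invoke('How do I configure rate limiting?');
console.timeEnd('cold');

console.time('warm');
await retriever.invoke('How do I configure rate limiting?');
console.timeEnd('warm'); // should be noticeably faster on a cache hit
```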
Improve result quality with cross-encoder reranking.

```typescript
const retriever = new SeiznRetriever({
  apiKey: process.env.SEIZN_API_KEY,
  dataset: 'my-docs',
  rerank: {
    enabled: true,
    model: 'cohere-rerank-v3',
    topN: 3,
  },
});
```
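Reranking reorders the candidates that the vector search returns, so it usually pays to fetch a wider candidate set and let the cross-encoder keep the best few. A sketch combining the two settings (the `topK`/`topN` interaction here is an assumption based on the examples above):

```typescript
const retriever = new SeiznRetriever({
  apiKey: process.env.SEIZN_API_KEY,
  dataset: 'my-docs',
  topK: 20, // fetch a wide candidate set from the vector search
  rerank: {
    enabled: true,
    model: 'cohere-rerank-v3',
    topN: 3, // keep only the 3 best after cross-encoder scoring
  },
});
```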
Filter results by metadata fields before vector search.

```typescript
const retriever = new SeiznRetriever({
  apiKey: process.env.SEIZN_API_KEY,
  dataset: 'my-docs',
  filter: {
    category: 'api-docs',
    language: 'en',
  },
});
```
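The filter above is a set of exact-match conditions on metadata fields. Because filtering happens before the vector search, excluded documents should never count against `topK`, so there is no need to over-fetch to compensate.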
| Error | Cause | Solution |
|---|---|---|
| `SEIZN_AUTH_ERROR` | Invalid or missing API key | Check the `SEIZN_API_KEY` environment variable |
| `SEIZN_RATE_LIMIT` | Too many requests per second | Implement exponential backoff or upgrade your plan |
| Empty results | Threshold too high or no matching documents | Lower `threshold` or check the dataset contents |
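The rate-limit case lends itself to a small retry helper. A minimal sketch, assuming the SDK surfaces the error code on an `error.code` property (adjust the check to the SDK's actual error shape):

```typescript
async function invokeWithBackoff(query: string, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await retriever.invoke(query);
    } catch (error: any) {
      // Assumption: rate-limit errors carry a `code` field; re-throw anything else.
      if (error?.code !== 'SEIZN_RATE_LIMIT') throw error;
      const delayMs = 250 * 2 ** attempt; // 250ms, 500ms, 1s, 2s, 4s
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error('SEIZN_RATE_LIMIT: retries exhausted');
}
```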