
LangChain Integration

Drop-in retriever for LangChain RAG pipelines with full tracing support.

01 60-Second Overview

SeiznRetriever is a drop-in replacement for any LangChain retriever. It provides vector search with built-in tracing, caching, and reranking.

  • ✓ Compatible with all LangChain chains and agents (see the sketch after this list)
  • ✓ Built-in tracing for every retrieval operation
  • ✓ Supports filtering, reranking, and hybrid search
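
Because SeiznRetriever implements the standard LangChain retriever interface, it can also be invoked directly, outside of any chain. A minimal sketch, assuming the constructor options shown in the 5-minute example below:

TypeScript
import { SeiznRetriever } from 'seizn/langchain';

// Like any LangChain retriever, SeiznRetriever is a Runnable:
// invoke() takes a query string and resolves to an array of Documents.
const retriever = new SeiznRetriever({
  apiKey: process.env.SEIZN_API_KEY,
  dataset: 'my-docs',
});

const docs = await retriever.invoke('How do I configure rate limiting?');
for (const doc of docs) {
  console.log(doc.pageContent);
}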

02 Installation

Install the Seizn SDK alongside LangChain.

Installation (bash)
# TypeScript / JavaScript
npm install seizn langchain @langchain/core @langchain/openai

# Python
pip install seizn langchain langchain-openai

03 5-Minute Example

Create a RAG chain with SeiznRetriever in just a few lines of code.

TypeScript
import { SeiznRetriever } from 'seizn/langchain';
import { ChatOpenAI } from '@langchain/openai';
import { createRetrievalChain } from 'langchain/chains/retrieval';
import { createStuffDocumentsChain } from 'langchain/chains/combine_documents';
import { ChatPromptTemplate } from '@langchain/core/prompts';

// Initialize the Seizn retriever
const retriever = new SeiznRetriever({
  apiKey: process.env.SEIZN_API_KEY,
  dataset: 'my-docs',
  topK: 5,
  threshold: 0.7,
});

// Create a RAG chain
const llm = new ChatOpenAI({ model: 'gpt-4' });

const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based on the following context:
{context}

Question: {input}
`);

const documentChain = await createStuffDocumentsChain({ llm, prompt });
const retrievalChain = await createRetrievalChain({
  combineDocsChain: documentChain,
  retriever,
});

// Run the chain
const response = await retrievalChain.invoke({
  input: 'How do I configure rate limiting?',
});

console.log(response.answer);
// Trace ID available for debugging
console.log('Trace:', response.seiznTrace);

Python
import os
from seizn.langchain import SeiznRetriever
from langchain_openai import ChatOpenAI
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate

# Initialize the Seizn retriever
retriever = SeiznRetriever(
    api_key=os.environ["SEIZN_API_KEY"],
    dataset="my-docs",
    top_k=5,
    threshold=0.7,
)

# Create a RAG chain
llm = ChatOpenAI(model="gpt-4")

prompt = ChatPromptTemplate.from_template("""
Answer the question based on the following context:
{context}

Question: {input}
""")

document_chain = create_stuff_documents_chain(llm, prompt)
retrieval_chain = create_retrieval_chain(retriever, document_chain)

# Run the chain
response = retrieval_chain.invoke({
    "input": "How do I configure rate limiting?"
})

print(response["answer"])
# Trace ID available for debugging
print("Trace:", response.get("seizn_trace"))

04 Production Tips

Enable Caching

Reduce latency and costs by caching repeated queries.

TypeScript
const retriever = new SeiznRetriever({
  apiKey: process.env.SEIZN_API_KEY,
  dataset: 'my-docs',
  cache: {
    enabled: true,
    ttl: 3600, // 1 hour
  },
});

Enable Reranking

Improve result quality with cross-encoder reranking.

TypeScript
const retriever = new SeiznRetriever({
  apiKey: process.env.SEIZN_API_KEY,
  dataset: 'my-docs',
  rerank: {
    enabled: true,
    model: 'cohere-rerank-v3',
    topN: 3,
  },
});

Metadata Filtering

Filter results by metadata fields before vector search.

TypeScript
const retriever = new SeiznRetriever({
  apiKey: process.env.SEIZN_API_KEY,
  dataset: 'my-docs',
  filter: {
    category: 'api-docs',
    language: 'en',
  },
});
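
These options compose on a single retriever. A sketch combining the cache, rerank, and filter settings shown above into one configuration; the specific values (and the assumption that the options can be combined freely) are illustrative:

TypeScript
import { SeiznRetriever } from 'seizn/langchain';

// Production-style configuration combining the documented options.
// topK retrieves a wide candidate set; rerank.topN keeps the best few.
const retriever = new SeiznRetriever({
  apiKey: process.env.SEIZN_API_KEY,
  dataset: 'my-docs',
  topK: 20,
  rerank: {
    enabled: true,
    model: 'cohere-rerank-v3',
    topN: 3,
  },
  cache: {
    enabled: true,
    ttl: 3600, // 1 hour
  },
  filter: {
    category: 'api-docs',
    language: 'en',
  },
});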

05 Troubleshooting

Error            | Cause                                       | Solution
SEIZN_AUTH_ERROR | Invalid or missing API key                  | Check the SEIZN_API_KEY environment variable
SEIZN_RATE_LIMIT | Too many requests per second                | Implement exponential backoff (sketch below) or upgrade your plan
Empty results    | Threshold too high or no matching documents | Lower the threshold or check the dataset contents
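
For SEIZN_RATE_LIMIT, a retry loop with exponential backoff usually absorbs transient bursts. A minimal sketch; the err.code check is an assumption about how the SDK surfaces error codes, so adapt it to the real error shape:

TypeScript
// Hypothetical helper: retry a retrieval with exponential backoff.
// Assumes rate-limit errors carry a `code` property of 'SEIZN_RATE_LIMIT'.
async function retrieveWithBackoff(
  retriever: { invoke(query: string): Promise<unknown> },
  query: string,
  maxRetries = 5,
) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await retriever.invoke(query);
    } catch (err: any) {
      if (err?.code !== 'SEIZN_RATE_LIMIT' || attempt >= maxRetries) throw err;
      // Wait 250ms, 500ms, 1s, 2s, ... before retrying
      const delayMs = 250 * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}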