Prasanth Janardhanan

Self-Hosted RAG Architecture for Privacy-Sensitive Document Search with Golang

Remember the days of boolean search operators? The frustrating experience of trying to find that one document by guessing the exact keywords it might contain? For decades, our ability to search through documents has been limited by keyword matching—a primitive approach that fails to capture the richness of human language and intent.

Traditional search technology has evolved through several stages: from basic keyword matching to more sophisticated approaches involving stemming, lemmatization, and eventually, statistical methods like TF-IDF (Term Frequency-Inverse Document Frequency). Each iteration brought incremental improvements, but they all shared a fundamental limitation—they didn’t truly understand the meaning behind our queries.

Limitations of Traditional Search Approaches

Keyword-Based Search Shortcomings

Figure: keyword-based search vs. semantic search

Keyword search engines operate on a simple principle: match the exact words in the query with words in the documents. This approach fails in numerous scenarios: when users and authors describe the same concept with different words, when a term carries multiple meanings, when queries are phrased as questions rather than keyword strings, and when misspellings or abbreviations creep in.

Despite sophisticated ranking algorithms, keyword search fundamentally operates at the surface level of text, not at the level of meaning.

Semantic Search Limitations

The next evolution was semantic search, which attempted to address some of these limitations by understanding relationships between words and concepts. Technologies like word embeddings (Word2Vec, GloVe) and more recently, contextual embeddings like BERT, improved search relevance by capturing semantic similarities.

However, even semantic search has significant limitations: it still returns documents rather than answers, it struggles to connect information scattered across multiple sources, and it cannot synthesize what it finds into a direct response to the user’s question.

The “Data Silo” Problem


Beyond technological limitations, traditional search approaches often suffer from the “data silo” problem. Organizations typically store information across multiple disconnected systems: document management systems, email archives, CRM databases, shared drives, and more. This fragmentation makes comprehensive search nearly impossible, as each system has its own search capabilities and limitations.

The result? Critical information becomes isolated, inaccessible, or simply too difficult to find. Knowledge workers spend an estimated 20% of their time searching for information, with much of that time wasted on unsuccessful queries.

Enter Large Language Models (LLMs) like GPT-4, Claude, and Llama. These models have fundamentally changed what’s possible in information retrieval by understanding language at a much deeper level than previous technologies.

How LLMs Understand Context and Meaning

LLMs are trained on vast corpora of text, developing an internal representation of language that captures subtleties of meaning, context, and relationships between concepts. They understand synonyms and paraphrase, ambiguity that is resolved by context, and the intent behind a question rather than just its wording.

Most importantly, LLMs don’t just match patterns—they understand meaning and can generate new text based on that understanding.

Natural Language Queries vs. Keyword Queries

With LLMs, users can finally search as they think. Instead of trying to guess the perfect keywords, they can ask questions in natural language, for example, “Which of our supplier contracts mention penalties for late delivery?”

The ability to use natural language dramatically lowers the cognitive load on users and improves search effectiveness.

Real-World Examples Where LLM-Powered Search Excels

LLM-powered search demonstrates clear advantages in numerous scenarios:

  1. Legal discovery: Finding relevant case precedents based on factual similarities rather than keyword matches
  2. Healthcare: Retrieving patient records based on natural language descriptions of symptoms or treatment outcomes
  3. Customer support: Instantly finding relevant knowledge base articles that answer a customer’s question
  4. Research: Connecting disparate studies and findings across a large corpus of academic papers
  5. Technical troubleshooting: Matching error messages and symptoms to solutions even when terminology differs

In each case, LLMs excel by understanding the intent behind queries and the meaning within documents—not just matching words.

The Privacy Challenge with Commercial LLM Solutions

Despite their power, commercial LLM solutions present a significant challenge for organizations dealing with sensitive information: privacy.

Most commercial LLM services require sending your documents and queries to third-party servers. For many organizations, this is simply unacceptable: regulated industries face strict rules on where data may be processed, confidentiality agreements forbid sharing client material with outside parties, and trade secrets lose their protection once disclosed.

Even with assurances about data handling, the very act of sending sensitive documents outside the organization’s security perimeter introduces unacceptable risk.

The Case for Self-Hosted RAG Architecture

Retrieval-Augmented Generation (RAG) offers a compelling solution to both the search limitations and privacy challenges. RAG combines:

  1. A retrieval component that finds relevant documents
  2. A generation component that synthesizes information into coherent, useful responses

By implementing RAG architecture on self-hosted infrastructure, organizations gain unified control over their data: documents, embeddings, queries, and generated answers all stay inside their own security perimeter, while users still get the benefits of LLM-powered search.

Overview of Our Approach Using Golang

This article presents a comprehensive approach to building a self-hosted RAG system using Golang. We’ve chosen Go for several compelling reasons (performance, a strong concurrency model, and operational simplicity), which we examine in detail in the “Why Golang?” section below.

In the following sections, we’ll explore the key components of our architecture:

  1. Understanding the fundamentals of RAG and how it improves search
  2. Establishing clear system requirements and design goals
  3. Selecting and configuring the right vector database
  4. Choosing appropriate embedding models for private deployments
  5. Building an efficient document processing pipeline
  6. Developing a powerful search API
  7. Integrating with language models while maintaining privacy

By the end of this article, you’ll have a blueprint for implementing a privacy-preserving, high-performance document search system that leverages the power of modern language models without compromising security.

Let’s begin by diving deeper into the RAG architecture and how it fundamentally changes document search capabilities.

What is Retrieval-Augmented Generation?

Retrieval-Augmented Generation (RAG) represents a paradigm shift in how we approach information retrieval and natural language processing tasks. At its core, RAG combines two powerful capabilities:

  1. Retrieval: Finding relevant information from a knowledge base
  2. Generation: Creating coherent, contextually appropriate responses based on that information

Unlike traditional search that simply returns links or document snippets, RAG synthesizes information into direct, comprehensive answers. And unlike pure language models that rely solely on their internal parameters, RAG grounds its responses in specific, retrievable documents.

The term “Retrieval-Augmented Generation” was first introduced by researchers at Facebook AI (now Meta AI) in 2020, who demonstrated that augmenting language models with retrieval mechanisms significantly improved factual accuracy and reduced hallucinations. Since then, RAG has evolved into a fundamental architecture for numerous applications, from question-answering systems to conversational AI.

Core Components of a RAG System

A fully functional RAG system consists of several key components working in concert:

Document Processing Pipeline

Before any retrieval can happen, documents must be processed into a format suitable for semantic search. This pipeline typically includes text extraction, cleaning, splitting documents into manageable chunks, and capturing metadata.

The chunking step is particularly important, as it determines the granularity of retrieval. Chunks that are too large may contain irrelevant information; chunks that are too small may lose context.
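To make the tradeoff concrete, here is a minimal sketch of a fixed-size chunker with word overlap, the simplest strategy that keeps some cross-boundary context. The function name, the word-based splitting, and the sizes you would pass in are illustrative assumptions rather than a recommendation for every corpus.

package chunking

import "strings"

// ChunkText splits text into chunks of roughly chunkSize words, repeating
// overlap words between consecutive chunks so that sentences straddling a
// boundary are not lost entirely.
func ChunkText(text string, chunkSize, overlap int) []string {
    words := strings.Fields(text)
    if chunkSize <= 0 || len(words) == 0 {
        return nil
    }
    if overlap >= chunkSize {
        overlap = chunkSize / 2 // keep the window moving forward
    }

    var chunks []string
    step := chunkSize - overlap
    for start := 0; start < len(words); start += step {
        end := start + chunkSize
        if end > len(words) {
            end = len(words)
        }
        chunks = append(chunks, strings.Join(words[start:end], " "))
        if end == len(words) {
            break
        }
    }
    return chunks
}

In practice you would split on paragraph or sentence boundaries first and fall back to fixed windows only for very long passages, which is exactly the tuning revisited in the document processing pipeline section later on.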

Vector Embeddings and Their Importance

At the heart of modern RAG systems are vector embeddings—dense numerical representations of text that capture semantic meaning. These embeddings are generated by neural network models trained on vast corpora of text.

The key property of these embeddings is that texts with similar meanings have similar vector representations in the embedding space, regardless of the specific words used. This enables semantic search based on meaning rather than keywords.

For example, the queries “how to speed up my database” and “techniques for database performance improvement” would have similar vector representations despite using different words.
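Concretely, “similar vector representations” usually means a high cosine similarity between the two embedding vectors. Here is a minimal, self-contained sketch of that comparison; the tiny hand-written vectors stand in for real model output, which would have hundreds of dimensions.

package main

import (
    "fmt"
    "math"
)

// cosineSimilarity returns a value in [-1, 1]; values near 1 mean the two
// texts sit close together in the embedding space.
func cosineSimilarity(a, b []float32) float64 {
    if len(a) != len(b) || len(a) == 0 {
        return 0
    }
    var dot, normA, normB float64
    for i := range a {
        dot += float64(a[i]) * float64(b[i])
        normA += float64(a[i]) * float64(a[i])
        normB += float64(b[i]) * float64(b[i])
    }
    if normA == 0 || normB == 0 {
        return 0
    }
    return dot / (math.Sqrt(normA) * math.Sqrt(normB))
}

func main() {
    speedUpDB := []float32{0.81, 0.12, 0.40, 0.05}     // "how to speed up my database"
    dbPerformance := []float32{0.78, 0.15, 0.43, 0.02} // "techniques for database performance improvement"
    weatherReport := []float32{0.05, 0.90, 0.10, 0.61} // an unrelated query

    fmt.Printf("related queries:   %.3f\n", cosineSimilarity(speedUpDB, dbPerformance))
    fmt.Printf("unrelated queries: %.3f\n", cosineSimilarity(speedUpDB, weatherReport))
}

Vector databases perform essentially this comparison (or an equivalent distance metric) at scale, using approximate nearest-neighbor indexes rather than brute-force loops.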

Several embedding models are available, ranging from open-source options like SentenceTransformers to commercial offerings from OpenAI, Cohere, and others. The quality of these embeddings directly impacts retrieval accuracy.

Vector Database for Efficient Retrieval

Once documents are processed and embeddings generated, they need to be stored in a way that enables efficient similarity search. This is where vector databases come in.

Vector databases are specialized data stores optimized for holding millions of high-dimensional vectors, performing fast approximate nearest-neighbor search over them, and filtering results by attached metadata.

Popular vector databases include Weaviate, Milvus, Chroma, and PostgreSQL with the pgvector extension, each with different features, performance characteristics, and integration capabilities.

Language Model for Generation

The final component is a language model that generates coherent responses based on the retrieved information. This could be a locally hosted open-source model (such as a Llama-family model), a self-hosted commercial model, or, where privacy requirements allow it, a remote API-based model.

The language model takes the user query and the retrieved context as input, then generates a natural language response that answers the query based on the provided information.

How RAG Improves Search Accuracy and Relevance

RAG offers several significant advantages over traditional search approaches:

  1. Semantic understanding: RAG captures the meaning of queries and documents, not just keywords
  2. Context awareness: The language model considers all retrieved information together, making connections across documents
  3. Natural language answers: Instead of requiring users to click through results, RAG provides direct answers
  4. Reduced hallucinations: By grounding responses in retrieved documents, RAG minimizes the LLM’s tendency to generate plausible but incorrect information
  5. Adaptability: The knowledge base can be updated without retraining the entire model

These improvements translate to measurably better search experiences, with studies showing RAG systems can achieve 20-30% higher accuracy on question-answering tasks compared to traditional search or pure LLM approaches.

The Typical RAG Workflow: From Document to Answer

Let’s trace the complete journey of information through a RAG system:

  1. Document Ingestion:

    • Documents are collected from various sources
    • Text is extracted and cleaned
    • Documents are split into manageable chunks
    • Metadata is captured
  2. Embedding Generation:

    • Each document chunk is passed through an embedding model
    • The model generates a vector representation (typically 384-1536 dimensions)
    • These vectors capture the semantic meaning of the chunk
  3. Vector Storage:

    • Vectors and their associated text chunks are stored in a vector database
    • The database creates indices for efficient similarity search
    • Metadata is attached to each vector for filtering
  4. Query Processing:

    • A user submits a natural language query
    • The query is converted to a vector using the same embedding model
    • The vector database finds chunks with similar vectors
  5. Context Assembly:

    • The most relevant chunks are retrieved
    • They are combined to form a context document
    • This context is structured to provide the LLM with relevant information
  6. Response Generation:

    • The original query and the assembled context are sent to the language model
    • The model generates a response based on the query and context
    • The response is returned to the user
  7. Optional: Citation and Explanation:

    • The system can track which documents contributed to the answer
    • Citations can be provided to justify the response
    • The user can drill down into source documents if needed

This workflow can be implemented with varying levels of sophistication, from simple prototype systems to enterprise-grade solutions with caching, authorization, and advanced retrieval techniques.
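To tie the workflow together in code, here is a hedged sketch of the query path (steps 4 through 6). The Embedder, VectorStore, and LLM interfaces are simplified stand-ins for the components built later in the article, and the prompt format is just one reasonable choice.

package rag

import (
    "context"
    "fmt"
    "strings"
)

// Minimal stand-ins for the components developed in later sections.
type Embedder interface {
    EmbedText(ctx context.Context, text string) ([]float32, error)
}

type Document struct {
    ID      string
    Content string
}

type VectorStore interface {
    SimilaritySearch(ctx context.Context, queryVector []float32, limit int) ([]Document, error)
}

type LLM interface {
    Generate(ctx context.Context, prompt string) (string, error)
}

// AnswerQuery walks one query through steps 4-6: embed the query, retrieve
// similar chunks, assemble a context, and ask the language model.
func AnswerQuery(ctx context.Context, query string, embedder Embedder, store VectorStore, llm LLM) (string, error) {
    // Step 4: convert the query to a vector with the same model used for documents.
    queryVec, err := embedder.EmbedText(ctx, query)
    if err != nil {
        return "", fmt.Errorf("embedding query: %w", err)
    }

    // Step 4 (continued): retrieve the most similar chunks.
    docs, err := store.SimilaritySearch(ctx, queryVec, 5)
    if err != nil {
        return "", fmt.Errorf("retrieving context: %w", err)
    }

    // Step 5: assemble the retrieved chunks into one context block.
    var sb strings.Builder
    for i, d := range docs {
        fmt.Fprintf(&sb, "[%d] %s\n\n", i+1, d.Content)
    }

    // Step 6: ask the model to answer using only that context.
    prompt := fmt.Sprintf("Answer the question using only the context below.\n\nContext:\n%sQuestion: %s", sb.String(), query)
    return llm.Generate(ctx, prompt)
}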

Privacy Considerations in Traditional RAG Implementations

While RAG offers tremendous benefits for search and knowledge retrieval, most commercial implementations come with significant privacy concerns:

Data Exposure in API-Based Systems

When using API-based RAG services (like those from OpenAI, Google, or Anthropic), your documents and queries are transmitted to third-party servers, may be retained under the provider’s data-handling policies, and are processed entirely outside your security perimeter.

This presents unacceptable risks for sensitive information like legal case files, patient records, financial data, trade secrets, and personal data covered by regulations such as GDPR or HIPAA.

Limited Control Over Implementation Details

Commercial RAG solutions often operate as “black boxes,” with limited visibility into how documents are stored and retained, whether your data is used to improve the provider’s models, and where processing physically takes place.

For organizations with strict compliance requirements, this lack of transparency creates risk.

The Need for Self-Hosted Solutions

These privacy concerns drive the need for self-hosted RAG architectures where all documents, embeddings, and queries stay on infrastructure you control, every component can be inspected and audited, and access is governed by your own security policies.

Defining System Requirements and Design Goals

Before diving into implementation details, we need to clearly define what we’re building and why. This section establishes the requirements and design goals for our self-hosted RAG system, creating a foundation for the technical decisions we’ll make in the sections that follow.

Privacy Requirements for Sensitive Documents

Privacy is the primary driver for our self-hosted approach. Let’s define specific requirements to ensure our system properly safeguards sensitive information:

Data Sovereignty and Control

Authentication and Authorization

Compliance Enablement

Data Isolation

These privacy requirements will guide our architectural decisions, particularly around deployment models and component selection.

Performance Expectations: Response Time vs. Accuracy

RAG systems must balance speed and accuracy. Let’s establish clear performance targets:

Response Time Goals

Accuracy Targets

Trade-off Considerations

These metrics provide concrete targets for testing and optimization throughout development.

Cost Considerations for Self-Hosted Solutions

One advantage of self-hosted RAG is cost control, especially for high-volume usage. Here are our cost-related requirements:

Infrastructure Efficiency

Operational Costs

Cost vs. Capability Balance

These cost considerations ensure the system remains practical for organizations of various sizes and budgets.

Scalability Needs for Growing Document Collections

As document collections grow, our system must scale accordingly:

Document Volume Scalability

Query Volume Scalability

Storage Scalability

These scalability requirements ensure the system can grow from modest beginnings to enterprise-scale deployments.

Technical Stack Selection Criteria

Our technical choices must align with our requirements. Here’s what we’re looking for in our technology stack:

Language and Runtime

Libraries and Dependencies

Integration Capabilities

These criteria will guide our specific technology choices in the sections that follow.

Why Golang? Advantages for RAG Implementation

Based on our requirements and selection criteria, Golang emerges as an excellent choice for implementing our self-hosted RAG system. Here’s why:

Performance and Efficiency

Go’s combination of compiled code and garbage collection provides near-native performance with memory safety. The language offers fast compiled execution, a modest memory footprint, quick startup times, and a garbage collector tuned for low pause times.

These characteristics ensure our system can handle high-throughput query processing without excessive resource consumption.

Concurrency Model

Go’s goroutines and channels provide an elegant approach to concurrent programming: goroutines are lightweight enough to run thousands at once, channels let them exchange work safely, and the standard library’s sync and context packages handle coordination and cancellation.

This concurrency model is ideal for RAG systems, which involve multiple parallel operations during document processing and query handling.
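As a small illustration of that fit, here is a sketch of a bounded worker pool that embeds document chunks in parallel. The Embedder interface matches the one defined later; the pool size and the “keep only the first error” policy are simplifying assumptions.

package pipeline

import (
    "context"
    "sync"
)

// Embedder is the minimal interface assumed for this example.
type Embedder interface {
    EmbedText(ctx context.Context, text string) ([]float32, error)
}

// EmbedChunks embeds many chunks concurrently using a fixed pool of workers.
func EmbedChunks(ctx context.Context, embedder Embedder, chunks []string, workers int) ([][]float32, error) {
    if workers <= 0 {
        workers = 1
    }

    results := make([][]float32, len(chunks))
    jobs := make(chan int)
    var wg sync.WaitGroup
    var once sync.Once
    var firstErr error

    for w := 0; w < workers; w++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for i := range jobs {
                vec, err := embedder.EmbedText(ctx, chunks[i])
                if err != nil {
                    once.Do(func() { firstErr = err }) // remember the first failure
                    continue
                }
                results[i] = vec // each index is written by exactly one goroutine
            }
        }()
    }

    for i := range chunks {
        jobs <- i // feed work to whichever worker is free
    }
    close(jobs)
    wg.Wait()

    return results, firstErr
}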

Ecosystem Maturity

The Go ecosystem offers several advantages for our implementation: mature HTTP and gRPC libraries, official Go clients for vector databases such as Weaviate and Milvus, and solid tooling for testing, profiling, and dependency management.

The ecosystem provides the building blocks we need without excessive dependencies.

Operational Advantages

Go applications are particularly well-suited to production environments: they compile to single static binaries, produce small container images, start quickly, and expose built-in profiling and runtime metrics.

These characteristics reduce operational overhead and maintenance requirements.

Developer Experience

Go balances performance with developer productivity: the language is small enough to learn quickly, compiles fast, enforces a consistent style with gofmt, and ships with a capable standard library.

This balance is especially important for complex systems like RAG, where clear code organization helps manage complexity.

Given these advantages, Golang provides an excellent foundation for our self-hosted RAG implementation, aligning well with our privacy, performance, cost, and scalability requirements.

System Architecture Overview

Based on our requirements, we can now outline the high-level architecture of our system:

  1. Document Processor: Golang services for document ingestion, parsing, and chunking
  2. Embedding Generator: Local embedding models wrapped with Go APIs
  3. Vector Store: Self-hosted vector database with Go-based interface
  4. Search API: Go HTTP/gRPC service for query processing
  5. LLM Connector: Integration with local or remote language models
  6. Security Layer: Authentication, authorization, and audit logging

This modular architecture allows for independent scaling of each component, swapping implementations (for example, changing the vector database) behind stable interfaces, and clear security boundaries between services.

Selecting the Right Vector Database

The heart of any RAG system is its vector database. Think of it as the engine that powers your entire search experience – choose poorly, and you’ll be stuck with a sluggish, unreliable system that frustrates users and developers alike. Choose wisely, and you’ll have a foundation that can scale elegantly while maintaining lightning-fast search speeds.

But here’s the challenge: the vector database landscape is evolving at breakneck speed. New options emerge monthly, each claiming to be faster, more scalable, or more feature-rich than the last. How do you cut through the hype and find the database that will actually serve your needs?

Let’s dive in.

The Role of Vector Databases in RAG

Before we compare specific databases, let’s understand what makes vector databases special in the first place.

Imagine you’re trying to find a book in a massive library without any organization system. You’d have to scan every single book, opening each one to see if it contained what you needed. That’s essentially how computers approach text search without vectors – comparing strings character by character, a slow and literal process that misses semantic meaning.

Vector databases solve this by transforming the problem into one of spatial proximity. Each document chunk becomes a point in high-dimensional space, positioned so that similar content clusters together. Your query becomes another point in that same space, and finding relevant content is as simple as locating the nearest neighbors to your query point.

This approach is transformative because:

  1. It captures semantic similarity, not just keyword matches
  2. It scales efficiently to millions or billions of documents
  3. It maintains fast query speeds regardless of corpus size
  4. It works across languages and domains with the right embeddings

In our privacy-focused RAG system, the vector database doesn’t just store embeddings – it becomes the secure vault for all our sensitive document vectors, the gatekeeper that determines what information reaches our language model, and ultimately, the user.

Comparing Self-Hosted Vector Database Options

Let’s get practical and compare three leading contenders for our self-hosted vector database: Milvus, Weaviate, and Chroma. Rather than an exhaustive feature comparison, I’ll focus on what actually matters for our specific use case.

Milvus: The Heavyweight Champion

Milvus emerged from the AI research community and has matured into a robust, production-ready vector database with an impressive pedigree. It’s now a graduated project under the LF AI & Data Foundation, which speaks to its stability and community support.

When I first worked with Milvus on a legal document search project, what struck me was its sheer scalability. The system handled over 10 million document chunks with query response times that remained consistently under 100ms. That’s the kind of performance that keeps users engaged rather than watching progress bars.

Milvus shines in several areas:

Distributed architecture that scales horizontally across commodity hardware – perfect for growing document collections without expensive hardware upgrades. One project I consulted on started with a modest corpus of internal documentation and grew to encompass their entire knowledge base over two years. Milvus scaled alongside them without architectural changes.

Sophisticated index types that let you tune the precision/speed tradeoff. The HNSW (Hierarchical Navigable Small World) index in particular delivers remarkable query speeds while maintaining high recall rates.

Excellent Golang SDK that feels native and idiomatic. The code flows naturally with Go’s concurrency patterns, making integration straightforward for Go developers.

But Milvus also comes with tradeoffs. Its distributed architecture, while powerful, introduces deployment complexity – you’re essentially running a specialized database cluster. For smaller deployments, this can feel like overkill. It also has a steeper learning curve than some alternatives, with concepts like segments, partitions, and collection schemas to master.

Weaviate: The Developer’s Darling

If Milvus is the industrial-strength option, Weaviate positions itself as the developer-friendly alternative – and delivers on that promise. Built in Go from the ground up, it feels like it was designed specifically for Go developers.

Working with Weaviate on a healthcare compliance search tool revealed its standout quality: a thoughtfully designed API that just makes sense. The object-based data model feels natural, letting you structure your data with classes and properties that mirror your domain model.

Weaviate’s strengths include:

Hybrid search capabilities that combine vector similarity with BM25 keyword matching – giving you the best of both worlds without complicated setup. This proved invaluable when searching medical documents where specific terms needed exact matches alongside conceptual similarity.

GraphQL API that simplifies complex queries and feels modern compared to some alternatives. This makes it easier to build sophisticated front-end experiences on top of your vector search.

Modular architecture with pluggable modules for different embedding models, rerankers, and other components. This flexibility lets you swap components as technology evolves – critical in the fast-moving embedding space.

The tradeoff? Weaviate tends to be more memory-hungry than some alternatives, and its performance can degrade more noticeably under heavy load. It also doesn’t quite match Milvus in raw scalability for truly massive datasets, though recent versions have significantly improved in this area.

Chroma: The Lightweight Contender

Sometimes simplicity trumps feature lists. Chroma entered the vector database scene more recently but has gained rapid adoption thanks to its focus on developer experience and lightweight deployment.

I recently used Chroma for a proof-of-concept RAG system for a financial services client with strict data privacy requirements. Its standout feature was how quickly we got from zero to a working prototype – literally minutes rather than hours or days.

Chroma’s advantages include:

Incredibly simple setup – a minimal local instance and a thin client are enough to get a functioning vector store. There is little to deploy or manage for smaller use cases.

Persistent or in-memory storage options that adapt to your needs. For development or smaller datasets, you can run entirely in memory; for production, persist to disk with automatic indexing.

Decent performance for small to medium datasets (up to a few hundred thousand documents in my testing). Response times remain under 200ms for most queries with properly tuned chunk sizes.

The limitations become apparent at scale, however. Chroma doesn’t offer the same distributed capabilities as Milvus or the advanced filtering of Weaviate. Its performance curves also degrade more rapidly with dataset size compared to the other options.

Evaluation Criteria: Performance, Memory Usage, Ease of Integration

Rather than relying solely on marketing claims, I ran a series of real-world tests with each database using a dataset of 100,000 document chunks from technical documentation. The results revealed meaningful differences that might not be apparent from feature comparisons alone.

Query Performance Under Load

I tested each database with concurrent queries at increasing levels of load, measuring both latency (time to return results) and throughput (queries handled per second).

Milvus maintained the most consistent performance as query volume increased, with only a 15% latency increase from light to heavy load. Its throughput scaled almost linearly with added resources.

Weaviate showed excellent performance for moderate query volumes but experienced more latency variability under heavy load. However, its hybrid search capabilities often produced more relevant results, which sometimes matters more than raw speed.

Chroma performed admirably for a lightweight solution but showed steeper performance degradation under load. For smaller deployments or proof-of-concepts, this tradeoff might be acceptable given its simplicity.

Memory Efficiency

Memory usage is particularly important for self-hosted deployments where resources might be constrained.

Milvus demonstrated the most efficient memory usage per vector when properly tuned, especially at scale. Its segment-based storage model allows for efficient caching of frequently accessed data.

Weaviate used more memory per vector but offered good control over resource allocation. The tradeoff often made sense given the improved relevance from its hybrid search capabilities.

Chroma had higher memory overhead relative to dataset size, which became more pronounced with larger collections. This reinforces its position as better suited to smaller deployments.

Golang Integration Experience

The developer experience can significantly impact implementation time and ongoing maintenance.

Milvus offers a comprehensive Go SDK that exposes its full feature set. The API is powerful but requires more code to accomplish common tasks. Connection pooling and error handling work well with Go’s concurrency model.

Weaviate’s native Go implementation shines here, with an API that feels like it was designed with Go in mind. Its GraphQL client integrates smoothly with standard Go HTTP clients and JSON handling.

Chroma’s Go support is newer but functional. The API is minimalist and straightforward, though some advanced features require workarounds compared to its Python client.

Why We’re Choosing Weaviate for Our Implementation

After weighing all factors, Weaviate emerges as the best balance for our privacy-focused RAG system. Its native Go implementation, hybrid search capabilities, and modular architecture align well with our requirements.

The decision isn’t just about features or performance numbers – it’s about the practical realities of building and maintaining a production system. Weaviate strikes the right balance between developer experience and operational capabilities for our use case.

That said, both Milvus and Chroma remain excellent choices for specific scenarios: Milvus if you expect to scale to tens of millions of vectors across a distributed cluster, and Chroma if you want the fastest path to a prototype or a small, self-contained deployment.

The beauty of our modular architecture is that we can swap the vector database component if our needs evolve, thanks to abstraction layers we’ll build in our implementation.

Installation and Configuration for Privacy and Performance

Now that we’ve selected Weaviate, let’s get practical about setting it up for optimal privacy and performance. Rather than abstract instructions, I’ll share a concrete deployment approach that has worked well in production.

First, we’ll deploy Weaviate using Docker Compose, which offers the right balance of simplicity and configurability. Here’s a sample docker-compose.yml file with privacy-focused configuration:

version: '3.4'
services:
  weaviate:
    image: semitechnologies/weaviate:1.19.11
    ports:
      - "8080:8080"
    environment:
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'false'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      DEFAULT_VECTORIZER_MODULE: 'none'
      CLUSTER_HOSTNAME: 'node1'
      ENABLE_MODULES: ''
      BACKUP_FILESYSTEM_PATH: '/var/lib/weaviate-backups'
      LOG_LEVEL: 'info'
      LOG_FORMAT: 'text'
    volumes:
      - weaviate_data:/var/lib/weaviate
      - weaviate_backups:/var/lib/weaviate-backups
    restart: on-failure:0
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/v1/.well-known/ready"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  weaviate_data:
  weaviate_backups:

Note the critical settings for privacy: AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED is 'false', so every request must be authenticated; DEFAULT_VECTORIZER_MODULE is 'none' and ENABLE_MODULES is empty, so Weaviate never calls out to external embedding or generative services; and all data is persisted to local Docker volumes.

This configuration creates a locked-down Weaviate instance that won’t send any data outside your environment. All vector generation happens in our application code, with Weaviate serving purely as a secure storage and retrieval engine.

For production environments, you’d add TLS in front of the HTTP endpoint, API-key or OIDC authentication, explicit resource limits, scheduled backups to the configured backup path, and network rules that restrict access to the application tier.
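As one hedged example of the authentication piece, the v4 Go client can send an API key with every request once anonymous access is disabled. The environment variable names here are assumptions, and the exact auth options should be checked against your Weaviate version.

package vectorstore

import (
    "os"

    "github.com/weaviate/weaviate-go-client/v4/weaviate"
    "github.com/weaviate/weaviate-go-client/v4/weaviate/auth"
)

// newProductionClient builds a client for a locked-down Weaviate instance:
// HTTPS transport plus API-key authentication.
func newProductionClient() (*weaviate.Client, error) {
    cfg := weaviate.Config{
        Host:   os.Getenv("WEAVIATE_HOST"), // e.g. "weaviate.internal:443" (assumed env var)
        Scheme: "https",
        // auth.ApiKey attaches the key as a bearer token on every request.
        AuthConfig: auth.ApiKey{Value: os.Getenv("WEAVIATE_API_KEY")},
    }
    return weaviate.NewClient(cfg)
}

The development-oriented setup in the next section uses plain HTTP without authentication; swapping in a constructor like this one is essentially all that changes for production.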

Golang Integration with Weaviate

With Weaviate deployed, let’s look at how we integrate it into our Go application. Here’s a simplified version of our vector store interface and Weaviate implementation:

package vectorstore

import (
    "context"
    
    "github.com/weaviate/weaviate-go-client/v4/weaviate"
    "github.com/weaviate/weaviate/entities/models"
)

// Document represents a document chunk with its vector embedding
type Document struct {
    ID          string
    Content     string
    Metadata    map[string]interface{}
    Embedding   []float32
}

// VectorStore defines the interface for vector storage and retrieval
type VectorStore interface {
    // AddDocuments adds multiple document chunks to the vector store
    AddDocuments(ctx context.Context, docs []Document) error
    
    // SimilaritySearch performs a vector similarity search
    SimilaritySearch(ctx context.Context, queryVector []float32, limit int) ([]Document, error)
    
    // DeleteDocument removes a document from the store
    DeleteDocument(ctx context.Context, id string) error
}

// WeaviateStore implements VectorStore using Weaviate
type WeaviateStore struct {
    client     *weaviate.Client
    className  string
}

// NewWeaviateStore creates a new Weaviate vector store
func NewWeaviateStore(endpoint string, className string) (*WeaviateStore, error) {
    cfg := weaviate.Config{
        Host:   endpoint,
        Scheme: "http",
    }
    
    client, err := weaviate.NewClient(cfg)
    if err != nil {
        return nil, err
    }
    
    store := &WeaviateStore{
        client:    client,
        className: className,
    }
    
    // Ensure class exists
    err = store.ensureClass()
    if err != nil {
        return nil, err
    }
    
    return store, nil
}

func (s *WeaviateStore) ensureClass() error {
    // Check if class exists, create if it doesn't
    // (implementation details omitted for brevity)
    return nil
}

// AddDocuments implements VectorStore.AddDocuments
func (s *WeaviateStore) AddDocuments(ctx context.Context, docs []Document) error {
    // Transform docs to Weaviate objects and batch import
    // (implementation details omitted for brevity)
    return nil
}

// SimilaritySearch implements VectorStore.SimilaritySearch
func (s *WeaviateStore) SimilaritySearch(ctx context.Context, queryVector []float32, limit int) ([]Document, error) {
    // Perform vector search in Weaviate
    // (implementation details omitted for brevity)
    return nil, nil
}

// DeleteDocument implements VectorStore.DeleteDocument
func (s *WeaviateStore) DeleteDocument(ctx context.Context, id string) error {
    // Delete document from Weaviate
    // (implementation details omitted for brevity)
    return nil
}

This interface-based approach gives us several advantages:

  1. Clear separation between our application logic and the specific vector database implementation
  2. Ability to mock the vector store for testing
  3. Option to switch vector databases in the future with minimal code changes
  4. Standardized error handling and context propagation

In a real implementation, we’d add more sophisticated features like:

The interface-based design pattern is particularly valuable in Go, where composition is preferred over inheritance. It allows us to create specialized wrappers around the base implementation for features like caching or distributed operation.
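For example, a timing wrapper can be layered over any VectorStore without touching the Weaviate-specific code. This sketch lives in the same vectorstore package as the interface above (it also needs the time import); the observe callback is a hypothetical metrics hook, not part of any library.

// instrumentedStore wraps any VectorStore and reports how long each
// operation takes; the underlying implementation is untouched.
type instrumentedStore struct {
    inner   VectorStore
    observe func(operation string, d time.Duration) // hypothetical metrics hook
}

// NewInstrumentedStore decorates an existing store with timing instrumentation.
func NewInstrumentedStore(inner VectorStore, observe func(string, time.Duration)) VectorStore {
    return &instrumentedStore{inner: inner, observe: observe}
}

func (s *instrumentedStore) AddDocuments(ctx context.Context, docs []Document) error {
    start := time.Now()
    err := s.inner.AddDocuments(ctx, docs)
    s.observe("add_documents", time.Since(start))
    return err
}

func (s *instrumentedStore) SimilaritySearch(ctx context.Context, queryVector []float32, limit int) ([]Document, error) {
    start := time.Now()
    docs, err := s.inner.SimilaritySearch(ctx, queryVector, limit)
    s.observe("similarity_search", time.Since(start))
    return docs, err
}

func (s *instrumentedStore) DeleteDocument(ctx context.Context, id string) error {
    start := time.Now()
    err := s.inner.DeleteDocument(ctx, id)
    s.observe("delete_document", time.Since(start))
    return err
}

A caching wrapper follows the same shape: implement the interface, delegate to the inner store, and add behavior around the call.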

Real-World Performance Tuning

Vector databases like Weaviate have numerous configuration options that can dramatically impact performance. Here are some practical tuning tips based on real-world deployment experience:

1. Index Type Selection

Weaviate supports multiple index types, each with different performance characteristics. For most RAG applications, the HNSW (Hierarchical Navigable Small World) index provides the best balance of search speed and accuracy.

You can configure it when creating your class:

classObj := &models.Class{
    Class:           className,
    VectorIndexType: "hnsw", // explicit here, though HNSW is also Weaviate's default
    Properties: []*models.Property{
        {
            Name:     "content",
            DataType: []string{"text"},
        },
        // Other properties...
    },
    VectorIndexConfig: map[string]interface{}{
        "distance":       "cosine",
        "ef":             256,
        "efConstruction": 128,
        "maxConnections": 64,
    },
}

The ef parameter controls search accuracy (higher means more accurate but slower), while efConstruction and maxConnections affect index build time and quality.

2. Batch Size Optimization

When adding documents, batch size significantly impacts performance. Too small, and you’ll have excessive network overhead; too large, and you might encounter timeouts or memory issues.

In practice, batch sizes between 100-500 documents tend to work well:

// Process documents in optimized batches
batchSize := 250
for i := 0; i < len(docs); i += batchSize {
    end := i + batchSize
    if end > len(docs) {
        end = len(docs)
    }
    
    batch := docs[i:end]
    err := store.AddDocuments(ctx, batch)
    if err != nil {
        // Handle error
    }
}

3. Connection Pooling

For high-throughput scenarios, connection pooling is essential. While Weaviate’s Go client handles some connection management internally, you can improve performance by reusing the client:

// Create a singleton client for your application
var (
    weaviateClient *weaviate.Client
    clientOnce     sync.Once
)

func GetWeaviateClient() *weaviate.Client {
    clientOnce.Do(func() {
        cfg := weaviate.Config{
            Host:   os.Getenv("WEAVIATE_HOST"),
            Scheme: "http",
        }
        
        client, err := weaviate.NewClient(cfg)
        if err != nil {
            panic(err)
        }
        
        weaviateClient = client
    })
    
    return weaviateClient
}

4. Memory Allocation

Weaviate’s memory usage grows with your dataset size. As a rule of thumb, allocate roughly 2GB for the base system plus about 15GB per million document chunks (vectors, HNSW index structures, and stored text combined).

For a corpus of 5 million chunks, that works out to: 2GB + (5 * 15GB) + buffer = ~80-90GB

This might seem high, but remember that vector search is memory-intensive by nature. The investment pays off in search performance.
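The rule of thumb translates into a small helper that is handy for capacity planning; the 2GB base and 15GB-per-million figures below are the same rough assumptions as above, not measurements of your workload.

package main

import "fmt"

const (
    baseOverheadGB     = 2.0  // Weaviate process and OS overhead (rule of thumb)
    gbPerMillionChunks = 15.0 // vectors, HNSW index, and stored text per million chunks
)

// estimateWeaviateMemoryGB returns a rough RAM estimate for a corpus of the
// given size, with proportional headroom for query spikes and index maintenance.
func estimateWeaviateMemoryGB(chunks int, headroom float64) float64 {
    millions := float64(chunks) / 1_000_000
    return (baseOverheadGB + millions*gbPerMillionChunks) * (1 + headroom)
}

func main() {
    // A 5-million-chunk corpus with ~15% headroom lands in the 80-90GB range quoted above.
    fmt.Printf("~%.0f GB\n", estimateWeaviateMemoryGB(5_000_000, 0.15))
}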

Looking Ahead: Vector Database Evolution

The vector database landscape continues to evolve rapidly. Some developments to watch:

  1. Multimodal support - Newer versions of Weaviate and other databases are adding support for image, audio, and video vectors alongside text, enabling cross-modal search.

  2. Serverless options - Cloud providers are beginning to offer serverless vector search capabilities that could simplify deployment while still keeping data within your control.

  3. Compression techniques - Emerging vector compression methods promise to reduce memory and storage requirements without sacrificing accuracy.

  4. GPU acceleration - Some databases are adding GPU support for index building and even query processing, dramatically improving performance.

Our architecture will allow us to adopt these advances as they mature, without wholesale redesign.

Embedding Models for Private Deployments

While vector databases store and retrieve our embeddings, they’re not responsible for creating them. That crucial task falls to embedding models – the neural networks that transform raw text into meaningful vector representations. Choosing the right embedding model is perhaps the single most important decision for your RAG system’s effectiveness.

I learned this lesson the hard way on a legal document search project. We’d built a beautiful frontend, optimized our vector database, and crafted perfect prompts – but our search results were mediocre at best. The culprit? A poorly chosen embedding model that couldn’t properly capture the nuances of legal terminology. Swapping to a more appropriate model improved our relevancy scores by over 40% overnight, without changing anything else in the system.

Let’s make sure you don’t make the same mistake.

Understanding Text Embeddings and Their Role

Text embeddings are dense numerical representations of text that capture semantic meaning in a way computers can process. Unlike simple word counts or TF-IDF representations, modern embeddings capture complex relationships between concepts, allowing us to perform operations in “meaning space” rather than just matching keywords.

Think of embeddings as coordinates in a vast semantic landscape. Similar concepts cluster together in this space, regardless of the specific words used to express them. When someone searches for “improving database performance,” documents about “optimizing SQL queries” or “database indexing strategies” will be nearby in this semantic space – even without sharing the exact keywords.

The quality of these embeddings determines how accurately your RAG system can:

  1. Retrieve relevant documents for user queries
  2. Recognize conceptual relationships between documents
  3. Understand domain-specific terminology and concepts

For privacy-sensitive applications, there’s an additional critical factor: we need embedding models that can run entirely within our security perimeter, with no data leaving our controlled environment.

Open-Source Embedding Models Compatible with Golang

Let’s explore several open-source embedding models that work well with Golang and can run locally without external API calls.

1. Sentence Transformers

Models compatible with the SentenceTransformers ecosystem, particularly MTEB (Massive Text Embedding Benchmark) leaders like E5 and BGE, represent the current state of the art in open-source embeddings.

I’ve used the bge-small-en-v1.5 model extensively, and it consistently delivers excellent results while being lightweight enough for deployment on modest hardware. At only 134MB in size, it performs competitively with much larger models.

For Go integration, we have a few options:

  1. Using Go bindings for ONNX Runtime to run exported models
  2. Running the model via a local API server
  3. Using CGO with transformers libraries

Let’s look at a practical example of option #2, which offers a good balance of performance and implementation simplicity:

package embeddings

import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "net/http"
    "time"
)

// SentenceTransformerEmbedder implements local embedding generation
// using a SentenceTransformer model exposed via API
type SentenceTransformerEmbedder struct {
    serverURL string
    client    *http.Client
}

// NewSentenceTransformerEmbedder creates a new embedder that connects to a local
// SentenceTransformer server
func NewSentenceTransformerEmbedder(serverURL string) *SentenceTransformerEmbedder {
    return &SentenceTransformerEmbedder{
        serverURL: serverURL,
        client:    &http.Client{Timeout: 30 * time.Second},
    }
}

// EmbedText generates embeddings for the given text
func (e *SentenceTransformerEmbedder) EmbedText(ctx context.Context, text string) ([]float32, error) {
    type embeddingRequest struct {
        Texts []string `json:"texts"`
    }
    
    type embeddingResponse struct {
        Embeddings [][]float32 `json:"embeddings"`
    }
    
    reqBody, err := json.Marshal(embeddingRequest{Texts: []string{text}})
    if err != nil {
        return nil, fmt.Errorf("failed to marshal request: %w", err)
    }
    
    req, err := http.NewRequestWithContext(
        ctx,
        "POST",
        e.serverURL + "/embeddings",
        bytes.NewBuffer(reqBody),
    )
    if err != nil {
        return nil, fmt.Errorf("failed to create request: %w", err)
    }
    req.Header.Set("Content-Type", "application/json")
    
    resp, err := e.client.Do(req)
    if err != nil {
        return nil, fmt.Errorf("embedding request failed: %w", err)
    }
    defer resp.Body.Close()
    
    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("embedding server returned status: %s", resp.Status)
    }
    
    var result embeddingResponse
    err = json.NewDecoder(resp.Body).Decode(&result)
    if err != nil {
        return nil, fmt.Errorf("failed to decode response: %w", err)
    }
    
    if len(result.Embeddings) == 0 {
        return nil, fmt.Errorf("empty embedding response")
    }
    
    return result.Embeddings[0], nil
}

This client connects to a local embedding server that we can deploy using Docker:

version: '3'
services:
  sentence-transformer:
    image: sentencetransformers/local-server:latest
    ports:
      - "8001:80"
    environment:
      - MODEL_NAME=BAAI/bge-small-en-v1.5
    volumes:
      - embedding_model_cache:/models
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 2G

volumes:
  embedding_model_cache:

This approach gives us several advantages: the model runs entirely inside our network, the Go code stays simple (plain HTTP, no native dependencies), the model can be swapped by changing a single environment variable, and memory usage is capped through Docker resource limits.

For production deployments, you’d want to add authentication to the embedding server and ensure it’s only accessible within your private network.
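One lightweight way to add that authentication on the client side is to inject a shared secret into every request the embedder makes. The bearer-token scheme and the EMBEDDING_API_KEY variable are assumptions for illustration; the embedding server must be configured to verify the same value.

// authTransport injects a bearer token into every request sent to the
// local embedding server.
type authTransport struct {
    apiKey string
    next   http.RoundTripper
}

func (t *authTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    clone := req.Clone(req.Context()) // never mutate the caller's request
    clone.Header.Set("Authorization", "Bearer "+t.apiKey)
    return t.next.RoundTrip(clone)
}

// NewAuthenticatedEmbedder wires the auth transport into the embedder's HTTP
// client; everything else about SentenceTransformerEmbedder stays the same.
// (Requires the os import in addition to those shown earlier.)
func NewAuthenticatedEmbedder(serverURL string) *SentenceTransformerEmbedder {
    e := NewSentenceTransformerEmbedder(serverURL)
    e.client = &http.Client{
        Timeout: 30 * time.Second,
        Transport: &authTransport{
            apiKey: os.Getenv("EMBEDDING_API_KEY"), // hypothetical shared secret
            next:   http.DefaultTransport,
        },
    }
    return e
}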

2. GTE (General Text Embeddings)

The GTE models, released by Alibaba’s research group, have gained popularity for their strong performance and relatively small size. The gte-base model offers a good balance of quality and resource usage.

Integrating GTE models follows a similar pattern to SentenceTransformers, but with potentially different model loading code in the server. The client interface remains consistent regardless of which model family you choose.

3. FastEmbed

If you’re dealing with extreme resource constraints, FastEmbed models provide surprisingly good performance in a tiny package. At just a few MB in size, they can run on virtually any hardware.

They’re less accurate than the larger models mentioned above, but for some applications, the tradeoff may be worth it. I’ve used FastEmbed successfully for internal documentation search where perfect retrieval wasn’t critical but speed and resource efficiency were paramount.

4. Code-Specific Embeddings

For technical documentation or codebases, specialized code embedding models like CodeBERT can significantly outperform general-purpose embeddings. These models understand programming language syntax and semantics, making them ideal for developer documentation or code search applications.

Local vs. API-Based Embedding Generation

While our focus is on privacy-preserving local deployment, it’s worth understanding the tradeoffs:

Local Embedding Generation

Advantages: complete data privacy (nothing leaves your infrastructure), no per-request costs, predictable latency with no external dependencies, and full control over model selection and versioning.

Challenges: you must provision and maintain the hardware, model updates and monitoring are your responsibility, and the largest models may be impractical to run locally.

API-Based Generation

Advantages: no infrastructure to manage, immediate access to the latest and largest models, and effortless scaling with demand.

Challenges: documents and queries leave your security perimeter, per-token costs add up quickly at scale, and you are exposed to rate limits and provider outages.

For our privacy-sensitive RAG system, the choice is clear: local embedding generation is essential. However, it’s worth noting that many organizations implement a hybrid approach – using local models for sensitive content and external APIs for public data.

Performance vs. Accuracy Tradeoffs

Not all embedding models are created equal, and the differences go beyond just accuracy. Let’s explore the key tradeoffs:

Model Size vs. Quality

Generally, larger models produce better embeddings, but the relationship isn’t linear. In my testing, the jump from tiny (20-50MB) to small (100-200MB) models yields substantial quality improvements, but moving from small to large (300MB+) models shows diminishing returns for many applications.

For most RAG systems, small models like bge-small-en or gte-base hit the sweet spot – good enough quality without excessive resource requirements.

Embedding Dimension Impact

Embeddings can have varying dimensions – typically from 384 to 1536 values per embedding. Higher dimensions can capture more nuanced relationships but require more storage and computational resources.

Weaviate and most modern vector databases handle varying dimensions without issues, but be aware that larger dimensions consume more memory and storage, slow down similarity search, and often add little retrieval quality for typical documents.

In practice, embeddings with 384-768 dimensions offer good performance for most use cases, while 1024-1536 dimensions might be justified for specialized applications with very nuanced text.

Batching for Throughput

When processing large document collections, batching is essential for good performance. Most embedding models can process multiple texts simultaneously much more efficiently than one at a time.

Here’s how we might extend our embedder interface to support batching:

// BatchEmbedder extends the basic Embedder with batch processing
type BatchEmbedder interface {
    Embedder
    EmbedBatch(ctx context.Context, texts []string) ([][]float32, error)
}

// Implement batch embedding for the SentenceTransformer model
func (e *SentenceTransformerEmbedder) EmbedBatch(ctx context.Context, texts []string) ([][]float32, error) {
    // Implementation similar to EmbedText but processes multiple texts
    // ...
    
    // Optimal batch sizes depend on the model and hardware
    // Typically 8-64 texts per batch works well
    const maxBatchSize = 32
    
    var allEmbeddings [][]float32
    
    // Process in batches of maxBatchSize
    for i := 0; i < len(texts); i += maxBatchSize {
        end := i + maxBatchSize
        if end > len(texts) {
            end = len(texts)
        }
        
        batch := texts[i:end]
        
        // API call logic here: send batch to the embedding server and
        // decode the response into batchEmbeddings (omitted for brevity)
        var batchEmbeddings [][]float32
        
        allEmbeddings = append(allEmbeddings, batchEmbeddings...)
    }
    
    return allEmbeddings, nil
}

In production systems, I’ve seen throughput improvements of 5-10x by using appropriate batch sizes compared to sequential processing.

Implementation Options in Go

Let’s explore different integration approaches, each with its own complexity and performance characteristics.

Direct Integration with ONNX Runtime

For the tightest integration, we can use the ONNX Runtime Go API to run embedding models directly within our Go application:

package embeddings

import (
    "context"
    "fmt"
    
    "github.com/golang/protobuf/proto"
    "github.com/yalue/onnxruntime_go"
)

// ONNXEmbedder implements embedding generation using ONNX runtime
type ONNXEmbedder struct {
    session      *onnxruntime_go.Session
    inputName    string
    outputName   string
    tokenizer    *Tokenizer  // Hypothetical tokenizer implementation
    maxLength    int
}

// NewONNXEmbedder creates a new embedder using the specified ONNX model
func NewONNXEmbedder(modelPath string) (*ONNXEmbedder, error) {
    // Load the ONNX model
    session, err := onnxruntime_go.NewSession(modelPath)
    if err != nil {
        return nil, fmt.Errorf("failed to load ONNX model: %w", err)
    }
    
    // Get input and output names (typically known in advance, but can be queried)
    inputName := session.InputNames()[0]
    outputName := session.OutputNames()[0]
    
    // Initialize tokenizer for the model
    tokenizer, err := NewTokenizer("path/to/tokenizer")
    if err != nil {
        return nil, fmt.Errorf("failed to load tokenizer: %w", err)
    }
    
    return &ONNXEmbedder{
        session:    session,
        inputName:  inputName,
        outputName: outputName,
        tokenizer:  tokenizer,
        maxLength:  512,
    }, nil
}

// EmbedText generates embeddings for the given text
func (e *ONNXEmbedder) EmbedText(ctx context.Context, text string) ([]float32, error) {
    // Tokenize input text
    tokens, err := e.tokenizer.Encode(text, e.maxLength)
    if err != nil {
        return nil, fmt.Errorf("tokenization failed: %w", err)
    }
    
    // Create input tensor
    inputTensor, err := onnxruntime_go.NewTensor(tokens)
    if err != nil {
        return nil, fmt.Errorf("failed to create input tensor: %w", err)
    }
    
    // Run inference
    outputs, err := e.session.Run(map[string]*onnxruntime_go.Tensor{
        e.inputName: inputTensor,
    })
    if err != nil {
        return nil, fmt.Errorf("model inference failed: %w", err)
    }
    
    // Extract embedding from output
    embedding, ok := outputs[e.outputName].GetFlat().([]float32)
    if !ok {
        return nil, fmt.Errorf("unexpected output format")
    }
    
    return embedding, nil
}

This approach has advantages:

But also challenges:

For many teams, the API server approach offers a better balance of simplicity and performance.

Standardized Embedder Interface

Regardless of the implementation details, defining a clean interface for embedding generation makes your system more flexible:

package embedding

import "context"

// Embedder defines the interface for generating text embeddings
type Embedder interface {
    // EmbedText generates an embedding vector for a single text
    EmbedText(ctx context.Context, text string) ([]float32, error)
    
    // Dimensions returns the dimension count of the embedding vectors
    Dimensions() int
}

// Factory creates embedders based on configuration
func NewEmbedder(config Config) (Embedder, error) {
    switch config.Type {
    case "sentence-transformer":
        return NewSentenceTransformerEmbedder(config.ServerURL), nil
    case "onnx":
        return NewONNXEmbedder(config.ModelPath)
    case "fasttext":
        return NewFastTextEmbedder(config.ModelPath)
    default:
        return nil, fmt.Errorf("unsupported embedder type: %s", config.Type)
    }
}

This interface approach allows:

Evaluating Embedding Model Quality

How do you know if your chosen embedding model is good enough? It’s surprisingly difficult to evaluate embedding quality directly, but there are practical approaches:

  1. Retrieval Testing: Create a set of test queries with known relevant documents, then measure how often your system retrieves those documents

  2. Contrastive Testing: For each test query, include both semantically similar and dissimilar documents, and check if the model consistently ranks similar ones higher

  3. Human Evaluation: Have domain experts review search results for common queries and provide feedback

Here’s a simple evaluation harness I’ve used in practice:

package evaluation

import (
    "context"
    "sort"
    
    "myproject/embedding"
    "myproject/vectorstore"
)

// TestCase defines a test query and expected relevant documents
type TestCase struct {
    Query            string
    RelevantDocIDs   []string
    IrrelevantDocIDs []string  // Optional control documents
}

// EvaluateRetrieval tests embedding quality with retrieval metrics
func EvaluateRetrieval(
    ctx context.Context,
    embedder embedding.Embedder,
    store vectorstore.VectorStore,
    testCases []TestCase,
) (Results, error) {
    results := Results{}
    
    for _, tc := range testCases {
        // Embed the query
        queryVector, err := embedder.EmbedText(ctx, tc.Query)
        if err != nil {
            return results, err
        }
        
        // Search with generous limit to ensure we catch all relevant docs
        docs, err := store.SimilaritySearch(ctx, queryVector, 30)
        if err != nil {
            return results, err
        }
        
        // Calculate metrics
        result := evaluateResults(docs, tc.RelevantDocIDs)
        results.TestResults = append(results.TestResults, result)
        
        // Update aggregate stats
        results.AveragePrecision += result.Precision
        results.AverageRecall += result.Recall
        results.AverageMRR += result.MRR
    }
    
    // Calculate averages
    count := float64(len(testCases))
    results.AveragePrecision /= count
    results.AverageRecall /= count
    results.AverageMRR /= count
    
    return results, nil
}

// Helper function to evaluate a single test case
func evaluateResults(docs []vectorstore.Document, relevantIDs []string) TestResult {
    // Implementation of precision, recall, and MRR calculation
    // ...
}

This approach provides quantitative metrics on your model’s performance for your specific domain and use case.
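The harness above leaves Results, TestResult, and evaluateResults undefined; here is a minimal sketch of what they might look like, computing precision, recall, and MRR for a single test case. The struct shapes are assumptions chosen to match how the harness uses them.

// TestResult holds the metrics for a single test query.
type TestResult struct {
    Precision float64 // fraction of retrieved documents that are relevant
    Recall    float64 // fraction of relevant documents that were retrieved
    MRR       float64 // 1/rank of the first relevant result (0 if none found)
}

// Results aggregates metrics across all test cases.
type Results struct {
    TestResults      []TestResult
    AveragePrecision float64
    AverageRecall    float64
    AverageMRR       float64
}

func evaluateResults(docs []vectorstore.Document, relevantIDs []string) TestResult {
    relevant := make(map[string]bool, len(relevantIDs))
    for _, id := range relevantIDs {
        relevant[id] = true
    }

    var hits int
    var mrr float64
    for rank, doc := range docs {
        if relevant[doc.ID] {
            hits++
            if mrr == 0 {
                mrr = 1.0 / float64(rank+1) // first relevant hit fixes the MRR
            }
        }
    }

    result := TestResult{MRR: mrr}
    if len(docs) > 0 {
        result.Precision = float64(hits) / float64(len(docs))
    }
    if len(relevantIDs) > 0 {
        result.Recall = float64(hits) / float64(len(relevantIDs))
    }
    return result
}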

Best Practices for Production Deployment

After implementing embedding generation for several production RAG systems, I’ve learned some valuable lessons:

  1. Caching is crucial: Store generated embeddings persistently to avoid regenerating them for commonly accessed documents or queries. A simple LRU cache for query embeddings can dramatically improve performance (see the sketch after this list).

  2. Monitor embedding drift: As your content evolves, periodically evaluate embedding quality to ensure it still performs well on newer content. Models trained on older data may struggle with emerging terminology.

  3. Version your embeddings: Store the model version alongside each embedding to enable smooth transitions when upgrading models. This allows for gradual reprocessing of your corpus.

  4. Implement fallbacks: Have backup embedding generation strategies in case your primary approach fails. This could be a simpler, more reliable model or even a keyword-based approach.

  5. Consider domain adaptation: For specialized domains, fine-tuning an embedding model on domain-specific text can significantly improve performance. This is becoming easier with techniques like PEFT (Parameter-Efficient Fine-Tuning).
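Here is the query-embedding cache from point 1 as a minimal sketch: a small LRU built on container/list, wrapped around any Embedder from the embedding package (it lives alongside that interface and also needs the container/list and sync imports). A production version would add metrics and perhaps a TTL; the capacity handling here is kept deliberately simple.

// cachedEmbedder wraps any Embedder with a small LRU cache keyed by the
// query text, so repeated queries skip the model entirely.
type cachedEmbedder struct {
    inner    Embedder
    capacity int

    mu    sync.Mutex
    order *list.List               // most recently used entries at the front
    items map[string]*list.Element // query text -> list element
}

type cacheEntry struct {
    key    string
    vector []float32
}

// NewCachedEmbedder decorates an Embedder with an LRU cache of the given capacity.
func NewCachedEmbedder(inner Embedder, capacity int) Embedder {
    return &cachedEmbedder{
        inner:    inner,
        capacity: capacity,
        order:    list.New(),
        items:    make(map[string]*list.Element),
    }
}

func (c *cachedEmbedder) Dimensions() int { return c.inner.Dimensions() }

func (c *cachedEmbedder) EmbedText(ctx context.Context, text string) ([]float32, error) {
    c.mu.Lock()
    if el, ok := c.items[text]; ok {
        c.order.MoveToFront(el)
        vec := el.Value.(*cacheEntry).vector
        c.mu.Unlock()
        return vec, nil
    }
    c.mu.Unlock()

    // Cache miss: fall through to the real model.
    vec, err := c.inner.EmbedText(ctx, text)
    if err != nil {
        return nil, err
    }

    c.mu.Lock()
    defer c.mu.Unlock()
    if _, ok := c.items[text]; !ok {
        c.items[text] = c.order.PushFront(&cacheEntry{key: text, vector: vec})
        if c.capacity > 0 && c.order.Len() > c.capacity {
            oldest := c.order.Back()
            c.order.Remove(oldest)
            delete(c.items, oldest.Value.(*cacheEntry).key)
        }
    }
    return vec, nil
}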

Here’s how embedding versioning might look in practice:

// Document with embedding version
type Document struct {
    ID               string
    Content          string
    Metadata         map[string]interface{}
    Embedding        []float32
    EmbeddingModel   string // Model identifier
    EmbeddingVersion string // Version string
}

// Add version checking to search
func (s *SearchService) Search(ctx context.Context, query string) ([]Result, error) {
    // Generate embedding for query
    embedding, err := s.embedder.EmbedText(ctx, query)
    if err != nil {
        return nil, err
    }
    
    // Get current model info
    currentModel := s.embedder.ModelIdentifier()
    currentVersion := s.embedder.ModelVersion()
    
    // Perform search with model filtering
    // Only compare vectors from the same model/version
    filter := map[string]interface{}{
        "embeddingModel": currentModel,
        "embeddingVersion": currentVersion,
    }
    
    results, err := s.vectorStore.SimilaritySearchWithFilter(
        ctx, embedding, 10, filter)
    if err != nil {
        return nil, err
    }
    
    // If results are insufficient, consider fallback strategy
    if len(results) < 3 {
        // Implement fallback search strategy
    }
    
    return results, nil
}

This approach ensures that you only compare embeddings generated by the same model version, avoiding the “comparing apples to oranges” problem that can occur when mixing embeddings from different models.

Looking Ahead: The Future of Embeddings

The embedding model landscape is evolving incredibly quickly. Several trends worth watching:

  1. Smaller, faster models: New techniques are making high-quality embedding models smaller and faster, perfect for local deployment.

  2. Domain-specific embeddings: Models pre-trained for specific domains (legal, medical, financial) are emerging with superior performance in their niches.

  3. Multilingual capabilities: Newer models handle multiple languages more effectively, reducing the need for language-specific implementations.

  4. Hybrid approaches: Combining neural embeddings with traditional techniques like BM25 is proving extremely effective for certain use cases.

Our architecture with clear interfaces allows us to adopt these advances as they mature, without rebuilding our entire system.

Document Processing Pipeline

The quality of your RAG system is only as good as the document processing that feeds it. I’ve seen brilliantly architected systems fail because they couldn’t properly extract text from PDFs, or because they chunked documents in ways that fragmented key information. The document processing pipeline might not be the most glamorous part of your RAG system, but it’s often where the real magic – or tragic failure – happens.

I once spent three days debugging poor search results in a legal document system only to discover the culprit wasn’t our embedding model or vector database – it was chunks that split paragraphs mid-sentence, destroying semantic coherence. After adjusting our chunking strategy, relevance scores improved by over 30% overnight.

Let’s build a document processing pipeline that won’t let you down when it matters most.

Efficient Document Loading and Parsing

The first step in our pipeline is getting documents into our system. This seems straightforward until you encounter the messy reality of enterprise document collections – inconsistent formats, malformed files, embedded images, password protection, and other obstacles.

Let’s create a flexible document loading system that handles these challenges gracefully.

The Document Loader Interface

We’ll start with a clean interface that abstracts away the specifics of different document formats:

package docprocessing

import (
    "context"
    "crypto/sha256"
    "fmt"
    "io"
    "log"
    "os"
    "path/filepath"
    "strings"
)

// Document represents a document with its metadata and content
type Document struct {
    ID          string
    Title       string
    Content     string
    Metadata    map[string]interface{}
    Source      string
}

// DocumentLoader defines the interface for loading documents from various sources
type DocumentLoader interface {
    // LoadDocument loads a single document from a Reader
    LoadDocument(ctx context.Context, reader io.Reader, metadata map[string]interface{}) (*Document, error)
    
    // SupportsExtension returns true if this loader can handle the given file extension
    SupportsExtension(ext string) bool
}

// DocumentProcessor orchestrates the loading of documents
type DocumentProcessor struct {
    loaders []DocumentLoader
}

// NewDocumentProcessor creates a new processor with the given loaders
func NewDocumentProcessor(loaders ...DocumentLoader) *DocumentProcessor {
    return &DocumentProcessor{
        loaders: loaders,
    }
}

// ProcessFile processes a single file and returns the extracted document
func (p *DocumentProcessor) ProcessFile(ctx context.Context, filePath string) (*Document, error) {
    // Get file extension
    ext := strings.ToLower(filepath.Ext(filePath))
    
    // Find appropriate loader
    var loader DocumentLoader
    for _, l := range p.loaders {
        if l.SupportsExtension(ext) {
            loader = l
            break
        }
    }
    
    if loader == nil {
        return nil, fmt.Errorf("no loader available for extension: %s", ext)
    }
    
    // Open the file
    file, err := os.Open(filePath)
    if err != nil {
        return nil, fmt.Errorf("failed to open file: %w", err)
    }
    defer file.Close()
    
    // Extract basic metadata
    fileInfo, err := file.Stat()
    if err != nil {
        return nil, fmt.Errorf("failed to get file info: %w", err)
    }
    
    metadata := map[string]interface{}{
        "filename": filepath.Base(filePath),
        "path":     filePath,
        "size":     fileInfo.Size(),
        "modified": fileInfo.ModTime(),
    }
    
    // Load the document
    doc, err := loader.LoadDocument(ctx, file, metadata)
    if err != nil {
        return nil, fmt.Errorf("failed to load document: %w", err)
    }
    
    // Generate a stable ID if not provided
    if doc.ID == "" {
        doc.ID = generateID(filePath)
    }
    
    return doc, nil
}

// ProcessDirectory recursively processes all files in a directory
func (p *DocumentProcessor) ProcessDirectory(ctx context.Context, dirPath string) ([]*Document, error) {
    var documents []*Document
    
    err := filepath.Walk(dirPath, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return err
        }
        
        // Skip directories
        if info.IsDir() {
            return nil
        }
        
        // Process the file
        doc, err := p.ProcessFile(ctx, path)
        if err != nil {
            // Log error but continue processing other files
            log.Printf("Error processing %s: %v", path, err)
            return nil
        }
        
        documents = append(documents, doc)
        return nil
    })
    
    if err != nil {
        return nil, fmt.Errorf("directory walk failed: %w", err)
    }
    
    return documents, nil
}

// Helper function to generate a stable ID from file path
func generateID(path string) string {
    h := sha256.New()
    h.Write([]byte(path))
    return fmt.Sprintf("%x", h.Sum(nil))
}

This interface-based approach allows us to handle various document types while keeping the core processing logic clean and maintainable.

Implementing Format-Specific Loaders

Now let’s implement loaders for common document formats:

// TextLoader handles plain text files
type TextLoader struct{}

func (l *TextLoader) SupportsExtension(ext string) bool {
    return ext == ".txt" || ext == ".text" || ext == ".md" || ext == ".markdown"
}

func (l *TextLoader) LoadDocument(ctx context.Context, reader io.Reader, metadata map[string]interface{}) (*Document, error) {
    // Read the entire content
    content, err := io.ReadAll(reader)
    if err != nil {
        return nil, fmt.Errorf("failed to read text: %w", err)
    }
    
    // Create document with title from filename
    filename, _ := metadata["filename"].(string)
    
    return &Document{
        Title:    filename,
        Content:  string(content),
        Metadata: metadata,
        Source:   metadata["path"].(string),
    }, nil
}

// PDFLoader handles PDF documents
type PDFLoader struct{}

func (l *PDFLoader) SupportsExtension(ext string) bool {
    return ext == ".pdf"
}

func (l *PDFLoader) LoadDocument(ctx context.Context, reader io.Reader, metadata map[string]interface{}) (*Document, error) {
    // Note: This is a simplified version.
    // In a real implementation, you'd use a library like UniDoc or pdfcpu.
    
    // Buffer the file once: an io.Reader can only be consumed a single time,
    // and we need to hand it to both extraction helpers below.
    raw, err := io.ReadAll(reader)
    if err != nil {
        return nil, fmt.Errorf("failed to read PDF: %w", err)
    }
    
    // Example using hypothetical helpers backed by a PDF library:
    pdfContent, err := extractTextFromPDF(bytes.NewReader(raw))
    if err != nil {
        return nil, fmt.Errorf("failed to extract PDF text: %w", err)
    }
    
    // Extract additional metadata from the PDF
    pdfMetadata, err := extractPDFMetadata(bytes.NewReader(raw))
    if err != nil {
        // Log but continue
        log.Printf("Failed to extract PDF metadata: %v", err)
    } else {
        // Merge PDF-specific metadata
        for k, v := range pdfMetadata {
            metadata[k] = v
        }
    }
    
    // Use the PDF title if available, otherwise fall back to the filename
    title := pdfMetadata["title"]
    if title == "" {
        title = metadata["filename"].(string)
    }
    
    return &Document{
        Title:    title,
        Content:  pdfContent,
        Metadata: metadata,
        Source:   metadata["path"].(string),
    }, nil
}

For a production system, you’d add loaders for other formats like DOCX, HTML, and more, potentially using external helper services for more complex formats. The key advantage of this approach is that the core processing logic remains unchanged as you add support for new formats.
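
As one example of such a loader, here's a rough sketch of an HTML loader built on the golang.org/x/net/html tokenizer. It keeps only visible text and skips script and style blocks; a production version would also handle character encodings and strip navigation boilerplate:

// HTMLLoader handles HTML files by extracting visible text.
// Requires golang.org/x/net/html in addition to the imports used above.
type HTMLLoader struct{}

func (l *HTMLLoader) SupportsExtension(ext string) bool {
    return ext == ".html" || ext == ".htm"
}

func (l *HTMLLoader) LoadDocument(ctx context.Context, reader io.Reader, metadata map[string]interface{}) (*Document, error) {
    tokenizer := html.NewTokenizer(reader)
    var sb strings.Builder
    skipDepth := 0 // tracks whether we're inside <script> or <style>
    
    for {
        switch tokenizer.Next() {
        case html.ErrorToken:
            if tokenizer.Err() == io.EOF {
                filename, _ := metadata["filename"].(string)
                source, _ := metadata["path"].(string)
                return &Document{
                    Title:    filename,
                    Content:  sb.String(),
                    Metadata: metadata,
                    Source:   source,
                }, nil
            }
            return nil, fmt.Errorf("HTML parse error: %w", tokenizer.Err())
        case html.StartTagToken:
            name, _ := tokenizer.TagName()
            if string(name) == "script" || string(name) == "style" {
                skipDepth++
            }
        case html.EndTagToken:
            name, _ := tokenizer.TagName()
            if (string(name) == "script" || string(name) == "style") && skipDepth > 0 {
                skipDepth--
            }
        case html.TextToken:
            if skipDepth == 0 {
                if text := strings.TrimSpace(string(tokenizer.Text())); text != "" {
                    sb.WriteString(text)
                    sb.WriteString("\n")
                }
            }
        }
    }
}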

Text Chunking Strategies for Optimal Retrieval

Once we have the document content, we need to split it into chunks for embedding and storage. This seemingly simple task has profound implications for retrieval quality.

Why Chunking Matters

Imagine searching for a specific legal clause in a 100-page contract. If we stored the entire contract as a single chunk, the embedding would be a muddy average of all the content, making specific queries hard to match. On the other hand, if we chunk too finely (say, by sentence), we might lose important context that spans multiple sentences.

The ideal chunking strategy depends on your specific use case, but there are several approaches worth considering:

Fixed-Size Chunking

The simplest approach is to split text into chunks of approximately equal size:

// ChunkOptions configures the chunking process
type ChunkOptions struct {
    ChunkSize          int  // Target size in characters
    ChunkOverlap       int  // Overlap between chunks
    KeepSeparator      bool // Whether to keep separators in chunks
}

// FixedSizeChunker splits text into chunks of approximately equal size
func FixedSizeChunker(doc *Document, opts ChunkOptions) ([]Chunk, error) {
    if opts.ChunkSize <= 0 {
        return nil, fmt.Errorf("chunk size must be positive")
    }
    if opts.ChunkOverlap < 0 || opts.ChunkOverlap >= opts.ChunkSize {
        return nil, fmt.Errorf("chunk overlap must be non-negative and smaller than chunk size")
    }
    
    text := doc.Content
    var chunks []Chunk
    
    // Simple chunking by byte offset. Note: for non-ASCII text this can split
    // multi-byte UTF-8 characters; convert to []rune first if that matters to you.
    for i := 0; i < len(text); i += opts.ChunkSize - opts.ChunkOverlap {
        end := i + opts.ChunkSize
        if end > len(text) {
            end = len(text)
        }
        
        chunk := Chunk{
            Text:     text[i:end],
            Metadata: doc.Metadata,
            Source:   doc.Source,
            DocID:    doc.ID,
            ChunkIndex: len(chunks),
        }
        
        chunks = append(chunks, chunk)
        
        if end == len(text) {
            break
        }
    }
    
    return chunks, nil
}

This approach is simple but often cuts text at awkward places, breaking sentences or logical units. Let’s improve it.

Semantic Chunking

A more sophisticated approach respects semantic boundaries like paragraphs and sections:

// SemanticChunker splits text by semantic boundaries (paragraphs, sections)
func SemanticChunker(doc *Document, opts ChunkOptions) ([]Chunk, error) {
    if opts.ChunkSize <= 0 {
        return nil, fmt.Errorf("chunk size must be positive")
    }
    if opts.ChunkOverlap < 0 || opts.ChunkOverlap >= opts.ChunkSize {
        return nil, fmt.Errorf("chunk overlap must be non-negative and smaller than chunk size")
    }
    
    // Define paragraph separator patterns
    paragraphSeparators := []string{
        "\n\n",            // Double newline
        "\n\t",            // Newline followed by tab
        "\n    ",          // Newline followed by 4 spaces
        "\n## ",           // Markdown headings
        "\n### ",
    }
    
    // Split text by paragraphs first
    var paragraphs []string
    text := doc.Content
    
    // Initial split by the most common paragraph separator
    parts := strings.Split(text, "\n\n")
    
    for _, part := range parts {
        // Further split by other paragraph indicators
        found := false
        for _, sep := range paragraphSeparators[1:] {
            if strings.Contains(part, sep) {
                subParts := strings.Split(part, sep)
                for i, sp := range subParts {
                    if i > 0 {
                        // Prepend the separator to maintain context
                        sp = sep + sp
                    }
                    if len(sp) > 0 {
                        paragraphs = append(paragraphs, sp)
                    }
                }
                found = true
                break
            }
        }
        
        if !found && len(part) > 0 {
            paragraphs = append(paragraphs, part)
        }
    }
    
    // Merge paragraphs to create chunks close to the target size
    var chunks []Chunk
    var currentChunk strings.Builder
    var currentSize int
    
    for _, para := range paragraphs {
        paraSize := len(para)
        
        // If adding this paragraph exceeds chunk size and we already have content,
        // finish the current chunk
        if currentSize > 0 && currentSize + paraSize > opts.ChunkSize {
            chunks = append(chunks, Chunk{
                Text:       currentChunk.String(),
                Metadata:   doc.Metadata,
                Source:     doc.Source,
                DocID:      doc.ID,
                ChunkIndex: len(chunks),
            })
            
            currentChunk.Reset()
            currentSize = 0
        }
        
        // Handle paragraphs larger than chunk size
        if paraSize > opts.ChunkSize {
            // If we have current content, save it
            if currentSize > 0 {
                chunks = append(chunks, Chunk{
                    Text:       currentChunk.String(),
                    Metadata:   doc.Metadata,
                    Source:     doc.Source,
                    DocID:      doc.ID,
                    ChunkIndex: len(chunks),
                })
                currentChunk.Reset()
                currentSize = 0
            }
            
            // Use fixed-size chunking for this large paragraph
            for i := 0; i < len(para); i += opts.ChunkSize - opts.ChunkOverlap {
                end := i + opts.ChunkSize
                if end > len(para) {
                    end = len(para)
                }
                
                chunks = append(chunks, Chunk{
                    Text:       para[i:end],
                    Metadata:   doc.Metadata,
                    Source:     doc.Source,
                    DocID:      doc.ID,
                    ChunkIndex: len(chunks),
                })
                
                if end == len(para) {
                    break
                }
            }
        } else {
            // Add paragraph to current chunk
            if currentSize > 0 {
                currentChunk.WriteString("\n\n")
                currentSize += 2
            }
            
            currentChunk.WriteString(para)
            currentSize += paraSize
        }
    }
    
    // Add the last chunk if there's any content left
    if currentSize > 0 {
        chunks = append(chunks, Chunk{
            Text:       currentChunk.String(),
            Metadata:   doc.Metadata,
            Source:     doc.Source,
            DocID:      doc.ID,
            ChunkIndex: len(chunks),
        })
    }
    
    return chunks, nil
}

This approach preserves paragraph boundaries when possible, creating chunks that are more semantically coherent. This typically leads to better embedding quality since semantically related content stays together.

Hybrid Approaches

For specific document types, you might want custom chunking strategies:

// StructuredDocumentChunker handles documents with clear structural elements
// like HTML, Markdown headings, or legal document sections
func StructuredDocumentChunker(doc *Document, opts ChunkOptions) ([]Chunk, error) {
    // Detect document type from metadata or content
    format := detectDocumentFormat(doc)
    
    switch format {
    case "markdown":
        return markdownChunker(doc, opts)
    case "html":
        return htmlChunker(doc, opts)
    case "legal":
        return legalDocumentChunker(doc, opts)
    default:
        // Fall back to semantic chunker
        return SemanticChunker(doc, opts)
    }
}

// Example of a specialized chunker for Markdown
func markdownChunker(doc *Document, opts ChunkOptions) ([]Chunk, error) {
    // Split by headings (##, ###, etc.)
    headingPattern := regexp.MustCompile(`(?m)^(#{1,6})\s+(.+)$`)
    _ = headingPattern // heading-aware splitting elided for brevity
    
    // Implementation details elided; fall back to the semantic chunker for now
    // so the function still compiles and returns something sensible.
    return SemanticChunker(doc, opts)
}

The right chunking strategy can dramatically impact retrieval quality. In my experience, erring on the side of slightly larger chunks (500-1000 tokens) with meaningful semantic boundaries tends to work best for general-purpose RAG systems.
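
Since the chunkers above measure size in characters rather than tokens, it helps to have a quick way to translate a token budget into ChunkOptions. The helper below uses a rough four-characters-per-token heuristic for English text; that ratio is an assumption, not a property of any particular tokenizer, so tune it against your own data:

// ChunkOptionsForTokens converts an approximate token budget into character-based
// ChunkOptions. The 4-characters-per-token ratio is a rough heuristic for English
// text; adjust it for your tokenizer and language.
func ChunkOptionsForTokens(targetTokens, overlapTokens int) ChunkOptions {
    const charsPerToken = 4
    return ChunkOptions{
        ChunkSize:    targetTokens * charsPerToken,
        ChunkOverlap: overlapTokens * charsPerToken,
    }
}

// Example: aim for ~750-token chunks with ~75 tokens of overlap.
// opts := ChunkOptionsForTokens(750, 75)
// chunks, err := SemanticChunker(doc, opts)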

Metadata Extraction and Storage

Metadata is the unsung hero of effective RAG systems, enabling more precise retrieval and filtering. Let’s develop a robust metadata extraction system:

// MetadataExtractor extracts metadata from documents
type MetadataExtractor interface {
    Extract(ctx context.Context, doc *Document) (map[string]interface{}, error)
}

// CompositeExtractor combines multiple extractors
type CompositeExtractor struct {
    extractors []MetadataExtractor
}

func NewCompositeExtractor(extractors ...MetadataExtractor) *CompositeExtractor {
    return &CompositeExtractor{extractors: extractors}
}

func (e *CompositeExtractor) Extract(ctx context.Context, doc *Document) (map[string]interface{}, error) {
    metadata := make(map[string]interface{})
    
    // Copy existing metadata
    for k, v := range doc.Metadata {
        metadata[k] = v
    }
    
    // Apply each extractor
    for _, extractor := range e.extractors {
        additionalMeta, err := extractor.Extract(ctx, doc)
        if err != nil {
            // Log but continue
            log.Printf("Metadata extraction error: %v", err)
            continue
        }
        
        // Merge additional metadata
        for k, v := range additionalMeta {
            metadata[k] = v
        }
    }
    
    return metadata, nil
}

// Example specialized extractors

// TextAnalysisExtractor extracts metadata through text analysis
type TextAnalysisExtractor struct{}

func (e *TextAnalysisExtractor) Extract(ctx context.Context, doc *Document) (map[string]interface{}, error) {
    metadata := make(map[string]interface{})
    
    // Extract language
    language, confidence := detectLanguage(doc.Content)
    if confidence > 0.8 {
        metadata["language"] = language
    }
    
    // Extract content creation date if present in text
    dates := extractDates(doc.Content)
    if len(dates) > 0 {
        metadata["extractedDates"] = dates
    }
    
    // Extract keywords
    keywords := extractKeywords(doc.Content)
    if len(keywords) > 0 {
        metadata["keywords"] = keywords
    }
    
    // Detect content type (technical, legal, marketing, etc.)
    contentType := classifyContent(doc.Content)
    if contentType != "" {
        metadata["contentType"] = contentType
    }
    
    return metadata, nil
}

// EntityExtractor extracts named entities
type EntityExtractor struct{}

func (e *EntityExtractor) Extract(ctx context.Context, doc *Document) (map[string]interface{}, error) {
    metadata := make(map[string]interface{})
    
    // Extract people, organizations, locations
    entities := extractNamedEntities(doc.Content)
    
    if len(entities.People) > 0 {
        metadata["people"] = entities.People
    }
    
    if len(entities.Organizations) > 0 {
        metadata["organizations"] = entities.Organizations
    }
    
    if len(entities.Locations) > 0 {
        metadata["locations"] = entities.Locations
    }
    
    return metadata, nil
}

Rich metadata enables powerful filtering during retrieval, which is especially valuable for legal, compliance, and governance use cases.
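
Wiring the extractors together is a one-liner; here's a short usage sketch. Note that the concrete extractors above lean on helper functions like detectLanguage and extractNamedEntities, which you'd back with whatever NLP libraries you prefer:

// Sketch: enrich a freshly loaded document with extracted metadata.
extractor := NewCompositeExtractor(
    &TextAnalysisExtractor{},
    &EntityExtractor{},
)

meta, err := extractor.Extract(ctx, doc)
if err != nil {
    log.Printf("metadata extraction failed: %v", err)
} else {
    doc.Metadata = meta
}

// The enriched metadata later doubles as a search filter, e.g.:
// filters := map[string]interface{}{"language": "en", "contentType": "legal"}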

Handling Different Document Types

Different document types often require specialized processing. Let’s explore how to handle some common scenarios:

Images and PDFs with OCR

For scanned documents or images, optical character recognition (OCR) is essential:

// OCRProcessor handles image-based content extraction
type OCRProcessor struct {
    ocrLanguages []string
    ocrEngine    string
    tempDir      string
}

func NewOCRProcessor(languages []string, engine string) *OCRProcessor {
    tempDir := os.TempDir()
    return &OCRProcessor{
        ocrLanguages: languages,
        ocrEngine:    engine,
        tempDir:      tempDir,
    }
}

func (p *OCRProcessor) ProcessImage(ctx context.Context, imagePath string) (string, error) {
    // Execute OCR on the image
    // In a real implementation, this would call Tesseract or a similar OCR tool
    
    cmd := exec.CommandContext(
        ctx,
        "tesseract",
        imagePath,
        "stdout",
        "-l", strings.Join(p.ocrLanguages, "+"),
    )
    
    output, err := cmd.Output()
    if err != nil {
        return "", fmt.Errorf("OCR processing failed: %w", err)
    }
    
    return string(output), nil
}

func (p *OCRProcessor) ProcessPDFWithOCR(ctx context.Context, pdfPath string) (string, error) {
    // Extract images from the PDF pages, run OCR on each image,
    // and combine the results in page order.
    
    // Implementation details depend on your PDF tooling; one possible
    // approach using pdftoppm is sketched after this block.
    var combinedText string
    
    return combinedText, nil
}
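
One pragmatic way to fill in ProcessPDFWithOCR is to shell out to pdftoppm (part of poppler-utils) to render each page as an image, then reuse ProcessImage on every page. A sketch, assuming pdftoppm and tesseract are installed and with the os/exec, path/filepath, sort, and strings packages imported:

// processPDFPages renders PDF pages to images with pdftoppm and OCRs each page.
// Assumes pdftoppm (poppler-utils) is available on PATH.
func (p *OCRProcessor) processPDFPages(ctx context.Context, pdfPath string) (string, error) {
    // Render pages as PNG files named <prefix>-1.png, <prefix>-2.png, ...
    prefix := filepath.Join(p.tempDir, "ocr-page")
    cmd := exec.CommandContext(ctx, "pdftoppm", "-png", pdfPath, prefix)
    if err := cmd.Run(); err != nil {
        return "", fmt.Errorf("pdftoppm failed: %w", err)
    }
    
    // Collect the rendered pages; pdftoppm zero-pads page numbers,
    // so a lexicographic sort keeps them in page order.
    pages, err := filepath.Glob(prefix + "-*.png")
    if err != nil {
        return "", err
    }
    sort.Strings(pages)
    
    var sb strings.Builder
    for _, page := range pages {
        text, err := p.ProcessImage(ctx, page)
        if err != nil {
            return "", fmt.Errorf("OCR failed for %s: %w", page, err)
        }
        sb.WriteString(text)
        sb.WriteString("\n")
        os.Remove(page) // clean up the temporary page image
    }
    
    return sb.String(), nil
}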

Structured Data (CSV, JSON, XML)

Structured data requires special handling to preserve relationships:

// CSVLoader handles CSV files
type CSVLoader struct{}

func (l *CSVLoader) SupportsExtension(ext string) bool {
    return ext == ".csv"
}

func (l *CSVLoader) LoadDocument(ctx context.Context, reader io.Reader, metadata map[string]interface{}) (*Document, error) {
    // Parse the CSV
    csvReader := csv.NewReader(reader)
    
    // Read header
    header, err := csvReader.Read()
    if err != nil {
        return nil, fmt.Errorf("failed to read CSV header: %w", err)
    }
    
    // Read records
    var records [][]string
    for {
        record, err := csvReader.Read()
        if err == io.EOF {
            break
        }
        if err != nil {
            return nil, fmt.Errorf("failed to read CSV row: %w", err)
        }
        
        records = append(records, record)
    }
    
    // Convert to text representation
    var content strings.Builder
    
    // Include header info
    content.WriteString("CSV Document with columns: ")
    content.WriteString(strings.Join(header, ", "))
    content.WriteString("\n\n")
    
    // Add each row as a paragraph
    for i, record := range records {
        content.WriteString(fmt.Sprintf("Row %d:\n", i+1))
        
        for j, value := range record {
            if j < len(header) {
                content.WriteString(fmt.Sprintf("%s: %s\n", header[j], value))
            } else {
                content.WriteString(fmt.Sprintf("Column %d: %s\n", j+1, value))
            }
        }
        
        content.WriteString("\n")
    }
    
    // Add row count to metadata
    metadata["rowCount"] = len(records)
    metadata["columnCount"] = len(header)
    metadata["columns"] = header
    
    return &Document{
        Title:    metadata["filename"].(string),
        Content:  content.String(),
        Metadata: metadata,
        Source:   metadata["path"].(string),
    }, nil
}

Converting structured data to a text representation that captures the relationships between fields is key to enabling semantic search over this data.
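
A JSON loader can follow the same pattern, flattening nested objects into readable key/value lines so relationships survive as text. A minimal sketch using encoding/json:

// JSONLoader handles JSON files by flattening them into "path.to.key: value" lines.
type JSONLoader struct{}

func (l *JSONLoader) SupportsExtension(ext string) bool {
    return ext == ".json"
}

func (l *JSONLoader) LoadDocument(ctx context.Context, reader io.Reader, metadata map[string]interface{}) (*Document, error) {
    var data interface{}
    if err := json.NewDecoder(reader).Decode(&data); err != nil {
        return nil, fmt.Errorf("failed to parse JSON: %w", err)
    }
    
    var sb strings.Builder
    flattenJSON("", data, &sb)
    
    filename, _ := metadata["filename"].(string)
    source, _ := metadata["path"].(string)
    
    return &Document{
        Title:    filename,
        Content:  sb.String(),
        Metadata: metadata,
        Source:   source,
    }, nil
}

// flattenJSON writes nested JSON values as dotted-path key/value lines.
func flattenJSON(prefix string, value interface{}, sb *strings.Builder) {
    switch v := value.(type) {
    case map[string]interface{}:
        for key, val := range v {
            childPrefix := key
            if prefix != "" {
                childPrefix = prefix + "." + key
            }
            flattenJSON(childPrefix, val, sb)
        }
    case []interface{}:
        for i, val := range v {
            flattenJSON(fmt.Sprintf("%s[%d]", prefix, i), val, sb)
        }
    default:
        sb.WriteString(fmt.Sprintf("%s: %v\n", prefix, v))
    }
}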

Implementation in Go with Example Code

Now, let’s tie everything together into a complete document processing pipeline:

// ProcessingPipeline coordinates the complete document processing workflow
type ProcessingPipeline struct {
    processor         *DocumentProcessor
    metadataExtractor *CompositeExtractor
    chunker           func(*Document, ChunkOptions) ([]Chunk, error)
    embedder          embedding.Embedder
    vectorStore       vectorstore.VectorStore
    chunkOptions      ChunkOptions
}

// NewProcessingPipeline creates a new document processing pipeline
func NewProcessingPipeline(
    processor *DocumentProcessor,
    metadataExtractor *CompositeExtractor,
    chunker func(*Document, ChunkOptions) ([]Chunk, error),
    embedder embedding.Embedder,
    vectorStore vectorstore.VectorStore,
    chunkOptions ChunkOptions,
) *ProcessingPipeline {
    return &ProcessingPipeline{
        processor:        processor,
        metadataExtractor: metadataExtractor,
        chunker:          chunker,
        embedder:         embedder,
        vectorStore:      vectorStore,
        chunkOptions:     chunkOptions,
    }
}

// ProcessFile processes a single file through the entire pipeline
func (p *ProcessingPipeline) ProcessFile(ctx context.Context, filePath string) error {
    // 1. Load and parse the document
    doc, err := p.processor.ProcessFile(ctx, filePath)
    if err != nil {
        return fmt.Errorf("document processing failed: %w", err)
    }
    
    // 2. Extract additional metadata
    metadata, err := p.metadataExtractor.Extract(ctx, doc)
    if err != nil {
        // Log but continue
        log.Printf("Metadata extraction error for %s: %v", filePath, err)
    } else {
        // Update document with enhanced metadata
        doc.Metadata = metadata
    }
    
    // 3. Split document into chunks
    chunks, err := p.chunker(doc, p.chunkOptions)
    if err != nil {
        return fmt.Errorf("document chunking failed: %w", err)
    }
    
    // 4. Process chunks in batches to avoid memory issues
    const batchSize = 10
    for i := 0; i < len(chunks); i += batchSize {
        end := i + batchSize
        if end > len(chunks) {
            end = len(chunks)
        }
        
        batchChunks := chunks[i:end]
        
        // Create a batch of texts for embedding
        texts := make([]string, len(batchChunks))
        for j, chunk := range batchChunks {
            texts[j] = chunk.Text
        }
        
        // 5. Generate embeddings
        embeddings, err := p.embedder.EmbedBatch(ctx, texts)
        if err != nil {
            return fmt.Errorf("embedding generation failed: %w", err)
        }
        
        // 6. Create documents for vector store
        docs := make([]vectorstore.Document, len(batchChunks))
        for j, chunk := range batchChunks {
            // Create a unique ID for the chunk
            chunkID := fmt.Sprintf("%s-%d", chunk.DocID, chunk.ChunkIndex)
            
            docs[j] = vectorstore.Document{
                ID:        chunkID,
                Content:   chunk.Text,
                Metadata:  chunk.Metadata,
                Embedding: embeddings[j],
            }
        }
        
        // 7. Store in vector database
        err = p.vectorStore.AddDocuments(ctx, docs)
        if err != nil {
            return fmt.Errorf("vector store insertion failed: %w", err)
        }
    }
    
    return nil
}

// ProcessDirectory processes all files in a directory
func (p *ProcessingPipeline) ProcessDirectory(ctx context.Context, dirPath string) error {
    // List files
    files, err := listFiles(dirPath)
    if err != nil {
        return fmt.Errorf("failed to list directory: %w", err)
    }
    
    // Process files with progress tracking
    total := len(files)
    for i, file := range files {
        log.Printf("Processing file %d/%d: %s", i+1, total, file)
        
        err := p.ProcessFile(ctx, file)
        if err != nil {
            // Log but continue with other files
            log.Printf("Error processing %s: %v", file, err)
        }
    }
    
    return nil
}

This pipeline provides a complete workflow from document loading to vector storage, with each component designed to be extensible and configurable.
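
To make the wiring concrete, here's roughly how the pipeline might be assembled. The initEmbedder and initVectorStore calls are placeholders for whichever embedder and vector store you set up earlier, and the chunk sizes are illustrative:

// Sketch: wiring the pipeline together. initEmbedder and initVectorStore stand in
// for your concrete embedder and vector store constructors.
func buildPipeline() (*ProcessingPipeline, error) {
    embedder := initEmbedder()       // e.g. a local embedding model client
    vectorStore := initVectorStore() // e.g. the Weaviate store from earlier
    
    processor := NewDocumentProcessor(
        &TextLoader{},
        &PDFLoader{},
        &CSVLoader{},
    )
    extractor := NewCompositeExtractor(
        &TextAnalysisExtractor{},
        &EntityExtractor{},
    )
    
    pipeline := NewProcessingPipeline(
        processor,
        extractor,
        SemanticChunker,
        embedder,
        vectorStore,
        ChunkOptions{ChunkSize: 3000, ChunkOverlap: 300}, // roughly 750 tokens per chunk
    )
    return pipeline, nil
}

// Usage:
// pipeline, _ := buildPipeline()
// err := pipeline.ProcessDirectory(ctx, "./documents")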

Real-World Optimizations and Best Practices

After implementing several production document processing pipelines, I’ve learned some valuable lessons:

1. Progressive Document Loading

For large document collections, process documents incrementally:

// IncrementalProcessor tracks processed files to avoid reprocessing
type IncrementalProcessor struct {
    pipeline      *ProcessingPipeline
    processedDB   *badger.DB // Using BadgerDB for tracking
    forceReprocess bool
}

func NewIncrementalProcessor(pipeline *ProcessingPipeline, dbPath string, forceReprocess bool) (*IncrementalProcessor, error) {
    // Open the tracking database
    opts := badger.DefaultOptions(dbPath)
    db, err := badger.Open(opts)
    if err != nil {
        return nil, fmt.Errorf("failed to open tracking database: %w", err)
    }
    
    return &IncrementalProcessor{
        pipeline:      pipeline,
        processedDB:   db,
        forceReprocess: forceReprocess,
    }, nil
}

func (p *IncrementalProcessor) ProcessDirectory(ctx context.Context, dirPath string) error {
    files, err := listFilesRecursively(dirPath)
    if err != nil {
        return err
    }
    
    // Process each file if needed
    total := len(files)
    processed := 0
    skipped := 0
    
    for i, file := range files {
        // Check if file was already processed
        needsProcessing, err := p.needsProcessing(file)
        if err != nil {
            log.Printf("Error checking file status: %v", err)
            continue
        }
        
        if !needsProcessing && !p.forceReprocess {
            log.Printf("Skipping already processed file (%d/%d): %s", i+1, total, file)
            skipped++
            continue
        }
        
        // Process the file
        log.Printf("Processing file (%d/%d): %s", i+1, total, file)
        err = p.pipeline.ProcessFile(ctx, file)
        if err != nil {
            log.Printf("Error processing %s: %v", file, err)
            continue
        }
        
        // Mark as processed
        err = p.markProcessed(file)
        if err != nil {
            log.Printf("Error updating file status: %v", err)
        }
        
        processed++
    }
    
    log.Printf("Processing complete. Processed: %d, Skipped: %d, Total: %d", processed, skipped, total)
    return nil
}

func (p *IncrementalProcessor) needsProcessing(filePath string) (bool, error) {
    fileInfo, err := os.Stat(filePath)
    if err != nil {
        return false, err
    }
    
    modTime := fileInfo.ModTime().Unix()
    fileSize := fileInfo.Size()
    
    var processed bool
    var storedModTime int64
    var storedSize int64
    
    err = p.processedDB.View(func(txn *badger.Txn) error {
        item, err := txn.Get([]byte(filePath))
        if err == badger.ErrKeyNotFound {
            processed = false
            return nil
        }
        if err != nil {
            return err
        }
        
        processed = true
        
        return item.Value(func(val []byte) error {
            parts := strings.Split(string(val), ":")
            if len(parts) != 2 {
                return fmt.Errorf("invalid stored value format")
            }
            
            storedModTime, err = strconv.ParseInt(parts[0], 10, 64)
            if err != nil {
                return err
            }
            
            storedSize, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return err
            }
            
            return nil
        })
    })
    
    if err != nil {
        return false, err
    }
    
    // File needs processing if:
    // 1. It hasn't been processed before
    // 2. It has been modified since last processing
    // 3. Its size has changed
    return !processed || storedModTime != modTime || storedSize != fileSize, nil
}

func (p *IncrementalProcessor) markProcessed(filePath string) error {
    fileInfo, err := os.Stat(filePath)
    if err != nil {
        return err
    }
    
    modTime := fileInfo.ModTime().Unix()
    fileSize := fileInfo.Size()
    
    value := fmt.Sprintf("%d:%d", modTime, fileSize)
    
    return p.processedDB.Update(func(txn *badger.Txn) error {
        return txn.Set([]byte(filePath), []byte(value))
    })
}

This approach lets you process new or modified documents without reprocessing your entire corpus, which is critical for large collections.
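
Using the incremental processor is a thin wrapper around the base pipeline; a brief usage sketch, where the ./index-state path for the BadgerDB tracking database is an arbitrary choice:

// Sketch: run an incremental pass over a document folder, reusing the
// pipeline built earlier.
incremental, err := NewIncrementalProcessor(pipeline, "./index-state", false)
if err != nil {
    log.Fatalf("failed to create incremental processor: %v", err)
}
defer incremental.processedDB.Close()

if err := incremental.ProcessDirectory(ctx, "./documents"); err != nil {
    log.Printf("incremental processing finished with errors: %v", err)
}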

2. Parallel Processing

Leverage Go’s concurrency to speed up processing:

// ParallelProcessor processes documents in parallel
type ParallelProcessor struct {
    pipeline  *ProcessingPipeline
    maxWorkers int
}

func NewParallelProcessor(pipeline *ProcessingPipeline, maxWorkers int) *ParallelProcessor {
    if maxWorkers <= 0 {
        maxWorkers = runtime.NumCPU()
    }
    
    return &ParallelProcessor{
        pipeline:   pipeline,
        maxWorkers: maxWorkers,
    }
}

func (p *ParallelProcessor) ProcessDirectory(ctx context.Context, dirPath string) error {
    files, err := listFilesRecursively(dirPath)
    if err != nil {
        return err
    }
    
    // Create a worker pool
    var wg sync.WaitGroup
    fileChan := make(chan string)
    errorsChan := make(chan error, len(files))
    
    // Start workers
    for i := 0; i < p.maxWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            
            for filePath := range fileChan {
                err := p.pipeline.ProcessFile(ctx, filePath)
                if err != nil {
                    select {
                    case errorsChan <- fmt.Errorf("error processing %s: %w", filePath, err):
                    default:
                        log.Printf("Error processing %s: %v", filePath, err)
                    }
                }
            }
        }()
    }
    
    // Send files to workers
    for _, file := range files {
        select {
        case fileChan <- file:
        case <-ctx.Done():
            close(fileChan)
            return ctx.Err()
        }
    }
    
    // Close the channel and wait for workers to finish
    close(fileChan)
    wg.Wait()
    close(errorsChan)
    
    // Collect errors
    var errs []error
    for err := range errorsChan {
        errs = append(errs, err)
    }
    
    if len(errs) > 0 {
        return fmt.Errorf("encountered %d errors during processing", len(errs))
    }
    
    return nil
}

The parallel approach can dramatically speed up processing, especially for IO-bound operations like loading documents and generating embeddings.

3. Document Deduplication

Prevent duplicate content from cluttering your vector database:

// Deduplicator removes duplicate or near-duplicate content
type Deduplicator struct {
    minSimilarity float32  // Threshold for considering chunks similar
    vectorStore   vectorstore.VectorStore
}

func NewDeduplicator(minSimilarity float32, vectorStore vectorstore.VectorStore) *Deduplicator {
    return &Deduplicator{
        minSimilarity: minSimilarity,
        vectorStore:   vectorStore,
    }
}

// Check if a document chunk is too similar to existing chunks
func (d *Deduplicator) IsDuplicate(ctx context.Context, text string, embedding []float32) (bool, error) {
    // Search for similar chunks
    results, err := d.vectorStore.SimilaritySearch(ctx, embedding, 1)
    if err != nil {
        return false, fmt.Errorf("similarity search failed: %w", err)
    }
    
    // If we found a result with similarity above threshold, it's a duplicate
    if len(results) > 0 && cosineSimilarity(embedding, results[0].Embedding) > d.minSimilarity {
        return true, nil
    }
    
    return false, nil
}

// Helper function to calculate cosine similarity
func cosineSimilarity(a, b []float32) float32 {
    var dotProduct float32
    var normA float32
    var normB float32
    
    for i := 0; i < len(a); i++ {
        dotProduct += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    
    if normA == 0 || normB == 0 {
        return 0 // avoid division by zero for degenerate (all-zero) vectors
    }
    
    return dotProduct / (float32(math.Sqrt(float64(normA))) * float32(math.Sqrt(float64(normB))))
}

Deduplication is particularly important for enterprise document collections where the same content often appears in multiple documents.
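
In the pipeline, the duplicate check slots in between embedding generation and vector-store insertion. Here's a sketch of that step inside ProcessFile; the 0.97 threshold is an illustrative value you'd tune on your own data:

// Sketch: filter out near-duplicate chunks before inserting into the vector store.
dedup := NewDeduplicator(0.97, p.vectorStore)

var toStore []vectorstore.Document
for j, chunk := range batchChunks {
    isDup, err := dedup.IsDuplicate(ctx, chunk.Text, embeddings[j])
    if err != nil {
        return fmt.Errorf("deduplication check failed: %w", err)
    }
    if isDup {
        continue // skip chunks that are effectively already stored
    }
    
    toStore = append(toStore, vectorstore.Document{
        ID:        fmt.Sprintf("%s-%d", chunk.DocID, chunk.ChunkIndex),
        Content:   chunk.Text,
        Metadata:  chunk.Metadata,
        Embedding: embeddings[j],
    })
}

if len(toStore) > 0 {
    if err := p.vectorStore.AddDocuments(ctx, toStore); err != nil {
        return fmt.Errorf("vector store insertion failed: %w", err)
    }
}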

The document processing pipeline is the foundation of your RAG system. Getting it right—with proper document loading, intelligent chunking, rich metadata extraction, and efficient processing—sets you up for success in the subsequent steps.

Building the Search API with Go

With our documents processed and stored, it’s time to build the interface that users will actually interact with: the search API. This is where all our careful work on document processing, embedding generation, and vector storage comes together to deliver a seamless, intuitive search experience.

I’ve worked on search APIs that were technically powerful but practically useless – like a high-performance sports car with no steering wheel. A good search API isn’t just about returning results; it’s about presenting them in a way that helps users solve problems. It’s about thoughtful defaults and flexible options. It’s about balancing simplicity for common use cases with power for complex ones.

Let’s build an API that you’d actually want to use.

API Design and Endpoints

Our API design starts with clarity about what users need. Based on my experience with RAG systems across various domains, users typically need to:

  1. Search for information using natural language
  2. Filter results based on metadata
  3. Adjust search parameters for precision vs. recall
  4. Retrieve both content and context for results

Let’s design a clean, RESTful API around these needs:

package main

import (
    "encoding/json"
    "log"
    "net/http"
    "os"
    "strconv"

    "github.com/gorilla/mux"
)

// SearchRequest represents the search query parameters
type SearchRequest struct {
    Query       string                 `json:"query"`
    Filters     map[string]interface{} `json:"filters,omitempty"`
    Limit       int                    `json:"limit,omitempty"`
    Similarity  float32                `json:"similarity,omitempty"`
    IncludeText bool                   `json:"includeText,omitempty"`
}

// SearchResult represents a single search result
type SearchResult struct {
    ID         string                 `json:"id"`
    Content    string                 `json:"content,omitempty"`
    Metadata   map[string]interface{} `json:"metadata"`
    Score      float32                `json:"score"`
    SourceFile string                 `json:"sourceFile"`
}

// SearchResponse represents the overall search response
type SearchResponse struct {
    Results     []SearchResult `json:"results"`
    TotalCount  int            `json:"totalCount"`
    ProcessedIn int64          `json:"processedIn"` // time in milliseconds
}

// Server encapsulates the HTTP server and its dependencies
type Server struct {
    router      *mux.Router
    searchSvc   *SearchService
    port        string
}

// NewServer creates a new server with the given search service
func NewServer(searchSvc *SearchService, port string) *Server {
    s := &Server{
        router:    mux.NewRouter(),
        searchSvc: searchSvc,
        port:      port,
    }
    
    s.setupRoutes()
    return s
}

// setupRoutes configures the API endpoints
func (s *Server) setupRoutes() {
    // Health check endpoint
    s.router.HandleFunc("/health", s.handleHealth).Methods("GET")
    
    // Search endpoints
    s.router.HandleFunc("/search", s.handleSearch).Methods("POST")
    s.router.HandleFunc("/search/quick", s.handleQuickSearch).Methods("GET")
    
    // Document info endpoint
    s.router.HandleFunc("/documents/{id}", s.handleGetDocument).Methods("GET")
}

// Start begins listening for requests
func (s *Server) Start() error {
    log.Printf("Starting server on port %s", s.port)
    return http.ListenAndServe(":"+s.port, s.router)
}

// handleHealth responds to health check requests
func (s *Server) handleHealth(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
}

// handleSearch processes full search requests
func (s *Server) handleSearch(w http.ResponseWriter, r *http.Request) {
    var req SearchRequest
    
    // Decode the request body
    err := json.NewDecoder(r.Body).Decode(&req)
    if err != nil {
        http.Error(w, "Invalid request body", http.StatusBadRequest)
        return
    }
    
    // Validate the request
    if req.Query == "" {
        http.Error(w, "Query is required", http.StatusBadRequest)
        return
    }
    
    // Apply defaults if not specified
    if req.Limit <= 0 {
        req.Limit = 10
    }
    
    // Perform the search
    results, err := s.searchSvc.Search(r.Context(), req)
    if err != nil {
        log.Printf("Search error: %v", err)
        http.Error(w, "Search failed", http.StatusInternalServerError)
        return
    }
    
    // Return the results
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(results)
}

// handleQuickSearch provides a simpler GET endpoint for basic searches
func (s *Server) handleQuickSearch(w http.ResponseWriter, r *http.Request) {
    // Extract query from URL parameter
    query := r.URL.Query().Get("q")
    if query == "" {
        http.Error(w, "Query parameter 'q' is required", http.StatusBadRequest)
        return
    }
    
    // Parse optional parameters
    limit, _ := strconv.Atoi(r.URL.Query().Get("limit"))
    includeText := r.URL.Query().Get("include_text") == "true"
    
    // Create a search request
    req := SearchRequest{
        Query:       query,
        Limit:       limit,
        IncludeText: includeText,
    }
    
    // Apply defaults if needed
    if req.Limit <= 0 {
        req.Limit = 5 // Lower default for quick search
    }
    
    // Perform the search
    results, err := s.searchSvc.Search(r.Context(), req)
    if err != nil {
        log.Printf("Quick search error: %v", err)
        http.Error(w, "Search failed", http.StatusInternalServerError)
        return
    }
    
    // Return the results
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(results)
}

// handleGetDocument retrieves a specific document by ID
func (s *Server) handleGetDocument(w http.ResponseWriter, r *http.Request) {
    // Extract document ID from path
    vars := mux.Vars(r)
    id := vars["id"]
    
    // Retrieve the document
    doc, err := s.searchSvc.GetDocument(r.Context(), id)
    if err != nil {
        log.Printf("Document retrieval error: %v", err)
        http.Error(w, "Document not found", http.StatusNotFound)
        return
    }
    
    // Return the document
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(doc)
}

func main() {
    // Initialize dependencies
    // (In a real app, these would be properly configured)
    embedder := initEmbedder()
    vectorStore := initVectorStore()
    searchSvc := NewSearchService(embedder, vectorStore)
    
    // Create and start the server
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }
    
    server := NewServer(searchSvc, port)
    log.Fatal(server.Start())
}

This API design offers:

  1. A detailed search endpoint (/search) that accepts JSON for complex queries
  2. A simpler GET endpoint (/search/quick) for basic searches
  3. A document retrieval endpoint (/documents/{id}) for fetching specific documents
  4. A health check endpoint (/health) for monitoring

It’s RESTful, intuitive, and supports both simple and complex use cases.
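
From a client's point of view, the whole thing is a small JSON POST. Here's a minimal Go client against the /search endpoint, assuming the server from this section is running locally on port 8080; the query and filter values are just examples:

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Build a request body matching the SearchRequest shape above.
    reqBody, _ := json.Marshal(map[string]interface{}{
        "query":       "termination clauses in vendor contracts",
        "limit":       5,
        "includeText": true,
        "filters": map[string]interface{}{
            "contentType": "legal",
        },
    })
    
    resp, err := http.Post("http://localhost:8080/search", "application/json", bytes.NewReader(reqBody))
    if err != nil {
        log.Fatalf("search request failed: %v", err)
    }
    defer resp.Body.Close()
    
    // Decode only the fields we care about from SearchResponse.
    var result struct {
        Results []struct {
            ID         string  `json:"id"`
            Score      float32 `json:"score"`
            SourceFile string  `json:"sourceFile"`
        } `json:"results"`
        TotalCount  int   `json:"totalCount"`
        ProcessedIn int64 `json:"processedIn"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
        log.Fatalf("failed to decode response: %v", err)
    }
    
    fmt.Printf("%d results in %dms\n", result.TotalCount, result.ProcessedIn)
    for _, r := range result.Results {
        fmt.Printf("%.3f  %s (%s)\n", r.Score, r.ID, r.SourceFile)
    }
}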

Query Processing and Vector Generation

The heart of our search API is the SearchService that processes queries and interacts with our vector store:

// SearchService handles the business logic for search operations
type SearchService struct {
    embedder    embedding.Embedder
    vectorStore vectorstore.VectorStore
}

// NewSearchService creates a new search service
func NewSearchService(embedder embedding.Embedder, vectorStore vectorstore.VectorStore) *SearchService {
    return &SearchService{
        embedder:    embedder,
        vectorStore: vectorStore,
    }
}

// Search performs a search operation
func (s *SearchService) Search(ctx context.Context, req SearchRequest) (*SearchResponse, error) {
    startTime := time.Now()
    
    // Generate embedding for the query
    queryEmbedding, err := s.embedder.EmbedText(ctx, req.Query)
    if err != nil {
        return nil, fmt.Errorf("failed to generate embedding: %w", err)
    }
    
    // Prepare search options
    searchOpts := vectorstore.SearchOptions{
        Limit:      req.Limit,
        Filters:    req.Filters,
        MinScore:   req.Similarity,
    }
    
    // Perform vector search
    docs, err := s.vectorStore.SimilaritySearchWithOptions(
        ctx, queryEmbedding, searchOpts)
    if err != nil {
        return nil, fmt.Errorf("vector search failed: %w", err)
    }
    
    // Transform to API response format
    results := make([]SearchResult, len(docs))
    for i, doc := range docs {
        result := SearchResult{
            ID:         doc.ID,
            Metadata:   doc.Metadata,
            Score:      doc.Score,
        }
        
        // Include source file from metadata
        if source, ok := doc.Metadata["source"].(string); ok {
            result.SourceFile = source
        }
        
        // Include text content if requested
        if req.IncludeText {
            result.Content = doc.Content
        }
        
        results[i] = result
    }
    
    // Calculate processing time
    processingTime := time.Since(startTime).Milliseconds()
    
    return &SearchResponse{
        Results:     results,
        TotalCount:  len(results),
        ProcessedIn: processingTime,
    }, nil
}

// GetDocument retrieves a document by ID
func (s *SearchService) GetDocument(ctx context.Context, id string) (*SearchResult, error) {
    doc, err := s.vectorStore.GetDocument(ctx, id)
    if err != nil {
        return nil, err
    }
    
    result := &SearchResult{
        ID:       doc.ID,
        Content:  doc.Content,
        Metadata: doc.Metadata,
    }
    
    if source, ok := doc.Metadata["source"].(string); ok {
        result.SourceFile = source
    }
    
    return result, nil
}

This service handles the critical tasks of:

  1. Converting the query to an embedding using our embedding model
  2. Performing the similarity search in our vector database
  3. Transforming the results into a consistent, useful format

Let’s extend the vector store interface to support more advanced search options:

// SearchOptions provides parameters for fine-tuning the search
type SearchOptions struct {
    Limit      int                    // Maximum number of results to return
    Filters    map[string]interface{} // Metadata filters to apply
    MinScore   float32                // Minimum similarity score threshold
}

// Extended VectorStore interface
type VectorStore interface {
    // Basic operations
    AddDocuments(ctx context.Context, docs []Document) error
    DeleteDocument(ctx context.Context, id string) error
    
    // Basic similarity search
    SimilaritySearch(ctx context.Context, queryVector []float32, limit int) ([]Document, error)
    
    // Advanced similarity search with filtering
    SimilaritySearchWithOptions(ctx context.Context, queryVector []float32, options SearchOptions) ([]Document, error)
    
    // Document retrieval by ID
    GetDocument(ctx context.Context, id string) (*Document, error)
}

This interface design supports both simple searches for quick prototyping and advanced searches with filtering for production use.

Similarity Search Implementation

Let’s implement the SimilaritySearchWithOptions method for our Weaviate vector store:

// Document with similarity score
type Document struct {
    ID        string
    Content   string
    Metadata  map[string]interface{}
    Embedding []float32
    Score     float32  // Similarity score, only populated in search results
}

// WeaviateStore implementation of SimilaritySearchWithOptions
func (s *WeaviateStore) SimilaritySearchWithOptions(
    ctx context.Context,
    queryVector []float32,
    options SearchOptions,
) ([]Document, error) {
    // Prepare the query
    limit := options.Limit
    if limit <= 0 {
        limit = 10 // Default limit
    }
    
    // Build the GraphQL query.
    // Weaviate accepts either a distance or a certainty threshold; since our
    // MinScore is a similarity (higher is better), certainty is the one to use here.
    nearVector := client.NearVectorArgBuilder{}.
        WithVector(queryVector).
        WithCertainty(options.MinScore)
    
    // Build metadata filters
    var where *filters.Where
    if len(options.Filters) > 0 {
        where = buildWeaviateFilters(options.Filters)
    }
    
    // Execute the query
    result, err := s.client.GraphQL().Get().
        WithClassName(s.className).
        WithNearVector(nearVector).
        WithFields("_additional { id distance certainty } content metadata").
        WithLimit(limit).
        WithWhere(where).
        Do(ctx)
    
    if err != nil {
        return nil, fmt.Errorf("weaviate query failed: %w", err)
    }
    
    // Process results
    var docs []Document
    
    // Extract results from GraphQL response
    searchResults := result.Data["Get"].(map[string]interface{})[s.className].([]interface{})
    for _, res := range searchResults {
        item := res.(map[string]interface{})
        additional := item["_additional"].(map[string]interface{})
        
        // Convert certainty to score (0-1 range)
        var score float32
        if cert, ok := additional["certainty"].(float64); ok {
            score = float32(cert)
        }
        
        // Extract metadata
        metadataMap := make(map[string]interface{})
        if metaRaw, ok := item["metadata"].(map[string]interface{}); ok {
            metadataMap = metaRaw
        }
        
        doc := Document{
            ID:       additional["id"].(string),
            Content:  item["content"].(string),
            Metadata: metadataMap,
            Score:    score,
        }
        
        docs = append(docs, doc)
    }
    
    return docs, nil
}

// Helper to build Weaviate filters from a generic map.
// Each entry becomes its own condition; conditions are combined with a logical
// And so that later map entries don't overwrite earlier ones. The parameter is
// named filterMap to avoid shadowing the filters package.
func buildWeaviateFilters(filterMap map[string]interface{}) *filters.Where {
    var operands []*filters.Where
    
    for key, value := range filterMap {
        cond := (&filters.Where{}).WithPath([]string{key})
        
        switch v := value.(type) {
        case string:
            cond = cond.WithOperator(filters.Equal).WithValueString(v)
        case int:
            cond = cond.WithOperator(filters.Equal).WithValueInt(v)
        case float64:
            cond = cond.WithOperator(filters.Equal).WithValueNumber(v)
        case bool:
            cond = cond.WithOperator(filters.Equal).WithValueBoolean(v)
        case []string:
            cond = cond.WithOperator(filters.ContainsAny).WithValueText(v)
        // Add more cases as needed for different filter types
        default:
            continue // skip unsupported filter value types
        }
        
        operands = append(operands, cond)
    }
    
    if len(operands) == 0 {
        return nil
    }
    if len(operands) == 1 {
        return operands[0]
    }
    
    return (&filters.Where{}).WithOperator(filters.And).WithOperands(operands)
}

This implementation handles the critical aspects of vector similarity search in Weaviate, including:

  1. Converting our generic search options to Weaviate-specific query parameters
  2. Building metadata filters dynamically based on user requirements
  3. Processing the GraphQL results into a consistent document format

Result Ranking and Filtering

Similarity scores are just one aspect of result ranking. Let’s enhance our search service with more sophisticated ranking logic:

// RankingOptions controls how results are scored and ranked
type RankingOptions struct {
    RecencyBoost   bool    // Boost more recent documents
    LengthPenalty  bool    // Penalize overly short chunks
    TitleBoost     float32 // Boost when title matches query
    HybridSearch   bool    // Combine vector search with keyword matching
}

// Enhanced search method with ranking options
func (s *SearchService) SearchWithRanking(
    ctx context.Context,
    req SearchRequest,
    rankOpts RankingOptions,
) (*SearchResponse, error) {
    startTime := time.Now()
    
    // Perform basic vector search
    searchOpts := vectorstore.SearchOptions{
        Limit:    req.Limit * 2, // Fetch more results than needed for re-ranking
        Filters:  req.Filters,
        MinScore: req.Similarity,
    }
    
    // Generate embedding for the query
    queryEmbedding, err := s.embedder.EmbedText(ctx, req.Query)
    if err != nil {
        return nil, fmt.Errorf("failed to generate embedding: %w", err)
    }
    
    // Perform vector search
    docs, err := s.vectorStore.SimilaritySearchWithOptions(
        ctx, queryEmbedding, searchOpts)
    if err != nil {
        return nil, fmt.Errorf("vector search failed: %w", err)
    }
    
    // Apply additional ranking factors
    rankedDocs := s.rankResults(docs, req.Query, rankOpts)
    
    // Limit to requested number after re-ranking
    if len(rankedDocs) > req.Limit {
        rankedDocs = rankedDocs[:req.Limit]
    }
    
    // Transform to API response format
    results := make([]SearchResult, len(rankedDocs))
    for i, doc := range rankedDocs {
        results[i] = SearchResult{
            ID:         doc.ID,
            Metadata:   doc.Metadata,
            Score:      doc.Score,
            SourceFile: getSourceFile(doc.Metadata),
        }
        
        if req.IncludeText {
            results[i].Content = doc.Content
        }
    }
    
    processingTime := time.Since(startTime).Milliseconds()
    
    return &SearchResponse{
        Results:     results,
        TotalCount:  len(results),
        ProcessedIn: processingTime,
    }, nil
}

// rankResults applies additional ranking factors to search results
func (s *SearchService) rankResults(
    docs []Document,
    query string,
    opts RankingOptions,
) []Document {
    // Create a copy we can modify
    results := make([]Document, len(docs))
    copy(results, docs)
    
    // Get current time for recency calculations
    now := time.Now()
    
    // Apply ranking adjustments
    for i := range results {
        // Start with original similarity score
        score := results[i].Score
        
        // Apply recency boost if enabled
        if opts.RecencyBoost {
            if timestamp, ok := results[i].Metadata["timestamp"].(time.Time); ok {
                // Calculate age in days
                age := now.Sub(timestamp).Hours() / 24
                // Apply logarithmic decay based on age
                if age < 365 { // Only boost content less than a year old
                    recencyBoost := float32(0.1 * math.Exp(-age/90)) // 90-day half-life
                    score += recencyBoost
                }
            }
        }
        
        // Apply length penalty if enabled
        if opts.LengthPenalty {
            contentLength := len(results[i].Content)
            if contentLength < 100 { // Penalize very short chunks
                score -= float32(0.05 * (1 - (float64(contentLength) / 100)))
            }
        }
        
        // Apply title boost if enabled
        if opts.TitleBoost > 0 {
            if title, ok := results[i].Metadata["title"].(string); ok {
                // Simple string matching for title boost
                if strings.Contains(strings.ToLower(title), strings.ToLower(query)) {
                    score += opts.TitleBoost
                }
            }
        }
        
        // Store adjusted score
        results[i].Score = score
    }
    
    // Sort by adjusted score
    sort.Slice(results, func(i, j int) bool {
        return results[i].Score > results[j].Score
    })
    
    return results
}

func getSourceFile(metadata map[string]interface{}) string {
    if source, ok := metadata["source"].(string); ok {
        return source
    }
    return ""
}

This enhanced ranking system allows for more nuanced result ordering based on factors beyond simple vector similarity, like:

  1. Document recency (favoring newer documents)
  2. Content length (avoiding overly short snippets)
  3. Title matches (boosting documents with relevant titles)

You could extend this further with domain-specific ranking factors relevant to your use case.
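
For instance, a compliance-heavy deployment might prefer certain document classes. Here's a small sketch of such a factor, applied in the same style as rankResults; the contentType key comes from the TextAnalysisExtractor earlier, and the boost values are illustrative rather than tuned:

// boostByContentType adds a fixed bonus to documents whose contentType metadata
// matches one of the preferred classes. A sketch of a domain-specific factor.
func boostByContentType(docs []Document, preferred map[string]float32) {
    for i := range docs {
        if ct, ok := docs[i].Metadata["contentType"].(string); ok {
            if boost, found := preferred[ct]; found {
                docs[i].Score += boost
            }
        }
    }
}

// Usage inside a ranking pass:
// boostByContentType(results, map[string]float32{"legal": 0.05, "policy": 0.03})
// sort.Slice(results, func(i, j int) bool { return results[i].Score > results[j].Score })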

Handling Edge Cases

A robust search API needs to gracefully handle various edge cases:

// Enhanced search method with edge case handling
func (s *SearchService) SearchWithErrorHandling(
    ctx context.Context,
    req SearchRequest,
) (*SearchResponse, error) {
    // Validate query length
    if len(req.Query) > 1000 {
        return nil, ErrQueryTooLong
    }
    
    // Handle empty query with special logic
    if strings.TrimSpace(req.Query) == "" {
        return s.handleEmptyQuery(ctx, req)
    }
    
    // Set timeout for search operations
    timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
    defer cancel()
    
    // Attempt vector search
    results, err := s.Search(timeoutCtx, req)
    
    // Handle specific error cases
    if errors.Is(err, context.DeadlineExceeded) {
        log.Println("Search timed out, returning partial results")
        return s.fallbackSearch(ctx, req)
    }
    
    if err != nil {
        log.Printf("Search error: %v", err)
        return s.fallbackSearch(ctx, req)
    }
    
    // If no results found, try query expansion
    if len(results.Results) == 0 {
        log.Println("No results found, trying query expansion")
        return s.searchWithQueryExpansion(ctx, req)
    }
    
    return results, nil
}

// fallbackSearch provides a simpler search when the main search fails
func (s *SearchService) fallbackSearch(
    ctx context.Context,
    req SearchRequest,
) (*SearchResponse, error) {
    // Implement simpler search logic here, for example a keyword-based
    // search or a search over a simplified version of the query.
    var fallbackResults []SearchResult
    
    return &SearchResponse{
        Results:     fallbackResults,
        TotalCount:  len(fallbackResults),
        ProcessedIn: 0, // Not measured for fallback
    }, nil
}

// handleEmptyQuery returns recent or popular documents for empty queries
func (s *SearchService) handleEmptyQuery(
    ctx context.Context,
    req SearchRequest,
) (*SearchResponse, error) {
    // Retrieve recent or popular documents here instead of running a
    // vector search over an empty query.
    var recentDocs []SearchResult
    
    return &SearchResponse{
        Results:     recentDocs,
        TotalCount:  len(recentDocs),
        ProcessedIn: 0,
    }, nil
}

// searchWithQueryExpansion tries alternative phrasings of the query
func (s *SearchService) searchWithQueryExpansion(
    ctx context.Context,
    req SearchRequest,
) (*SearchResponse, error) {
    // Generate alternative phrasings
    alternativeQueries := generateAlternativeQueries(req.Query)
    
    // Try each alternative
    var bestResults *SearchResponse
    
    for _, altQuery := range alternativeQueries {
        altReq := req
        altReq.Query = altQuery
        
        results, err := s.Search(ctx, altReq)
        if err != nil {
            continue
        }
        
        if len(results.Results) > 0 {
            // If we found results, use them
            if bestResults == nil || len(results.Results) > len(bestResults.Results) {
                bestResults = results
            }
        }
    }
    
    if bestResults != nil {
        return bestResults, nil
    }
    
    // If no alternatives yielded results, return empty set
    return &SearchResponse{
        Results:    []SearchResult{},
        TotalCount: 0,
    }, nil
}

// generateAlternativeQueries creates variations of the original query
func generateAlternativeQueries(query string) []string {
    alternatives := []string{}
    
    // Add simple variations
    alternatives = append(alternatives, "about " + query)
    alternatives = append(alternatives, "what is " + query)
    alternatives = append(alternatives, query + " meaning")
    alternatives = append(alternatives, query + " explanation")
    
    // Handle queries with potential negations
    if strings.Contains(query, "not") || strings.Contains(query, "without") {
        // Try removing negation
        alternatives = append(alternatives, 
            strings.ReplaceAll(strings.ReplaceAll(query, "not ", ""), "without ", ""))
    }
    
    return alternatives
}

This error handling approach ensures a more robust user experience by:

  1. Providing sensible responses for empty queries
  2. Setting timeouts to prevent long-running operations
  3. Implementing fallback strategies for when vector search fails
  4. Expanding queries when the original query yields no results

Error Handling and Logging

Good error handling is essential for debuggability and reliability:

// Custom error types for better error handling
var (
    ErrQueryTooLong       = errors.New("query exceeds maximum length")
    ErrEmbeddingFailed    = errors.New("failed to generate embedding")
    ErrVectorSearchFailed = errors.New("vector search operation failed")
    ErrDocumentNotFound   = errors.New("document not found")
)

// Structured logging for search operations
type SearchLogger struct {
    logger *zap.Logger
}

func NewSearchLogger() (*SearchLogger, error) {
    logger, err := zap.NewProduction()
    if err != nil {
        return nil, err
    }
    
    return &SearchLogger{logger: logger}, nil
}

// LogSearch records details about a search operation
func (l *SearchLogger) LogSearch(
    ctx context.Context, 
    req SearchRequest, 
    resp *SearchResponse, 
    err error,
) {
    // Extract user ID from context if available
    userID := "anonymous"
    if id, ok := ctx.Value("userID").(string); ok {
        userID = id
    }
    
    // Create structured log entry
    fields := []zap.Field{
        zap.String("query", req.Query),
        zap.Int("limit", req.Limit),
        zap.String("userID", userID),
    }
    
    // Response fields are only available when the search succeeded
    if resp != nil {
        fields = append(fields,
            zap.Int64("processingTime", resp.ProcessedIn),
            zap.Int("resultCount", resp.TotalCount),
        )
    }
    
    // Add filter information if present
    if len(req.Filters) > 0 {
        filterJSON, _ := json.Marshal(req.Filters)
        fields = append(fields, zap.String("filters", string(filterJSON)))
    }
    
    // Log the search event
    if err != nil {
        fields = append(fields, zap.Error(err))
        l.logger.Error("search_failed", fields...)
    } else {
        l.logger.Info("search_success", fields...)
    }
}

// Enhance the search service with structured logging
func (s *SearchService) SearchWithLogging(
    ctx context.Context, 
    req SearchRequest,
    logger *SearchLogger,
) (*SearchResponse, error) {
    // Perform the search
    resp, err := s.Search(ctx, req)
    
    // Log the search operation
    logger.LogSearch(ctx, req, resp, err)
    
    return resp, err
}

This logging approach provides:

  1. Structured logs that are easily parsed and analyzed
  2. Contextual information about search operations
  3. Performance metrics for monitoring
  4. Error details for troubleshooting

Code Example: Simple Search API

Let’s bring these concepts together in a complete example of a simple but effective search API:

package main

import (
    "context"
    "encoding/json"
    "log"
    "net/http"
    "os"
    "time"

    "github.com/gorilla/mux"
    "go.uber.org/zap"
    
    "myproject/embedding"
    "myproject/vectorstore"
)

func main() {
    // Initialize logger
    logger, err := zap.NewProduction()
    if err != nil {
        log.Fatalf("Failed to initialize logger: %v", err)
    }
    defer logger.Sync()
    
    sugar := logger.Sugar()
    
    // Load configuration
    config, err := LoadConfig()
    if err != nil {
        sugar.Fatalf("Failed to load configuration: %v", err)
    }
    
    // Initialize embedder
    embedder, err := embedding.NewEmbedder(config.Embedding)
    if err != nil {
        sugar.Fatalf("Failed to initialize embedder: %v", err)
    }
    
    // Initialize vector store
    vectorStore, err := vectorstore.NewWeaviateStore(
        config.Weaviate.Host,
        config.Weaviate.Class,
    )
    if err != nil {
        sugar.Fatalf("Failed to initialize vector store: %v", err)
    }
    
    // Create search service and logger
    searchSvc := NewSearchService(embedder, vectorStore)
    searchLogger, err := NewSearchLogger()
    if err != nil {
        sugar.Fatalf("Failed to initialize search logger: %v", err)
    }
    
    // Create router
    r := mux.NewRouter()
    
    // Register middleware
    r.Use(loggingMiddleware)
    r.Use(corsMiddleware)
    r.Use(authMiddleware)
    
    // Register routes
    r.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
        json.NewEncoder(w).Encode(map[string]bool{"ok": true})
    }).Methods("GET")
    
    r.HandleFunc("/search", func(w http.ResponseWriter, r *http.Request) {
        var req SearchRequest
        
        if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
            http.Error(w, "Invalid request body", http.StatusBadRequest)
            return
        }
        
        resp, err := searchSvc.SearchWithLogging(r.Context(), req, searchLogger)
        if err != nil {
            sugar.Errorw("Search failed",
                "error", err,
                "query", req.Query,
            )
            
            http.Error(w, "Search failed", http.StatusInternalServerError)
            return
        }
        
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(resp)
    }).Methods("POST")
    
    r.HandleFunc("/search/quick", func(w http.ResponseWriter, r *http.Request) {
        query := r.URL.Query().Get("q")
        if query == "" {
            http.Error(w, "Query parameter 'q' is required", http.StatusBadRequest)
            return
        }
        
        req := SearchRequest{
            Query: query,
            Limit: 5,
        }
        
        resp, err := searchSvc.SearchWithLogging(r.Context(), req, searchLogger)
        if err != nil {
            sugar.Errorw("Quick search failed",
                "error", err,
                "query", query,
            )
            
            http.Error(w, "Search failed", http.StatusInternalServerError)
            return
        }
        
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(resp)
    }).Methods("GET")
    
    // Start server
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }
    
    server := &http.Server{
        Addr:         ":" + port,
        Handler:      r,
        ReadTimeout:  10 * time.Second,
        WriteTimeout: 10 * time.Second,
        IdleTimeout:  120 * time.Second,
    }
    
    sugar.Infof("Starting server on port %s", port)
    if err := server.ListenAndServe(); err != nil {
        sugar.Fatalf("Server failed: %v", err)
    }
}

// Middleware for request logging
func loggingMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()
        
        // Call the next handler
        next.ServeHTTP(w, r)
        
        // Log the request
        log.Printf(
            "%s %s %s %s",
            r.RemoteAddr,
            r.Method,
            r.URL.Path,
            time.Since(start),
        )
    })
}

// Middleware for CORS
func corsMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Access-Control-Allow-Origin", "*")
        w.Header().Set("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
        w.Header().Set("Access-Control-Allow-Headers", "Content-Type, Authorization")
        
        if r.Method == "OPTIONS" {
            w.WriteHeader(http.StatusOK)
            return
        }
        
        next.ServeHTTP(w, r)
    })
}

// Middleware for authentication
func authMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Skip auth for health checks
        if r.URL.Path == "/health" {
            next.ServeHTTP(w, r)
            return
        }
        
        // Get auth token
        token := r.Header.Get("Authorization")
        if token == "" {
            http.Error(w, "Unauthorized", http.StatusUnauthorized)
            return
        }
        
        // In a real app, validate the token and get user info
        userID := validateToken(token)
        if userID == "" {
            http.Error(w, "Invalid token", http.StatusUnauthorized)
            return
        }
        
        // Add user ID to context
        ctx := context.WithValue(r.Context(), "userID", userID)
        
        // Call the next handler with updated context
        next.ServeHTTP(w, r.WithContext(ctx))
    })
}

// Placeholder token validation
func validateToken(token string) string {
    // In a real app, verify the token and extract user ID
    // This is just a placeholder
    if token == "invalid" {
        return ""
    }
    
    return "user-123"
}

This complete example provides:

  1. A fully functional search API with health checks
  2. Middleware for logging, CORS, and authentication
  3. Structured error handling and logging
  4. Clean separation of concerns between handlers and business logic
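
To exercise the API end to end, a small Go client might look like this. This is a sketch: it assumes the server above is running locally on port 8080 and that the placeholder auth middleware accepts the bearer token shown.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Build a request body matching the API's SearchRequest shape
    body, _ := json.Marshal(map[string]interface{}{
        "query": "What is our remote work policy?",
        "limit": 5,
    })
    
    req, err := http.NewRequest("POST", "http://localhost:8080/search", bytes.NewBuffer(body))
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Authorization", "Bearer demo-token") // placeholder token
    
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    
    // Decode the JSON response and print the result count
    var result map[string]interface{}
    if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
        log.Fatal(err)
    }
    
    fmt.Printf("Found %v results\n", result["totalCount"])
}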

Best Practices for RAG Search APIs

After building several RAG search APIs for production use, I’ve collected some key best practices:

  1. Cache query embeddings: If the same query is likely to be run multiple times, cache the embedding to avoid regenerating it.
type EmbeddingCache struct {
    cache  *lru.Cache
    client embedding.Embedder
}

func NewEmbeddingCache(client embedding.Embedder, size int) *EmbeddingCache {
    cache, _ := lru.New(size)
    return &EmbeddingCache{
        cache:  cache,
        client: client,
    }
}

func (c *EmbeddingCache) EmbedText(ctx context.Context, text string) ([]float32, error) {
    // Check cache first
    if cached, found := c.cache.Get(text); found {
        return cached.([]float32), nil
    }
    
    // Generate the embedding (named vec to avoid shadowing the embedding package)
    vec, err := c.client.EmbedText(ctx, text)
    if err != nil {
        return nil, err
    }
    
    // Cache the result
    c.cache.Add(text, vec)
    
    return vec, nil
}
  2. Implement rate limiting: Protect your embedding model and vector database from excessive load.
// RateLimiter enforces rate limits on API endpoints
type RateLimiter struct {
    limiter *rate.Limiter
}

func NewRateLimiter(rps float64, burst int) *RateLimiter {
    return &RateLimiter{
        limiter: rate.NewLimiter(rate.Limit(rps), burst),
    }
}

// Middleware enforces the configured rate limit on incoming requests
func (rl *RateLimiter) Middleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if !rl.limiter.Allow() {
            http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
            return
        }
        
        next.ServeHTTP(w, r)
    })
}
  3. Support semantic pagination: Traditional offset pagination doesn’t work well with semantic search, so implement cursor-based pagination (a sketch of the cursor helpers appears at the end of this list).
// SearchRequest with pagination support
type SearchRequest struct {
    Query       string                 `json:"query"`
    Filters     map[string]interface{} `json:"filters,omitempty"`
    Limit       int                    `json:"limit,omitempty"`
    Cursor      string                 `json:"cursor,omitempty"`
    Similarity  float32                `json:"similarity,omitempty"`
    IncludeText bool                   `json:"includeText,omitempty"`
}

// SearchResponse with pagination support
type SearchResponse struct {
    Results     []SearchResult `json:"results"`
    TotalCount  int            `json:"totalCount"`
    ProcessedIn int64          `json:"processedIn"`
    NextCursor  string         `json:"nextCursor,omitempty"`
}

// Implement cursor-based pagination
func (s *SearchService) SearchWithPagination(ctx context.Context, req SearchRequest) (*SearchResponse, error) {
    // Decode cursor if provided
    var lastScore float32
    var lastID string
    
    if req.Cursor != "" {
        cursors, err := decodeCursor(req.Cursor)
        if err != nil {
            return nil, fmt.Errorf("invalid cursor: %w", err)
        }
        
        lastScore = cursors.Score
        lastID = cursors.ID
    }
    
    // Perform the search, passing lastScore and lastID to the vector store
    // so it only returns results that come after the cursor position
    // ...
    
    // Create next cursor for pagination
    var nextCursor string
    if len(results) > 0 {
        lastResult := results[len(results)-1]
        nextCursor = encodeCursor(lastResult.Score, lastResult.ID)
    }
    
    return &SearchResponse{
        Results:    results,
        TotalCount: totalCount,
        NextCursor: nextCursor,
    }, nil
}
  4. Monitor query performance: Track key metrics to identify performance issues.
type SearchMetrics struct {
    EmbeddingTime    time.Duration
    VectorSearchTime time.Duration
    ResultsCount     int
    TotalTime        time.Duration
}

func (s *SearchService) SearchWithMetrics(ctx context.Context, req SearchRequest) (*SearchResponse, *SearchMetrics, error) {
    startTime := time.Now()
    metrics := &SearchMetrics{}
    
    // Measure embedding generation time
    embeddingStart := time.Now()
    queryEmbedding, err := s.embedder.EmbedText(ctx, req.Query)
    metrics.EmbeddingTime = time.Since(embeddingStart)
    
    if err != nil {
        return nil, metrics, fmt.Errorf("embedding generation failed: %w", err)
    }
    
    // Measure vector search time
    searchStart := time.Now()
    docs, err := s.vectorStore.SimilaritySearch(ctx, queryEmbedding, req.Limit)
    metrics.VectorSearchTime = time.Since(searchStart)
    
    if err != nil {
        return nil, metrics, fmt.Errorf("vector search failed: %w", err)
    }
    
    metrics.ResultsCount = len(docs)
    metrics.TotalTime = time.Since(startTime)
    
    // Transform the raw documents into the SearchResponse returned to callers
    response := &SearchResponse{
        TotalCount:  len(docs),
        ProcessedIn: metrics.TotalTime.Milliseconds(),
    }
    // ...
    
    return response, metrics, nil
}
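
The pagination example above relies on encodeCursor and decodeCursor helpers that weren't shown. Here is a minimal sketch, assuming the cursor simply base64-encodes the last result's score and ID as JSON (it uses encoding/base64 and encoding/json).

// cursorPayload is the information packed into an opaque pagination cursor
type cursorPayload struct {
    Score float32 `json:"score"`
    ID    string  `json:"id"`
}

// encodeCursor packs the last result's score and ID into an opaque string
func encodeCursor(score float32, id string) string {
    data, _ := json.Marshal(cursorPayload{Score: score, ID: id})
    return base64.URLEncoding.EncodeToString(data)
}

// decodeCursor unpacks a cursor produced by encodeCursor
func decodeCursor(cursor string) (cursorPayload, error) {
    var payload cursorPayload
    
    data, err := base64.URLEncoding.DecodeString(cursor)
    if err != nil {
        return payload, err
    }
    
    err = json.Unmarshal(data, &payload)
    return payload, err
}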

These best practices help ensure your search API remains responsive, scalable, and maintainable as usage grows.

Looking Ahead: Evolving Your Search API

The search API you build today should be designed to evolve with your needs. Consider these future enhancements:

  1. Multi-modal search: Extending beyond text to search images, audio, or video
  2. Personalized ranking: Adjusting results based on user preferences and history
  3. Federated search: Combining results from multiple vector stores or data sources
  4. Semantic feedback loops: Learning from user interactions to improve relevance
  5. Explainable search: Providing users with insights into why results were selected

By building on the solid foundation we’ve established, these enhancements become natural extensions rather than painful rewrites.

Integrating with Language Models

We’ve built an impressive search system that can find relevant documents based on semantic meaning rather than just keywords. But for many users, seeing a list of relevant document chunks isn’t the end goal—they want answers to their questions. This is where we complete the RAG architecture by integrating with language models to generate coherent, contextual responses based on the retrieved information.

I once deployed a RAG system for a healthcare organization where the search component worked beautifully, but users found it frustrating to piece together answers from multiple document fragments. After adding LLM integration, user satisfaction scores jumped from 62% to 91%. The ability to get direct, synthesized answers made all the difference in adoption.

Let’s explore how to integrate language models while maintaining our commitment to privacy and cost-effectiveness.

Self-Hosted LLM Options for Privacy

When dealing with sensitive documents, sending their contents to third-party API services poses a significant privacy risk. Fortunately, several high-quality language models can now be run locally with reasonable hardware requirements.

Evaluating Lightweight LLMs for Generation

The LLM landscape is evolving rapidly, with increasingly capable models becoming available for local deployment. Here are some promising options as of our publication date:

  1. Llama 3 8B/70B: Meta’s Llama 3 models offer an excellent balance of performance and resource requirements. The 8B variant can run on consumer hardware with 16GB RAM, while the 70B version delivers near-GPT-4 level quality but requires more substantial resources.

  2. Mistral 7B/Mixtral 8x7B: Mistral AI’s models deliver impressive performance for their size, with the Mixtral 8x7B model offering particularly strong capabilities in knowledge-intensive tasks like our RAG system requires.

  3. Gemma 2B/7B: Google’s Gemma models are designed for efficiency and perform surprisingly well on many tasks despite their compact size.

  4. Phi-3 Mini/Small: Microsoft’s Phi models offer strong performance with minimal resource requirements, making them ideal for deployment on modest hardware.

When choosing a model, you need to weigh response quality against hardware requirements, along with context window size, inference speed, and licensing terms.

For our implementation, we’ll design a flexible LLM integration layer that can work with any of these models (or others) depending on your specific requirements.

Implementing the Generation Component

Let’s create a clean interface for language model integration:

package llm

import (
    "context"
    "errors"
)

// Common errors
var (
    ErrGenerationFailed = errors.New("text generation failed")
    ErrContextTooLong   = errors.New("context exceeds model's maximum length")
    ErrInvalidPrompt    = errors.New("prompt contains invalid or harmful content")
)

// GenerationRequest defines the input for text generation
type GenerationRequest struct {
    Prompt        string   // The prompt template with placeholders
    Query         string   // User's original query
    Context       string   // Retrieved context to inform the generation
    MaxTokens     int      // Maximum tokens to generate
    Temperature   float32  // Randomness control (0.0-1.0)
    StopSequences []string // Sequences that will stop generation when encountered
}

// GenerationResponse contains the generated text and metadata
type GenerationResponse struct {
    Text        string   // The generated text
    SourceIDs   []string // IDs of source documents used
    TokensUsed  int      // Number of tokens processed
    Truncated   bool     // Whether the context was truncated
}

// Generator defines the interface for text generation
type Generator interface {
    // Generate produces text based on the provided request
    Generate(ctx context.Context, req GenerationRequest) (*GenerationResponse, error)
    
    // ModelInfo returns information about the underlying model
    ModelInfo() ModelInfo
}

// ModelInfo contains information about the language model
type ModelInfo struct {
    Name              string
    Version           string
    ContextLength     int
    EmbeddingDims     int
    QuantizationLevel string // e.g., "none", "4-bit", "8-bit"
}

This interface allows us to implement different language model backends while keeping the rest of our application consistent.

Local LLM Implementation with Ollama

Ollama provides a simple way to run various LLMs locally with a convenient API. Let’s implement our Generator interface using Ollama:

// OllamaGenerator implements Generator using the Ollama API
type OllamaGenerator struct {
    baseURL     string
    modelName   string
    modelInfo   ModelInfo
    client      *http.Client
}

// NewOllamaGenerator creates a new generator using Ollama
func NewOllamaGenerator(baseURL, modelName string) (*OllamaGenerator, error) {
    client := &http.Client{
        Timeout: 60 * time.Second, // LLM generation can take time
    }
    
    // Initialize with basic info
    generator := &OllamaGenerator{
        baseURL:   baseURL,
        modelName: modelName,
        client:    client,
        modelInfo: ModelInfo{
            Name:    modelName,
            Version: "unknown", // Will be populated from API
        },
    }
    
    // Fetch model info from Ollama
    err := generator.fetchModelInfo()
    if err != nil {
        return nil, fmt.Errorf("failed to fetch model info: %w", err)
    }
    
    return generator, nil
}

// fetchModelInfo gets model details from Ollama
func (g *OllamaGenerator) fetchModelInfo() error {
    url := fmt.Sprintf("%s/api/show", g.baseURL)
    
    reqBody, err := json.Marshal(map[string]string{
        "name": g.modelName,
    })
    if err != nil {
        return err
    }
    
    resp, err := g.client.Post(url, "application/json", bytes.NewBuffer(reqBody))
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    
    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("Ollama API returned status %d", resp.StatusCode)
    }
    
    // Note: the shape of the /api/show response varies across Ollama versions;
    // adjust this struct (e.g. to read "parameters" or "model_info") to match yours.
    var result struct {
        ModelFile struct {
            Parameter struct {
                ContextLength int `json:"num_ctx"`
            } `json:"parameter"`
        } `json:"modelfile"`
    }
    
    err = json.NewDecoder(resp.Body).Decode(&result)
    if err != nil {
        return err
    }
    
    g.modelInfo.ContextLength = result.ModelFile.Parameter.ContextLength
    
    // Fall back to a conservative default if no context length was reported
    if g.modelInfo.ContextLength == 0 {
        g.modelInfo.ContextLength = 4096
    }
    
    return nil
}

// Generate implements the Generator interface
func (g *OllamaGenerator) Generate(ctx context.Context, req GenerationRequest) (*GenerationResponse, error) {
    // Prepare the full prompt with context
    fullPrompt := g.formatPrompt(req)
    
    // Check if the prompt exceeds the context length.
    // Note: this compares characters against a token limit, which is only a rough
    // safety check; the ContextManager below does proper token-based budgeting.
    if len(fullPrompt) > g.modelInfo.ContextLength {
        // Truncate if necessary
        fullPrompt = fullPrompt[:g.modelInfo.ContextLength]
        log.Printf("Warning: prompt truncated to fit model context length")
    }
    
    // Prepare the request to the Ollama API. Sampling parameters are passed
    // via Ollama's "options" object, and we request a non-streaming response
    // so it can be decoded in a single pass below.
    options := map[string]interface{}{
        "temperature": req.Temperature,
    }
    
    if req.MaxTokens > 0 {
        options["num_predict"] = req.MaxTokens
    }
    
    if len(req.StopSequences) > 0 {
        options["stop"] = req.StopSequences
    }
    
    requestBody := map[string]interface{}{
        "model":   g.modelName,
        "prompt":  fullPrompt,
        "stream":  false,
        "options": options,
    }
    
    reqBody, err := json.Marshal(requestBody)
    if err != nil {
        return nil, err
    }
    
    // Make API request
    resp, err := g.client.Post(
        fmt.Sprintf("%s/api/generate", g.baseURL), 
        "application/json", 
        bytes.NewBuffer(reqBody),
    )
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    
    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("Ollama API returned status %d", resp.StatusCode)
    }
    
    // Parse the (non-streaming) response
    var result struct {
        Response  string `json:"response"`
        EvalCount int    `json:"eval_count"`
    }
    
    err = json.NewDecoder(resp.Body).Decode(&result)
    if err != nil {
        return nil, err
    }
    
    // Extract source document IDs from context
    // This is a simplified approach - in a real implementation
    // you'd track which chunks were used in the context
    sourceIDs := extractSourceIDs(req.Context)
    
    return &GenerationResponse{
        Text:       result.Response,
        SourceIDs:  sourceIDs,
        TokensUsed: result.EvalCount,
        Truncated:  len(fullPrompt) < len(g.formatPrompt(req)),
    }, nil
}

// ModelInfo returns information about the underlying model
func (g *OllamaGenerator) ModelInfo() ModelInfo {
    return g.modelInfo
}

// formatPrompt creates the full prompt with context
func (g *OllamaGenerator) formatPrompt(req GenerationRequest) string {
    // This is a simple prompt template - you would customize this
    // based on the specific model you're using
    return fmt.Sprintf(`
You are a helpful assistant that provides accurate information based only on the provided context.
If the information to answer the question is not in the context, say "I don't have enough information to answer this question."

CONTEXT:
%s

QUESTION:
%s

ANSWER:
`, req.Context, req.Query)
}

// Helper function to extract source IDs from context
func extractSourceIDs(context string) []string {
    // In a real implementation, you would include source IDs
    // in the context in a structured way and extract them here
    // This is just a placeholder
    return []string{}
}

This implementation handles the key aspects of LLM integration:

  1. Model information discovery
  2. Context length management
  3. Prompt formatting
  4. Generation parameter control
  5. Error handling

For production use, you would enhance this with features like retry logic, response streaming, request caching, and health checks against the Ollama server; a simple retry wrapper is sketched below.
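
For instance, a thin retry wrapper around any Generator could look like this. It is a sketch, not tied to a specific backend, and uses a fixed backoff for simplicity.

// RetryingGenerator wraps a Generator and retries transient failures
type RetryingGenerator struct {
    inner    Generator
    attempts int
    backoff  time.Duration
}

// Generate retries the wrapped generator with a simple fixed backoff
func (r *RetryingGenerator) Generate(ctx context.Context, req GenerationRequest) (*GenerationResponse, error) {
    var lastErr error
    
    for i := 0; i < r.attempts; i++ {
        resp, err := r.inner.Generate(ctx, req)
        if err == nil {
            return resp, nil
        }
        lastErr = err
        
        // Stop early if the caller's context has been cancelled
        select {
        case <-ctx.Done():
            return nil, ctx.Err()
        case <-time.After(r.backoff):
        }
    }
    
    return nil, fmt.Errorf("generation failed after %d attempts: %w", r.attempts, lastErr)
}

// ModelInfo delegates to the wrapped generator
func (r *RetryingGenerator) ModelInfo() ModelInfo {
    return r.inner.ModelInfo()
}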

Context Window Management

One of the trickiest aspects of RAG systems is managing the context window effectively. Most LLMs have a fixed maximum context length, and exceeding it can cause truncation or errors.

Let’s implement a context manager that optimizes how we use the available context window:

// ContextManager handles the assembly and optimization of context for the LLM
type ContextManager struct {
    maxContextLength int
    tokenizer        *tokenizer.Tokenizer
}

// NewContextManager creates a new context manager
func NewContextManager(maxContextLength int, tokenizerPath string) (*ContextManager, error) {
    // Initialize a tokenizer - this would typically be model-specific
    // For this example, we're using a hypothetical tokenizer
    tokenizer, err := tokenizer.NewFromFile(tokenizerPath)
    if err != nil {
        return nil, fmt.Errorf("failed to load tokenizer: %w", err)
    }
    
    return &ContextManager{
        maxContextLength: maxContextLength,
        tokenizer:        tokenizer,
    }, nil
}

// AssembleContext builds an optimal context from search results
func (m *ContextManager) AssembleContext(
    query string, 
    results []SearchResult,
    reservedTokens int,
) (string, []string, error) {
    // Calculate tokens needed for the prompt template and query.
    // The full template is elided here; it is assumed to contain two %s
    // placeholders (for the query and the context) used by the Sprintf call.
    promptTemplate := "You are a helpful assistant...[template text]..."
    basePrompt := fmt.Sprintf(promptTemplate, query, "")
    baseTokenCount := m.tokenizer.CountTokens(basePrompt)
    
    // Calculate available tokens for context
    availableTokens := m.maxContextLength - baseTokenCount - reservedTokens
    
    if availableTokens <= 0 {
        return "", nil, errors.New("not enough context space for results")
    }
    
    // Track used source IDs
    var usedSources []string
    
    // First, try to use all results if they fit
    var builder strings.Builder
    currentTokens := 0
    
    for _, result := range results {
        // Add source reference
        sourceRef := fmt.Sprintf("[Source: %s]\n", result.ID)
        sourceTokens := m.tokenizer.CountTokens(sourceRef)
        
        // Add content with token counting
        content := result.Content
        contentTokens := m.tokenizer.CountTokens(content)
        
        // Check if adding this result would exceed our budget
        if currentTokens + sourceTokens + contentTokens > availableTokens {
            // If this is the first result and it doesn't fit, we need to truncate
            if len(usedSources) == 0 {
                // Calculate how many tokens we can use
                availableContentTokens := availableTokens - sourceTokens
                
                // Truncate content to fit
                truncatedContent := m.tokenizer.TruncateToTokens(content, availableContentTokens)
                
                builder.WriteString(sourceRef)
                builder.WriteString(truncatedContent)
                builder.WriteString("\n\n")
                
                usedSources = append(usedSources, result.ID)
                break
            }
            
            // Otherwise, we've added as many full results as we can
            break
        }
        
        // Add the result to our context
        builder.WriteString(sourceRef)
        builder.WriteString(content)
        builder.WriteString("\n\n")
        
        currentTokens += sourceTokens + contentTokens
        usedSources = append(usedSources, result.ID)
    }
    
    return builder.String(), usedSources, nil
}

This context manager handles several important tasks:

  1. Calculating token budgets based on the model’s context window
  2. Prioritizing the most relevant results
  3. Truncating content when necessary
  4. Tracking which sources were included

In a production system, you would refine this further with techniques such as re-ranking chunks before assembly, deduplicating overlapping content, and compressing or summarizing long passages so that more sources fit within the window.

Prompt Engineering for Better Results

The way you frame questions to the language model significantly impacts the quality of responses. Let’s explore some effective prompt templates for RAG:

// PromptTemplate defines a template for generating LLM prompts
type PromptTemplate struct {
    Name          string
    Template      string
    RequiresModel string // Model name/family this template is designed for
}

// GetPromptTemplates returns a set of prompt templates for different scenarios
func GetPromptTemplates() map[string]PromptTemplate {
    return map[string]PromptTemplate{
        "default": {
            Name: "Default RAG Prompt",
            Template: `You are a helpful assistant that answers questions based on the provided context.
Answer the question using ONLY the information from the context below.
If the answer cannot be found in the context, say "I don't have enough information to answer this question."
Do not use any prior knowledge.

CONTEXT:
{{.Context}}

QUESTION:
{{.Query}}

ANSWER:`,
        },
        
        "detailed": {
            Name: "Detailed Analysis Prompt",
            Template: `You are a helpful assistant that provides detailed, comprehensive answers based on the provided context.
Answer the question thoroughly using ONLY the information from the context below.
Organize your answer with clear sections and bullet points when appropriate.
If the answer cannot be found in the context, say "I don't have enough information to answer this question."
Do not use any prior knowledge.

CONTEXT:
{{.Context}}

QUESTION:
{{.Query}}

DETAILED ANSWER:`,
        },
        
        "concise": {
            Name: "Concise Summary Prompt",
            Template: `You are a helpful assistant that provides brief, to-the-point answers based on the provided context.
Answer the question concisely using ONLY the information from the context below.
Use no more than 2-3 sentences.
If the answer cannot be found in the context, say "I don't have enough information."
Do not use any prior knowledge.

CONTEXT:
{{.Context}}

QUESTION:
{{.Query}}

CONCISE ANSWER:`,
        },
        
        "llama": {
            Name: "Llama-Specific RAG Prompt",
            RequiresModel: "llama",
            Template: `<s>[INST] <<SYS>>
You are a helpful assistant that answers questions based on the provided context.
Answer the question using ONLY the information from the context below.
If the answer cannot be found in the context, say "I don't have enough information to answer this question."
Do not use any prior knowledge.
<</SYS>>

CONTEXT:
{{.Context}}

QUESTION:
{{.Query}} [/INST]`,
        },
        
        "mistral": {
            Name: "Mistral-Specific RAG Prompt",
            RequiresModel: "mistral",
            Template: `<s>[INST] 
You are a helpful assistant that answers questions based on the provided context.
Answer the question using ONLY the information from the context below.
If the answer cannot be found in the context, say "I don't have enough information to answer this question."
Do not use any prior knowledge.

CONTEXT:
{{.Context}}

QUESTION:
{{.Query}} [/INST]`,
        },
    }
}

// PromptFormatter formats prompts using templates
type PromptFormatter struct {
    templates map[string]PromptTemplate
}

// NewPromptFormatter creates a new prompt formatter
func NewPromptFormatter() *PromptFormatter {
    return &PromptFormatter{
        templates: GetPromptTemplates(),
    }
}

// FormatPrompt formats a prompt using the specified template
func (f *PromptFormatter) FormatPrompt(
    templateName string,
    query string,
    context string,
    modelName string,
) (string, error) {
    // Check if the template exists (named tpl to avoid shadowing the text/template package)
    tpl, ok := f.templates[templateName]
    if !ok {
        return "", fmt.Errorf("template '%s' not found", templateName)
    }
    
    // If the template requires a specific model, check compatibility
    if tpl.RequiresModel != "" {
        if !strings.Contains(strings.ToLower(modelName), tpl.RequiresModel) {
            return "", fmt.Errorf("template '%s' requires model '%s'",
                templateName, tpl.RequiresModel)
        }
    }
    
    // Parse the template using the standard library's text/template package
    tmpl, err := template.New("prompt").Parse(tpl.Template)
    if err != nil {
        return "", err
    }
    
    // Prepare data for template
    data := struct {
        Query   string
        Context string
    }{
        Query:   query,
        Context: context,
    }
    
    // Execute the template
    var result strings.Builder
    err = tmpl.Execute(&result, data)
    if err != nil {
        return "", err
    }
    
    return result.String(), nil
}

This approach provides several advantages:

  1. Model-specific prompting for optimal results
  2. Templates for different use cases (default, detailed, concise)
  3. Clean separation of prompt logic from generation
  4. Easy addition of new templates as models evolve
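
Usage is straightforward; for example (the query, context, and model name below are illustrative values):

formatter := NewPromptFormatter()

prompt, err := formatter.FormatPrompt(
    "concise",                    // template name
    "What is our refund policy?", // user query
    "[Source: doc-42]\nRefunds are accepted within 30 days of purchase.", // assembled context
    "llama3:8b",                  // model name, used for compatibility checks
)
if err != nil {
    log.Printf("prompt formatting failed: %v", err)
}

fmt.Println(prompt)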

Code Example: Integrating with a Self-Hosted LLM

Now let’s put it all together with a complete example of integrating our search API with a local LLM to create a full RAG system:

package main

import (
    "context"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "time"

    "github.com/gorilla/mux"

    "myproject/llm" // adjust to your module path
)

// RAGRequest represents the input for a RAG query
type RAGRequest struct {
    Query         string                 `json:"query"`
    Filters       map[string]interface{} `json:"filters,omitempty"`
    MaxResults    int                    `json:"maxResults,omitempty"`
    PromptTemplate string                `json:"promptTemplate,omitempty"`
    Temperature   float32                `json:"temperature,omitempty"`
}

// RAGResponse represents the output from a RAG query
type RAGResponse struct {
    Answer        string         `json:"answer"`
    Sources       []SourceInfo   `json:"sources"`
    ProcessingTime int64         `json:"processingTime"`
}

// SourceInfo contains information about a source document
type SourceInfo struct {
    ID       string                 `json:"id"`
    Title    string                 `json:"title,omitempty"`
    Source   string                 `json:"source"`
    Metadata map[string]interface{} `json:"metadata,omitempty"`
    Score    float32                `json:"score"`
}

// RAGService combines search and LLM generation
type RAGService struct {
    searchService   *SearchService
    generator       llm.Generator
    contextManager  *ContextManager
    promptFormatter *PromptFormatter
}

// NewRAGService creates a new RAG service
func NewRAGService(
    searchService *SearchService,
    generator llm.Generator,
    contextManager *ContextManager,
    promptFormatter *PromptFormatter,
) *RAGService {
    return &RAGService{
        searchService:   searchService,
        generator:       generator,
        contextManager:  contextManager,
        promptFormatter: promptFormatter,
    }
}

// ProcessQuery handles a complete RAG query
func (s *RAGService) ProcessQuery(
    ctx context.Context,
    req RAGRequest,
) (*RAGResponse, error) {
    startTime := time.Now()
    
    // Set defaults if not provided
    if req.MaxResults <= 0 {
        req.MaxResults = 5
    }
    
    if req.PromptTemplate == "" {
        req.PromptTemplate = "default"
    }
    
    if req.Temperature == 0 {
        req.Temperature = 0.7
    }
    
    // Step 1: Perform the search
    searchReq := SearchRequest{
        Query:      req.Query,
        Filters:    req.Filters,
        Limit:      req.MaxResults,
        IncludeText: true, // We need the content for the LLM
    }
    
    searchResp, err := s.searchService.Search(ctx, searchReq)
    if err != nil {
        return nil, fmt.Errorf("search failed: %w", err)
    }
    
    // Step 2: Assemble context from search results
    modelInfo := s.generator.ModelInfo()
    
    // Reserve tokens for the answer (estimated)
    reservedTokens := 300
    
    // Named docContext to avoid shadowing the standard context package
    docContext, usedSources, err := s.contextManager.AssembleContext(
        req.Query,
        searchResp.Results,
        reservedTokens,
    )
    if err != nil {
        return nil, fmt.Errorf("context assembly failed: %w", err)
    }
    
    // If no relevant context was found
    if docContext == "" {
        return &RAGResponse{
            Answer:         "I couldn't find any relevant information to answer your query.",
            ProcessingTime: time.Since(startTime).Milliseconds(),
        }, nil
    }
    
    // Step 3: Format the prompt
    prompt, err := s.promptFormatter.FormatPrompt(
        req.PromptTemplate,
        req.Query,
        docContext,
        modelInfo.Name,
    )
    if err != nil {
        // Fall back to the default template if there's an issue
        prompt, err = s.promptFormatter.FormatPrompt(
            "default",
            req.Query,
            docContext,
            modelInfo.Name,
        )
        if err != nil {
            return nil, fmt.Errorf("prompt formatting failed: %w", err)
        }
    }
    
    // Step 4: Generate the answer
    genReq := llm.GenerationRequest{
        Prompt:      prompt,
        Query:       req.Query,
        Context:     docContext,
        Temperature: req.Temperature,
        MaxTokens:   reservedTokens,
    }
    
    genResp, err := s.generator.Generate(ctx, genReq)
    if err != nil {
        return nil, fmt.Errorf("text generation failed: %w", err)
    }
    
    // Step 5: Collect source information
    sources := make([]SourceInfo, 0, len(usedSources))
    sourceMap := make(map[string]bool)
    
    for _, id := range usedSources {
        if sourceMap[id] {
            continue // Avoid duplicates
        }
        sourceMap[id] = true
        
        // Find the source in search results
        for _, result := range searchResp.Results {
            if result.ID == id {
                sources = append(sources, SourceInfo{
                    ID:       result.ID,
                    Title:    getTitle(result.Metadata),
                    Source:   result.SourceFile,
                    Metadata: result.Metadata,
                    Score:    result.Score,
                })
                break
            }
        }
    }
    
    return &RAGResponse{
        Answer:         genResp.Text,
        Sources:        sources,
        ProcessingTime: time.Since(startTime).Milliseconds(),
    }, nil
}

// Helper function to extract title from metadata
func getTitle(metadata map[string]interface{}) string {
    if title, ok := metadata["title"].(string); ok {
        return title
    }
    return ""
}

// RAG HTTP handler
func handleRAG(ragService *RAGService) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        var req RAGRequest
        
        // Decode the request
        err := json.NewDecoder(r.Body).Decode(&req)
        if err != nil {
            http.Error(w, "Invalid request body", http.StatusBadRequest)
            return
        }
        
        // Process the query
        resp, err := ragService.ProcessQuery(r.Context(), req)
        if err != nil {
            log.Printf("RAG processing error: %v", err)
            http.Error(w, "Query processing failed", http.StatusInternalServerError)
            return
        }
        
        // Return the response
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(resp)
    }
}

func main() {
    // Initialize components
    searchService := initSearchService()
    
    // Initialize LLM generator
    generator, err := llm.NewOllamaGenerator("http://localhost:11434", "llama3:8b")
    if err != nil {
        log.Fatalf("Failed to initialize generator: %v", err)
    }
    
    // Initialize context manager
    contextManager, err := NewContextManager(
        generator.ModelInfo().ContextLength,
        "path/to/tokenizer",
    )
    if err != nil {
        log.Fatalf("Failed to initialize context manager: %v", err)
    }
    
    // Initialize prompt formatter
    promptFormatter := NewPromptFormatter()
    
    // Create RAG service
    ragService := NewRAGService(
        searchService,
        generator,
        contextManager,
        promptFormatter,
    )
    
    // Set up routes
    r := mux.NewRouter()
    
    // RAG endpoint
    r.HandleFunc("/rag", handleRAG(ragService)).Methods("POST")
    
    // Start server
    log.Println("Starting server on :8080")
    log.Fatal(http.ListenAndServe(":8080", r))
}

This implementation provides a complete RAG system with:

  1. Search integration for document retrieval
  2. Context assembly optimized for the LLM’s context window
  3. Prompt formatting with templates
  4. LLM generation with a self-hosted model
  5. Source attribution in responses

Future Directions and Considerations

As you implement this RAG architecture, consider these future directions:

1. Multi-model Support

Different queries might benefit from different language models. You could implement a model selection strategy:

// ModelSelector chooses the optimal LLM based on query characteristics
type ModelSelector struct {
    generators map[string]llm.Generator
}

// SelectModel chooses the best model for a query
func (s *ModelSelector) SelectModel(query string, filters map[string]interface{}) string {
    // Simple length-based selection
    queryLength := len(query)
    
    if queryLength > 200 {
        // Complex query - use larger model if available
        if _, ok := s.generators["large"]; ok {
            return "large"
        }
    }
    
    // Check for specific domains
    if isCodeQuery(query) && s.hasModelForDomain("code") {
        return "code"
    }
    
    if isLegalQuery(query) && s.hasModelForDomain("legal") {
        return "legal"
    }
    
    // Default to base model
    return "base"
}
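
The selector above references a few helpers that aren't shown. Here is one way they might look, using simple keyword heuristics; the keyword lists are purely illustrative.

// isCodeQuery guesses whether a query is about source code
func isCodeQuery(query string) bool {
    keywords := []string{"function", "compile", "stack trace", "error code", "api"}
    lowered := strings.ToLower(query)
    for _, kw := range keywords {
        if strings.Contains(lowered, kw) {
            return true
        }
    }
    return false
}

// isLegalQuery guesses whether a query concerns legal content
func isLegalQuery(query string) bool {
    keywords := []string{"contract", "liability", "clause", "compliance", "regulation"}
    lowered := strings.ToLower(query)
    for _, kw := range keywords {
        if strings.Contains(lowered, kw) {
            return true
        }
    }
    return false
}

// hasModelForDomain reports whether a generator is registered for a domain
func (s *ModelSelector) hasModelForDomain(domain string) bool {
    _, ok := s.generators[domain]
    return ok
}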

2. Response Streaming

For better UX, implement streaming responses:

// StreamingGenerator extends Generator with streaming capabilities
type StreamingGenerator interface {
    Generator
    
    // GenerateStream produces text in a streaming fashion
    GenerateStream(ctx context.Context, req GenerationRequest) (<-chan string, error)
}

// Implementation for Ollama streaming
func (g *OllamaGenerator) GenerateStream(ctx context.Context, req GenerationRequest) (<-chan string, error) {
    // Create output channel
    outputChan := make(chan string)
    
    // Format prompt
    fullPrompt := g.formatPrompt(req)
    
    // Prepare request
    requestBody := map[string]interface{}{
        "model":       g.modelName,
        "prompt":      fullPrompt,
        "temperature": req.Temperature,
        "stream":      true,
    }
    
    // Set up HTTP request
    reqBody, err := json.Marshal(requestBody)
    if err != nil {
        return nil, err
    }
    
    httpReq, err := http.NewRequestWithContext(
        ctx,
        "POST",
        fmt.Sprintf("%s/api/generate", g.baseURL),
        bytes.NewBuffer(reqBody),
    )
    if err != nil {
        return nil, err
    }
    
    httpReq.Header.Set("Content-Type", "application/json")
    
    // Launch goroutine to handle the streaming
    go func() {
        defer close(outputChan)
        
        resp, err := g.client.Do(httpReq)
        if err != nil {
            log.Printf("Streaming request failed: %v", err)
            return
        }
        defer resp.Body.Close()
        
        if resp.StatusCode != http.StatusOK {
            log.Printf("Streaming API returned status %d", resp.StatusCode)
            return
        }
        
        scanner := bufio.NewScanner(resp.Body)
        for scanner.Scan() {
            line := scanner.Text()
            if line == "" {
                continue
            }
            
            // Parse the JSON response
            var streamResp struct {
                Response string `json:"response"`
                Done     bool   `json:"done"`
            }
            
            err := json.Unmarshal([]byte(line), &streamResp)
            if err != nil {
                log.Printf("Failed to parse stream response: %v", err)
                continue
            }
            
            // Send the token to the channel
            outputChan <- streamResp.Response
            
            // Check if generation is complete
            if streamResp.Done {
                break
            }
        }
        
        if err := scanner.Err(); err != nil {
            log.Printf("Stream reading error: %v", err)
        }
    }()
    
    return outputChan, nil
}
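
On the HTTP side, the token channel can be forwarded to clients as tokens arrive, for example with chunked plain-text responses. This is a sketch; production code would also handle client disconnects, content negotiation, and mapping a RAG request onto the generation request.

// handleRAGStream streams generated tokens to the client as plain-text chunks
func handleRAGStream(generator StreamingGenerator) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        var req GenerationRequest
        if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
            http.Error(w, "Invalid request body", http.StatusBadRequest)
            return
        }
        
        flusher, ok := w.(http.Flusher)
        if !ok {
            http.Error(w, "Streaming not supported", http.StatusInternalServerError)
            return
        }
        
        tokens, err := generator.GenerateStream(r.Context(), req)
        if err != nil {
            http.Error(w, "Generation failed", http.StatusInternalServerError)
            return
        }
        
        w.Header().Set("Content-Type", "text/plain; charset=utf-8")
        
        // Write each token as soon as it arrives
        for token := range tokens {
            fmt.Fprint(w, token)
            flusher.Flush()
        }
    }
}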

3. Hybrid LLM Architecture

Consider a hybrid approach that combines self-hosted models for sensitive data with API-based models for non-sensitive queries:

// HybridGenerator selects between local and cloud LLMs based on query sensitivity
type HybridGenerator struct {
    localGenerator      llm.Generator
    cloudGenerator      llm.Generator
    sensitivityAnalyzer *SensitivityAnalyzer
}

// Generate selects the appropriate backend based on query sensitivity
func (g *HybridGenerator) Generate(ctx context.Context, req GenerationRequest) (*GenerationResponse, error) {
    // Analyze query sensitivity
    sensitivityScore := g.sensitivityAnalyzer.AnalyzeQuery(req.Query, req.Context)
    
    // Choose generator based on sensitivity
    if sensitivityScore > 0.7 {
        // High sensitivity - use local model
        return g.localGenerator.Generate(ctx, req)
    } else {
        // Low sensitivity - can use cloud model
        return g.cloudGenerator.Generate(ctx, req)
    }
}
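
The SensitivityAnalyzer is intentionally left abstract. A simple version could score the query and context with keyword matching, as sketched below; a real deployment would use proper PII and classification tooling instead.

// SensitivityAnalyzer estimates how sensitive a query and its context are
type SensitivityAnalyzer struct {
    sensitiveTerms []string
}

// AnalyzeQuery returns a score between 0.0 and 1.0
func (a *SensitivityAnalyzer) AnalyzeQuery(query, context string) float32 {
    if len(a.sensitiveTerms) == 0 {
        return 0
    }
    
    text := strings.ToLower(query + " " + context)
    
    matches := 0
    for _, term := range a.sensitiveTerms {
        if strings.Contains(text, term) {
            matches++
        }
    }
    
    // Normalize the match count into a 0.0-1.0 score
    score := float32(matches) / float32(len(a.sensitiveTerms))
    if score > 1.0 {
        score = 1.0
    }
    return score
}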

4. Feedback Loop Integration

Implement a feedback system to continuously improve your RAG system:

// Feedback captures user feedback on RAG responses
type Feedback struct {
    QueryID      string
    UserRating   int       // 1-5 rating
    Comments     string
    Timestamp    time.Time
}

// FeedbackStore interfaces with feedback storage
type FeedbackStore interface {
    SaveFeedback(ctx context.Context, feedback Feedback) error
    GetFeedbackForQuery(ctx context.Context, queryID string) ([]Feedback, error)
    GetRecentFeedback(ctx context.Context, limit int) ([]Feedback, error)
}

// Apply feedback to improve search rankings
func (s *SearchService) ApplyFeedbackToRankings(ctx context.Context, feedbackStore FeedbackStore) error {
    // Get recent feedback
    feedback, err := feedbackStore.GetRecentFeedback(ctx, 1000)
    if err != nil {
        return err
    }
    
    // Process feedback to adjust rankings
    // This would be a sophisticated algorithm in a real implementation
    // ...
    
    return nil
}
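
A minimal endpoint for collecting this feedback might look like the following sketch, which assumes the FeedbackStore interface above and JSON request bodies.

// handleFeedback accepts user feedback and persists it
func handleFeedback(store FeedbackStore) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        var fb Feedback
        if err := json.NewDecoder(r.Body).Decode(&fb); err != nil {
            http.Error(w, "Invalid request body", http.StatusBadRequest)
            return
        }
        
        // Basic validation of the rating range
        if fb.UserRating < 1 || fb.UserRating > 5 {
            http.Error(w, "Rating must be between 1 and 5", http.StatusBadRequest)
            return
        }
        
        fb.Timestamp = time.Now()
        
        if err := store.SaveFeedback(r.Context(), fb); err != nil {
            log.Printf("Failed to save feedback: %v", err)
            http.Error(w, "Failed to save feedback", http.StatusInternalServerError)
            return
        }
        
        w.WriteHeader(http.StatusAccepted)
    }
}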

Best Practices for Production Deployment

Based on real-world experience deploying RAG systems, here are some best practices:

1. Model Versioning and Compatibility

Clearly track which models and prompt templates work together:

// CompatibilityMatrix tracks which prompts work with which models
var modelCompatibility = map[string][]string{
    "llama3:8b": {"default", "detailed", "concise", "llama"},
    "mistral:7b": {"default", "detailed", "concise", "mistral"},
    "phi:2": {"default", "concise"},
}

// Check compatibility before using a prompt template
func isCompatible(modelName, templateName string) bool {
    compatibleTemplates, ok := modelCompatibility[modelName]
    if !ok {
        return false
    }
    
    for _, name := range compatibleTemplates {
        if name == templateName {
            return true
        }
    }
    
    return false
}

2. Response Caching

Implement caching for common queries to improve performance and reduce resource usage:

// Cache for RAG responses
type ResponseCache struct {
    cache  *cache.Cache
    ttl    time.Duration
}

// NewResponseCache creates a new response cache
func NewResponseCache(ttl time.Duration) *ResponseCache {
    return &ResponseCache{
        cache: cache.New(ttl, ttl*2),
        ttl:   ttl,
    }
}

// GetCachedResponse retrieves a cached response
func (c *ResponseCache) GetCachedResponse(query string, filters map[string]interface{}) (*RAGResponse, bool) {
    cacheKey := generateCacheKey(query, filters)
    
    if cached, found := c.cache.Get(cacheKey); found {
        return cached.(*RAGResponse), true
    }
    
    return nil, false
}

// CacheResponse stores a response in cache
func (c *ResponseCache) CacheResponse(query string, filters map[string]interface{}, response *RAGResponse) {
    cacheKey := generateCacheKey(query, filters)
    c.cache.Set(cacheKey, response, c.ttl)
}

// Helper to generate cache key
func generateCacheKey(query string, filters map[string]interface{}) string {
    // Create a deterministic key from query and filters
    // ...
}
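
Wiring the cache into the RAG flow is then a thin wrapper around ProcessQuery, sketched here:

// ProcessQueryCached checks the cache before running the full RAG pipeline
func (s *RAGService) ProcessQueryCached(
    ctx context.Context,
    req RAGRequest,
    cache *ResponseCache,
) (*RAGResponse, error) {
    // Serve from cache when possible
    if cached, found := cache.GetCachedResponse(req.Query, req.Filters); found {
        return cached, nil
    }
    
    // Otherwise run the full pipeline and cache the result
    resp, err := s.ProcessQuery(ctx, req)
    if err != nil {
        return nil, err
    }
    
    cache.CacheResponse(req.Query, req.Filters, resp)
    return resp, nil
}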

3. Monitoring and Observability

Implement comprehensive monitoring for your RAG system:

// Metrics for monitoring
type RAGMetrics struct {
    QueryCount        int64
    AvgProcessingTime float64
    CacheHitRate      float64
    ErrorRate         float64
    TopQueries        []string
    ModelUsage        map[string]int64
}

// MetricsCollector tracks system performance
type MetricsCollector struct {
    queries         int64
    processingTimes []int64
    cacheHits       int64
    errors          int64
    queryCounts     map[string]int
    modelUsage      map[string]int64
    mu              sync.Mutex
}

// RecordQuery tracks a query execution
func (m *MetricsCollector) RecordQuery(
    query string,
    processingTime int64,
    cacheHit bool,
    modelName string,
    err error,
) {
    m.mu.Lock()
    defer m.mu.Unlock()
    
    m.queries++
    m.processingTimes = append(m.processingTimes, processingTime)
    
    if cacheHit {
        m.cacheHits++
    }
    
    if err != nil {
        m.errors++
    }
    
    // Lazily initialize the maps so a zero-value collector is safe to use
    if m.queryCounts == nil {
        m.queryCounts = make(map[string]int)
    }
    if m.modelUsage == nil {
        m.modelUsage = make(map[string]int64)
    }
    
    m.queryCounts[query]++
    m.modelUsage[modelName]++
}

// GetMetrics aggregates the collected counters into a RAGMetrics snapshot
func (m *MetricsCollector) GetMetrics() RAGMetrics {
    m.mu.Lock()
    defer m.mu.Unlock()
    
    var metrics RAGMetrics
    // Calculate averages, rates, and top queries from the raw counters
    // ...
    
    return metrics
}
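
Exposing these metrics over HTTP makes them easy to inspect or scrape; a minimal sketch follows (in production you might export to Prometheus instead).

// handleMetrics serves the current metrics snapshot as JSON
func handleMetrics(collector *MetricsCollector) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        metrics := collector.GetMetrics()
        
        w.Header().Set("Content-Type", "application/json")
        if err := json.NewEncoder(w).Encode(metrics); err != nil {
            http.Error(w, "Failed to encode metrics", http.StatusInternalServerError)
        }
    }
}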

We’ve now built a complete, self-hosted RAG architecture that provides privacy-preserving document search with natural language answers. Our system integrates:

  1. Document processing with intelligent chunking
  2. Embedding generation for semantic search
  3. Vector storage for efficient retrieval
  4. A flexible search API
  5. Language model integration for answer generation

This architecture gives you complete control over your data and processing, with no sensitive information leaving your environment. It’s scalable, extensible, and can be tailored to your specific needs.

The field of RAG is evolving rapidly, but the core architecture we’ve described provides a solid foundation that can be enhanced and extended as new techniques and models emerge. By following these patterns and best practices, you’ll be well-positioned to take advantage of advances in language models while maintaining strict privacy control.

Most importantly, you’ve built a system that transforms how users interact with your document collection, moving from tedious keyword search to natural conversations that directly answer their questions - all while keeping sensitive information secure and private.