Local-first document intelligence

Find the needle in your
private haystack

Ingest your PDFs, search with hybrid AI retrieval, and chat with your knowledge base. Runs entirely on your hardware — no cloud, no compromises.

Get Access

Capabilities

Three tools, one platform

Ingestion

Drop your PDFs and let the system handle the rest. Intelligent chunking preserves document context, while background processing keeps the UI responsive.

  • Contextual chunking
  • Auto language detection
  • Duplicate detection

Search

Three retrieval modes for every need. Hybrid search fuses semantic understanding with keyword matching, then reranks locally for precision.

  • Semantic + lexical fusion
  • Neural reranking
  • Language-aware stemming

Chat

Converse with your documents. The AI retrieves relevant context on the fly and responds with inline citations you can trace back to the source.

  • Tool-based retrieval
  • Real-time streaming
  • Source citations

Pipeline

From PDF to answer in four steps

01

Upload

Drop your PDFs into the ingestion interface. Duplicates are caught automatically via content hashing.
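The dedup step can be sketched in a few lines. This is a minimal illustration, not Lodestone's implementation: it assumes SHA-256 over the raw file bytes and an in-memory set standing in for whatever store the real pipeline uses.

```python
import hashlib

def content_hash(pdf_bytes: bytes) -> str:
    # Hash the raw file bytes; identical uploads yield identical digests.
    return hashlib.sha256(pdf_bytes).hexdigest()

seen: set[str] = set()  # stand-in for a persistent hash store

def ingest(pdf_bytes: bytes) -> bool:
    """Return True if the document is new, False if it is a duplicate."""
    digest = content_hash(pdf_bytes)
    if digest in seen:
        return False  # duplicate: skip re-processing
    seen.add(digest)
    return True
```

Because the digest depends only on content, a renamed copy of an already-ingested PDF is still caught.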

02

Process

Documents are parsed, chunked with surrounding context preserved, and embedded into dense vectors.
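One common way to preserve surrounding context when chunking is a sliding window with overlap, so each chunk shares text with its neighbors. The sketch below assumes character-based windows with illustrative sizes; the real chunker's units and parameters are not specified here.

```python
def chunk_with_context(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    # Sliding-window chunker: consecutive chunks overlap by `overlap`
    # characters, so context at chunk boundaries is never lost.
    step = size - overlap
    chunks = []
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks
```

Each chunk would then be passed to an embedding model to produce the dense vectors mentioned above.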

03

Index

Chunks are stored in PostgreSQL with HNSW vector indexes and language-aware full-text search indexes.
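The two index types might look like the following DDL, shown here as Python string constants for illustration. The table and column names (`chunks`, `embedding`, `language`, `content`) are assumptions; the operator class and expression follow standard pgvector and PostgreSQL full-text patterns.

```python
# Assumed schema; illustrative pgvector HNSW + language-aware FTS indexes.
HNSW_INDEX = """
CREATE INDEX IF NOT EXISTS chunks_embedding_hnsw
ON chunks USING hnsw (embedding vector_cosine_ops);
"""

FTS_INDEX = """
CREATE INDEX IF NOT EXISTS chunks_tsv_gin
ON chunks USING gin (to_tsvector(language::regconfig, content));
"""
```

Casting a per-row `language` column to `regconfig` is what makes the full-text index language-aware: each chunk is stemmed with its own text-search configuration.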

04

Retrieve

Query with hybrid search, fuse results with Reciprocal Rank Fusion (RRF), rerank locally, or let the chat AI handle it.
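RRF itself is simple enough to sketch: each ranked list contributes a score of 1/(k + rank) per document, and the fused order sorts by total score. The k = 60 constant below is the conventional default from the RRF literature, not necessarily Lodestone's setting.

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    # Reciprocal Rank Fusion: documents appearing high in several
    # rankings accumulate the largest combined scores.
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Note that a document found by both the semantic and lexical lists outranks one found by only a single retriever, even if the latter placed it first.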

Interface

Designed for focus

Ingestion

Drop a PDF or click to upload

Documents

Climate_Adaptation_Report_2025.pdf

142 chunks · Mar 15, 2026

completed

Urban_Resilience_Framework.pdf

87 chunks · Mar 15, 2026

completed

Infrastructure_Policy_Brief.pdf

0 chunks · Mar 15, 2026

processing

Start exploring your documents

No cloud accounts. No API keys for the core pipeline. Just your documents and your hardware.

Get Access
Lodestone

SvelteKit · FastAPI · PostgreSQL · pgvector