Technology & Architecture

How PaperBrain works under the hood. A high-level overview of architecture, requirements, and deployment options.

System Overview

PaperBrain is a self-contained document intelligence system that runs on your hardware. It combines OCR (for scanned documents), embedding models (for semantic search), and large language models (for document understanding) into one unified system. Everything is local. Everything is fast. Everything is yours.

Core Components

1. Document Processing Pipeline

Ingests PDFs, images, Word documents, and plain text. Uses Optical Character Recognition (OCR) to extract text from scanned pages, maintaining document structure and metadata.

  • PDF parsing with text and layout preservation
  • High-accuracy OCR for scanned documents
  • Support for DOCX, XLSX, and other office formats
  • Automatic metadata extraction (dates, names, amounts)
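As an illustration of the metadata-extraction step, here is a simplified sketch (not PaperBrain's actual implementation) of pulling dates and dollar amounts out of OCR'd text with simple patterns; real extraction also uses layout and context:

```python
import re

# Hypothetical, simplified patterns for illustration only.
DATE_RE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")               # ISO dates, e.g. 2024-03-15
AMOUNT_RE = re.compile(r"\$\d{1,3}(?:,\d{3})*(?:\.\d{2})?")  # e.g. $1,250.00

def extract_metadata(text: str) -> dict:
    """Return the dates and monetary amounts found in extracted text."""
    return {
        "dates": DATE_RE.findall(text),
        "amounts": AMOUNT_RE.findall(text),
    }

sample = "Invoice dated 2024-03-15. Total due: $1,250.00 by 2024-04-15."
print(extract_metadata(sample))
# {'dates': ['2024-03-15', '2024-04-15'], 'amounts': ['$1,250.00']}
```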

2. Semantic Indexing Engine

Converts document content into vector embeddings for fast, semantic search. Enables natural-language queries like "Find all contracts expiring next year" without requiring exact keyword matches.

  • Local embedding models (no cloud processing)
  • Real-time indexing as documents are added
  • Fast retrieval across 1000+ documents
  • Semantic similarity for intelligent search
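Conceptually, semantic search compares the embedding of a query against the embeddings of stored documents and returns the closest matches. The toy sketch below (made-up 3-dimensional vectors and filenames; real indexes store model-generated vectors with hundreds of dimensions) shows the ranking step:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings; in practice these come from a local embedding model.
index = {
    "lease_2024.pdf":    [0.9, 0.1, 0.0],
    "invoice_march.pdf": [0.1, 0.8, 0.2],
    "nda_acme.pdf":      [0.7, 0.2, 0.1],
}

def search(query_vec, top_k=2):
    """Rank indexed documents by similarity to the query embedding."""
    ranked = sorted(index, key=lambda doc: cosine(query_vec, index[doc]), reverse=True)
    return ranked[:top_k]

print(search([1.0, 0.0, 0.0]))  # ['lease_2024.pdf', 'nda_acme.pdf']
```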

3. Document Understanding Model

A local large language model optimized for business document use. Understands contracts, invoices, financial statements, and legal language. Generates summaries, extracts key terms, and answers complex questions.

  • Context-aware document analysis
  • Key term and obligation extraction
  • Anomaly detection (unusual clauses, missing terms)
  • Source citations for all answers
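One common way to get source citations from a local model is to tag each retrieved passage with an ID and instruct the model to cite those tags. The sketch below shows the idea; the function name, prompt wording, and passage format are illustrative assumptions, not PaperBrain's actual prompt:

```python
def build_prompt(question, passages):
    """Assemble an LLM prompt from retrieved passages, tagging each
    with a numbered source ID so answers can cite where facts came from."""
    context = "\n".join(
        f"[{i + 1}] ({p['doc']}) {p['text']}" for i, p in enumerate(passages)
    )
    return (
        "Answer using only the passages below. "
        "Cite sources as [n] after each claim.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

passages = [
    {"doc": "lease_2024.pdf", "text": "The lease term ends 2025-12-31."},
    {"doc": "lease_2024.pdf", "text": "Renewal requires 90 days notice."},
]
print(build_prompt("When does the lease expire?", passages))
```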

4. User Interface

Web-based dashboard for document upload, search, and Q&A. Works on any device on your network (desktop, tablet, mobile). No special software required, just a web browser.

  • Drag-and-drop document upload
  • Natural-language search bar
  • Instant document summaries
  • Conversation history and saved searches

System Requirements

Minimum Hardware

  • 🖥️ CPU: Modern dual-core (2.0 GHz+)
  • 💾 RAM: 8 GB
  • 💿 Storage: 100 GB SSD
  • 🌐 Network: LAN or WiFi

Suitable for small teams and light-to-moderate document volumes.

Recommended Hardware

  • 🖥️ CPU: 6-core (2.5 GHz+)
  • 💾 RAM: 16+ GB
  • 💿 Storage: 500 GB+ NVMe SSD
  • 🌐 Network: Dedicated LAN

Better suited to larger document libraries (10,000+ documents) and multiple concurrent users.

Deployment Options

1. Existing Server (Recommended)

Install PaperBrain on a machine you already own (Windows, Mac, or Linux server). Most cost-effective. Uses hardware you control.

Cost: $350 setup + hardware you provide
Timeline: 1–2 hours remote installation

2. New Hardware (Optional)

Let us source and configure optimal hardware for your needs. We handle procurement, setup, and delivery.

Cost: $350 setup + hardware cost (~$800–$2,500)
Timeline: 1–2 weeks procurement + 1–2 hours remote setup

3. Virtual Machine / Cloud (Private Cloud)

Run PaperBrain in a virtual machine on your private server or isolated network. Maintains privacy while using cloud infrastructure you control.

Cost: $350 setup + your infrastructure
Timeline: 1–2 hours remote installation

Security & Privacy Architecture

🔒 Local Processing Only

All document processing (OCR, embedding, understanding) happens on your machine. No API calls to external services. No data leaves your network.

🔐 No Cloud Infrastructure

Zero dependence on cloud providers. No third-party data processing. Your documents are indexed only on machines you own.

🔑 Access Control

Network-level access controls. Basic authentication included; Active Directory and LDAP integration available on request.

📋 Audit Logging

Comprehensive logs of who accessed what documents and when. Supports compliance audits (HIPAA, SOC 2, etc.).
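To make the audit-logging idea concrete, a compliance-friendly log records who did what, to which document, and when, one structured entry per event. A minimal sketch (field names and format are illustrative assumptions, not PaperBrain's actual log schema):

```python
import json
from datetime import datetime, timezone

def audit_entry(user, action, document):
    """Build one structured audit-log record: who, what, which document, when (UTC)."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,      # e.g. "view", "search", "download"
        "document": document,
    }

record = audit_entry("alice", "view", "lease_2024.pdf")
print(json.dumps(record))  # one JSON line, ready to append to a log file
```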

♻️ Data Retention

No automatic backups to third-party services. Your documents stay under your control. You manage retention and deletion policies.

Performance

Typical Performance Metrics

Document Upload: 5–30 seconds per document (depends on size)

Indexing: 10–50 documents per minute (background process)

Search (100 docs): <1 second

Search (1,000 docs): 1–2 seconds

Answer Generation: 3–10 seconds (depends on complexity)

Concurrent Users: 5–20 on recommended hardware

Performance scales with hardware. Faster machines = faster processing.
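The indexing figures above translate directly into bulk-ingest time. For example, using a hypothetical 10,000-document library at the quoted 10–50 documents per minute:

```python
def indexing_hours(num_docs, docs_per_minute):
    """Estimate how long a one-time bulk ingest takes, in hours."""
    return num_docs / docs_per_minute / 60

# 10,000 documents at the quoted throughput range:
print(f"{indexing_hours(10_000, 10):.1f} h worst case")  # 16.7 h worst case
print(f"{indexing_hours(10_000, 50):.1f} h best case")   # 3.3 h best case
```

Since indexing runs in the background, a large initial ingest completes overnight without blocking day-to-day use.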

Under the Hood: Technology Stack

OCR Engine:

Tesseract + specialized document layout detection for accurate text extraction from scans

Embedding Model:

Local sentence transformers for semantic search without external API calls

LLM:

Local language models optimized for business document understanding

Vector Database:

Local vector store for fast semantic search across documents

Backend:

Python/FastAPI for document processing and Q&A inference

Frontend:

Modern web UI (React/Vue) accessible via browser on your network

Compliance & Certification

✓ HIPAA Compatible

All data stays on your network; zero third-party processing

✓ GDPR Friendly

Complete data ownership; no automatic transfers to other regions

✓ SOC 2 Audit Ready

Comprehensive audit logs and access controls; documentation provided

✓ PCI-DSS Compatible

Documents in PaperBrain stay outside PCI-DSS scope as long as you don't store or process payment-card data in the system

Next Steps

Ready to understand how PaperBrain fits your specific architecture? Schedule a technical consultation with our team.

Schedule Technical Demo