Free & open source · Works 100% offline · No data leaves your machine

Talk to your docs.
Privately.

Ask questions, get answers with citations — all powered by local AI. Your files never leave your machine.

Download for macOS

macOS only for now · Windows & Linux coming soon · Requires Ollama

DocLLM
📄
Q4_Report.pdf
📘
Contract_2024.docx
📑
Research_Notes.pdf
What were the key revenue drivers in Q4?
According to the Q4 report, the three key revenue drivers were: enterprise contract renewals (+34%), expansion into APAC markets, and the launch of the Pro tier in November. The report notes enterprise accounted for 62% of total ARR.
📄 Q4_Report.pdf · p.7 📄 Q4_Report.pdf · p.12
Summarise the APAC expansion risks
🔒 No data leaves your machine
Runs on local AI (Ollama)
📂 PDF, DOCX, and more
Smart source citations
🌐 Free & open source

Drowning in documents?

Finding answers buried in hundreds of pages is slow, error-prone — and sending those files to cloud AI raises serious privacy concerns.

1.
⏱️

Hours wasted reading

Manually skimming through reports, contracts, and research just to find one specific fact eats up your entire day.

2.
🔓

Privacy risks with cloud AI

Uploading sensitive contracts, financial reports, or internal docs to OpenAI or Google means your data lives on someone else's servers.

3.
🤷

Generic AI doesn't know your docs

ChatGPT has no context about your specific documents. Copy-pasting chunks is tedious, and you still miss the full picture.

Your AI that actually reads your files

DocLLM indexes your documents locally and lets you have real conversations with them — with citations so you know exactly where every answer comes from.

DocLLM runs on Ollama with models like Mistral 7B and nomic-embed-text, all on your own hardware. No API keys, no data leaving your machine — ever.
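Because Ollama serves its API on localhost only by default, you can confirm that inference happens entirely on your own machine. A quick check (assuming Ollama's default port 11434 and that you have already pulled the `mistral` model):

```shell
# Send a prompt to the local Ollama API — nothing leaves localhost
curl http://localhost:11434/api/generate \
  -d '{"model": "mistral", "prompt": "Say hello", "stream": false}'
```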

Every AI response includes exact source references — which document and which page. Click any citation to jump directly to that page in the built-in PDF viewer.

Organise documents into workspaces for different projects. Ask questions that span multiple PDF and DOCX files within a single workspace.

DocLLM uses Tesseract.js to automatically run OCR on scanned PDFs and image-based documents, so you can chat with any file regardless of how it was created.

Pin any AI response to your notes panel for quick reference. Export full conversation history as Markdown or TXT whenever you need to share insights with your team.

📁
Drop your documents
PDF, DOCX, and more accepted
▼ Indexed locally with Ollama embeddings
💬
Ask any question
Natural language, conversational
▼ RAG retrieval over your document chunks
Get answers + page sources
Click citation → jump to page
▼ Everything stays on your device
🔒
100% private
No cloud, no telemetry, no leaks
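The retrieval step in the middle of this flow is, at its core, a nearest-neighbour search over chunk embeddings. A minimal sketch of the idea (illustrative only; the toy vectors below stand in for real nomic-embed-text output):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, chunks, k=2):
    """Return the k chunks whose embeddings are closest to the query."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return ranked[:k]

# Toy index: each chunk keeps its text, source page, and embedding
chunks = [
    {"text": "Enterprise renewals grew 34%", "page": 7,  "vec": [0.9, 0.1, 0.0]},
    {"text": "APAC expansion launched",      "page": 12, "vec": [0.2, 0.8, 0.1]},
    {"text": "Office relocation notes",      "page": 3,  "vec": [0.0, 0.1, 0.9]},
]

# A query vector for something like "revenue drivers"
top = retrieve([0.8, 0.3, 0.0], chunks)
print([(c["text"], c["page"]) for c in top])
```

The top-ranked chunks, along with their page numbers, are what gets handed to the local model as context — which is why every answer can carry an exact page citation.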

Up and running in 5 minutes

No technical skills needed. DocLLM walks you through every step the first time you open it.

Ollama is free — you own your own AI engine, forever
The 4.4 GB model download happens once; after that, no internet needed
Works on any Mac with 8 GB RAM or more (Apple Silicon recommended)
Windows & Linux support coming soon
1

Download DocLLM

Install DocLLM like any Mac app — open the .dmg, drag it to Applications, and launch it. That's it.

2

Install Ollama

DocLLM needs Ollama — a free app that runs AI privately on your computer. Download it from ollama.com and install it. This is the engine that powers everything locally.

3

Start Ollama

Open Terminal and run the command below. This starts the AI engine in the background — keep Terminal open while using DocLLM.

$ ollama serve
4

Download your AI models — one time only

DocLLM's setup wizard automatically downloads two AI models to your Mac. This happens once and takes around 5 minutes depending on your connection. After that, everything runs fully offline.

~4.4 GB · downloaded once · stays on your Mac
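If you'd rather fetch the models yourself (or the wizard was interrupted), the equivalent Ollama commands are below. The exact model tags DocLLM expects may differ from these common defaults, and sizes are approximate:

```shell
ollama pull mistral            # chat model, ~4.1 GB
ollama pull nomic-embed-text   # embedding model, ~274 MB
```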
5

Drop a document and start chatting

Drag in any PDF or Word doc and ask questions in plain English. You'll get answers with exact page references in seconds. Everything stays on your computer — always.

Everything you need to know

Got a question not answered here? Drop us an email.

DocLLM is a privacy-first desktop app that lets you have AI conversations with your documents. It runs entirely on your machine using local AI models via Ollama — no data ever leaves your computer.

After the one-time model download (~4.4 GB, takes about 5 minutes), DocLLM works fully offline. All AI inference happens locally using Ollama on your machine — no internet needed.

A Mac with at least 8 GB RAM running macOS 12 or later. For best performance, 16 GB RAM is recommended. Apple Silicon Macs (M1/M2/M3/M4) run models particularly well thanks to unified memory.

DocLLM supports PDF (including scanned PDFs via OCR) and DOCX files. Support for more formats is on the roadmap.

Yes — completely. All document processing, embedding, and AI inference happen locally on your machine. No data is ever sent to a server.

Yes — completely free, forever. DocLLM is open source under the MIT licence. Download it, use every feature, and even modify the source code. No sign-up, no payment, no limits.

Yes. From the settings panel you can switch to any model available in Ollama — Llama 3, Phi-3, Gemma, and hundreds of others. You can also configure a custom Ollama server URL.
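Switching models only requires that the model exists in your local Ollama library. For example (model names here are standard Ollama tags; run `ollama list` to see what's already installed):

```shell
ollama list          # show models already on your machine
ollama pull llama3   # fetch a different chat model, then pick it in DocLLM's settings
```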

Free, open source,
and yours forever

Download DocLLM today. No sign-up, no payment, no limits.