> RARE_SIGNAL — OPEN-SOURCE SUPERPOWERS
A Hyperthinking workflow

Runsheets

Turn any pile of documents into a structured analysis table. Emails, chat logs, books, receipts, transcripts — anything. Each row is a corpus element. Each column is an AI extraction. Batch-execute across the entire set. Find the needle in any haystack.

runsheet_mixed_corpus.rs · 12 rows × 7 cols
Columns: Source · Size · Intent · Key Claims · Action Items · Red Flags · Summary Δ

  discord_export_q1.json     342 KB
  war_and_peace_full.txt     3.3 MB
  receipt_0417.txt           1.2 KB
  slack_#eng_march.json      890 KB
  claude_session_0328.md      67 KB
  teams_standup_log.txt       23 KB
  moby_dick_ch1-42.epub      1.8 MB
  codex_thread_auth.md        41 KB
  email_inbox_week12.mbox    512 KB
  voice_memo_0401.txt        8.4 KB
  quarterly_report.pdf       156 KB
  photo_receipt_scan.txt     0.4 KB

Extracting · 84 / 84 cells · Qwen 3.5 (7B) via Ollama · 2 items in summarization mode
The problem

In a world of hyper slop, you need Hyperthinking just to survive.

The volume is overwhelming. Your inbox, your chat logs, your AI sessions, your research stack — it’s all piling up faster than any human attention window can process. You don’t need a conversation with an AI about each document. You need a system that processes the entire corpus and gives you a table.

One at a time

You paste a document into a chat window. Get a decent answer. Now do it again for the next 49. Copy, paste, prompt, wait, copy output. This is not a workflow. It’s data entry.

No structure

Chat gives you prose. You needed a table. Intent, key claims, action items, contradictions — all in columns, all comparable. Prose doesn’t compare. Structure does.

Context evaporates

By document 12, the model has forgotten document 3. There’s no accumulation. No synthesis across the set. Each item is an island. The corpus never becomes a picture.

Human attention is finite

Your own attention window only stretches so far. No amount of discipline scales it to match the volume; the pile grows faster than you can read. You need a system that does the reading for you.

The workflow

Four steps. Corpus to structure.

Load documents. Define what you want extracted. Let the system batch-execute across every row. Read the result as a structured table. Any LLM. Any corpus. Any question.

01

Load your corpus

Drop in anything. Emails, transcripts, PDFs, chat exports, voice memos, scanned receipts. A book and a receipt can sit in the same sheet. Each element becomes a row. Pipeline connectors handle image-to-text, audio-to-text, PDF extraction — everything arrives as processable text.
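A minimal sketch of what a connector layer could look like. The converter hooks here (`pdf_to_text`, `audio_to_text`, `image_to_text`) are placeholder names, not the tool's actual API — wire in whatever extraction tools you already use.

```python
from pathlib import Path

def load_element(path: str,
                 pdf_to_text=lambda p: "",     # e.g. plug in a PDF extractor
                 audio_to_text=lambda p: "",   # e.g. plug in a transcriber
                 image_to_text=lambda p: "") -> dict:
    """Every corpus element arrives as a text row, whatever it started as."""
    ext = Path(path).suffix.lower()
    if ext == ".pdf":
        text = pdf_to_text(path)
    elif ext in (".wav", ".mp3", ".m4a"):
        text = audio_to_text(path)
    elif ext in (".png", ".jpg", ".jpeg"):
        text = image_to_text(path)
    else:  # .txt, .md, .json, .mbox: already text
        text = Path(path).read_text(encoding="utf-8", errors="ignore")
    return {"source": path, "text": text, "size": f"{len(text):,} chars"}
```

The dispatch-by-extension is the whole idea: by the time a row exists, format differences are gone.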

02

Define your columns

Tell the system what to extract. Intent. Key claims. Sentiment. Action items. Red flags. Contradictions. Each column is a prompt that runs against every row. You define the lens. The LLM does the looking.

03

Batch execute

The system fires an LLM request per cell, in parallel. Results stream into the table in real time. 50 documents, 7 columns — 350 extractions, done while you watch. That’s the whole trick. One HTTP request per cell. Parallel. Streaming.

04

Read the structure

Now you have a table. Filter it. Sort it. Search across the full corpus. “When was I talking about that one thing?” becomes a column query, not a memory exercise. The haystack is indexed. The needle lights up.
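Once the sheet exists, a "column query" is just a filter over one extracted field. A sketch, assuming the sheet is a list of row dicts keyed by your column names (as in the script further down this page):

```python
def query(sheet: list[dict], column: str, needle: str) -> list[dict]:
    """Return rows whose extracted column mentions the needle."""
    return [row for row in sheet
            if needle.lower() in row.get(column, "").lower()]

# "When was I talking about Project Aurora?" becomes:
#   hits = query(sheet, "key_claims", "Project Aurora")
```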

The hard part

What happens when a document doesn’t fit.

A receipt is 400 bytes. A book is 3.3 million. They’re both rows in the same sheet. Context windows have limits. When a corpus element exceeds the window, the system enters summarization mode. It tries to compress losslessly. It never succeeds. The diff between the original and the summary is the most important artifact in the entire workflow.

01
Detection

The system estimates token load before sending. The 400-byte receipt sails through untouched; the 3.3 MB book exceeds the context window and enters summarization mode automatically.
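A sketch of the detection check. The four-characters-per-token heuristic and the 8K budget are assumptions for illustration; a real tokenizer and your model's actual window are more precise.

```python
CONTEXT_TOKENS = 8_000  # assumed budget; set to your model's real window

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    return len(text) // 4

def needs_summarization(text: str, budget: int = CONTEXT_TOKENS) -> bool:
    return estimate_tokens(text) > budget
```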

02
Compression

The LLM makes a compression pass: the most faithful version that still fits within the window, preserving every claim, every data point, every nuance it can. It tries to be lossless.
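One way the compression pass could work, sketched as a single map step: chunk the oversized element, summarize each chunk, join the results. The `llm(prompt, text)` callable and the chunk size are assumptions; any chat-completion call fits the slot.

```python
def summarize_to_fit(text: str, llm, chunk_chars: int = 24_000) -> str:
    """Summarize each chunk as faithfully as possible, then join."""
    prompt = ("Compress this text. Preserve every claim, data point, "
              "and nuance you can. Target roughly a third of the length.")
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    return "\n\n".join(llm(prompt, chunk) for chunk in chunks)
```

If the joined result is still over budget, the same pass can be applied again, losing more signal each round.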

03
The loss

It is always lossy. Every summarization pass destroys signal. This is not a failure of the model; it is a law of compression. You cannot fit 40,000 tokens into 8,000 without loss. This never goes well.

04
The diff

So the critical artifact becomes the diff between the original corpus element and the post-summary version. That diff is where signal dies. Tracking it is how you know what you lost. The Summary Δ column is the most important output in the entire workflow.
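Two illustrative ways to surface a Summary Δ, not the tool's actual method: a crude lexical ratio as a quick loss gauge, and a prompt that asks the model to name what died.

```python
import difflib

def retention_ratio(original: str, summary: str) -> float:
    """0.0 = nothing in common, 1.0 = identical. A rough loss gauge."""
    return difflib.SequenceMatcher(None, original, summary).ratio()

# An LLM pass can then name the casualties explicitly:
DIFF_PROMPT = ("Compare the ORIGINAL and the SUMMARY. List the claims, "
               "numbers, and nuances present in the original but missing "
               "from the summary.")
```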

In practice

This is a first-class use case for AI.

We haven’t seen this implemented this distinctly, this easily, anywhere else. Runsheets is an incredibly useful mental model for AI and a really easy way to show the average person what the superpower actually looks like in practice.

Cross-platform search

"When was I talking about that thing?"

Export your Discord, Teams, Codex, and Claude conversations. Drop them all into one Runsheet. Add a column: “Mentions of Project Aurora.” Now you have a searchable index across every platform you use. Not a keyword match — a semantic extraction.

Research corpus

14 papers. One table of truth.

Load a stack of research papers, competitor analyses, internal memos. Extract intent, key claims, evidence quality, contradictions. In one table you can see which papers agree, which contradict, and where the gaps are. Days of analyst work. Minutes of compute.

Email triage

Inbox zero, but for real.

Export a week of email. Run extraction: urgency, asks, commitments made, deadlines mentioned. Sort by urgency. The 300 emails you were drowning in are now 12 rows that actually matter. The rest is confirmed noise.

Due diligence

A haystack. A needle. A table.

Legal docs, financial statements, contracts, correspondence. You need to find the one clause, the one number, the one promise that matters. Define the column. The LLM reads every document. The needle lights up.

Session archaeology

Your AI sessions are a goldmine you never mine.

Export your Claude sessions, your ChatGPT logs, your Codex threads. Run extraction: insights generated, decisions made, open questions, abandoned threads. Recover the signal you already produced but never captured.

The arbitrary pile

A book, a receipt, and a chat log walk into a Runsheet.

The corpus doesn’t have to make sense. A 3.3 MB novel and a 400-byte scanned receipt can be rows in the same sheet. The system handles the disparity. Summarization mode kicks in for the book. The receipt processes in milliseconds. Both get the same columns.

Benchmarks

You don’t need big token for this.

We build systems like Wayseer with small parameter models. A 16 GB MacBook Air running Qwen 3.5 locally produces strong extraction results. Claude Opus 4.6 distillation gives near-frontier quality at fractions of a cent per row. The eureka moment: everything you need is already here. We don’t need big token. We’ve got it.

Model                 Type    Hardware                Quality                                   Cost
Qwen 3.5 (7B)         Local   16 GB MacBook Air       Strong extraction, fast iteration         Free (local inference)
Opus 4.6 Distill      Cloud   Any machine + API key   Near-frontier quality, efficient tokens   $0.002–$0.01 per row
Frontier (Opus 4.6)   Cloud   Any machine + API key   Highest fidelity extraction               $0.02–$0.08 per row
The pattern

40 lines. Your LLM gets it instantly.

Send your LLM to grab this code right now. It will know exactly what you’re getting at. You don’t have to understand the implementation. Your LLM reads this and immediately has the pattern. Fork it. Hook up any OpenAI-compatible API. It just works.

# runsheet.py — the whole pattern in 40 lines
import json, httpx
from concurrent.futures import ThreadPoolExecutor

CORPUS = [
    "discord_export_q1.json",     #  342 KB — chat logs
    "war_and_peace_full.txt",     #  3.3 MB — triggers summarization
    "receipt_0417.txt",           #  1.2 KB — processes in ms
    "claude_session_0328.md",     #   67 KB — AI session export
    "email_inbox_week12.mbox",    #  512 KB — email archive
]

COLUMNS = [
    {"name": "intent",       "prompt": "What is the primary intent of this document?"},
    {"name": "key_claims",   "prompt": "List the 3 strongest claims, one per line."},
    {"name": "action_items", "prompt": "Extract any concrete action items."},
    {"name": "red_flags",    "prompt": "Note anything misleading, contradictory, or unsupported."},
    {"name": "summary_diff", "prompt": "If this was summarized to fit context, what was lost?"},
]

API = "http://localhost:11434/v1/chat/completions"  # ollama, lmstudio, any openai-compat
MODEL = "qwen3.5:7b"                                # swap for any model you have

def extract(doc_text: str, col: dict) -> str:
    """One LLM call = one cell in the Runsheet."""
    r = httpx.post(API, json={
        "model": MODEL,
        "messages": [
            {"role": "system", "content": col["prompt"]},
            {"role": "user",   "content": doc_text[:32_000]},  # naive truncation fallback
        ],
    }, timeout=120)
    return r.json()["choices"][0]["message"]["content"]

def process_row(path: str) -> dict:
    text = open(path, encoding="utf-8", errors="ignore").read()  # tolerate messy exports
    row = {"source": path, "size": f"{len(text):,} chars"}
    for col in COLUMNS:
        row[col["name"]] = extract(text, col)
    return row

# Batch execute: all rows in parallel
with ThreadPoolExecutor(max_workers=5) as pool:
    sheet = list(pool.map(process_row, CORPUS))

print(json.dumps(sheet, indent=2))

Works with Ollama, LM Studio, vLLM, OpenRouter, or any OpenAI-compatible endpoint. Local or cloud. Your choice.
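The only backend-specific values in the script are `API` and `MODEL`. A sketch of swapping engines; the ports shown are each tool's common default, and the model identifiers are placeholders for whatever you have pulled or pay for.

```python
# Swap engines by changing two strings. Model names are examples only.
ENDPOINTS = {
    "ollama":     ("http://localhost:11434/v1/chat/completions", "qwen3.5:7b"),
    "lmstudio":   ("http://localhost:1234/v1/chat/completions",  "local-model"),
    "vllm":       ("http://localhost:8000/v1/chat/completions",  "your-served-model"),
    "openrouter": ("https://openrouter.ai/api/v1/chat/completions", "your-cloud-model"),
}
API, MODEL = ENDPOINTS["ollama"]
```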

Open source

Free. Forever. For everyone.

Runsheets ships on the Rare Signal GitHub. Bring your own API key or run it fully local. No subscription. No walled garden. This is grassroots AI for people who want the superpower, not the platform. Go yoink the code.

Bring your own model

Local Qwen 3.5 on a MacBook Air. Cloud distillation via API. Frontier models when fidelity matters. You pick the engine. The sheet doesn’t care.

Open architecture

The spec is a Python script you can read in 30 seconds. Fork it. Modify it. Wrap it in a UI. Plug it into your pipeline. It’s yours. End to end.

Honest about cost

Running large corpora through LLMs is spendy. We publish real benchmarks so you can estimate cost before you start. No hidden token bills. No surprises.

“You put elements into a spreadsheet and you run an extraction on each and every one. A book and a receipt in the same sheet. The thing that would take a human analyst days — you get it in minutes. That’s Runsheets. That’s the superpower.”

Rare Signal

Start a Runsheet.

The code is open. The models are cheap. Pick a corpus and go.