DraftLens

Multi-model review

Why DraftLens runs multiple models on the same manuscript (DOCX or PDF): complementary failure modes, structured JSON outputs, and a merged issue ledger you can act on.

Last updated 2026-05-11

Single-model chat can be brilliant and still blind to classes of errors. DraftLens runs multiple structured reviewers on the same manuscript (DOCX or PDF), merges findings, and can iterate toward convergence when configured—so disagreements become inspectable data instead of silent majority votes.
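The fan-out step can be pictured as a small sketch. Everything here is illustrative: the reviewer callables, the finding schema (`span`, `issue`, `severity`), and the model names are assumptions for the example, not DraftLens's actual API.

```python
import json

# Hypothetical reviewer interface: each callable takes the manuscript text
# and returns structured findings as a list of dicts. The schema below
# ({"span", "issue", "severity"}) is illustrative, not DraftLens's real one.

def review_with_all(manuscript: str, reviewers: dict) -> dict:
    """Run every reviewer on the same manuscript; keep results per model."""
    findings = {}
    for name, reviewer in reviewers.items():
        findings[name] = reviewer(manuscript)
    return findings

# Two toy "models" that agree on one finding and disagree on severity,
# so the disagreement stays visible in the per-model output.
def model_a(text):
    return [{"span": "Section 2", "issue": "ambiguous deadline", "severity": "high"}]

def model_b(text):
    return [{"span": "Section 2", "issue": "ambiguous deadline", "severity": "medium"},
            {"span": "Section 5", "issue": "undefined term", "severity": "low"}]

results = review_with_all("...", {"model_a": model_a, "model_b": model_b})
print(json.dumps(results, indent=2))
```

Keeping the raw per-model JSON around (rather than only a merged verdict) is what makes later disagreement analysis possible.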

This is not “more models for show.” It is a hedge against correlated mistakes when one vendor’s defaults miss a risk pattern another catches.

Why it exists

Serious review workflows need redundancy: different training distributions, different refusal behaviors, and different blind spots. Merge logic exists because humans should not have to diff three chat transcripts by hand.
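A minimal merge might key findings by location and issue, recording which models flagged each item and whether their severities agree. This is a sketch under assumed schema names (`span`, `issue`, `severity`), not DraftLens's actual merge logic.

```python
from collections import defaultdict

def merge_findings(per_model: dict) -> list:
    """Merge per-model findings into one ledger keyed by (span, issue).

    Each ledger entry records which models flagged the issue and the set
    of severities assigned, so disagreements stay inspectable instead of
    being collapsed by a silent majority vote.
    """
    ledger = defaultdict(lambda: {"models": [], "severities": set()})
    for model, findings in per_model.items():
        for f in findings:
            key = (f["span"], f["issue"])
            ledger[key]["models"].append(model)
            ledger[key]["severities"].add(f["severity"])
    return [
        {"span": span, "issue": issue,
         "models": sorted(entry["models"]),
         "severity_agreement": len(entry["severities"]) == 1}
        for (span, issue), entry in ledger.items()
    ]
```

An entry flagged by one model, or flagged by several with conflicting severities, is exactly the "inspectable data" the merge exists to surface.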

When it matters most

  • High-stakes memos and agreements where missing an ambiguity is costly.
  • Long documents where a single pass cannot cover everything with equal depth.

Where it can fail or be limited

  • All models can miss context outside the manuscript (unless you attach evidence appropriately).
  • Rate limits or provider outages can reduce quorum—DraftLens should label partial status honestly.
  • Convergence is bounded; some disagreements still end in human follow-up.

What you should still verify

  • Severity mapping to your org’s rubric: do not ship based on model-reported severity alone.
  • Citations, numbers, and dates against primary sources.
  • Anything touching regulated language or parties—use locks and human eyes.
