How Humanums works: the 6 signals that prove you're human
Quick Answer
Humanums works by measuring six behavioral signals during writing, then combining them into a certification score and badge tied to the finished document.
We get a lot of “but how does it actually work?” from developers and technical writers. Fair question. If you're going to trust a certification system, you should understand what it measures.
This post walks through each of the six behavioral signals Humanums analyzes when you hit Certify. No hand-waving, no marketing fluff. Just the mechanics.
A quick note on what we don't capture
Before we get into it: we never record the content of your keystrokes. Our telemetry captures timing data and structural events. We know that you pressed a key at timestamp X with an interval of Y milliseconds since the last key. We don't know which key. We know you deleted 14 characters and retyped 9. We don't know what those characters were.
This is a hard constraint, not a policy choice. The system is built so that keystroke content never enters our telemetry pipeline at all.
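To make that concrete, here's a rough sketch of what a telemetry event could look like. The field names are ours, invented for illustration; this is not Humanums' actual wire format.

```typescript
// Hypothetical shape of a telemetry event, for illustration only.
// Note what's absent: there is no field for which key was pressed
// or what text was typed.
interface TelemetryEvent {
  timestampMs: number;          // when the event happened
  kind: "keystroke" | "deletion" | "insertion" | "paste";
  interKeyIntervalMs?: number;  // gap since the previous keystroke
  charCount?: number;           // size of a deletion, insertion, or paste
  docPosition?: number;         // offset in the document, never content
}
```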
Signal 1: Keystroke cadence
Humans don't type at a constant speed. We speed up on familiar words, slow down when we're choosing phrasing carefully, and have natural micro-variations between every pair of keystrokes. This variance is measurable and consistent within an individual but different from person to person.
We measure the standard deviation and distribution of inter-key intervals across the entire writing session. A real human typing session has a characteristic “spiky” pattern. Someone pasting text in chunks shows no keystroke activity between paste events and suspiciously uniform typing elsewhere.
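Here's a minimal sketch of the statistic involved, written by us for illustration rather than taken from Humanums' codebase:

```typescript
// Sketch: summarize the spread of inter-key intervals for a session.
// Input is an array of keystroke timestamps in milliseconds.
function cadenceStats(timestamps: number[]): { meanMs: number; stdDevMs: number } {
  const intervals: number[] = [];
  for (let i = 1; i < timestamps.length; i++) {
    intervals.push(timestamps[i] - timestamps[i - 1]);
  }
  if (intervals.length === 0) return { meanMs: 0, stdDevMs: 0 };
  const mean = intervals.reduce((a, b) => a + b, 0) / intervals.length;
  const variance =
    intervals.reduce((a, b) => a + (b - mean) ** 2, 0) / intervals.length;
  // A near-zero stdDevMs over a long stretch is the "suspiciously uniform"
  // pattern described above; human sessions are spiky.
  return { meanMs: mean, stdDevMs: Math.sqrt(variance) };
}
```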
This signal alone isn't definitive. A slow, careful typist might score lower here. That's fine. It's one of six.
Signal 2: Pause frequency
When you write, you pause. Not just between sentences. You pause mid-sentence when you're figuring out the next clause. You pause after writing a heading while you organize your thoughts. You pause before tackling a section you know will be tricky.
We define a “thinking pause” as any gap of 2 seconds or more between keystrokes. For a 1,000-word piece, a typical human writer produces 150 to 300 thinking pauses. The distribution of these pauses matters too. They cluster around transition points: between paragraphs, after headings, at the start of complex arguments.
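The definition is simple enough to show directly. A sketch, using the 2-second threshold from this post:

```typescript
// Sketch: count "thinking pauses" -- gaps of 2 seconds or more
// between consecutive keystrokes.
const THINKING_PAUSE_MS = 2_000;

function countThinkingPauses(timestamps: number[]): number {
  let pauses = 0;
  for (let i = 1; i < timestamps.length; i++) {
    if (timestamps[i] - timestamps[i - 1] >= THINKING_PAUSE_MS) pauses++;
  }
  return pauses;
}
```

The count is only half the signal; where those pauses fall relative to document structure matters just as much, as the next paragraph explains.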
AI-assisted writing shows a distinctive pattern here. If someone generates a paragraph with AI, then types a prompt for the next one, you get long gaps between blocks with no pauses inside each block. It looks like a staircase, not a natural rhythm.
Signal 3: Revision behavior
This is the signal we're most proud of. Humans revise constantly. Not just at the end, in a big editing pass, but inline as they write. You type a word, realize it's wrong, backspace over it, try a different one. You finish a paragraph, go back to the second sentence, and restructure it. You add a comma, remove it, add it back.
We track every deletion and insertion event, mapped to position in the document. For a typical human writer, the revision-to-forward-progress ratio sits between 0.15 and 0.35. In other words, for every 100 characters of net content, they typed and deleted 15 to 35 characters along the way.
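As a sketch of how that ratio falls out of the event stream (ours, not the production code):

```typescript
// Sketch: revision-to-forward-progress ratio from a session's edit events.
// Typical human sessions land between roughly 0.15 and 0.35 by this measure.
interface EditEvent {
  kind: "insertion" | "deletion";
  charCount: number;
}

function revisionRatio(events: EditEvent[]): number {
  let inserted = 0;
  let deleted = 0;
  for (const e of events) {
    if (e.kind === "insertion") inserted += e.charCount;
    else deleted += e.charCount;
  }
  const net = inserted - deleted; // characters that survived to the final draft
  return net > 0 ? deleted / net : 0;
}
```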
Pasted text has a revision ratio of zero for that block. Dictation software tends to produce very low revision ratios too. The signal isn't just about the ratio, though. It's about the distribution. Human revisions happen throughout the document and throughout the session. They're not clustered at the end.
Signal 4: Paste detection
This one is straightforward. We detect paste events and measure what percentage of the final document arrived via paste rather than keystroke-by-keystroke composition.
Some paste is normal. Writers quote sources, paste URLs, move blocks around with cut-and-paste. We don't penalize paste usage outright. The threshold is generous. But a document that's 80% pasted content with no real typing around it is going to score poorly on this signal.
We also look at paste event size and timing. A single 500-word paste followed by light editing is a different pattern than 20 small pastes spread across a writing session. Context matters.
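The headline number is easy to state. A simplified sketch, leaving out the size-and-timing analysis:

```typescript
// Sketch: what fraction of the final document arrived via paste?
// This naive version can overcount when pasted text is later deleted,
// hence the cap at 1. The real signal also weighs paste size and timing.
function pasteFraction(pasteSizes: number[], finalCharCount: number): number {
  const pasted = pasteSizes.reduce((sum, n) => sum + n, 0);
  return finalCharCount > 0 ? Math.min(1, pasted / finalCharCount) : 0;
}
```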
Signal 5: Session distribution
Real writing usually happens across multiple sessions. You start a blog post in the morning, come back after lunch, finish it the next day. Even a single-session piece typically has the writer taking short breaks, checking references, getting coffee.
We look at how writing activity distributes across time. A document written in three sessions over two days scores higher on this signal than one written in a single unbroken burst. Not because longer is better, but because multi-session writing reflects normal human behavior.
Single-session writing still certifies. We just weight this signal lower when the document is short enough that one sitting makes sense.
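One simple way to segment a session stream, sketched with a threshold we picked for illustration (the post doesn't specify Humanums' actual value):

```typescript
// Sketch: split a stream of activity timestamps into sessions. Any gap
// longer than 30 minutes starts a new session. The 30-minute threshold
// is an assumption made for this example.
const SESSION_GAP_MS = 30 * 60 * 1_000;

function countSessions(timestamps: number[]): number {
  if (timestamps.length === 0) return 0;
  let sessions = 1;
  for (let i = 1; i < timestamps.length; i++) {
    if (timestamps[i] - timestamps[i - 1] > SESSION_GAP_MS) sessions++;
  }
  return sessions;
}
```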
Signal 6: Content time ratio
This compares the total active writing time to the word count. A 1,000-word blog post that shows 4 minutes of active writing time is suspicious. Even a fast writer doing 80 words per minute would need 12+ minutes, and that's without any pauses, revisions, or thinking.
Typical human writing speed for composed prose (not transcription) runs 20 to 50 words per minute. We don't enforce a fixed threshold. We look at whether the time spent is plausible for the content produced, accounting for the writer's own pace established during the session.
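A deliberately crude version of that check, for intuition only; the real system adapts to the writer's own pace rather than applying a fixed cutoff:

```typescript
// Sketch: a naive plausibility check on active writing time.
function timeIsPlausible(wordCount: number, activeMinutes: number): boolean {
  if (activeMinutes <= 0) return false;
  const wpm = wordCount / activeMinutes;
  // Sustaining more than ~80 wpm of composed prose, with no pauses or
  // revisions, is the suspicious pattern described above.
  return wpm <= 80;
}
```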
How they combine
Each signal produces a score between 0 and 1. These are combined using a weighted average. No signal is a dealbreaker on its own. A writer who types fast and doesn't pause much might score 0.6 on pause frequency but 0.95 on keystroke cadence and revision behavior.
The composite score, expressed on a 0 to 100 scale, determines the badge level. Above 75: Verified Human (green badge). Between 50 and 75: Likely Human (blue badge). Below 50: certification is rejected with an explanation of which signals fell short.
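Here's the shape of that rule in code. The weights below are invented for illustration; the post doesn't publish the actual weighting.

```typescript
// Sketch: combine six per-signal scores (each 0-1) into a composite on a
// 0-100 scale and map it to a badge. Weights are assumptions, not the
// real values; they sum to 1 so the composite stays in range.
const WEIGHTS = {
  cadence: 0.2,
  pauses: 0.15,
  revision: 0.25,
  paste: 0.2,
  sessions: 0.1,
  timeRatio: 0.1,
};

type Signal = keyof typeof WEIGHTS;

function badge(scores: Record<Signal, number>): string {
  let composite = 0;
  for (const s of Object.keys(WEIGHTS) as Signal[]) {
    composite += WEIGHTS[s] * scores[s];
  }
  composite *= 100;
  if (composite > 75) return "Verified Human (green badge)";
  if (composite >= 50) return "Likely Human (blue badge)";
  return "Rejected";
}
```

Notice how the weighted average embodies the "no signal is a dealbreaker" rule: a 0.6 on pause frequency gets diluted by strong scores elsewhere.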
This is a rule-based system, intentionally. We don't use machine learning for scoring in V1. Rules are transparent, explainable, and predictable. When we have enough data (our target is 10,000+ certified documents), we'll explore ML for edge cases. But the core system will always be auditable.
What about gaming it?
Can you trick the system? In theory, yes. If you type an entire AI-generated document character by character, at natural speed, with realistic pauses and revisions, you'd pass. But that takes as long as actually writing it. For a 2,000-word post, you're looking at 45 to 90 minutes of continuous typing. Most people who try this give up and just write their own piece. Which is sort of the point.
Got questions about how the scoring works? We're happy to go deeper. Create a free account and try certifying something yourself. Seeing your own signal breakdown is the best way to understand how it works.
Keep Reading
How Humanums works in practice
See the higher-level workflow from draft to certificate, badge, and verification page.
Human authorship verification
Understand the larger verification model behind the six signals.
Content authenticity badge
See how signal-level certification becomes a public trust signal.