Every Tripwire session produces a verdict (human, bot, or inconclusive) and a risk score (0.0 to 1.0). Your backend uses these to decide what to do.

Verdicts

| Verdict | Score range | Meaning | Suggested action |
| --- | --- | --- | --- |
| human | 0.0 – 0.39 | Real user behavior detected | Allow |
| inconclusive | 0.40 – 0.69 | Not enough evidence for a confident call | Challenge or gather more context |
| bot | 0.70 – 1.0 | Automated or non-human behavior detected | Block or rate-limit |

Risk score

The risk score is a continuous value from 0 (definitely human) to 1 (definitely bot). It’s normalized via a sigmoid function, so scores cluster near the extremes — most sessions score below 0.1 or above 0.9. Use the score for granular policy when the verdict alone isn’t enough:
if score < 0.1  → fast-path allow (no friction)
if score < 0.4  → allow (human verdict)
if score < 0.7  → challenge (inconclusive)
if score >= 0.7 → block (bot verdict)
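The thresholds above translate directly into a small policy function. This is a sketch of the granular policy, not part of the Tripwire SDK; the action names are illustrative:

```typescript
// Map a Tripwire risk score to an action using the documented thresholds.
// Action names are illustrative, not part of the Tripwire API.
type Action = "fast-allow" | "allow" | "challenge" | "block";

function actionForScore(score: number): Action {
  if (score < 0.1) return "fast-allow"; // no friction
  if (score < 0.4) return "allow";      // human verdict
  if (score < 0.7) return "challenge";  // inconclusive
  return "block";                       // bot verdict
}
```

Because scores cluster near the extremes, most traffic lands in the fast-allow or block branches; the middle bands mainly matter for ambiguous sessions.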

Evaluation phases

Tripwire evaluates sessions in two phases:
| Phase | When | What it uses | Confidence |
| --- | --- | --- | --- |
| snapshot | Immediately on session creation | Environment probes, fingerprint, anti-tamper | Good for deterministic signals |
| behavioral | After user interaction | Mouse, keyboard, touch, timing patterns | Higher confidence for ambiguous cases |
If you call getSession() before the user interacts with the page, you’ll get a snapshot-phase result. For highest confidence, wait for at least a few seconds of user interaction.
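One way to wait for the behavioral phase is to poll until the session leaves the snapshot phase or a timeout expires. A minimal sketch, assuming `getSession()` resolves to an object with the `phase` field described above (the `TripwireSession` shape here is an assumption, not the SDK's actual type):

```typescript
// Assumed minimal shape of a session result; the real SDK type may differ.
type Phase = "snapshot" | "behavioral";
interface TripwireSession {
  phase: Phase;
  score: number;
}

// Poll until the session reaches the behavioral phase, or give up after
// timeoutMs and use the snapshot-phase result we have.
async function getConfidentSession(
  getSession: () => Promise<TripwireSession>,
  timeoutMs = 5000,
  pollMs = 500,
): Promise<TripwireSession> {
  const deadline = Date.now() + timeoutMs;
  let session = await getSession();
  while (session.phase === "snapshot" && Date.now() < deadline) {
    await new Promise((resolve) => setTimeout(resolve, pollMs));
    session = await getSession();
  }
  return session;
}
```

Falling back to the snapshot-phase result after the timeout keeps latency bounded for users who never interact with the page.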

Preliminary vs final

| Status | Meaning |
| --- | --- |
| preliminary | Early result, may be updated as more data arrives |
| final | Complete evaluation, won't change |
Snapshot-phase results are usually preliminary. Behavioral-phase results are final.

Automation attribution

When Tripwire identifies the specific automation tool, the session includes attribution details:
| Category | Examples |
| --- | --- |
| automation | Playwright, Puppeteer, Selenium |
| ai-agent | browser-use, OpenAI Operator |
| crawler | Googlebot, Bingbot |
| verified-bot | Legitimate crawlers with Web Bot Auth |
| fabricated | Anti-detect browsers, spoofed profiles |
Attribution includes the framework name, variant, organization (if known), and a confidence score.
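Attribution categories often call for different handling than the raw verdict, since some automated traffic (verified bots, crawlers) may be welcome. A sketch under stated assumptions: the `Attribution` shape mirrors the fields listed above, and the per-category policy is an example, not a recommendation from Tripwire:

```typescript
// Assumed attribution shape based on the fields described above.
type AttributionCategory =
  | "automation"
  | "ai-agent"
  | "crawler"
  | "verified-bot"
  | "fabricated";

interface Attribution {
  category: AttributionCategory;
  framework: string;   // e.g. "playwright"
  confidence: number;  // 0.0 – 1.0
}

// Example category-based policy; tune to your own traffic.
function actionForAttribution(
  attr: Attribution,
): "allow" | "challenge" | "block" {
  switch (attr.category) {
    case "verified-bot":
      return "allow"; // cryptographically verified via Web Bot Auth
    case "crawler":
      return "challenge"; // unverified, but may be welcome traffic
    default:
      // automation, ai-agent, fabricated: block only on confident attribution
      return attr.confidence >= 0.8 ? "block" : "challenge";
  }
}
```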

Using verdicts in your API

The sealed token carries the verdict in compact backend terms:
{
  "verdict": "bot",
  "score": 0.94,
  "phase": "behavioral",
  "provisional": false
}
The durable session API (GET /v1/sessions/:id) returns the same data in a public format:
| Sealed token field | Session API field |
| --- | --- |
| verdict: "human" | decision.automation_status: "none" |
| verdict: "bot" | decision.automation_status: "automated" |
| verdict: "inconclusive" | decision.automation_status: "uncertain" |
| phase | decision.evaluation_phase |
| provisional: true | decision.decision_status: "preliminary" |
| provisional: false | decision.decision_status: "final" |
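If your backend consumes both sources, it helps to normalize sealed-token payloads into the session API's field names. A sketch implementing the mapping table above (the function name and return shape are this example's, not part of either API):

```typescript
// Sealed-token payload, per the JSON example above.
interface SealedVerdict {
  verdict: "human" | "bot" | "inconclusive";
  phase: string;
  provisional: boolean;
}

// Translate sealed-token fields into the session API's decision fields.
function toSessionApiDecision(token: SealedVerdict) {
  const statusMap = {
    human: "none",
    bot: "automated",
    inconclusive: "uncertain",
  } as const;
  return {
    automation_status: statusMap[token.verdict],
    evaluation_phase: token.phase,
    decision_status: token.provisional ? "preliminary" : "final",
  };
}
```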

Policy recommendations

  1. Start with report-only — log verdicts without blocking for the first week
  2. Treat inconclusive as an opportunity — challenge with CAPTCHA or email verification, don’t block
  3. Wait for behavioral phase on high-value actions when possible
  4. Use the score for edge cases — a bot verdict at 0.71 is weaker than one at 0.98
  5. Keep your Tripwire decision in your audit trail — log the sessionId alongside the business action
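Recommendations 1, 2, and 5 can be combined in a single decision helper: log every verdict with the sessionId, and gate enforcement behind a flag so the first week runs report-only. The result shape and log format here are illustrative:

```typescript
// Assumed minimal result shape for this sketch.
interface TripwireResult {
  sessionId: string;
  verdict: "human" | "bot" | "inconclusive";
  score: number;
}

function decide(
  result: TripwireResult,
  businessAction: string,
  enforce: boolean, // false during the report-only rollout week
): "allow" | "challenge" | "block" {
  // Recommendation 5: keep the Tripwire decision in the audit trail,
  // logged alongside the business action.
  console.log(
    JSON.stringify({
      action: businessAction,
      sessionId: result.sessionId,
      verdict: result.verdict,
      score: result.score,
      enforce,
    }),
  );
  if (!enforce) return "allow"; // recommendation 1: observe, don't block
  if (result.verdict === "bot") return "block";
  if (result.verdict === "inconclusive") return "challenge"; // recommendation 2
  return "allow";
}
```

Flipping `enforce` to true after reviewing a week of logs lets you validate thresholds against your real traffic before any user is blocked.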

What’s next