Every Tripwire session produces a verdict (human, bot, or inconclusive) and a risk score (0.0 to 1.0). Your backend uses these to decide what to do.
## Verdicts
| Verdict | Score range | Meaning | Suggested action |
|---|---|---|---|
| `human` | 0.0 – 0.39 | Real user behavior detected | Allow |
| `inconclusive` | 0.40 – 0.69 | Not enough evidence for a confident call | Challenge or gather more context |
| `bot` | 0.70 – 1.0 | Automated or non-human behavior detected | Block or rate-limit |
## Risk score
The risk score is a continuous value from 0 (definitely human) to 1 (definitely bot). It’s normalized via a sigmoid function, so scores cluster near the extremes — most sessions score below 0.1 or above 0.9.
Use the score for granular policy when the verdict alone isn't enough:

- `score < 0.1` → fast-path allow (no friction)
- `score < 0.4` → allow (human verdict)
- `score < 0.7` → challenge (inconclusive)
- `score >= 0.7` → block (bot verdict)
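The cascading thresholds above can be sketched as a small policy helper. This is illustrative glue code, not part of the Tripwire SDK; the function and type names are our own.

```typescript
// Maps a Tripwire risk score (0.0 - 1.0) to a policy action, using the
// cascading thresholds from the list above. Names here are illustrative.
type Action = "fast-allow" | "allow" | "challenge" | "block";

function actionForScore(score: number): Action {
  if (score < 0.1) return "fast-allow"; // no friction at all
  if (score < 0.4) return "allow";      // human verdict range
  if (score < 0.7) return "challenge";  // inconclusive range
  return "block";                       // bot verdict range
}
```

Because the checks cascade, each branch only fires for scores not caught by an earlier one, so the four ranges never overlap.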
## Evaluation phases
Tripwire evaluates sessions in two phases:
| Phase | When | What it uses | Confidence |
|---|---|---|---|
| `snapshot` | Immediately on session creation | Environment probes, fingerprint, anti-tamper | Good for deterministic signals |
| `behavioral` | After user interaction | Mouse, keyboard, touch, timing patterns | Higher confidence for ambiguous cases |
If you call getSession() before the user interacts with the page, you’ll get a snapshot-phase result. For highest confidence, wait for at least a few seconds of user interaction.
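One way to favor behavioral-phase results is to re-check until the phase changes or a deadline passes. This is a sketch under assumptions: the `getSession()` call is taken from the text above, but the result shape (`phase`, `verdict`, `score`) and the polling approach are ours, not a documented SDK pattern.

```typescript
// Assumed result shape, inferred from this page's descriptions.
interface SessionResult {
  phase: "snapshot" | "behavioral";
  verdict: "human" | "bot" | "inconclusive";
  score: number;
}

// Poll getSession() until a behavioral-phase result arrives or the
// timeout expires. Falls back to the latest snapshot-phase result if
// the user never interacts.
async function getBestSession(
  getSession: () => Promise<SessionResult>,
  timeoutMs = 5000,
  intervalMs = 500,
): Promise<SessionResult> {
  const deadline = Date.now() + timeoutMs;
  let result = await getSession();
  while (result.phase !== "behavioral" && Date.now() < deadline) {
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
    result = await getSession();
  }
  return result; // may still be snapshot-phase on timeout
}
```

Callers should still check `result.phase` before treating the verdict as high-confidence.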
## Preliminary vs final
| Status | Meaning |
|---|---|
| `preliminary` | Early result; may be updated as more data arrives |
| `final` | Complete evaluation; won't change |
Snapshot-phase results are usually preliminary. Behavioral-phase results are final.
## Automation attribution
When Tripwire identifies the specific automation tool, the session includes attribution details:
| Category | Examples |
|---|---|
| `automation` | Playwright, Puppeteer, Selenium |
| `ai-agent` | browser-use, OpenAI Operator |
| `crawler` | Googlebot, Bingbot |
| `verified-bot` | Legitimate crawlers with Web Bot Auth |
| `fabricated` | Anti-detect browsers, spoofed profiles |
Attribution includes the framework name, variant, organization (if known), and a confidence score.
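An illustrative shape for those attribution details, inferred from the table and the sentence above. The field names are assumptions for the sketch, not the actual wire format.

```typescript
// Assumed attribution shape: category from the table above, plus the
// framework name, variant, organization, and confidence mentioned in
// the text. Field names are illustrative.
interface Attribution {
  category: "automation" | "ai-agent" | "crawler" | "verified-bot" | "fabricated";
  framework: string;     // e.g. "playwright"
  variant?: string;
  organization?: string; // when known
  confidence: number;    // 0.0 - 1.0
}

// Example policy: only act on automation attributions with high confidence.
function isConfidentAutomation(a: Attribution): boolean {
  return a.category === "automation" && a.confidence >= 0.8;
}
```

The 0.8 cutoff is an arbitrary example; tune it against your own traffic.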
## Using verdicts in your API
The sealed token reports the result in backend terms:
```json
{
  "verdict": "bot",
  "score": 0.94,
  "phase": "behavioral",
  "provisional": false
}
```
The durable session API (GET /v1/sessions/:id) returns the same data in a public format:
| Sealed token field | Session API field |
|---|---|
| `verdict: "human"` | `decision.automation_status: "none"` |
| `verdict: "bot"` | `decision.automation_status: "automated"` |
| `verdict: "inconclusive"` | `decision.automation_status: "uncertain"` |
| `phase` | `decision.evaluation_phase` |
| `provisional: true` | `decision.decision_status: "preliminary"` |
| `provisional: false` | `decision.decision_status: "final"` |
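If your backend consumes both sources, a small adapter keeps the rest of your code working in one vocabulary. This sketch normalizes a session API decision into sealed-token terms using the mapping above; the `SessionDecision` shape is assumed from that table.

```typescript
// Session API decision object, as described by the field-mapping table.
interface SessionDecision {
  automation_status: "none" | "automated" | "uncertain";
  evaluation_phase: "snapshot" | "behavioral";
  decision_status: "preliminary" | "final";
}

// Convert the session API's public field names back into the sealed
// token's backend terms, per the mapping table above.
function toSealedTerms(d: SessionDecision): {
  verdict: "human" | "bot" | "inconclusive";
  phase: string;
  provisional: boolean;
} {
  const verdicts = {
    none: "human",
    automated: "bot",
    uncertain: "inconclusive",
  } as const;
  return {
    verdict: verdicts[d.automation_status],
    phase: d.evaluation_phase,
    provisional: d.decision_status === "preliminary",
  };
}
```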
## Policy recommendations
- Start with report-only: log verdicts without blocking for the first week
- Treat `inconclusive` as an opportunity: challenge with CAPTCHA or email verification, don't block
- Wait for the behavioral phase on high-value actions when possible
- Use the score for edge cases: a `bot` verdict at 0.71 is weaker than one at 0.98
- Keep your Tripwire decision in your audit trail: log the `sessionId` alongside the business action
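The audit-trail recommendation can be sketched as a log record built next to the business action. The record shape and field names are illustrative; only the idea of pairing `sessionId` and verdict with the action comes from the list above.

```typescript
// Illustrative audit record: the Tripwire decision stored alongside the
// business action it influenced.
interface AuditEntry {
  timestamp: string;
  sessionId: string;
  verdict: "human" | "bot" | "inconclusive";
  score: number;
  action: string;  // the business action, e.g. "checkout"
  outcome: "allowed" | "challenged" | "blocked";
}

function auditEntry(
  sessionId: string,
  verdict: AuditEntry["verdict"],
  score: number,
  action: string,
  outcome: AuditEntry["outcome"],
): AuditEntry {
  return {
    timestamp: new Date().toISOString(),
    sessionId,
    verdict,
    score,
    action,
    outcome,
  };
}
```

Writing these records to the same store as your business events makes it easy to review, per action, what Tripwire said and what you did about it.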
## What's next