APIs are different from form-submit surfaces. You don’t want a sealed handoff per request — you want one Tripwire session that covers a user’s browsing session, reused across many API calls. And you want to distinguish LLM scrapers (`ai-agent`) from legitimate crawlers (`verified-bot`, `crawler`), because the right answer for each is different.

## The threat
Two problems get lumped together as “API abuse,” and they want different responses:

- LLM scraping. Cloud agents like browser-use and computer-use, plus self-hosted equivalents, scrape the web on behalf of model training, retrieval pipelines, or end-user automation. They look like browsers because they are browsers, just driven by an agent loop. Tripwire attributes these as `ai-agent`.
- Scripted scraping. Classic headless Chrome or HTTP-level clients iterating your listing endpoints, pricing APIs, or search results. Tripwire surfaces these under the `automation` attribution.
At the same time, there is traffic you must not break:

- Search crawlers (Googlebot, Bingbot) — you want them indexing your public content.
- Web-Bot-Auth–authenticated agents — a growing set of AI products sign their requests with HTTP message signatures declaring themselves. Tripwire surfaces this as the `verified-bot` category and carries the domain/key identity through to your backend.
- Your own internal tooling — health checks, monitoring, analytics pipelines.
## The flow

1. **Start Tripwire once at app boot.** For authenticated SPAs, start the client when the shell mounts. For public APIs accessed directly (no browser), see the bottom of this page.
2. **Acquire a session once and reuse it.** Call `getSession()` at the start of an API-consuming flow — not on every request. Pass `sessionId` as a header on subsequent XHRs.
3. **Verify on the first protected request.** Either verify a fresh sealed token, or look up the durable session via `GET /v1/sessions/:sessionId` and cache the verdict.
4. **Re-verify on high-value actions or periodically.** For mutations or sensitive reads, request a fresh handoff. For long-lived sessions, refresh the cached verdict every few minutes.
## Client integration

Request a sealed handoff at the start of a session, stash the session ID, and send it as a header on every API call.
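A minimal sketch of that pattern. The header name `x-tripwire-session` and the exact SDK call shape are assumptions, not the documented API — the point is to memoize the session so `getSession()` runs once, then attach the ID to every request:

```typescript
type Session = { sessionId: string };

// Memoize so the underlying getSession() runs at most once per page lifetime.
function makeSessionProvider(getSession: () => Promise<Session>) {
  let cached: Promise<Session> | null = null;
  return () => (cached ??= getSession());
}

// Merge the Tripwire session ID into a request's headers before sending.
async function withTripwireHeaders(
  provider: () => Promise<Session>,
  headers: Record<string, string> = {},
): Promise<Record<string, string>> {
  const { sessionId } = await provider();
  return { ...headers, "x-tripwire-session": sessionId }; // header name: assumption
}
```

At boot you would create one provider (for example, `makeSessionProvider(() => tripwire.getSession())`) and route every API call's headers through `withTripwireHeaders`.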
## Server verification: two patterns

For long-lived API sessions you have a choice. Either verify the sealed token on every request (cheap — it’s a local crypto operation) and cache the decoded result, or call `GET /v1/sessions/:sessionId` once and cache the durable verdict for a few minutes.
### Pattern A: verify the sealed token per request
### Pattern B: cache the durable verdict
For very hot endpoints, verify once per session and cache the result. The durable readback exposes `decision.automation_status` (`"automated" | "human" | "uncertain"`) rather than the sealed-token verdict field; see Server verification for the full shape.
## When to re-verify

Session reuse is great for throughput, but a long-lived session becomes a stale verdict. Three refresh triggers worth coding:

- High-value mutations — a user changing their email, deleting data, exporting an archive. Always ask for a fresh sealed handoff from the client.
- Periodic refresh — every 5–15 minutes for long sessions, re-fetch `GET /v1/sessions/:sessionId`. Tripwire’s server-side behavioral scoring can move a verdict between `snapshot` and `behavioral` phases as evidence accumulates, and you want the latest.
- Suspicious pattern observed — your own application logic (burst of identical queries, geographic jump) can trigger a client-side `tripwire.getSession()` that produces a new, freshly-attested handoff.
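The three triggers collapse into one predicate. This is an illustrative sketch: the thresholds, the `suspicious` flag, and the field names are policy choices of this example, not part of Tripwire’s API:

```typescript
interface SessionState {
  verifiedAt: number;  // when the current verdict was obtained (ms epoch)
  suspicious: boolean; // set by your own application logic
}

function needsReverify(
  state: SessionState,
  req: { sensitive: boolean; now: number; maxAgeMs?: number },
): boolean {
  if (req.sensitive) return true;                       // high-value mutation
  const maxAge = req.maxAgeMs ?? 10 * 60_000;           // in the 5–15 min band
  if (req.now - state.verifiedAt > maxAge) return true; // periodic refresh
  return state.suspicious;                              // suspicious pattern
}
```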
## Splitting policy by attribution

For read APIs, the policy matrix that works on most sites:

| Category | GET | POST / PUT / DELETE |
|---|---|---|
| `human` | Allow | Allow |
| `verified-bot` | Allow | Block (bots generally shouldn’t mutate) |
| `crawler` | Allow | Block |
| `automation` | Block or throttle | Block |
| `ai-agent` | Block or throttle | Block |
| `unknown` (with bot verdict) | Throttle | Block |

Throttling is the interesting middle ground for `ai-agent`: a low QPS limit that’s fine for an agent answering one user’s question and painful for a training-data crawler.
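The matrix translates directly into a pure function. Category names come from the table; choosing "throttle" over "block" for `automation`/`ai-agent` reads is a policy choice this sketch makes, not a Tripwire default:

```typescript
type Category =
  | "human" | "verified-bot" | "crawler"
  | "automation" | "ai-agent" | "unknown";
type Action = "allow" | "block" | "throttle";

function apiPolicy(category: Category, method: string): Action {
  // Only safe methods are ever allowed through for non-human traffic.
  const mutating = !["GET", "HEAD", "OPTIONS"].includes(method.toUpperCase());
  if (category === "human") return "allow";
  if (mutating) return "block"; // every non-human row blocks mutations
  switch (category) {
    case "verified-bot":
    case "crawler":
      return "allow";      // let search and signed agents read
    case "automation":
    case "ai-agent":
      return "throttle";   // or "block", per the matrix
    default:
      return "throttle";   // unknown with a bot verdict
  }
}
```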
## APIs called directly (no browser)

Some API consumers don’t run a browser at all — a mobile app, a server-to-server integration, a CLI tool. Tripwire’s browser SDK doesn’t apply there. Two options:

- Require pre-issued API keys for non-browser traffic. Route traffic without a Tripwire session header down the API-key path, and apply Tripwire only to browser-origin requests.
- Rely on network-level attribution. Even without the browser bundle, Tripwire’s HTTP edge sees JA4 TLS fingerprints, HTTP/2 SETTINGS, and Web-Bot-Auth signatures, and the durable session API can surface these for direct requests. See Detection categories for what’s available without the client SDK.
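The first option — splitting traffic by the presence of a session header — can be sketched as a small router. Both header names here are assumptions for illustration:

```typescript
type Route =
  | { path: "tripwire"; sessionId: string } // browser-origin: verify via Tripwire
  | { path: "api-key"; key: string }        // non-browser: pre-issued credential
  | { path: "reject"; reason: string };

function routeRequest(headers: Record<string, string | undefined>): Route {
  const sessionId = headers["x-tripwire-session"]; // header name: assumption
  if (sessionId) return { path: "tripwire", sessionId };
  const key = headers["x-api-key"];                // header name: assumption
  if (key) return { path: "api-key", key };
  return { path: "reject", reason: "no session header and no API key" };
}
```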
## What’s next

- **User-generated content** — the write-side counterpart: stop LLM posts at the composer.
- **Server verification** — reference for both sealed-token and durable-readback paths.
- **Detection categories** — what Tripwire detects, with and without the browser SDK.
- **Going to production** — rollout plan for API-wide enforcement.