Any endpoint that accepts user-authored public content — posts, status updates, comments, replies, reviews — is a target for LLM-powered content generation. The right integration runs at the content-submit handler, weights the ai-agent attribution heavily, and uses the durable visitor fingerprint to cap posting velocity across account rotations.

The threat

Public content surfaces have two automation shapes stacked on top of each other:
  • Scripted posting — headless browsers or direct API calls submitting content on a schedule. Classic automation attribution: Playwright, Puppeteer, Selenium.
  • LLM-written content — the script is still automated, but now the content itself is generated by a language model in the same pipeline. The browser looks real because a real (headless) Chromium is driving it; the text looks real because it came from a capable LLM. This is the ai-agent category.
The second one is where policy has to be deliberate. Automation is unambiguously bad on a posting surface — no real human posts through Puppeteer. AI-generated content is blurrier: a human writing a post with LLM assistance is probably fine; an LLM agent posting hundreds of posts under one account is not. Tripwire distinguishes the two because the integration sits at the browser level, not the text level — it sees whether a script or a human drove the submission, regardless of who wrote the words.

Three tactics dominate abuse on these endpoints:
  • Spam — outbound links, crypto shilling, promotion of other accounts or products.
  • Astroturfing — coordinated posting to manufacture consensus or suppress criticism.
  • Engagement farming — AI-generated posts designed to get reactions, build account reputation, and later pivot to spam or resale.
All three share the same detection target: a submission that wasn’t driven by a human at a keyboard.

The flow

  1. Start Tripwire on the composer surface. Wherever the user actually drafts content — the compose modal, the reply box, the review form.
  2. Call getSession() at the submit click. This captures keystroke timing, paste patterns, and the full fingerprint in the sealed handoff.
  3. Verify and inspect the attribution category. Check decision.verdict and attribution.bot.facets.category.value — you’ll treat automation and ai-agent differently from human, and you may want to allow crawler/verified-bot traffic through read endpoints (see API abuse).
  4. Apply a visitor-fingerprint velocity cap. Even a “human” verdict shouldn’t let one fingerprint publish 200 posts per hour. Rate-limit by visitor_fingerprint.id, not just account ID.
  5. Consider shadow mode. For surfaces where silent rejection is unacceptable, score content but don’t block — feed the verdict into trust-and-safety tooling instead.

Client integration

<script type="module">
  const tripwirePromise = import("https://cdn.tripwirejs.com/t.js").then(
    (Tripwire) =>
      Tripwire.start({
        publishableKey: "pk_live_your_publishable_key",
      }),
  );

  document.querySelector("#post-form").addEventListener("submit", async (e) => {
    e.preventDefault();
    const tripwire = await tripwirePromise;
    const { sessionId, sealedToken } = await tripwire.getSession();

    await fetch("/api/posts", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        body: e.target.body.value,
        tripwire: { sessionId, sealedToken },
      }),
    });
  });
</script>
Keystroke and paste events are part of Tripwire’s behavioral signal set, and they fire strongest when the user is actually composing in your page. Mount the client on the composer, not just the top-level app shell.
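One way to do that without re-importing the SDK on every open (a sketch — makeEnsureTripwire and the wiring comment are illustrative helpers, not part of the Tripwire API) is to memoize the start call and trigger it when the composer first opens:

```javascript
// Memoize Tripwire startup: the first composer open kicks off the import,
// and later opens reuse the same in-flight promise. `startFn` wraps the
// CDN import from the snippet above, e.g.
//   () => import("https://cdn.tripwirejs.com/t.js")
//           .then((Tripwire) => Tripwire.start({ publishableKey: "pk_live_your_publishable_key" }))
function makeEnsureTripwire(startFn) {
  let promise = null;
  return () => (promise ??= startFn());
}

// Hypothetical wiring — start collection when the composer mounts:
// const ensureTripwire = makeEnsureTripwire(startFn);
// document.querySelector("#compose-open").addEventListener("click", ensureTripwire);
```

Because the returned function caches the promise, calling it again from the submit handler yields the same session rather than starting a second one.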

Server verification

const { safeVerifyTripwireToken } = require("@abxy/tripwire-server");

app.post("/api/posts", async (req, res) => {
  const result = safeVerifyTripwireToken(
    req.body.tripwire.sealedToken,
    process.env.TRIPWIRE_SECRET_KEY,
  );

  if (!result.ok) {
    return res.status(403).json({ error: "Verification failed" });
  }

  const { decision, attribution, visitor_fingerprint } = result.data;
  const category = attribution?.bot?.facets?.category?.value;

  if (decision.verdict === "bot") {
    if (category === "automation" || category === "ai-agent") {
      // Hard block on the posting surface — even if you allow ai-agent
      // traffic on read APIs elsewhere.
      return res.status(403).json({ error: "Posting blocked" });
    }
    // Other bot categories (unknown, crawler): same generic error, but log
    // them — new automation patterns tend to surface as "unknown" first.
    console.warn("Blocked bot post", { category });
    return res.status(403).json({ error: "Posting blocked" });
  }

  // Even on a human verdict, apply a per-fingerprint velocity cap.
  if (visitor_fingerprint?.id) {
    const tooFast = await exceedsPostingRate(visitor_fingerprint.id);
    if (tooFast) return res.status(429).json({ error: "Slow down" });
  }

  const post = await createPost({
    authorId: req.session.userId,
    body: req.body.body,
    tripwireSessionId: req.body.tripwire.sessionId,
    tripwireVerdict: decision.verdict,
    tripwireCategory: category,
  });

  res.json({ post });
});

Decisioning policy by attribution category

The top-level verdict tells you whether to block; the attribution category tells you why and helps you build useful signal for trust and safety teams.
Attribution category     Recommended action
automation               Block. No legitimate use of Puppeteer/Playwright posting through a real user account.
ai-agent                 Block on posting; allow on read APIs. LLM agents reading your content is a product question; LLM agents posting under user accounts is abuse.
crawler                  Block on posting. Crawlers don’t compose.
unknown (+ bot verdict)  Block. Log for investigation — new automation patterns show up here first.
human                    Allow, subject to the velocity cap below.
Persist the tripwire_category alongside the content row. A post that was created under a human verdict but with manipulation.verdict === "high" is a useful thing to surface to moderators without blocking the user outright.
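As a sketch of what that surfacing can look like (the tripwire_verdict and tripwire_manipulation column names and the node-postgres-style db.query(sql, params) client are assumptions — the server example above persists the verdict and category, and you would persist the manipulation verdict the same way):

```javascript
// Hypothetical moderation-queue query: posts that passed the bot check
// but carried a high manipulation verdict. Nothing is blocked here —
// the rows are fetched for human review.
async function postsForReview(db, limit = 50) {
  return db.query(
    `SELECT id, author_id, body, created_at
       FROM posts
      WHERE tripwire_verdict = 'human'
        AND tripwire_manipulation = 'high'
        AND reviewed_at IS NULL
      ORDER BY created_at DESC
      LIMIT $1`,
    [limit],
  );
}
```

Feeding this queue to trust-and-safety tooling keeps the publish path fast while still acting on the signal.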

Velocity caps that survive account rotation

Attackers rotate accounts — making a hundred accounts and posting from each one is cheaper than making one account and posting a hundred times. A per-account rate limit catches the second pattern and misses the first. Per-fingerprint caps catch both: the durable visitor_fingerprint.id persists across account creation on the same device, so a fingerprint that signed up three times in six hours is already suspicious by the time it tries to post.
Node.js
const redis = require("redis");
const client = redis.createClient();
const ready = client.connect(); // node-redis v4 requires an explicit connect

async function exceedsPostingRate(visitorId) {
  await ready;
  const key = `posts:${visitorId}`;
  const count = await client.incr(key);
  if (count === 1) await client.expire(key, 3600); // 1-hour rolling window
  return count > 30; // tune per your surface
}
Pair with a per-account cap. The two limits solve different problems: per-account stops one user flooding your feed; per-fingerprint stops one attacker flooding it from many accounts.
visitor_fingerprint is null on sessions where Tripwire couldn’t establish a durable ID (hardened privacy browsers, very short sessions). Fall back to IP-based limiting when the visitor ID is absent.
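A sketch of that fallback (the store parameter and the looser 60-post IP cap are assumptions — a node-redis v4 client satisfies the incr/expire interface directly, and req.ip or equivalent supplies the address):

```javascript
// Rate-limit by durable visitor fingerprint when present; otherwise fall
// back to an IP-keyed counter. `store` is anything with incr(key) and
// expire(key, seconds), e.g. a connected node-redis client.
async function exceedsRate(store, visitorId, ip) {
  const key = visitorId ? `posts:v:${visitorId}` : `posts:ip:${ip}`;
  const count = await store.incr(key);
  if (count === 1) await store.expire(key, 3600);
  // IP caps are looser: NATs and shared networks fan many users into one IP.
  return visitorId ? count > 30 : count > 60;
}
```

Keeping the two key namespaces separate (posts:v: vs posts:ip:) means a fingerprinted visitor never shares a bucket with everyone else behind the same NAT.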

Shadow mode

Not every surface can silently reject content. A comments section on a news site where users expect their comment to appear might reasonably want to ship the post even on a bot verdict — but route it to a moderation queue, not to the public feed. Or score against a shadow threshold for 30 days before flipping to enforcement. Two patterns worth keeping separate:
  • Shadow scoring — verify the token, persist the verdict alongside the post, publish the post anyway. Used to baseline verdict distribution before you turn enforcement on. See Going to production.
  • Shadow ban — accept the post, publish it to the author only, suppress it from other feeds. Useful against low-grade spam where you want to waste the spammer’s time rather than tip them off that detection fired.
Whichever you pick, don’t mix them accidentally. A post that’s “shadow scored” should still appear publicly; a post that’s “shadow banned” should not.
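One way to keep them separate is to make the routing decision explicit in one place. A minimal sketch (routeBotVerdict and its mode strings are illustrative, not a Tripwire API; the caller persists and publishes according to the returned flags):

```javascript
// Decide what happens to a bot-verdict post under each shadow pattern.
// The caller acts on the flags; the verdict is persisted in every mode.
function routeBotVerdict(mode) {
  switch (mode) {
    case "shadow-score":
      // Publish publicly; keep the verdict to baseline distribution.
      return { publish: true, visibility: "public", persistVerdict: true };
    case "shadow-ban":
      // Accept and persist, but only the author ever sees it.
      return { publish: true, visibility: "author-only", persistVerdict: true };
    default:
      // Full enforcement: reject outright.
      return { publish: false, visibility: null, persistVerdict: true };
  }
}
```

Centralizing the choice makes the "don't mix them accidentally" rule checkable: a shadow-scored post is always public, a shadow-banned one never is.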

What’s next

API abuse & scraping

The read-side counterpart: allow crawlers, block LLM scrapers.

Signup protection

Stop the account factory before the posts start.

Verdicts & scoring

How verdict, risk_score, and attribution fit together.

Going to production

Report-only and shadow-mode rollout plans.