Modeling Applications as Adversarial State Machines

Fri Jan 2, 2026

Most web applications aren’t insecure because of bad code.
They’re insecure because they rely on optimistic assumptions about state.

Applications are built to be linear. Users authenticate, get authorized, perform actions, and complete workflows in order. Each step assumes the previous one happened correctly and that nothing meaningful changed in between.

Attackers don’t follow that model.

They treat the application as a state machine with undefined transitions and actively try to force it into states the developers never intended to exist.

That’s where the real failures show up.


Applications Assume the Happy Path

Most applications implicitly model state like this:

  • Unauthenticated
  • Authenticated
  • Authorized
  • Action performed
  • Workflow complete

Most security testing mirrors that thinking:

  • Can I skip auth?
  • Can I bypass authorization?
  • Can I inject something?

Those checks matter, but they’re not where serious compromises usually come from.

Real attacks don’t skip steps.
They desynchronize them.


State Is Not a Boolean

State is not isAuthenticated = true.

Real application state includes:

  • Identity (user, role, tenant)
  • Session context and freshness
  • Authorization scope
  • Business workflow position
  • Assumed trust (device, network, physical access)

These are almost always tracked separately.

That separation is convenient for developers.
It’s dangerous under adversarial pressure.


Every Endpoint Is a State Transition

From an attacker’s perspective, every request is an attempt to move the application between states:

State A -> Request -> State B

Vulnerabilities emerge when:

  • A transition is allowed from an invalid prior state
  • A transition mutates more state than intended
  • Transitions interact in an order no one modeled

At that point, payloads stop mattering.

Exploitation becomes about forcing contradictions.
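One way to make those failure modes concrete is an explicit transition table: anything absent from the table is an undefined transition. A sketch with invented states and request names:

```python
# Hypothetical transition table. Anything not listed is an undefined
# transition -- exactly where an attacker will probe.
TRANSITIONS = {
    ("unauthenticated", "login"): "authenticated",
    ("authenticated", "start_checkout"): "checkout",
    ("checkout", "confirm_payment"): "paid",
}

def transition(state: str, request: str) -> str:
    next_state = TRANSITIONS.get((state, request))
    if next_state is None:
        # The application never modeled this transition. A forgiving
        # handler that processes the request anyway is the vulnerability.
        raise ValueError(f"undefined transition: {state} --{request}-->")
    return next_state

print(transition("unauthenticated", "login"))  # authenticated
# Out of order -- most real applications never check the prior state:
# transition("unauthenticated", "confirm_payment")  # raises ValueError
```

Most applications implement only the forward edges and never reject the rest; the table above is the model the attacker reconstructs by probing.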


Exploitation Is State Desynchronization

High-impact web compromises are usually boring:

  • No injection
  • No memory corruption
  • No exotic exploit chains

Instead, you see:

  • Valid sessions paired with invalid roles
  • Workflows marked complete without prerequisites
  • Authorization decisions based on stale identity data
  • Digital authority granted because physical trust was assumed

Nothing is broken in isolation.

What’s broken is the invariant the application assumes will always hold.


A Pattern That Shows Up Everywhere

Intended flow:

  1. Register
  2. Verify email
  3. Subscribe
  4. Access premium features

Actual enforcement:

  • Email verification checked client-side
  • Subscription endpoint trusts session state
  • Premium feature checks a single flag

Attack path:

  1. Register
  2. Skip verification
  3. Call subscription endpoint directly
  4. Access premium features with an identity that should not exist
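The broken enforcement can be sketched in a few lines (endpoint and flag names are hypothetical). Each handler checks only its local precondition, so the verification step is never actually required server-side:

```python
# Hypothetical sketch of the flawed enforcement: no endpoint
# re-validates the full workflow history.
users: dict[str, dict] = {}

def register(user_id: str) -> None:
    users[user_id] = {"email_verified": False, "premium": False}

def subscribe(user_id: str) -> None:
    # BUG: trusts that a session exists; never checks email_verified.
    users[user_id]["premium"] = True

def premium_feature(user_id: str) -> str:
    # BUG: a single flag stands in for the whole workflow.
    return "granted" if users[user_id]["premium"] else "denied"

# Attack path: register, skip verification, call subscribe directly.
register("attacker")
subscribe("attacker")
print(premium_feature("attacker"))          # granted
print(users["attacker"]["email_verified"])  # False
```

Every function behaves exactly as written; the failure is that no single check owns the invariant "premium implies verified."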

No scanner flags this.

The application didn’t fail input validation.
It failed to validate reality.


Why Scanners Miss This

Scanners test inputs.

They don’t:

  • Track evolving application state
  • Reason about workflow position
  • Model cross-endpoint side effects
  • Explore invalid transition ordering

This is why business logic flaws, authorization bugs, and session issues survive “clean” assessments.

The problem isn’t coverage.
It’s the model.


Where Automation Actually Helps

Automation is useful here, but not as autonomy.

The goal isn’t automated exploitation.
The goal is state awareness.

Effective offensive automation:

  • Tracks identity, session, and workflow state
  • Explores legal vs. illegal transitions
  • Surfaces broken invariants
  • Supports operator reasoning instead of replacing it
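A toy version of that idea, modeled loosely on the subscription pattern above (all names hypothetical): instead of fuzzing inputs, enumerate request orderings against a stateful target and surface the orderings that end in a state violating a stated invariant.

```python
from itertools import permutations

# Hypothetical stateful target; the harness, not the payloads,
# does the work of finding broken invariants.
def run(sequence: tuple[str, ...]) -> dict:
    state = {"registered": False, "verified": False, "premium": False}
    handlers = {
        "register":  lambda s: s.update(registered=True),
        "verify":    lambda s: s.update(verified=True),
        "subscribe": lambda s: s.update(premium=True),  # BUG: no checks
    }
    for request in sequence:
        handlers[request](state)
    return state

def invariant(state: dict) -> bool:
    """Premium access must imply a verified, registered identity."""
    return not state["premium"] or (state["registered"] and state["verified"])

# Explore transition orderings and report the ones that break the invariant.
steps = ("register", "verify", "subscribe")
violations = [p for p in permutations(steps, 2) if not invariant(run(p))]
print(violations)  # every two-step ordering that reaches premium illegally
```

The output is a list of orderings, not a finding: an operator still decides which desynchronized states are reachable and exploitable in the real application.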

This is hard to generalize, which is why most tools avoid it.

It’s also where real attackers live.


Why This Matters for Adversary Emulation

Real attackers don’t ask:

“Is this endpoint vulnerable?”

They ask:

“What happens if I do things out of order?”

Modeling applications as adversarial state machines aligns testing with how compromises actually happen — quietly, incrementally, and by exploiting assumptions rather than bugs.

If your assessment only validates the happy path, you’re testing a system attackers don’t use.

They never walk it.