Thought Leadership February 17, 2026 · 6 min read

FAIR + Monte Carlo in Cyber Risk: What Works (and What Breaks)

FAIR can translate cyber risk into financial ranges, and Monte Carlo can make uncertainty explicit, but only if you treat inputs and validation honestly. Here is a pragmatic approach, common failure modes, and how Basirah anchors quantification to verified outcomes.

Synodician Research Desk
Security Research

FAIR and Monte Carlo simulation can translate cyber risk into the financial ranges that leadership needs to make decisions, but most implementations produce math theater instead of credible numbers, and the people who need to trust the output can tell.

You’ve felt the gap if you’ve ever tried to explain a “critical CVSS” vulnerability to a CFO. Security scores and severity labels aren’t business language. FAIR (Factor Analysis of Information Risk) exists to bridge that gap by expressing risk as probabilistic loss in financial terms. The question isn’t whether the translation is useful. It’s whether your implementation is honest enough to survive scrutiny.

[Figure: The spectrum from math theater to credible risk quantification. Input provenance is the dividing line.]

What FAIR actually is (and isn’t)

People hear “FAIR” and think it converts CVSS scores to dollars. It doesn’t. It’s not a pricing engine for CVEs, and it won’t replace the operational work of verifying that fixes actually landed.

What FAIR does is force you to make your assumptions explicit: frequency as a range, impact as a range, both with documented reasoning, so that leadership can see the tradeoffs they’re actually making. Risk is modeled as loss event frequency × loss magnitude, expressed as a distribution, not a point estimate. That’s a translation layer, not a crystal ball. Open FAIR is standardized by The Open Group (see References 1–2).
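
To make the structure concrete, here is a minimal sketch in Python (illustrative only; the full Open FAIR taxonomy decomposes frequency and magnitude into further factors, and the names below are assumptions, not a standard schema):

    # Minimal FAIR-style scenario: ranges plus documented reasoning.
    # Illustrative structure, not the full Open FAIR taxonomy.
    from dataclasses import dataclass

    @dataclass
    class Range:
        minimum: float
        most_likely: float
        maximum: float
        rationale: str  # why these bounds (the part that earns trust)

    @dataclass
    class Scenario:
        name: str
        loss_event_frequency: Range  # events per year
        loss_magnitude: Range        # dollars per event

    ransomware = Scenario(
        name="Ransomware on billing platform",
        loss_event_frequency=Range(0.1, 0.3, 1.0,
            rationale="two near-misses in five years; industry base rates"),
        loss_magnitude=Range(50_000, 400_000, 2_000_000,
            rationale="IR retainer plus downtime estimates from finance"),
    )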

Monte Carlo without the math degree

Monte Carlo simulation models uncertainty by running thousands of “possible worlds.”

Instead of pretending you know one exact value for frequency and impact, you define ranges (minimum / most-likely / maximum). The simulation repeatedly samples from those ranges and produces a distribution of outcomes. (Reference 6 is a good primer on Monte Carlo used for uncertainty analysis.)

That distribution is where the useful leadership conversation happens:

  • P50 (median): “typical” expected outcome (not a guarantee)
  • P90 / P95: “stress case” planning
  • Tail risk: low-probability, high-impact outcomes you still need to prepare for
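
The whole mechanic fits in a few lines. A minimal sketch, assuming triangular distributions over the min/most-likely/max ranges (one common simplification; PERT or lognormal fits are also defensible) and reusing the illustrative ransomware ranges from above:

    # Monte Carlo over min/most-likely/max ranges.
    # Multiplying frequency and magnitude samples directly is itself a
    # simplification; a fuller model draws a Poisson event count per year
    # and sums per-event losses.
    import numpy as np

    rng = np.random.default_rng(seed=42)  # fixed seed: reproducible runs
    trials = 100_000

    frequency = rng.triangular(0.1, 0.3, 1.0, size=trials)               # events/year
    magnitude = rng.triangular(50_000, 400_000, 2_000_000, size=trials)  # $/event
    annual_loss = frequency * magnitude

    p50, p90, p95 = np.percentile(annual_loss, [50, 90, 95])
    print(f"P50 ${p50:,.0f} | P90 ${p90:,.0f} | P95 ${p95:,.0f}")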

The point isn’t to sound sophisticated. The point is to be honest about uncertainty and still make decisions.

Why FAIR adoption is uneven

If FAIR is so useful, why isn’t everyone doing it?

1) Inputs are harder than formulas

Executives aren’t allergic to math. They’re allergic to unjustified inputs.

If your frequency estimate came from a vendor’s default setting, your impact figure came from a workshop where the loudest voice won, and your confidence interval came from nowhere at all, a “Monte Carlo output” won’t be trusted, because it shouldn’t be. That’s not a modeling failure. It’s an input failure.

NIST emphasizes this discipline: identifying and estimating risk in an enterprise context requires clarity on assumptions, context, and how estimates are produced (Reference 3).

2) Most orgs still operate in qualitative mode

Many risk programs still rely on heat maps, ordinal scoring, and qualitative labels. PwC (citing an HBR Analytic Services survey) notes that only a small minority report using open-source FAIR methodology or actuarial-style models (Reference 4).

That doesn’t mean FAIR is wrong. It means the adoption path has friction.

3) People confuse “precision” with “credibility”

Monte Carlo can output numbers with multiple decimal places. That’s not the point.

Credibility comes from:

  • input provenance (sketched below),
  • sensitivity analysis (“what drives this result?”),
  • and iteration over time (“did this model help us make better decisions?”).
Warning

If your Monte Carlo output is a single dollar figure to the cent (“CVE-2026-1234 = $183,492.17”), you’re doing math theater. The precision creates an illusion of certainty that the inputs don’t support.
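
Provenance, the first item on that list, is the easiest to operationalize: attach an audit trail to every input. A sketch (the field names are illustrative assumptions, not a standard schema):

    # Every model input carries its own audit trail.
    from dataclasses import dataclass

    @dataclass
    class InputProvenance:
        name: str        # e.g., "loss_event_frequency.max"
        value: float
        source: str      # where the number came from
        method: str      # how it was derived
        confidence: str  # "low" / "medium" / "high"
        as_of: str       # when it was last reviewed

    lef_max = InputProvenance(
        name="loss_event_frequency.max",
        value=1.0,
        source="internal incident log, 2021-2025",
        method="upper bound from worst observed year",
        confidence="medium",
        as_of="2026-02-01",
    )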

Three ways to waste a Monte Carlo budget

“CVE-2026-1234 is worth $183,492.17”

If your output is “CVE-2026-1234 is worth $183,492.17 of risk,” most stakeholders will (correctly) roll their eyes.

Do this instead. Use FAIR where it’s strongest:

  • for scenarios (e.g., ransomware on a critical business unit),
  • for portfolios (top drivers of exposure),
  • and for what-if comparisons (fix now vs defer vs compensate with controls; sketched below).
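
The what-if comparison is just two runs of the same model under different assumptions, reusing the triangular simplification from earlier (all ranges here are illustrative):

    # What-if: fix now vs defer (illustrative ranges).
    import numpy as np

    rng = np.random.default_rng(seed=7)
    trials = 100_000

    def annual_loss(lef, lm):
        # lef/lm are (min, most_likely, max) tuples.
        f = rng.triangular(*lef, size=trials)
        m = rng.triangular(*lm, size=trials)
        return f * m

    fix_now = annual_loss((0.05, 0.1, 0.3), (50_000, 400_000, 2_000_000))
    defer   = annual_loss((0.1, 0.3, 1.0),  (50_000, 400_000, 2_000_000))

    delta = np.percentile(defer, 50) - np.percentile(fix_now, 50)
    print(f"Modeled P50 exposure reduction from fixing now: ${delta:,.0f}")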

Quantification without verification

You can run a Monte Carlo simulation on a vulnerability, produce a credible-looking distribution, present the P50 and P90 to leadership, get the budget approved, deploy the fix, close the ticket, and never confirm the vulnerability is actually gone. The risk reduction you reported to the board is then a modeled delta against an assumed state, not a measured outcome. Without verification, it’s opinion.

What works: couple quantification to events that can be verified:

  • did the vulnerability disappear on re-scan?
  • did the control test pass?
  • did the exception expire?
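
In code, the coupling can be an explicit gate (a hypothetical structure with illustrative field names, not a real Basirah API): modeled risk delta only counts once an independent verification event exists.

    # Attribute modeled risk reduction only on verified outcomes.
    from dataclasses import dataclass

    @dataclass
    class Finding:
        id: str
        ticket_closed: bool
        rescan_clean: bool         # independent re-scan result
        modeled_risk_delta: float  # dollars, from the FAIR model

    def attributable_reduction(findings):
        # Ticket status never counts on its own; verification gates attribution.
        return sum(f.modeled_risk_delta for f in findings if f.rescan_clean)

    findings = [
        Finding("VULN-101", ticket_closed=True, rescan_clean=True, modeled_risk_delta=120_000),
        Finding("VULN-102", ticket_closed=True, rescan_clean=False, modeled_risk_delta=80_000),
    ]
    print(attributable_reduction(findings))  # 120000: closed-but-unverified does not count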

“Why did the number change?”

If stakeholders can’t answer “why did the number change?” it won’t survive contact with reality.

The fix is simple. Ship transparent artifacts with every run:

  • what inputs were used,
  • what changed since last run,
  • and which assumptions dominate the output.
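
One concrete form for those artifacts is a run manifest shipped alongside every result. A sketch (field names are illustrative assumptions):

    # A run manifest that travels with every model output.
    import json

    manifest = {
        "model_version": "2026.02",
        "run_date": "2026-02-17",
        "inputs": {
            "loss_event_frequency": {"min": 0.1, "mode": 0.3, "max": 1.0,
                                     "source": "incident log 2021-2025"},
            "loss_magnitude": {"min": 50_000, "mode": 400_000, "max": 2_000_000,
                               "source": "finance downtime estimates"},
        },
        "changed_since_last_run": ["loss_event_frequency.max: 0.8 -> 1.0"],
        "dominant_assumptions": ["loss_magnitude.max drives most of the P95 variance"],
    }
    print(json.dumps(manifest, indent=2))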

When FAIR isn’t the answer yet

If a customer says “we don’t do FAIR,” you can still build a credible risk program with the tools below, and you can still introduce financial quantification later.

CVSS tells you severity, not cost

CVSS is useful for consistent technical severity, but it doesn’t incorporate business impact, exposure context, or threat activity by itself (Reference 7). Treat it as one input, not the decision.

CVSS tells you severity. EPSS tries to tell you likelihood.

EPSS

EPSS estimates the probability of exploitation for vulnerabilities (Reference 8). It’s a strong likelihood signal, but it doesn’t tell you what exploitation would cost you.

But neither tells you what’s actually being exploited right now.

KEV: high signal, incomplete picture

CISA’s Known Exploited Vulnerabilities catalog is a high-signal set of “this is actively exploited” items (Reference 9). It’s excellent for urgency, but it doesn’t cover everything and still needs asset context.

Stack all three and you have prioritization. You still don’t have dollars.
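
One illustrative way to stack the three signals (the weights and floors below are assumptions chosen to show the shape of the logic, not a standard; real programs tune against their own asset context):

    # Combine CVSS (severity), EPSS (likelihood), KEV (active exploitation)
    # into a single priority signal. Weights and floors are illustrative.
    def priority(cvss: float, epss: float, in_kev: bool, asset_critical: bool) -> float:
        score = (cvss / 10.0) * epss       # severity weighted by likelihood
        if in_kev:
            score = max(score, 0.9)        # actively exploited floors near the top
        if asset_critical:
            score = min(score * 1.5, 1.0)  # business-context multiplier, capped
        return score

    print(priority(cvss=9.8, epss=0.02, in_kev=False, asset_critical=True))  # high severity, low likelihood
    print(priority(cvss=7.5, epss=0.60, in_kev=True,  asset_critical=True))  # actively exploited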

The practical stack

The ordering matters: verification has to come first because everything built on top of it (ownership, SLAs, prioritization scores, financial models) assumes the underlying state is accurate, and if you’re quantifying risk against vulnerabilities that were “fixed” but weren’t, you’re producing numbers that are precise, plausible, and wrong.

  1. Verify what’s real (no “ticket closed = fixed” shortcuts)
  2. Add ownership + SLAs to make remediation reliable
  3. Use CVSS/EPSS/KEV + asset criticality for prioritization
  4. Add FAIR/Monte Carlo where the decision warrants it (budget, roadmap, control tradeoffs)

A pragmatic way to adopt FAIR without “math theater”

If you want FAIR to stick, treat it like an operating discipline:

  1. Start narrow. Pick one business service or one exposure class that leadership cares about.
  2. Use ranges, not single numbers. Document min/mode/max and why.
  3. Track confidence explicitly. Low-confidence outputs should be labeled and deprioritized.
  4. Run sensitivity analysis. Show what drives variance so stakeholders know what to validate (sketched after this list).
  5. Tie the model to outcomes. Update baselines only when verification or control evidence changes.
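
For step 4, a minimal one-at-a-time sensitivity sketch, reusing the triangular simplification (a tornado chart is the usual way to present the result):

    # One-at-a-time sensitivity: pin each input at its bounds and
    # measure the swing in P90 annual loss.
    import numpy as np

    rng = np.random.default_rng(seed=0)
    trials = 100_000
    base = {"lef": (0.1, 0.3, 1.0), "lm": (50_000, 400_000, 2_000_000)}

    def sample(spec):
        lo, mode, hi = spec
        if lo == hi:  # a pinned input becomes a constant
            return np.full(trials, mode)
        return rng.triangular(lo, mode, hi, size=trials)

    def p90(inputs):
        return np.percentile(sample(inputs["lef"]) * sample(inputs["lm"]), 90)

    for name, (lo, mode, hi) in base.items():
        swing = p90({**base, name: (hi, hi, hi)}) - p90({**base, name: (lo, lo, lo)})
        print(f"{name}: P90 swing ${swing:,.0f}")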

This is also where the industry is headed: risk quantification is increasingly being treated as part of enterprise risk management practice, not a spreadsheet add-on (see NIST’s IR 8286 series, Reference 3).

Key Takeaway

The pragmatic adoption path: start narrow (one business service), use ranges (not points), track confidence, run sensitivity analysis, and tie the model to verified outcomes. FAIR sticks when it’s treated as an operating discipline, not a spreadsheet exercise.

Risk in dollars, but only after proof

Basirah is built around a simple promise: risk reduction must be earned by verified outcomes.

That changes what “risk in dollars” is used for:

  • Prioritization: not “highest CVSS,” but “highest modeled impact with context.”
  • What-if decisions: “What happens if we fix it this sprint vs next quarter?”
  • Risk reduction attribution: Basirah attributes modeled risk delta only after remediation is independently verified, not when a ticket is closed.

Even if you never adopt FAIR deeply, the operational foundation still stands:

  • Owned work items with SLA clocks
  • Independent re-scan verification
  • Evidence packages that explain what changed, when, and why

A risk model that’s never been tested against actual outcomes is just an opinion with a distribution curve.

Want to translate remediation into board-level decisions without pretending uncertainty doesn’t exist? Book a Basirah walkthrough.

References

  1. Open FAIR Standard (The Open Group), accessed Feb 17, 2026
  2. Open FAIR Body of Knowledge (C20B) (The Open Group), accessed Feb 17, 2026
  3. NIST IR 8286A Rev. 1: Identifying and Estimating Cybersecurity Risk for Enterprise Risk Management (NIST), accessed Feb 17, 2026
  4. Cyber Risk Quantification and Management (PwC), accessed Feb 17, 2026
  5. 2025 State of Cyber Risk Management (FAIR Institute), accessed Feb 17, 2026
  6. Consistency of Monte Carlo uncertainty analyses (NIST), accessed Feb 17, 2026
  7. Common Vulnerability Scoring System v3.1: Specification Document (FIRST), accessed Feb 17, 2026
  8. Exploit Prediction Scoring System (EPSS) (FIRST), accessed Feb 17, 2026
  9. Known Exploited Vulnerabilities (KEV) Catalog (CISA), accessed Feb 17, 2026
#FAIR #Monte Carlo #cyber risk quantification #risk management #CVSS #EPSS #KEV
