Analysis · April 2026 · 8-minute read
Most enterprise risk management departments today are running a methodology codified in the late 1990s that hasn’t materially changed since: heat maps, qualitative scoring (1–5, low/medium/high), risk registers maintained quarterly, audit-driven RCSAs. Practitioners call this stack RM1.
RM2 is the loose name for the alternative emerging across quantitative risk teams: probabilistic models, decision-support analysis, Monte Carlo simulation, Bayesian methods, and FAIR-style loss-distribution modelling. The split isn’t ideological — it’s about whether risk analysis exists to inform real decisions or to populate compliance reports.
The compliance trap
RM1’s structural problem is well-documented. A 2024 study by the Society for Risk Analysis surveyed 312 enterprise risk leaders and found that 71% could not identify a single capital allocation, project decision, or insurance purchase in the previous 12 months that had been changed by their risk register. The risk register, in other words, was being maintained but not used.
This is the compliance trap: an entire function busy producing artefacts that never reach decision-makers. The cost is not just the wasted effort. It’s the opportunity cost of not having quantitative risk inputs in front of the executives who set risk appetite, allocate capital, and approve major contracts.
What RM2 looks like in practice
The shift is most visible in three areas:
- Credit and receivables. Replacing rule-based credit limits with portfolio-level loss distributions. A Latin American services firm cited in 2024 case literature reduced bad debt from 4.2% to 1.1% of revenue in 18 months by switching from sales-team-driven credit decisions to credit VaR modelling.
- Project contingencies. Replacing flat-percentage reserves (typically 10–20%) with Monte Carlo simulation across project portfolios. A Brazilian infrastructure developer ran simulations across 12 concurrent projects and reduced aggregate contingency from 18% to 11.5% — freeing $4.2M for redeployment while raising delivery confidence from 65% to 85%.
- Insurance optimisation. Modelling actual loss distributions to identify where insurance creates value vs. where retention is cheaper. A logistics firm cited in 2024 case data raised property and auto deductibles from $50K to $250K (saving $340K annually) while adding $10M in cyber coverage for $85K — net annual saving of $255K with materially better tail-risk coverage.
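The credit VaR idea in the first bullet reduces to simulating default outcomes across the receivables book and reading off a tail percentile. The sketch below uses an invented portfolio — the customer count, exposures, and default probabilities are illustrative assumptions, not data from the case above.

```python
import random

random.seed(1)

# Hypothetical receivables portfolio: (exposure in $K, default probability).
# All values are invented for illustration.
portfolio = [(random.uniform(5, 50), random.uniform(0.005, 0.06))
             for _ in range(400)]

N = 20_000
# Each trial: draw which customers default, sum the exposures lost.
losses = sorted(
    sum(exposure for exposure, pd in portfolio if random.random() < pd)
    for _ in range(N)
)

expected_loss = sum(losses) / N
var_95 = losses[int(0.95 * N)]  # 95th-percentile portfolio loss
print(f"Expected loss: ${expected_loss:,.0f}K, 95% credit VaR: ${var_95:,.0f}K")
```

The gap between the expected loss and the 95% VaR is what rule-based per-customer limits never surface: the portfolio-level question of how bad a bad year gets.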
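The portfolio-contingency approach in the second bullet can be sketched as a Monte Carlo pass over per-project cost distributions. Everything here is assumed for the example — the twelve projects, the $10M base budgets, and the triangular cost parameters are invented, not taken from the case above.

```python
import random

random.seed(42)

# Hypothetical per-project cost uncertainty as (low, high, mode) multiples
# of the base budget. Parameters are illustrative only.
projects = [(0.95, 1.30, 1.00) for _ in range(12)]
base_budget = 10.0  # $M per project, illustrative
baseline = base_budget * len(projects)

N = 50_000
# Aggregate cost across the whole portfolio, per trial.
totals = sorted(
    sum(base_budget * random.triangular(lo, hi, mode)
        for lo, hi, mode in projects)
    for _ in range(N)
)

# Contingency needed for 85% delivery confidence at portfolio level:
# the P85 of the aggregate distribution, minus the baseline budget.
p85 = totals[int(0.85 * N)]
contingency_pct = (p85 - baseline) / baseline * 100

# For contrast: the P85 reserve if each project is buffered independently
# (no diversification across projects).
single = sorted(base_budget * random.triangular(0.95, 1.30, 1.00)
                for _ in range(N))
single_pct = (single[int(0.85 * N)] - base_budget) / base_budget * 100

print(f"Portfolio P85 contingency:   {contingency_pct:.1f}% of baseline")
print(f"Per-project P85 contingency: {single_pct:.1f}% of baseline")
```

Because overruns here are independent across projects, the aggregate P85 contingency comes out well below the sum of per-project reserves — the same diversification effect behind the 18% to 11.5% reduction the case describes.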
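The retention question in the third bullet is a comparison between premium saved and extra losses retained under the higher deductible. A minimal sketch, assuming a Poisson loss frequency and lognormal severities — the frequency, severity parameters, and premium figure below are invented for the example, not the logistics firm’s actual data.

```python
import math
import random

random.seed(7)

# Illustrative parameters only -- not from the case above.
LAMBDA = 6                           # expected insurable losses per year
MU, SIGMA = math.log(20_000), 1.2    # lognormal severity, median $20K
PREMIUM_SAVING = 340_000             # annual premium saved at higher deductible
N_YEARS = 20_000                     # simulated years

def poisson(lam):
    """Knuth's method for a Poisson-distributed loss count."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        p *= random.random()
        k += 1
    return k - 1

def mean_retained(deductible):
    """Average annual losses kept by the firm below the deductible."""
    total = 0.0
    for _ in range(N_YEARS):
        for _ in range(poisson(LAMBDA)):
            total += min(random.lognormvariate(MU, SIGMA), deductible)
    return total / N_YEARS

extra_retained = mean_retained(250_000) - mean_retained(50_000)
net_saving = PREMIUM_SAVING - extra_retained
print(f"Extra retained losses/yr: ${extra_retained:,.0f}")
print(f"Net annual saving:        ${net_saving:,.0f}")
```

When the extra retained losses come in well under the premium saved, the higher deductible is the cheaper risk transfer — the calculation that motivated the deductible increase in the case.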
Why the transition is slow
If the upside is this clear, why hasn’t every risk function moved? Three reasons we hear consistently:
Quantitative literacy gap. Most ERM teams were hired against RM1 job specs and trained accordingly. Senior risk managers are often genuinely unfamiliar with probability distributions, Bayesian updating, or simulation — and unwilling to publicly acknowledge gaps.
Compliance signalling. Auditors, regulators, and boards have built mental models around heat maps. Replacing them with probability distributions creates communication friction that risk leaders aren’t always willing to absorb.
Tool inertia. The dominant ERM software platforms (Riskonnect, Archer, MetricStream) were built for register-and-heat-map workflows. They don’t natively support quant approaches, and switching tools is expensive.
What 2030 will probably look like
Three predictions, with our confidence intervals attached:
- RM1 will not disappear — it will be relegated to compliance reporting. Heat maps and registers will continue as audit artefacts, but decision-grade risk analysis will increasingly happen in quantitative teams operating separately from second-line ERM. Confidence: high.
- FAIR will become the dominant cyber-risk methodology. Cyber is where the quantitative shift has the strongest momentum, and FAIR’s standardised vocabulary gives it network effects. By 2030, large enterprises that don’t run FAIR for cyber will be outliers. Confidence: medium-high.
- AI will commoditise basic quant risk work. Monte Carlo simulation, Bayesian updating, and distributional analysis are tasks where LLMs and specialised models can already produce defensible outputs. The bottleneck will move from running models to specifying them well — which means risk leaders need to develop intuitions about what to ask, not how to compute. Confidence: medium.
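To make concrete how mechanical the computation already is, a Bayesian update of a loss-event rate is a few lines of arithmetic — the prior and observed counts below are invented for illustration:

```python
# Beta-Binomial update of a monthly loss-event probability.
# Prior and observations are illustrative assumptions.
alpha, beta = 2, 38          # prior: roughly a 5% monthly event probability
events, months = 4, 24       # observed: 4 events in 24 months

alpha_post = alpha + events
beta_post = beta + (months - events)
posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"Posterior mean event probability: {posterior_mean:.3f}")
```

The hard part is not these lines; it is choosing a defensible prior and deciding which decisions the posterior should feed — exactly where the specification bottleneck lands.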
Where to start
For ERM teams considering the shift, the most practical entry points are credit decisions and project contingencies. Both have measurable losses you can model with existing data, both produce financial outcomes you can attribute to the analysis, and neither requires auditor or regulator buy-in. Insurance optimisation is the third common entry point.
The hard part is not the maths. The hard part is connecting risk analysis to decisions that already exist, instead of analysing risks in parallel to decisions. RM2 is decision-centric. RM1 is artefact-centric. That’s the actual difference.
Last updated: 29 April 2026.