Why most ICS 'experts' are wrong — and why it’s putting plants at risk


What we think we know is what hurts us most

Industrial Control Systems (ICS) don’t fail because of what engineers don’t know. They fail because of what they think they know — but never validate. Walk into a refinery or power plant today and you’ll hear it repeated like a mantra:

“We follow the Purdue model.”

“Our network is segmented.”

“The vendor is ISO certified.”

But in 2025, none of those statements are guarantees of safety — or even meaningful. They represent assumptions. And when you stack assumptions in a system with complex feedback loops and time-sensitive dependencies, you don’t just get risk — you get fragility.

The expert illusion

As Nassim Taleb famously said, many “experts” aren’t problem solvers — they’re narrators. They wear ties. Build layered Visio diagrams. Cite ISO frameworks. Speak fluently in acronyms. They look impressive in meetings. They present five-point “resilience roadmaps.” And they almost always leave out one detail:

Where is the logic, and who controls it?

Because when firmware is silently updated from a remote cloud server in another jurisdiction — without any audit trail — or when a plant’s core control logic is locked inside a proprietary OEM tool with no export function, none of those diagrams matter.

You can’t draw your way out of a system you don’t understand.

And you certainly can’t secure what you don’t control.

The real threat: unknown unknowns

The ICS world remains obsessed with known threats:

  • Patch cycles
  • Penetration tests
  • Firewall configurations
  • Automated asset inventories
  • Compliance matrices no one understands

These all have their place — but they protect against what we already expect. They are necessary, but they are not sufficient. What actually breaks production in 2025 are the things no one tracked:

  • An undocumented Modbus register misread by the billing system
  • A “minor” firmware update that silently desynchronizes a protection relay
  • Logic lost because no one backed it up — or even knew where it lived

These aren’t theoretical. They are real failure modes in real plants. I’ve seen them. You’ve probably experienced some form of them. And they happen not because of hackers, but because of blind trust in known structures — and the belief that the last audit means you’re protected.
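The Modbus case is easy to reproduce, because the wire format carries only raw 16-bit words: signedness and scaling live in documentation, or nowhere. A minimal Python sketch of how two consumers of the same undocumented register can disagree (the register semantics and values here are invented for illustration):

```python
import struct

def decode_register(raw: int, signed: bool = False, scale: float = 1.0) -> float:
    """Decode a single 16-bit Modbus holding register value.

    `signed` and `scale` must come from device documentation.
    For an undocumented register, every consumer has to guess both.
    """
    if signed:
        # Reinterpret the unsigned 16-bit wire value as two's complement.
        (raw,) = struct.unpack(">h", struct.pack(">H", raw))
    return raw * scale

# Suppose a flow meter reports -5 (reverse flow) as the
# two's-complement word 0xFFFB. Same bits, two interpretations:
wire_value = 0xFFFB

plant_reading = decode_register(wire_value, signed=True, scale=0.1)     # -0.5
billing_reading = decode_register(wire_value, signed=False, scale=0.1)  # ~6553.1
```

The control room sees a small reverse flow; the billing system, treating the word as unsigned, sees a huge positive one. Nothing alarms, nothing crashes, and the discrepancy surfaces only at reconciliation time.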

Enter antiknowledge

Instead of pretending we can forecast every threat, we need architectures based on antiknowledge — what Nassim Taleb defines as the disciplined awareness of what we don’t know. Because no matter how detailed the network diagram, the system is more complex than it appears.

Control systems should be designed not to prevent surprises, but to survive them. They should assume not that things “won’t go wrong,” but that something already has — and we just haven’t seen the effects yet.

That’s why we’re no longer building ICS systems based on compliance checklists from 1993. We build for what happens:

  • When vendors disappear
  • When internet links drop
  • When cloud APIs go dark
  • When systems run for six days on corrupted data before anyone notices
  • When “routine” updates silently reconfigure certified logic
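The last two failure modes are partly detectable with very little machinery: keep an independent fingerprint of every exported logic and configuration file, and diff it on a schedule. A hedged sketch in Python — the file names and the JSON baseline format are assumptions for illustration, not any vendor's tooling:

```python
import hashlib
import json
from pathlib import Path

def fingerprint(path: Path) -> str:
    """SHA-256 of an exported logic/config file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_baseline(exports: dict[str, Path], baseline_file: Path) -> list[str]:
    """Compare current exports against a stored baseline.

    First run: records the baseline and returns an empty list.
    Later runs: return the names of exports whose hash changed,
    e.g. after a "routine" vendor update nobody announced.
    """
    current = {name: fingerprint(p) for name, p in exports.items()}
    if not baseline_file.exists():
        baseline_file.write_text(json.dumps(current, indent=2))
        return []
    baseline = json.loads(baseline_file.read_text())
    return [name for name, h in current.items() if baseline.get(name) != h]
```

This does not prevent a silent reconfiguration; it turns one into a visible event, and the baseline file doubles as an inventory of where the logic actually lives.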

Beyond Purdue: a mindset, not a model

The Purdue Model helped us bring order to chaos in the 1990s. But it was designed for serial protocols, static logic, and air-gapped systems. It assumes:

  • Fixed boundaries
  • Single-vendor vertical stacks
  • Known dependencies
  • Predictable flows
  • Clear zones of trust

But today’s plants run on cloud-assisted firmware, multi-vendor integrations, real-time external telemetry, and logic that can be overwritten remotely. The assumptions behind Purdue no longer hold.

Beyond Purdue is not a new diagram. It is a rejection of false certainty.

It means thinking like a survivor — not like a planner.

Prepare for what you don’t see coming

The next critical failure will not be predicted. It will not appear in a pen test report or compliance checklist. It will come from a trusted system acting unpredictably — because something changed, silently, upstream.

That change might be a patch.

It might be a new API endpoint.

It might be an “AI optimisation” pushed without authorisation.

And if you still think you're protected because your firewall works and your vendor is certified, you’ve already missed the real threat.

Energy Connects includes information by a variety of sources, such as contributing experts, external journalists and comments from attendees of our events, which may contain personal opinion of others.  All opinions expressed are solely the views of the author(s) and do not necessarily reflect the opinions of Energy Connects, dmg events, its parent company DMGT or any affiliates of the same.
