Your cybersecurity investments may be creating new vulnerabilities, and most boards never see it coming. Here’s a pattern I’ve watched play out across boardrooms for years. You’ll recognise it instantly.
A board approves a significant cyber budget increase. Usually after a near-miss. Or a breach in their sector that made headlines. The money flows. Compliance boxes get ticked. Six months later, a reassuring report lands: spend is on track, frameworks are in place.
Everyone exhales.
Twelve months after that? I watch the same organisation respond to an incident with no clear containment plan, undefined ownership, and a leadership team genuinely shocked that the money hadn’t solved the problem.
The budget was designed to satisfy reporting requirements, not to reduce actual exposure.
The board funded what was visible and measurable. Nobody asked what the investment left unaddressed. Nobody defined what “contained” actually looked like before the incident. Only after.
And here’s what makes this dangerous: the false confidence that follows a large cyber investment is one of the most hazardous conditions a board can create. It generates a governance blind spot at the precise moment directors believe they’ve got governance covered.
So where does it all go wrong?
What Gets Funded vs What Gets Left Out
Boards fund what produces a report.
Compliance certifications. Penetration testing cycles. Vulnerability scanning platforms. Awareness training completion rates. All legitimate investments. But they share one characteristic: they generate a document that can be tabled at a board meeting and filed as evidence of progress.
Boards fund what they can see in a dashboard and defend in an audit.
Simple as that.
What gets left out? The capacity to respond when those controls fail. And they will fail. That’s not pessimism. That’s the operating reality of any threat environment that evolves faster than the frameworks designed to address it.
The numbers don’t lie. Despite 93% of organisations raising their cyber budgets by at least 10%, a staggering 70% remain stuck in formative or beginner stages of maturity. Only 2% of companies worldwide claim full resilience against cybersecurity threats.
Think about that for a moment. More money. Same vulnerability.
What I see consistently underfunded is incident response capability. Not the plan document (most organisations have one of those gathering dust). The actual operational readiness: trained teams who’ve rehearsed breach scenarios, escalation paths that don’t rely on one person knowing the right phone number, pre-agreed containment thresholds, board-level decision protocols for when an incident crosses from technical event to business crisis.
I’ve also seen chronic underinvestment in third-party risk management beyond the first tier. Boards will fund extensive controls on their own environment whilst applying almost no governance discipline to the suppliers and partners who have direct access to their most sensitive systems.
The compliance framework says “assess your vendors.” The budget reflects a checkbox, not a genuine control.
Which brings us to a critical distinction most boards miss entirely.
The Plan vs The Capability
A plan is a document.
A capability is what happens at 2am on a Sunday when nobody can find that document, nobody’s rehearsed it, and the person whose name appears on page four left the organisation eight months ago.
I’ve reviewed incident response plans that are beautifully constructed. Logical sequence, clear ownership, defined escalation paths. Then I’ve watched organisations with those exact plans respond to an incident with all the coordination of a first-time event.
Because it was a first-time event.
The plan had never been tested under pressure. Never with real decision-makers in the room. Never against a scenario that didn’t follow the script.
So what separates a plan from a capability?
Three things.
First, rehearsal. A capability has been exercised. Board directors and the executive team have sat through a simulated breach scenario, made real-time decisions under pressure, and identified where the plan breaks down before an adversary does. Most organisations have never done this at board level.
Second, pre-agreed thresholds. A capability means the board has defined, in advance, what level of incident triggers what level of response. What constitutes a notifiable breach. When the CEO gets personally involved. When external legal counsel is engaged. When the regulator gets called.
These decisions made mid-incident, under pressure, with incomplete information? That’s where organisations make their most expensive mistakes. A plan lists these steps. A capability means the people responsible for executing them have practised making those calls.
Third, ownership that survives personnel change. A plan names roles. A capability ensures those roles are filled, trained, and tested continuously.
I’ve seen organisations spend considerable sums on incident response planning and almost nothing on incident response readiness. In Cyber Insecurity: The Silent Risk in Your Boardroom, I frame readiness as the metric that matters. It’s the one that tells you whether the investment will actually perform when the board needs it to.
Let me show you what this looks like in practice.
The Governance Triggers That Never Get Set
I’ll protect the organisation’s identity, as I always do. But this pattern repeats often enough that it’s become a type, not an exception.
A mid-sized financial services organisation experienced what their security team initially classified as a contained network intrusion. Saturday morning. Technical indicators suggested limited lateral movement. The assessment delivered to the CEO was clear: situation under control.
Except nobody had defined, before that Saturday morning, what “under control” actually meant in terms the board could act on.
No pre-agreed threshold that said: at this point, notify the regulator. At this point, engage external legal counsel. At this point, the board chair gets called directly, not briefed through management.
So the CEO made a judgement call. Reasonable. Experienced. Well-intentioned. He decided to wait for more information before escalating. That decision held for thirty-six hours whilst the team worked to confirm the scope.
What emerged after thirty-six hours? A breach significantly larger than the initial assessment suggested.
A regulatory notification obligation that had been running since hour four. A board learning the full picture at the same time as external counsel.
The financial cost compounded by the notification delay. The reputational cost compounded by the board’s visible surprise in the days that followed. And the governance cost, the one that persisted longest, was a single question: had the board maintained adequate oversight of a material risk event?
None of that stemmed from a bad plan. The plan existed. The failure was the absence of pre-agreed thresholds that removed human judgement from decisions that should never be made under pressure for the first time.
What this taught me: the most important part of incident response readiness isn’t the technical response steps. It’s the decision architecture the board sets before an incident occurs. The governance triggers. Who decides what, at what point, based on what criteria.
That conversation belongs in the boardroom. In a calm environment. With time to think. Because it won’t be calm when it matters.
Yet most boards never have this conversation at all. Why?
Why Boards Separate Budget From Governance Design
Boards have been trained, largely by the security industry itself, to treat cyber budgets as technology procurement decisions rather than governance design decisions.
When the annual cyber budget lands on the agenda, it arrives as a line item. A spend recommendation. A compliance rationale. A list of tools or services to be purchased. The conversation that follows is purely financial: Is the number justified? How does it compare to last year? What does the benchmarking say?
What almost never happens? The board asking: what governance decisions do we need to make before this budget can actually work?
What thresholds are we setting? What decisions are we pre-authorising management to make, and which require board involvement? What does success look like in an incident, not in a compliance report?
Those questions feel separate because the budget process has clear ownership. The CFO approves it. The audit committee reviews it. The board ratifies it. Done. The governance design questions? They feel less structured. No natural home on the agenda. No number to produce.
And boards are far more comfortable with decisions that produce a number.
There’s a deeper pattern here. Many boards unconsciously treat budget approval as the point they’ve discharged their cyber governance responsibility for that year. Spend approved. Framework funded. Committee’s job done.
The idea that the most important cyber governance decisions haven’t been made yet, that they sit in the decision architecture rather than the spend allocation, is genuinely uncomfortable for a board that believes it’s already acted.
This discomfort creates a specific kind of problem.
The Metrics Problem
When boards prefer quantifiable decisions, they consistently over-invest in cyber capabilities that produce metrics and under-invest in capabilities that produce judgement.
Detection tools generate dashboards. Compliance frameworks generate certificates. Penetration testing generates a scored report. These investments win budget conversations because they produce something a board can point to, review, and file as evidence of oversight.
Incident response readiness? Doesn’t produce a clean number. Neither does the quality of board-level decision-making under pressure. Neither does the depth of third-party risk governance beyond the first supplier tier.
Yet these are the capabilities that determine how a breach unfolds. They’re chronically underfunded because they resist the metrics boards have been conditioned to trust.
The current data is stark. It takes an average of 258 days for IT and security professionals to identify and contain a data breach. That’s not a technology problem. That’s a governance problem.
There’s a second dimension here that goes beyond individual budget lines. The preference for numbers creates a reporting culture where CISOs learn, over time, to present what the board will approve rather than what the organisation genuinely needs.
It’s not dishonesty. It’s adaptation.
When a board consistently responds well to compliance metrics but struggles to engage with qualitative risk assessments, the next budget submission will contain more compliance metrics. The board’s preferences shape the information it receives. That information shapes the decisions it makes. Those decisions shape the risk the organisation actually carries.
I’ve sat in rooms where the security leader knew the most significant exposure in their environment was a people and process problem with no clean metric attached. I’ve watched that same leader present a technology spend recommendation instead. Why? Because that was the conversation the board was equipped to have.
That’s a governance failure. But it belongs to the board, not to the security leader.
And it’s not the only place boards create false comfort.
The Insurance Illusion
Cyber insurance isn’t a cyber risk solution. It’s a financial recovery mechanism. The moment a board conflates those two things, it’s created a governance blind spot with a pound figure attached.
When a board approves a cyber insurance policy as a budget line item, there’s often an implicit sense of relief in the room. The exposure has been transferred. The organisation is covered. The risk has somewhere to go if something goes wrong.
That framing is understandable.
It’s also largely wrong.
What cyber insurance actually does: reimburse certain defined financial losses, after certain conditions are met, within certain coverage limits, subject to certain exclusions that most boards have never read in full.
It doesn’t prevent a breach. Doesn’t contain one. Doesn’t repair the reputational damage, restore customer trust, or address the regulatory inquiry that follows a material incident.
It doesn’t even guarantee financial recovery if the organisation failed to maintain the controls the policy required as a condition of coverage.
And here’s where boards consistently underestimate their exposure. The data shows that 27% of data breach claims and 24% of first-party claims contained exclusions that resulted in no payout or only a partial payout.
More than one in four claims. Let that sink in.
Cyber insurance policies carry compliance obligations. Specific controls must be in place and maintained for the policy to respond. Multi-factor authentication. Endpoint detection. Regular backup testing. Incident response planning.
If those controls aren’t in place when you make a claim? The insurer has grounds to dispute coverage.
I’ve seen organisations assume they were covered, make governance decisions based on that assumption, and discover the gap between assumption and reality at precisely the moment they needed the policy to perform.
So what’s the way forward?
The Three Questions Every Board Should Ask
Before a board approves next year’s cyber budget, three questions need answering. No exceptions. Regardless of organisation size, sector, or how sophisticated the security team appears.
Question one: What level of residual cyber risk are we, as a board, prepared to accept after this budget is fully deployed?
Not what the budget covers. What it leaves exposed.
Every budget has a boundary. Every pound spent on one control is a pound not spent on another.
The board needs to look at the proposed spend and define, explicitly, what sits on the other side of it. What threat scenarios does this budget not address? What categories of data or operational capability remain exposed? Are we, as a board, prepared to accept that exposure as a deliberate risk position?
If the board can’t answer that question, it’s not approving a risk management decision. It’s approving a spend recommendation and hoping the gap doesn’t matter.
Question two: If we experience a significant incident ninety days from now, what decisions will we be making in real time that we should be making today?
This question forces the governance design conversation that budget approvals almost never have.
Who notifies the regulator, and at what threshold? When does the board chair get a direct call rather than a management briefing? What are the pre-agreed criteria for engaging external legal counsel?
What does the board need to know within the first twenty-four hours? Who’s responsible for ensuring they know it?
These decisions, made under pressure with incomplete information in the middle of an active incident, are where organisations generate their most expensive and most avoidable mistakes.
The board that answers this question before the incident sits in a fundamentally different governance position to the one that hasn’t.
Question three: How will we know, twelve months from now, whether this budget actually reduced our exposure?
Not whether we spent it. Not whether we hit compliance milestones. Whether the organisation’s genuine risk position improved.
That requires the board to define, before approving the budget, what metrics it will use to measure effectiveness. Time to detect. Time to contain. Coverage of the incident response plan against the organisation’s actual threat profile. Third-party risk governance depth beyond tier one.
These are the numbers that tell a board whether its investment performed.
If the board can’t define those metrics before approving the spend, it has no basis for evaluating whether the investment worked. No basis for holding management accountable for the outcome.
What does all this reveal about the current state of cyber governance?
What This Really Reveals
When a board can’t answer those three questions, it reveals something fundamental: the relationship has been structured around information delivery rather than shared accountability.
The security function reports to the board. That’s the formal arrangement. But in practice? That reporting relationship has evolved into a one-directional flow of technical information upward, filtered through whatever framing the security leader believes the board can absorb, received by a board that lacks the governance language to interrogate it effectively.
The board nods. The report gets filed. The relationship continues. Neither party is genuinely accountable for the risk position sitting between them.
What those three questions expose: the board has been a passive recipient of cyber governance rather than an active participant in it.
The security function, often without realising it, has adapted to that passivity. It presents what gets approved. Reports what gets understood. Frames risk in the language of compliance because that’s the language the board responds to.
Over time, the entire reporting relationship becomes calibrated to comfort rather than accuracy.
Boards that have built effective cyber governance don’t need to ask those three questions. The answers are already embedded in how they govern. Boards that can’t answer them haven’t built it yet.
That’s not a criticism. It’s a diagnosis.
And diagnosis is the first step toward building something better.
The work begins with one decision: that the comfortable silence has gone on long enough.
