Cyber Ethos

Documents Don’t Protect Customers: The Liability Exposure Boards Are Carrying

Most boards govern cyber risk with untested documents, not tested decision architectures. When AI-generated scams move faster than approval processes, that gap becomes personal director liability. The fix isn’t more policy. It’s pre-made decisions, tested under pressure, with a timer running.

Directors face personal liability when governance frameworks fail under pressure. Australian law holds boards accountable for setting proper cybersecurity standards. If your incident response plan has never been tested against AI-speed scam scenarios, the gap between what you approved and what you’ve prepared is where the exposure sits. ASIC removed 32 fraudulent sites daily in 2025. The question: if that intervention stopped tomorrow, would your customers be safer because of your controls, or more exposed because those controls were never the primary defence?

What Boards Need to Know

  • Section 180 liability attaches when directors fail to set up proper cybersecurity standards
  • Most incident response plans assume 48-72-hour response windows when AI-generated scams create 4-6-hour harm windows
  • The Scams Prevention Framework now carries AU$50 million fines plus victim compensation liability
  • Fewer than 30% of Australian board members consider themselves highly cyber literate
  • Pre-made decision architecture tested under realistic conditions is the only honest defence

Why Most Boards Are Carrying Liability They Don’t Recognise

Directors face personal liability for cybersecurity failures that result in regulatory breaches. Under Section 180 of the Corporations Act 2001 (Cth), the duty of care and diligence provision, directors who fail to set up proper cybersecurity standards for management to implement carry direct personal exposure.

This isn’t theoretical. This is the actual legal position Australian directors operate under right now.

The question boards aren’t asking: if your incident response plan has never been tested under realistic AI-speed conditions, have you genuinely set up proper standards?

I’ve sat in board sessions where the answer to that question, once asked honestly, was no.

The plan existed. The policy was approved. The framework looked complete on paper. But nobody had run the scenario where scam sites replicate faster than your takedown process. Where customer complaints appear on social media before your communications team gets briefed. Where the General Counsel advises caution at exactly the moment speed matters most.

A document that has never survived contact with reality isn’t a defence. It’s evidence of the gap.

Key insight: Governance frameworks that look complete on paper become liability evidence when they’ve never been pressure-tested against the speed of AI-generated threats.

How the Regulatory Environment Changed the Liability Calculation

The Australian government has passed the Scams Prevention Framework into law. Social media companies, banks, and telecommunications companies now face fines of up to AU$50 million for breaches, and entities that fail to meet their obligations are liable to compensate victims for their losses.

The regulatory environment has shifted. Directors governing with 2023 frameworks are carrying 2026 liability.

What that means in plain terms: the threshold for adequate governance has moved. The baseline now includes scam prevention integrated into governance structures with the same seriousness applied to AML/CTF and cybersecurity. Clear executive and board oversight is required.

Not aspirational. Mandatory.

The boards I work with that recognise this aren’t panicking. They’re asking a different question: what decisions do we need to have made in advance so our people act inside the harm window without waiting for us?

The boards that haven’t recognised it yet still treat this as a compliance reporting item. That gap is where the personal liability sits.

Key insight: Regulatory requirements moved from aspirational to mandatory. The gap between what boards approved historically and what’s now legally required creates direct personal exposure for directors.

What Cyber Illiteracy Costs Boards in Governance Capability

Fewer than 30% of Australian board members consider themselves highly cyber literate. That shortfall inhibits meaningful challenge and oversight of how management handles cyber risk.

That statistic matters because it explains why the governance gap persists even when intent is strong.

Directors are intelligent, experienced, and genuinely committed to protecting customers. What they often lack is the specific framework to assess whether the incident response plan they approved delivers what it promises when the threat moves at AI speed.

I’ve watched boards approve frameworks that assume a 48-to-72-hour detection-to-response window when the actual harm window is four to six hours. The gap isn’t negligence. It’s a structural mismatch between the assumptions built into the plan and the reality of how fast AI-generated scam infrastructure deploys.

The liability exposure isn’t that directors don’t care. It’s that they approved a plan designed for a threat landscape that no longer exists, and nobody built the forcing function that required them to retest it.

Key insight: Cyber illiteracy doesn’t cause malicious intent. It causes boards to approve plans built on assumptions that expired when AI changed the speed of the threat.

What Boards Need to Have Decided Before the Incident Occurs

The Australian Signals Directorate’s guidance to boards is clear: understanding whether the technology you use or provide to your customers is secure by design and secure by default is not optional governance. It’s threshold accountability.

What that looks like in practice isn’t a longer policy document. It’s a set of specific, tested, board-level decisions made before the incident occurs.

Authority thresholds. Customer communication triggers. Regulator notification sequencing. Reputational containment authority. The simulation mandate that tests whether those pre-made decisions hold under pressure.

The boards carrying the lowest liability exposure right now are the ones that have formally resolved that this scenario will be rehearsed. Not a theoretical tabletop. Walked through with executives, with a timer running, against a realistic AI-generated scam scenario.

Because a pre-made decision architecture is only as good as the people who execute it under pressure. If they’ve never run the scenario, they will still hesitate at the moment of execution, no matter how carefully the decisions were made in advance.

Key insight: Preparedness isn’t policy documentation. It’s specific pre-made decisions that have survived simulation under realistic AI-speed conditions with a timer running.

The Diagnostic Question Worth Asking in Your Next Board Meeting

ASIC removed 32 fraudulent sites every day in 2025. That’s the symptom.

The question worth asking is what the diagnosis tells you about your own house.

If ASIC’s intervention were halved tomorrow, would your customers be materially safer because of the controls you have in place, or materially more exposed because those controls were never the primary line of defence?

Most boards haven’t tested that scenario. Most don’t have a confident answer.

This isn’t a compliance failure. This is a fiduciary one.

The organisations that will separate themselves over the next 18 months are the ones whose boards stop treating scam exposure as a consumer affairs problem and start governing it as a foreseeable harm question with direct liability attached.

The board’s job isn’t to manage the incident. It’s to have made the decisions that allow management to manage it well.

That distinction is everything when the threat moves faster than your approval process.

Questions Boards Should Be Asking

What is Section 180 liability for cybersecurity failures?

Section 180 of the Corporations Act 2001 (Cth) creates personal liability for directors who fail to set up proper cybersecurity standards. If a breach occurs and the board hasn’t established adequate governance frameworks, directors face direct legal exposure.

How fast do AI-generated scams deploy compared to traditional fraud?

AI-generated scam infrastructure deploys in four to six hours. Most board-approved incident response plans assume 48-to-72-hour windows. That mismatch creates a governance gap where customer harm occurs before your controls activate.

What decisions should boards pre-make before a scam incident?

Authority thresholds, customer communication triggers, regulator notification sequencing, and reputational containment authority. These need to be decided at board level and tested under realistic conditions so management doesn’t wait for approval during the harm window.

What does the Scams Prevention Framework mean for board liability?

Fines up to AU$50 million plus compensation liability for victims. The framework makes scam prevention a mandatory governance requirement at the same level as AML/CTF and cybersecurity, with clear board oversight expected.

Why do most incident response plans fail under AI-speed conditions?

They assume friction on the attacker’s side that no longer exists. AI removed the time, skill, and infrastructure barriers that traditional fraud required. Plans built for slow-moving threats don’t survive when scam sites replicate faster than takedown processes.

How often should boards simulate AI-speed scam scenarios?

At minimum annually, with realistic conditions including timer pressure, incomplete information, and competing stakeholder advice. The simulation should test whether pre-made decisions hold when executives face the actual speed and ambiguity of an AI-generated incident.

What is the difference between cyber literacy and cyber governance capability?

Cyber literacy is understanding technical concepts. Cyber governance capability is having the frameworks to assess whether approved plans deliver what they promise under realistic threat conditions. Most boards lack the second even when they possess the first.

If ASIC’s takedown intervention stopped, would our customers be protected?

This is the diagnostic question. If your answer isn’t confident, your controls aren’t the primary defence. That gap is where fiduciary liability sits, because you’ve delegated customer protection to regulatory intervention rather than governed it directly.

Key Takeaways

  • Untested governance frameworks create personal director liability under Section 180 when cybersecurity failures occur
  • AI-generated scams deploy in 4-6 hours whilst most board-approved plans assume 48-72 hour response windows
  • The Scams Prevention Framework carries AU$50 million fines plus victim compensation, making scam governance mandatory, not aspirational
  • Pre-made decision architecture tested under realistic conditions is the only defence that survives legal scrutiny
  • Boards must answer: if regulatory intervention stopped tomorrow, would customers be safer because of our controls, or more exposed because we deferred to external protection?
  • The fiduciary duty isn’t managing incidents; it’s making decisions in advance that allow management to act inside harm windows without waiting for approval
  • Simulation with timer pressure, incomplete information, and competing advice is the only honest test of whether governance capability matches governance intent

Kiran Kewalramani

Kiran Kewalramani is an acclaimed technologist with over two decades of executive experience in technology, cybersecurity, data privacy and cloud solution enablement. His career spans senior and transformative roles at organisations including Cyber Ethos, the Queensland Department of Education, Gladstone Area Water Board, NSW Rural Fire Service, NSW Police Force, Telstra, and American Express, among others.