Cyber Ethos

When Regulators Act Faster Than Boards Govern

ASIC took down 11,964 scam websites in 2025. That's a 90% increase on the previous year, an average of roughly 32 sites a day.

Most commentary will frame this as a regulatory success. It is. But it's also a report card on the board governance that preceded it.

Here’s what that number actually says: ASIC stepped in to do work boards either didn’t realise was theirs, didn’t resource, or didn’t treat as time-critical governance.

What 12,000 Takedowns Really Reveal

When a regulator has to coordinate the removal of nearly 12,000 phishing and investment scam websites in a year, it tells you three uncomfortable things about what wasn’t happening in most boardrooms.

Brand and customer impersonation wasn’t being treated as a board-level risk category.

Boards were approving cyber and fraud budgets without explicitly asking: “Who is accountable for detecting and taking down scam sites trading on our brand, what’s the SLA, and how do we know it actually happens at speed?”

Scam governance was outsourced to hope and after-the-fact complaints.

The fact that ASIC and other agencies are now leaning on third-party takedown services, and still playing catch-up, tells you that before this most organisations were relying on customers, staff, or banks to spot scams after the harm had already started.

Boards weren’t testing their decision architecture against AI-speed harm windows.

ASIC is explicitly warning that AI is “supercharging” online scams. Cybersecurity researchers confirm scammers can now “spin up a website, 10 websites, 100 websites” almost instantly.

If your governance is built around quarterly dashboards and annual fraud briefings, it’s structurally incapable of dealing with an adversary that can deploy 100 look-alike sites before your next risk committee.

The Governance Conversation That Never Happened

Eighteen months ago, the right board-level conversation wasn’t “How can ASIC help us?”

It was this: “If ASIC’s takedown team disappeared tomorrow, how would we protect our customers and our brand at AI-speed, with our own structures, our own money, and our own authority?”

Most boards never asked that. They treated the external ecosystem as the plan.

Since 2023, ASIC and the National Anti-Scam Centre have been very public about launching coordinated takedown services, knocking out more than 25,000 scam and phishing sites with fusion-cell support from banks, telcos, platforms and other agencies.

At the same time, the scams prevention framework has been explicit: entities must “govern, prevent, detect, disrupt, report, and respond” to scams as a structured obligation.

Any board paying attention had enough signal to say: “We will be held to account for this. We cannot outsource the problem.”

The conversation should have turned from “Are we compliant?” to “Are we independently capable?”

Who Actually Closed the Gap

ASIC reports an 11% reduction in investment scam losses alongside those 12,000 takedowns.

Boards are already talking about this as proof their governance is “working.”

The data says something different.

The gap was closed primarily by coordinated regulatory, industry, and fusion-cell action. Not a sudden surge in boardroom excellence.

ASIC didn’t just “encourage awareness.” It stood up an industrial-scale takedown and disruption capability, averaging 32 sites removed per day and more than 1,100 investment scam ads pulled from social platforms.

The Targeting Scams reports make three things clear:

Loss reductions are explicitly attributed to “combined efforts by government, industry, law enforcement and community organisations.” Not to internal governance uplift inside individual companies.

Investment scam losses fell in part because investment scams were singled out for joint disruption cells, targeted enforcement, and large-scale URL and ad takedown campaigns.

ASIC itself points to its takedown service and third-party monitoring vendor as key drivers behind the 11% loss reduction.

In other words, the capability that closed part of the gap sits outside the average boardroom.

Here’s the uncomfortable truth: even as these numbers improve at the macro level, Australians still lost $2.18 billion to scams in 2025, with $837.7 million of that in investment scams alone.

Total losses actually increased compared with the previous year, even as one slice showed an 11% reduction.

What Governing in Practice Actually Looks Like

Governing scam risk in practice means the board can see, in black and white, how fast the organisation detects, blocks, and takes down real scams that use its brand.

And who is personally on the hook when that doesn’t happen.

If all a board has is a policy statement and a quarterly PowerPoint, it’s still on paper.

I’ve seen boards that are serious about this. They can put their hands on three things:

A scams risk and response playbook that has been exercised in simulation, with actions, timings, and owners tested under realistic conditions.

A scam performance pack that includes: scams detected, scams blocked, scams that resulted in loss, time to detection, time to takedown, and decision turn-time for customer redress.

Evidence of resourcing: an identified scams function or team, defined FTE, and budget tied specifically to scam prevention, detection, disruption, and customer support. Not buried generically under “IT security.”
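The performance-pack metrics above are simple to compute once incident timestamps are captured. As a minimal sketch, assuming hypothetical incident records with "live", "detected", and "taken_down" timestamps (the field names and data are illustrative, not from any real system), the two headline numbers a board should see look like this:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical incident records: when a scam site went live, when it was
# detected, and when it was taken down. Field names are illustrative only.
incidents = [
    {"live": datetime(2025, 3, 1, 8, 0),  "detected": datetime(2025, 3, 1, 9, 30),  "taken_down": datetime(2025, 3, 1, 14, 0)},
    {"live": datetime(2025, 3, 2, 22, 0), "detected": datetime(2025, 3, 3, 6, 0),   "taken_down": datetime(2025, 3, 3, 11, 0)},
    {"live": datetime(2025, 3, 5, 12, 0), "detected": datetime(2025, 3, 5, 12, 45), "taken_down": datetime(2025, 3, 5, 16, 0)},
]

def hours(delta: timedelta) -> float:
    """Convert a timedelta to fractional hours."""
    return delta.total_seconds() / 3600

# Board-pack metrics: time to detection (live -> detected) and
# time to takedown (detected -> taken_down), reported as medians.
time_to_detect = [hours(i["detected"] - i["live"]) for i in incidents]
time_to_takedown = [hours(i["taken_down"] - i["detected"]) for i in incidents]

print(f"Median time to detection: {median(time_to_detect):.1f} h")
print(f"Median time to takedown:  {median(time_to_takedown):.1f} h")
```

The point of the sketch is that these numbers are trivially computable; if management cannot produce them, the data is not being captured, and the governance is still on paper.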

The accountability test is simple. A director should be able to answer, without reaching for the policy, three questions:

Who is the executive on the hook for scam risk across the whole organisation, and what is their authority to act at 2am?

How quickly do we detect and take down scam activity targeting our customers, and where do I see that number?

When a scam does get through, how do we decide, and how quickly, whether and how to make the customer whole?

If the answer to any of those is vague, policy-heavy, or relies on “we would work it out at the time,” then the board is still governing on paper.

The Structure Most Boards Won’t Fund

The structure most boards need isn’t cheap. But it’s nowhere near as expensive as the losses, fines, and remediation they’re already carrying on the other side of a major scam event.

The real reason it’s not funded isn’t price.

It’s that scams are still treated as a reputational irritant and a “regulatory issue,” not as a core customer-harm and liability line in the risk ledger.

Let me put realistic brackets around it. The structure we’re talking about combines continuous external scam monitoring, rapid takedown capability, internal fraud and scam operations, and board-grade governance and reporting.

For a mid-to-large organisation, that typically looks like:

External monitoring and takedown services in the low-hundreds-of-thousands per year.

A dedicated fraud/scam technology stack and operations team in the low-to-mid-millions annually for serious institutions.

A named head of fraud/scams, analysts, operations staff, plus risk/governance capacity to build and maintain the frameworks.

In other words, a board-ready scam structure is typically a single-digit percentage of what the institution already spends on overall cyber, fraud, and compliance.

And a fraction of the losses avoided if it actually works.

When boards choose not to fund it, they’re not saving money. They’re buying a different exposure: higher scam loss volatility, higher likelihood of regulatory scrutiny, and a higher probability that when the next ASIC report lands, their organisation shows up as a case study for what happens when you run scam governance on paper.

The Section 180 Moment

The moment directors realise they’re personally liable is brutally simple.

It’s when someone in the room reads out the fact pattern and the ASIC lawyer or external counsel quietly says: “These were foreseeable risks, you had clear warnings, and there is no evidence the board ensured adequate controls. That is a section 180 problem.”

At that point directors stop asking “How did this happen?” and start asking “What did we sign off, and what did we fail to insist on?”

In the first hour of a major scam event, the board conversation is about customers, money, and headlines. It only turns to personal liability when three facts line up in the same briefing pack:

ASIC and NASC had already been publicly warning for years that scams and cyber risk were foreseeable, material risks that boards must actively oversee.

The organisation’s own materials show policies and high-level statements, but no tested structure: no clear owner, no metrics, no simulations, no board-level challenge on scam capability.

The loss profile is ugly: widespread customer harm, clear governance gaps, and media or regulatory commentary that the organisation lagged behind peers.

ASIC has already said it will treat failures to adopt adequate cyber and third-party risk measures as potential breaches of the duty of care and diligence under section 180.

The maximum pecuniary penalty for individuals under section 180 is 5,000 penalty units (approximately $1,565,000). And section 199A of the Corporations Act prohibits a company from indemnifying a director for a breach of their directors’ duties.
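For directors doing the mental arithmetic, the dollar figure follows directly from the penalty-unit value implied by the article’s total (a per-unit rate of $313, which is an assumption read back from the stated figure):

```latex
5{,}000 \text{ penalty units} \times \$313 \text{ per unit} = \$1{,}565{,}000
```

That is the maximum personal exposure per contravention, before accounting for the indemnification bar under section 199A.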

Directors cannot “blindly delegate” to management, third parties, or committees on cyber and scam risk. The law only protects reliance that is reasonable, informed, and matched by active oversight.

The One Question That Forces Evidence to Surface

There’s one question a director should have been asking in every cyber or scam briefing that would have forced that evidence to surface before the regulator did:

“Show me, with evidence, how our current cyber and scam structures would actually perform over the first 4 to 6 hours of a major incident, without ASIC, NASC or any third party stepping in to save us.”

Everything important is buried in that question.

It forces management to move past policy language and spend approvals into hard proof of capability, timing, and ownership.

It removes the comfort blanket of “the ecosystem” and makes clear you’re testing the organisation on its own feet, which is exactly how ASIC and the courts will look at it.

If you ask that question in every briefing, you’re quietly demanding four things:

Concrete structures, not diagrams. Management has to point to named roles, playbooks, escalation paths, and decision rights. Anything vague becomes immediately obvious.

Performance data, not comfort statements. They must bring simulation results, time-to-detect, time-to-takedown, and decision-turn-time metrics.

Independence from the regulator’s safety net. Management has to show what the organisation owns, not what the regulator might do on a good day.

A direct line to director duties. When that evidence is thin, you have a live signal that you’re being asked to rely on assurance that is neither reasonable nor well-informed.

What’s Actually at Stake

Most directors won’t ask that question because they don’t want to look like they’re questioning management’s competence.

What’s at stake isn’t the relationship with management.

It’s the director’s own duty of care, their personal exposure, and the board’s credibility the day something goes wrong.

When a director chooses comfort over the uncomfortable question, they’re effectively choosing to leave a foreseeable, high-impact risk under-examined in exchange for short-term social ease.

After a major scam or cyber event, the question ASIC, plaintiffs and the media all ask is: “What did the board do on this risk?”

If your answer is “We approved the budget and accepted management’s assurance,” that will be read against a backdrop of years of public warnings that boards needed to step up.

That’s not a position you want to defend.

ASIC Chair Joe Longo stated: “Let me be especially clear here, it is a foreseeable risk that your company will face a cyber attack…as a director you need to make it your business to be across questions of cyber resilience and make cyber security a priority.”

The regulator has been crystal clear that cyber and related risks are no longer “technical matters” you can safely leave to management.

In that moment of hesitation, a director isn’t choosing between “being nice” and “being difficult.”

They’re choosing between short-term social comfort in the room, and long-term accountability for a risk they knew was material but decided not to interrogate.

Given what ASIC and governance bodies have already said about directors’ duties in the cyber age, that’s a poor trade.

The uncomfortable question isn’t an attack on management’s competence. It’s a director doing the one thing only they can do: insisting that governance capability matches the risk the organisation is carrying, before a regulator or a court forces that conversation on much harsher terms.

Kiran Kewalramani

Kiran Kewalramani stands as an acclaimed technologist with over two decades of robust executive experience in technology, cybersecurity, data privacy and cloud solution enablement. His illustrious career has been marked by transformative roles in esteemed organizations, including Cyber Ethos, Queensland Department of Education, Gladstone Area Water Board, NSW Rural Fire Service, NSW Police Force, Telstra, American Express, and more.