A few weeks ago, researchers uncovered a series of flaws in one of the world’s most widely used AI systems. No ransomware. No phishing link. Just a quiet manipulation of trust, invisible to the human eye and devastating in its implications.
For me, this wasn’t surprising. I’ve been warning Boards for years that AI isn’t just a tool, it’s an attack surface. What makes this breach different is how it happened, not through hacking servers but through hacking language itself.
How AI can be tricked without breaking a single firewall
Let’s break it down in plain English. Today’s AI models don’t just follow code, they follow context. They learn from whatever they’re fed. That means the line between helpful instruction and malicious command can blur fast.
Two vulnerabilities stood out for me, and every Board should understand them.
1) Indirect Prompt Injection, the invisible Trojan horse
Imagine reading a trusted website, a policy paper, a LinkedIn post, or an industry report. Hidden inside that page could be a string of malicious instructions only an AI model can “see.”
When the AI scans that page, it unknowingly executes those commands, like a lawyer being tricked into signing a blank contract. The result? Sensitive data can be silently exfiltrated or altered. No malware. No alerts. Just the AI doing exactly what it was told, by the wrong source.
2) Persistent Memory Injection, the long-term compromise
Some AI systems now have memory to retain context across sessions, designed for convenience. But if a malicious command gets stored there, it doesn’t just vanish when the chat ends. It stays, re-activating every time that AI instance is used.
In a corporate setting, that could mean an AI assistant connected to your CRM or HR system quietly leaking data long after the original trigger. That’s not a software bug, it’s a breach of fiduciary trust.
Why this matters in the Boardroom
These vulnerabilities signal a fundamental shift. Traditional attacks target systems. AI attacks target behaviour and trust.
If you sit on a Board, this changes your oversight posture. AI tools, from HR chatbots to finance co-pilots, are now part of your operational fabric, often accessing regulated data under CPS 234, the SOCI Act, or privacy law obligations.
The key question isn’t “Are we using AI safely?” It’s “Do we even know where AI lives inside our business, and who is governing it?”
From Technology to Trust: The new oversight gap
AI introduces delegated decision-making. Your people, and sometimes your customers, act on what the system says. If that system can be manipulated, so can every downstream decision.
Boards now need to think in layers:
• Data trust – What is the AI learning from, and can that data be poisoned?
• Prompt trust – Can external content instruct our AI without approval?
• Memory trust – Can unauthorised instructions persist across sessions?
Each layer mirrors what Directors already understand, integrity, accountability, auditability. The difference is that AI compresses all three into milliseconds.
The Regulatory Pulse Across APAC
Regulators are moving fast.
APRA’s CPS 234 already requires Boards to maintain information-security capability commensurate with threat. AI clearly sits inside that remit. Under the SOCI Act, critical-infrastructure entities must manage cyber risks across their supply chains, and that includes AI vendors.
Singapore and Japan are aligning to the OECD AI Principles, while South Korea has launched national AI assurance programs.
The signal is clear, governance, not technology, is the weak link.
For APAC Boards, this means a vulnerability in one market can expose regulated data in another. Cross-border AI models blur jurisdictional lines faster than compliance frameworks can keep up.
Making sense of frameworks (without the jargon, I promise)
Here’s how I explain the major frameworks to Boards:
• ISO 42001 – AI Management System Think of it as ISO 27001 for AI. It embeds AI governance into your management systems, defining roles, risk assessments, and accountability.
• NIST AI Risk Management Framework Practical, operational, and built for both tech and business leaders. It focuses on mapping, measuring, and mitigating AI risks across reliability, security, and transparency.
• Australia’s AI Ethics Principles They set the tone, fairness, accountability, transparency, and privacy, reminding Boards that responsible AI is about trust, not just compliance.
Together, they create alignment, Assurance. Accountability. Auditability.
At Cyber Ethos, we translate these frameworks into governance language so AI oversight becomes part of your normal risk rhythm, like financial assurance, only faster.
Lessons from the Breach
Here’s what this latest incident reinforced for me:
1) AI needs segregation, not just supervision. Just as critical systems are isolated, AI should operate in sandboxed environments where data and prompts can’t cross-contaminate.
2) Governance beats gadgets. Technology evolves faster than any control. Clear accountability, who owns AI risk, who reports it, who audits it, builds resilience that no patch can replace.
3) Trust must be tested. Boards should commission AI red-teaming or prompt-injection testing to simulate manipulation and exfiltration scenarios. If your AI tool can be tricked, you want to know before an attacker does.
A Director’s Checklist for AI Oversight
When I brief Boards, I ask five simple questions:
1) Where does AI operate across our organisation?
2) What sensitive data can it access, and how is that data classified?
3) Are AI outputs validated or independently reviewed before decisions are made?
4) Do we have a documented AI Governance or Assurance Policy aligned with ISO 42001 or NIST AI RMF?
5) How often is AI usage risk-assessed and reported to the Audit & Risk Committee?
You don’t need to be a technologist to govern AI. You just need visibility, accountability, and curiosity.
The Broader Message for Boards
AI isn’t inherently unsafe, but it is unforgiving when oversight fails. As we integrate AI into decision-making, the accountability line must remain clear.
AI may execute, but humans must oversee.
Boards that recognise this shift early will lead with confidence. Those that delay will find AI risk showing up in audit findings they never expected.
Where Cyber Ethos Fits In
At Cyber Ethos, we work with Boards and executive teams across Australia and APAC using our AI Governance Diagnostic Framework. It benchmarks your readiness against ISO 42001, NIST AI RMF, and the Australian AI Ethics Principles, translating technical detail into governance language Directors understand.
Our process identifies where AI is embedded, how it’s managed, and which oversight gaps sit between technology and Board visibility. In short, it bridges the trust gap before a breach does.
Final Reflection
This latest AI breach isn’t just a cautionary tale. It’s a wake-up call for leadership.
The trust we place in AI must be earned and verified, not assumed. AI risk governance is no longer optional, it’s a core part of fiduciary duty.
Next Steps for Directors and Executives
1) Schedule an AI & Cyber Governance Diagnostic Understand your organisation’s AI and Cyber risk posture and align with global best practice. 🔗 www.cyberethos.com.au
2) Deepen your understanding My book Cyber Insecurity: The Silent Risk in Your Boardroom explores how Boards can translate complex cyber and AI risks into strategic oversight, without needing a technical degree. Now available globally through Penguin Publications.
🔗 Buy on Amazon: https://lnkd.in/guqz-Xrh
🔗 Buy Directly: https://lnkd.in/ggXhjebs
🌐 Learn more or order signed copies: www.kirankewalramani.com
At the end of the day, AI doesn’t remove human accountability, it redefines it.
And that’s where effective governance begins.
