Here’s the question directors should ask this quarter:
If an AI tool can be “talked into” doing the wrong thing, who carries the risk when the business acts on it?
AI is now embedded in daily operations: customer chatbots, internal “ask me anything” tools, copilots for code, and decision-support dashboards. Attackers have noticed.
The World Economic Forum reports that 87% of leaders see AI-related vulnerabilities as the fastest-growing cyber risk. That’s not hype. It’s a governance signal.
In an Australian context, this also maps neatly to the AICD’s consistent message: cyber is not a technical sidebar, it is a governance issue. Directors are expected to seek assurance on material risks, challenge management where controls are unclear, and ensure accountability is explicit.
What “AI hacking” looks like in plain English
AI attacks do not always “break in” like traditional hacking.
Sometimes they manipulate behaviour.
- Prompt manipulation: An attacker uses carefully crafted input to make the AI ignore rules, reveal sensitive information, or give unsafe guidance.
- Indirect prompt injection: The “trap” is hidden inside something your AI trusts (a document, email, website, ticket). The AI reads it and follows the hidden instruction. OWASP highlights this as a leading LLM risk for LLM applications.
- Abuse of AI-driven workflows: The AI’s output triggers real actions: emails sent, tickets raised, refunds processed, approvals routed, payments queued, code merged. If the output can be manipulated, the business process can be manipulated.
- Data poisoning: Bad data gradually degrades judgement, recommendations, and model behaviour over time.
A simple scenario: A staff member pastes content from a supplier email into an internal AI assistant to “summarise and draft a reply”.
The email contains hidden instructions that cause the assistant to include confidential contract terms in the response. No malware. No breach alert. Just a confident system doing the wrong thing, fast.
Why this is accelerating (and why it lands with Boards)
Because AI is being deployed faster than governance.
- Business teams switch tools on for speed, often without a security review.
- Leaders assume the vendor “has it covered”.
- The real risk sits in integrations, permissions, data access, and day-to-day use.
In Board terms, this is the problem: AI can become a shortcut into your data and your decisions, especially when it has broad access and is trusted by default.
This is where AICD-style governance thinking matters. If the organisation cannot clearly explain where AI is used, what it can access, and what it can trigger, directors do not have assurance. They have optimism.
The Board questions that matter (and vague answers are a red flag)
If you’re a Director or on an Audit & Risk Committee, ask these:
- Where is AI used today (officially and unofficially)? Include “shadow AI” and personal accounts used for work.
- What data can it access? Customer data, HR data, financials, legal documents, source code, IP.
- What can the AI trigger? Emails, tickets, workflow automation, approvals, payments, access requests, code releases.
- Who owns it? Name the accountable executive. “IT” is not an owner.
- How do we test for manipulation, not just breach? Do we test prompts, trusted sources, and unsafe outputs?
- What is our response plan if AI leaks data or drives a bad decision? What would we tell regulators, customers, insurers, and the market?
If management cannot answer these clearly, the organisation does not have AI governance. It has AI adoption.
A practical minimum standard for AI governance
You do not need perfection. You need control.
Guidance from Australia’s Australian Cyber Security Centre (ACSC) reinforces secure deployment, access control, and preventing sensitive data exposure. Frameworks like the NIST AI Risk Management Framework help structure governance and risk decisions.
Here’s a baseline I’d expect in any organisation using AI:
- Register AI systems like critical vendors and critical applications (including internal experiments that touched real data).
- Set clear ownership (business owner + security owner). No orphan tools.
- Lock down access (least privilege, role-based access, API controls, logging).
- Control what the AI can read (trusted sources; reduce open browsing and uncontrolled document ingestion).
- Control what the AI can do (limit automation; add approvals for high-impact actions).
- Monitor for misuse (odd prompts, repeated jailbreak patterns, suspicious integrations, unusual outputs).
- Train staff on “AI-safe behaviours” (what never to paste, what to verify, when to escalate).
Australian director lens: treat AI like any other material risk area. You are not expected to be the technical expert. You are expected to ensure the right questions are asked, accountability is clear, and assurance is credible.
Why traditional pen testing isn’t enough
Pen testing asks: “Can we break in?”
AI assurance also asks: “Can we manipulate the system to make the business do something stupid?”
That means testing behaviour, trust boundaries, downstream actions, and unsafe output handling, not just technical vulnerabilities.
Bringing it back to Cyber Insecurity
In my book Cyber Insecurity: The Silent Risk in Your Boardroom, I argue a simple point: cyber risk becomes director risk because it becomes business risk first.
AI does not change that. It accelerates it.
You can purchase my book from my website – https://book.kirankewalramani.com/buy
Or Amazon – https://www.amazon.com.au/Cyber-Insecurity-Silent-Risk-Boardroom/dp/8198872655
If AI is in your business, put it on the Board agenda this quarter. Not as an innovation update, but as a risk and control discussion.
