Cyber Ethos

The Hidden Risks Lurking in Your AI Systems: A Wake-Up Call for Australian Businesses

AI has well and truly landed in Australia. From agribusiness in the regions to mining in the Pilbara to hospitals in our capital cities, we’re weaving machine learning and large language models into the fabric of our economy.

That’s exciting. But here’s the uncomfortable truth: most organisations charging ahead with AI have left the back door wide open.

A new Accenture Cybersecurity Resilience report found that 77% of organisations globally don’t have the basic security foundations in place for AI. That’s three in four businesses investing in AI without the guardrails to protect it. Even worse, only a quarter have implemented encryption and access controls for their AI data. Don’t worry, I will attach the report in the newsletter so you can read it. It got me thinking for sure.

Think about it: would you leave the keys in your car and walk away from it in the middle of the city? Yet that’s essentially what’s happening with AI systems today.

Why this matters

I’ve seen the consequences first-hand. Just a year ago, I came across a Sydney based firm that had rolled out a shiny new AI analytics tool without considering cybersecurity aspects. It worked brilliantly until a vulnerability let in data manipulation that went unnoticed for weeks. When they approached us, the damage was done. It was subtle but costly. We helped them patch the vulnerability but did the experience burn them a bit, it did. Well, the experience of ‘let’s get into the AI race without any security considerations’ surely did. Trust me – it hurts when it happens.

Fast-forward to now, and we’re staring down even more sophisticated risks. Researchers recently built the Morris II worm, an AI-driven exploit capable of hijacking models and siphoning sensitive data all without a single phishing link being clicked. If academics can prove the concept in a lab, you can bet that cybercriminals and state-sponsored actors are already working on real-world versions.

The cloud isn’t your safe haven

Most AI lives in the cloud. But the numbers are sobering:

  • 14% of Amazon Bedrock users leave their training data buckets accessible.
  • 91% of SageMaker users run notebooks that could give away full system access if compromised.
  • Over 80% lack proper monitoring across their cloud setups.

Add in the growing concerns about poisoned training data, data leakage from large language models, and regulatory turbulence around cross-border data, and you’ve got a storm brewing.

The compliance crunch

Australia is moving toward tighter regulation, but it may take some time. Until then the Government has proposed mandatory guardrails for high-risk AI.

Then on a global front, the EU AI Act is just the start. The message from the Regulators are clear – they won’t accept “we didn’t know.”

The reality is that less than 10% of organisations maintain an inventory of their AI systems. Without that baseline, how do you demonstrate compliance or even know where your exposures lie?

And then there’s the mismatch with zero-trust security.

Everyone talks about it; only 10% apply it to AI.

Remember, these systems need “god-mode” access during training but should have minimal rights during operations. Our 2015-era identity tools were built for Susan in Accounts, not GPT-powered models consuming terabytes of data.

If I can predict this well – this gap is exactly where attackers will strike.

Third-party risks you can’t ignore

Relying on pre-trained models or external vendors without scrutiny is like hiring a stranger into your home without checking their background. Yet that’s what’s happening across the AI supply chain.

If you’re not asking:

·       Where was this model trained?

·       On what data?

·       How secure is the vendor’s environment?

— then you’re flying blind.

What’s at stake

The cost of inaction is more than financial. Businesses in what Accenture calls the “Exposed Zone” are:

  • 69% more likely to suffer AI-related attacks
  • 1.7x more likely to rack up technical debt
  • Delivering lower ROI on their AI investments

But the real currency at risk is trust. When AI systems are compromised, it’s not just data lost. It’s customer confidence and rebuilding that takes years, not months.

Where to start

The good news: this isn’t an impossible fix. Even modest investments reap big returns. A 10% uplift in security spend improves AI threat detection by 14%. That’s survival math, not luxury.

Here’s a pragmatic starting point:

Today

  • Inventory your AI systems. You can’t secure what you can’t see.
  • Encrypt sensitive AI data flows.
  • Appoint executive ownership – if nobody owns it, it won’t get fixed.

This Week

  • Review your cloud AI configurations. Are your buckets locked down?
  • Scrutinise your third-party AI services and vendors.
  • Begin mapping zero-trust style access controls tailored to AI.

This Month

  • Build a fit-for-purpose AI governance framework aligned to Australian compliance requirements.
  • Develop playbooks for adversarial testing and region-specific data laws.
  • Run simulations to test how resilient your AI systems really are.

Final thought

AI is powering the next chapter of Australian innovation, but it’s also exposing new fault lines. The businesses that treat AI security as optional will stumble. Those that bake it in from the start will not only stay safe but also enjoy higher returns, lower tech debt, and stronger customer trust.

The question is simple: Have you audited your AI lately with a cybersecurity lens?

If not, we are keen to have a conversation.

At Cyber Ethos, we help Boards, the CEO/CFO and business leaders cut through the noise and build cyber resilience that keeps pace with innovation. If your organisation is adopting AI, now is the time to get your governance and security posture right.

Reach out to us through the Contact US Form

Explore our services by visiting our website

Until we meet again. Stay Cybersafe !!

Kiran Kewalramani

Kiran Kewalramani

Kiran Kewalramani stands as an acclaimed technologist with over two decades of robust executive experience in technology, cybersecurity, data privacy and cloud solution enablement. His illustrious career has been marked by transformative roles in esteemed organizations, including Cyber Ethos, Queensland Department of Education, Gladstone Area Water Board, NSW Rural Fire Service, NSW Police Force, Telstra, American Express, and more.