Manual AI security testing. Australia.

AI security testing for production deployments

Manual penetration testing applied to LLM applications, RAG pipelines, and agentic systems that have moved past the demo stage and into a real workflow. AI security is where Cyber Node's penetration testing methodology meets the founder's chemical-engineering background in safety-critical systems. Cyber Node tests prompt injection, indirect prompt injection through ingested content, RAG leakage and tenant boundary breaks, agent tool-use abuse, model supply-chain risk, and training-data exposure. Aligned to the OWASP LLM Top 10 and MITRE ATLAS. Reports are written for engineers shipping the system and for the executives signing off the deployment.

Matt Breuillac, founder of Cyber Node

Led by

Matt Breuillac, MIEAust

One operator across every Cyber Node path. Chemical and process engineer turned cybersecurity specialist. Shell Prelude FLNG, Albemarle Kemerton lithium hydroxide, AREVA nuclear, Kazakhstan ISL uranium. Masters Chemical Engineering, EMBA, PMP, AWS Certified Security Specialty. Engineers Australia member.

Read Matt’s story

What we test

The agentic-AI threat surface, in practice

Prompt & context

Direct and indirect prompt injection

User-input prompt injection. Indirect injection through documents, URLs, emails, and tool responses ingested into the context window. System prompt extraction. Jailbreak resilience under realistic adversary effort.

RAG & data

Retrieval and tenant boundaries

RAG leakage across tenants and authorisation scopes. Document-level access bypass. Embedding-store poisoning. Source-document exfiltration through crafted queries. Training-data exposure where sensitive content can be coaxed back out.

Agents & tools

Tool-use abuse and chaining

Agent tool-use boundary breaks where the model can call APIs, browsers, or filesystem. Privilege escalation through tool chaining. Sandbox escape from code-execution tools. Indirect prompt injection that hijacks agent intent across multi-step workflows.

Why us

Engineering reasoning, applied to AI

Most AI consultancies will run a prompt audit. Few will reason about an agent the way an engineer reasons about a P&ID hazard analysis. Cyber Node's lead engineer combines AWS Security Specialty plus AWS Solutions Architect Associate with 15 years of safety-critical engineering across FLNG, lithium hydroxide, nuclear, and uranium operations. The same hazard-tree thinking that asks "what fails when the BPCS fails" gets applied to "what fails when the model fails closed, fails open, or fails confidently wrong".

  • Air-gapped and on-prem AI experience for regulated buyers
  • Aligned to OWASP LLM Top 10 and MITRE ATLAS
  • Reports written for the engineer shipping the system and the executive signing off the deployment
  • The same dataset behind /insights/ covers AI engagements where in-scope

Mis-routed

On the wrong page?

Have OT, ICS, or industrial control systems in scope, including agentic AI deployed inside the plant network? Start at OT and Industrial →

Need a compliance-driven penetration test of corporate IT and SaaS applications, with no AI in scope? Start at Cyber and Compliance →

Scope an engagement

Test the AI before it ships, not after

Tell us what model, what tools the agent calls, and what data it touches. We respond with a fixed-price proposal within 48 hours of the scoping call.