Capability

AI and cloud security, and what automated tools miss

AI-integrated applications and cloud environments are where most new security work happens in 2026. They are also where automated tooling is most likely to give false confidence. A CSPM benchmark will flag an overprivileged role. It will not tell you that your LLM agent can read that role’s secrets and then use them.

AWS Certified Cloud Security Claude Code Specialist AI integration Azure GCP

AI security testing

What we test in LLM-integrated applications

  • Prompt injection: direct and indirect

    Adversarial input in user messages, uploaded files, retrieved documents, and tool outputs. We assume every input is hostile.

  • System prompt exfiltration

    Extraction of the hidden instructions shaping model behaviour, including API keys and internal context leaked into prompts.

  • Tool and function call abuse

    Where the LLM has agentic capability (function calling, database access, external API calls) we test whether the model can be driven to invoke tools outside its intended scope.

  • Data exfiltration via retrieval

    RAG pipelines and vector stores frequently contain data the requesting user should not see. We test row-level access and content filtering.

  • Classic web app surface around the model

    Authentication, authorisation, session management, and input handling still matter. Most AI products we test have at least one finding that has nothing to do with the model.

Case study

AI customer support agent: indirect prompt injection to support queue takeover

An Australian SaaS company had built a customer-facing support agent on top of a major LLM provider. The agent could read knowledge base articles, create support tickets, and escalate to a human agent when needed.

Critical

Indirect prompt injection via uploaded attachment

A user-uploaded PDF attachment containing instructions embedded in white text on a white background caused the agent to mass-escalate tickets, impersonate internal staff, and forward ticket contents to an attacker-controlled webhook. None of the inputs were flagged by content moderation.

High

Overprivileged tool scope

The agent’s function calling permissions included ticket modification across all tenants. The design assumed the model would only act on the current tenant’s tickets. The injection confirmed the assumption did not hold.

Outcome: Tool scope reduced to the current tenant. Deterministic guardrails around escalation actions. All agent actions now logged and replayable.

Cloud security

AWS, Azure and GCP environment reviews

Cloud penetration testing at Cyber Node is not a benchmark scan. We use CSPM output as a starting map, then test whether listed findings are actually exploitable, whether attack paths between them exist, and whether the blast radius is what the client thinks it is.

IAM review

Roles, policies, federation, service control policies, conditional access. We build the actual attack graph from a compromised identity, not a theoretical one.

Network perimeter

Security groups, NACLs, public egress, exposed load balancers, and misconfigured VPC peering. Attention to what is reachable that shouldn’t be.

Secrets and credentials

Where secrets are stored, how they are accessed, what rotates and what doesn’t. Frequently the shortest path to serious impact.

Data tier exposure

S3 bucket policies, RDS access, storage account misconfiguration, and logging blind spots that would prevent detection of real exploitation.

Questions we get

FAQ

Prompt injection is an attack where adversarial input supplied to an LLM-based application overrides or manipulates the system prompt, causing the model to ignore its instructions, disclose hidden context, or take unauthorised actions.

A cloud audit reviews configuration against a benchmark. A cloud penetration test attempts to exploit configuration weaknesses and prove what an attacker can actually reach and do with them.

Yes. AWS is the most common engagement but we also assess Azure and GCP environments. The fundamentals transfer between providers.

For most customer-owned resources, no notification is required. We confirm provider policies at scoping and arrange any required authorisation where relevant.

Scope an engagement

Test your AI product or cloud environment