Case studies

Manual penetration testing, and what we actually find

Automated scanners are the cheapest part of any security assessment. The expensive part, and the part that finds the vulnerabilities that actually matter, is the human tester running hypotheses against your application and chasing down what doesn’t add up.

This page describes how Cyber Node runs manual engagements, what we typically find, and three anonymised case studies drawn from real work with Australian clients.

Web apps · APIs · Cloud · Internal networks

Three years on the frontline · May 2024 — December 2025

What Cyber Node actually found inside Australian businesses

Across 18 manual engagements in 15 sectors, every single one produced findings. Not one organisation came out clean.

01 / headline
100%

Engagements that produced findings

18 of 18, no clean sheets

02 / engagements
18

Manual engagements delivered

2024, 4 · 2025, 14

03 / volume
159

Distinct vulnerabilities logged

8.8 average per engagement

04 / severity
39%

Had Critical or High-risk findings

7 of 18 carried serious exposure

05 / breadth
15

Distinct sectors tested

Banking to aged care, EdTech to utilities

2 Critical · 21 High · 49 Medium · 66 Low · 21 Informational

The shape of risk

Highs are concentrated. Mediums and Lows compound.

The severity distribution looks reassuring at first glance: 2 Criticals and 21 Highs against 49 Mediums, 66 Lows and 21 Informationals. That picture is misleading in two ways.

Highs and Criticals are concentrated. The 23 most serious findings landed in just 7 engagements. When they hit, they hit hard. The worst single engagement in this dataset logged unauthenticated cross-tenant data access, SSRF, 2FA bypass, and an unauthenticated sensitive API endpoint, all in one product.

Mediums and Lows compound. A 2025 FinTech engagement had zero Highs on paper, yet its seven net-new findings stacked into a High cumulative risk rating. Individually low-impact issues become a real attack path when chained. Prior tests don’t immunise future releases.

Patterns we see everywhere

Six vulnerability classes recur across every sector

Across all 18 engagements, the same vulnerability classes keep surfacing — independent of sector, company size, or technology stack. Knowing these is the difference between a clean report and a real one.

  1. 01

    Authentication and session management gaps

    JWTs that survive logout. Password updates that don’t invalidate sessions. Brute-forceable login flows. Forgot-password endpoints leaking user existence. 2FA bypasses. OWASP Top 10 basics, present in nearly every engagement.

  2. 02

    Broken access control and IDOR

    Unauthenticated cross-tenant data access. Authorisation bypass via direct HTTP requests. Deleted records still retrievable. Unauthenticated user delete and update endpoints. The single Critical in this dataset belongs to this class.

  3. 03

    Legacy crypto and protocols

    TLS 1.0 and 1.1. SMBv1. SMB signing disabled. CBC ciphers vulnerable to Lucky13 and SWEET32. SNMP public communities. Standards retired years ago, still running in production.

  4. 04

    Outdated software and missing patches

    Vulnerable Umbraco. Vulnerable WordPress core and Avada theme. Outdated Lighttpd on a medical device. Outdated ESXi, OpenSSH and Dropbear inside a government facility. Debian 11 hardening gaps on an industrial ERP.

  5. 05

    Exposed administrative and management surfaces

    Public WordPress admin pages. Exposed Umbraco admin login. VPN web portals on the open internet. Telephony servers on plain HTTP. Telnet still listening on a production IoT device.

  6. 06

    File upload and input handling

    Unrestricted file upload chained to stored XSS. HTML injection in transactional email. Reflected input in error pages. The kind of issue that slips past a scanner because it only matters in the application’s own logic.

Five engagements that show what’s at stake

The stories the numbers tell

Anonymised. Every detail below is drawn from a real Cyber Node engagement delivered between 2024 and 2025.

Medical Device · MedTech

A management portal on HTTP port 80 with no authentication at all

Any user on the network could record video from the device. Patient data, one unauthenticated HTTP request away. Telnet was also listening on the same device, and Lighttpd was several years out of date.

State-owned Utility · Critical Infrastructure

Two High-risk issues in a customer-facing portal

A grey-box web application test surfaced sensitive information exposure to unauthenticated actors, and an authorisation bypass via direct HTTP requests. Critical infrastructure category, real exposure, remediated within days of the draft report.

Government Facility · Internal Network

Chained hardening gaps to domain admin

Internal engagement: PrinterBug and PetitPotam coercion paths, default HPE switch credentials, outdated VMware ESXi and OpenSSH, anonymous FTP, SMB signing disabled, SNMP public community. Individually, hardening gaps. Chained, domain admin in under two hours.

Real Estate · Web Application

18 findings, six High-risk, one crafted request from the client database

A 3CX telephony server exposed over plain HTTP. Unauthenticated user delete and update endpoints. Outdated Umbraco. Exposed admin login. The client database — buyers, sellers, property records — sat behind configuration that had never been reviewed.

AI FinTech · SaaS

One Critical, four Highs, a reportable breach the moment it’s exploited

Unauthenticated cross-tenant data access. Unrestricted file upload chained to stored XSS. Server-side request forgery. 2FA bypass. An unauthenticated sensitive API endpoint. In a regulated finance context, this is an NDB-reportable incident the moment an attacker finds it first.

Why this matters

Two things are simultaneously true about Australian cyber security

The regulated core — banks, insurers, large utilities, listed critical infrastructure, Commonwealth agencies — is being dragged toward continuous testing by APRA, the SOCI Act, the Essential Eight, and the TGA. They already know they need pen tests. Most are getting them.

Outside that core, the vast majority of Australian businesses have never had a manual penetration test performed against their systems. The reasons are always the same: “we’ve got antivirus and a firewall”, “we’re too small to be a target”, “we’ll do it next financial year”, “our developers are careful”.

What this dataset says, clearly and repeatedly across every sector: none of those assumptions survive first contact with a skilled human attacker. 100% of the environments we look at have findings. 39% have findings serious enough to hurt the business. The question is not whether the vulnerabilities exist in your systems. The question is whether you find them first, or an attacker does.

Methodology

How an engagement actually runs

  1. 1

    Scoping

    A short call to understand what you run, what you care about, and what drives the engagement. Compliance framing, real attack concerns, or both.

  2. 2

    Reconnaissance

    Mapping the attack surface before any active testing. DNS, subdomains, technology fingerprinting, authentication flows, public data leakage.

  3. 3

    Manual exploitation

    Hypothesis-driven testing of authentication, authorisation, input handling, business logic and server configuration. Scanners run in support, not as the main activity.

  4. 4

    Reporting

    Findings rated by real-world impact with written remediation guidance. Executive summary for the board. Technical detail for the engineer who will fix it.

  5. 5

    Retest

    A free retest on all findings within 60 days of the final report to confirm remediation has worked.

Case study 01

Australian SaaS platform: chained IDOR to full tenant takeover

A mid-market SaaS product had been tested twice before by other firms. Both prior reports were scanner output with a cover page. The client asked us to look again with fresh eyes.

Critical

Tenant isolation bypass

A low-severity IDOR on a reporting endpoint, combined with a session fixation issue on account switching, allowed a malicious tenant admin to read data belonging to other tenants sharing the same cluster. Neither finding alone scored above medium. Chained, they broke the product’s core security promise.
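The session fixation half of this chain has a standard remediation: rotate the session identifier on every login or account-context switch, so an identifier an attacker planted beforehand is worthless afterwards. A sketch under assumed names (the `SessionStore` class here is illustrative, not the client’s implementation):

```python
import secrets

class SessionStore:
    """Minimal server-side session store."""

    def __init__(self) -> None:
        self._sessions: dict[str, dict] = {}

    def create(self, data: dict) -> str:
        # Session IDs come from a CSPRNG, never from client input.
        sid = secrets.token_urlsafe(32)
        self._sessions[sid] = data
        return sid

    def rotate(self, old_sid: str) -> str:
        # On login or account switch, destroy the old identifier and
        # issue a fresh one: a fixated ID dies at the privilege boundary.
        data = self._sessions.pop(old_sid)
        return self.create(data)
```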

High

Weak password reset token entropy

Password reset tokens were derived from timestamp plus user ID. Predictable within a small window. Exploitable without authentication.
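The remediation pattern here is standard: reset tokens must come from a cryptographically secure source and only a hash should be stored. A stdlib-only sketch of the pattern (illustrative, not the client’s actual fix):

```python
import hashlib
import secrets

def issue_reset_token() -> tuple[str, str]:
    """Return (token_for_email, hash_for_storage)."""
    # 32 bytes from the OS CSPRNG: unguessable, unlike a token
    # derived from timestamp plus user ID.
    token = secrets.token_urlsafe(32)
    # Store only a hash, so a database leak does not expose live tokens.
    return token, hashlib.sha256(token.encode()).hexdigest()

def verify_reset_token(presented: str, stored_hash: str) -> bool:
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return secrets.compare_digest(candidate, stored_hash)
```

Short expiry and single use round out the control, but unguessability is the property the original tokens lacked.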

Outcome: Both issues remediated within 72 hours. Retest confirmed the fix. The client now runs a Cyber Node engagement every major release cycle.

Case study 02

FinTech API: authorisation bypass in a compliance-tested product

A licensed FinTech operator preparing for APRA CPS 234 evidence collection. The product had passed three compliance audits. The scope was the customer-facing REST API.

Critical

BOLA on transaction history endpoint

Authenticated users could retrieve transaction history belonging to any other customer by modifying a single path parameter. No additional authorisation check beyond authentication. A finding invisible to automated scanners that don’t model the business.

High

JWT signature verification disabled in staging path

A code path intended for local testing had been promoted to production by a shared configuration file. An unsigned JWT was accepted as valid on one rarely-called endpoint.
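The underlying control is simple to state: pin the expected algorithm and key, and never let the token’s own header decide how it is verified, so an unsigned or `"alg": "none"` token is always rejected. A stdlib-only HS256 verification sketch (illustrative, not the operator’s code):

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(part: str) -> bytes:
    # JWT uses unpadded base64url; restore padding before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def verify_hs256(token: str, key: bytes) -> dict:
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    # Pin the algorithm: do not trust the header to choose it, and
    # never accept "none" or an empty signature.
    if header.get("alg") != "HS256":
        raise ValueError("unexpected algorithm")
    expected = hmac.new(key, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload_b64))
```

The configuration lesson from this engagement is the same in any library: the verification mode must be set per environment, with production failing closed.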

Outcome: Emergency patch shipped the same day. Code review of the surrounding logic. CPS 234 evidence submission delayed by two weeks and rebuilt around the revised control.

Case study 03

Professional services firm: internal network compromise via forgotten print server

An internal network penetration test for a Perth professional services firm with about 200 staff. Grey-box engagement with VPN access as a standard user.

Critical

Domain admin via print server

An unpatched print server running end-of-life firmware had been forgotten on the network. Exploitation chained through local privilege escalation, credential dumping, and a cached domain admin token. Full domain compromise in under two hours.

Outcome: Decommission of legacy print infrastructure. Credential rotation across privileged accounts. LAPS rollout for local admin management. Annual internal test now standard.

Questions we get

FAQ

What does an engagement include?

Scoping, reconnaissance, manual exploitation attempts against identified vulnerabilities, a written report with findings rated by real-world impact, and a free retest on all findings within 60 days.

How long does an engagement take?

Most engagements run between one and three weeks of active testing plus reporting. A typical web application test takes two weeks from kickoff to draft report.

Do you offer black-, grey- and white-box testing?

Yes. We recommend grey box for most engagements because it delivers the most useful findings per dollar. Black box is appropriate for external attack surface only. White box is best when the goal is a thorough code-informed assessment.

Do you retest after remediation?

Yes, and the retest is free within 60 days of the final report. Additional retests beyond that window are quoted separately.

Can the report be used as compliance evidence?

Reports have been used as evidence for PCI DSS, SOC 2, ISO 27001, APRA CPS 234 and Essential 8 maturity assessments. We can align scope and reporting to the specific framework driving your engagement.

Scope an engagement

Tell us what you need tested