Compare

How to tell a real penetration test from a scanner with a cover page

Two penetration testing quotes that look identical on the cover page can be wildly different engagements underneath. One is a senior tester running hypotheses against your application for a week. The other is an automated scanner producing a PDF in fifteen minutes, with someone editing the cover. Both invoice for the same amount. This page describes the five differences a buyer can spot before signing.

The five differences

None of them are on the cover page

Compliance frameworks, insurers, and auditors distinguish manual penetration testing from automated scanning by what is in the engagement, not what is on the document. Here are the five places the difference shows up.

// 1. Methodology

Real penetration test

Hypothesis-driven manual testing. A senior tester forms a theory about how the system breaks, attempts exploitation, and chases what does not add up. Tools like Burp, BloodHound, and custom scripts are inputs to the human's reasoning, not the report itself. Findings include chained exploits where two low-severity issues combine into a critical.
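The chaining point is worth making concrete. Below is a minimal hypothetical sketch (all names, functions, and data invented for illustration, not taken from any engagement) of how two issues a scanner would rate low-severity combine into a critical account takeover:

```python
# Hypothetical sketch: two "low" findings chaining into a critical.
# A scanner rates each in isolation; a human tester chains them.
import hashlib

USERS = {"alice": "alice@example.com"}

def user_exists(username):
    # Low-severity finding #1: the login endpoint leaks whether a
    # username is valid (e.g. via different error messages).
    return username in USERS

def reset_token(username):
    # Low-severity finding #2: the password-reset token is derived
    # deterministically from the username instead of being random.
    return hashlib.md5(username.encode()).hexdigest()

# Chained: an attacker enumerates a valid user, then computes that
# user's reset token offline. Two lows become a critical.
if user_exists("alice"):
    forged_token = reset_token("alice")
```

Neither finding alone triggers a CVE signature; the critical impact exists only in the combination, which is exactly what hypothesis-driven testing looks for.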

Scanner with a cover page

Automated signature-based scan against a public CVE list. No interpretation, no chaining, no business-logic awareness. The scanner output IS the report. Cover page added. AI-powered platforms add nothing meaningful at the methodology layer; they rephrase the same scanner output in fluent prose.

// 2. Who runs it

Real penetration test

A named senior tester with verifiable certification: OSCP, CREST CCT, GIAC GPEN, or equivalent. The tester is identified in the report and accountable from kick-off through the 60-day retest. You can ask for their CV, certifications, and prior engagement examples, anonymised.

Scanner with a cover page

Often a junior staff member or, increasingly, no human at all. Reseller GRC firms commonly subcontract to platforms that automate the engagement end to end. The cover page may be signed by a Director who has never seen the system being tested.

// 3. What it finds

Real penetration test

Findings a scanner cannot detect: business logic flaws, IDOR, authentication bypasses, race conditions, chained exploits, RCE through input handling, custom payload paths. The recurring patterns across Cyber Node's 54 manual engagements: auth and session gaps, broken access control, legacy crypto, end-of-life software, exposed admin surfaces, and unsafe input handling. None of those six pattern classes is reliably surfaced by automated scanning alone.
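To make one of those classes concrete, here is a minimal hypothetical IDOR (insecure direct object reference) sketch, with route logic and data invented for illustration. The vulnerable handler returns a perfectly well-formed response, so nothing matches a scanner signature; only a tester asking "what happens if I request someone else's ID?" surfaces it:

```python
# Hypothetical IDOR sketch. All identifiers and data are invented.

INVOICES = {
    101: {"owner": "alice", "total": 4200},
    102: {"owner": "bob",   "total": 1800},
}

def get_invoice_vulnerable(session_user, invoice_id):
    # Vulnerable: returns whatever invoice the ID points at,
    # never checking ownership. A scanner sees a valid 200
    # response and flags nothing.
    return INVOICES.get(invoice_id)

def get_invoice_fixed(session_user, invoice_id):
    # Fixed: the object is released only to its owner;
    # anything else behaves as if the invoice does not exist.
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != session_user:
        return None  # a real app would return HTTP 403 or 404
    return invoice

# Alice guesses the next sequential ID and requests Bob's invoice.
print(get_invoice_vulnerable("alice", 102))  # leaks Bob's data
print(get_invoice_fixed("alice", 102))       # None
```

The bug is pure authorisation logic: no outdated version, no missing header, no CVE to match.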

Scanner with a cover page

Public CVE matches. Configuration patterns the scanner's signature library recognises. Outdated software versions. Missing security headers. The scanner cannot reason about why a finding matters in the tested environment, so impact is rated by raw CVSS rather than real-world exploitability. Anything novel, chained, or context-dependent is invisible to the tool.

// 4. What it satisfies

Real penetration test

PCI DSS Requirement 11.4 mandates manual penetration testing of the cardholder data environment, separate from the vulnerability scanning required under 11.3. APRA CPS 234 paragraph 27 requires a systematic testing programme. ISO 27001:2022 controls A.8.8 and A.8.29, and SOC 2 criteria CC4.1 and CC7.1, call for the same evidence. Cyber insurance underwriters make it a prerequisite for higher-tier policies. A real engagement satisfies all of these, and an auditor who reads the methodology section recognises it as manual.

Scanner with a cover page

None of the above when read carefully. PCI DSS is explicit that 11.4 cannot be satisfied by a scan. APRA expects testing that reflects active adversary behaviour. Cyber insurance claim adjusters who read your scanner-output report after an incident will note that you bought what looked like a pen test and got a vulnerability scan, which is not the same control. The policy you thought you had may not respond.

// 5. Sample report

Real penetration test

Tester named on the cover page. Methodology section names specific techniques: BloodHound enumeration, Burp manual fuzzing, custom payload chaining, IEC 62443-aligned OT pivoting where relevant. Each finding includes evidence (screenshots, request and response pairs), an exploitation path showing how the tester moved from initial access to impact, real-world impact rating in the tested environment, and remediation guidance written for an engineer who has to fix the issue. A retest section is included or scheduled.

Scanner with a cover page

Tool screenshots. CVSS scores copied verbatim from the scanner. Generic remediation advice ("upgrade to latest version", "apply security patches"). No exploit chains. No business context. The "Executive Summary" is one paragraph of generic prose that could apply to any organisation in any industry.

Three questions

Ask these of any vendor with a competing quote

If you are evaluating a quote, three questions will tell you almost everything. The vendor will either answer with specifics, evade them, or contradict themselves under pressure. Any of those outcomes is your answer.

  1. Who runs the test?

    Ask for the tester's name and CV. A real engagement has one named senior tester accountable from kick-off through retest. If the answer is "we have a team of consultants" without specifics, the engagement is being subcontracted or automated.

  2. What is their OSCP, CREST, or equivalent certification number?

    Ask for a verifiable certification number. The relevant credentials in 2026 are OSCP (Offensive Security), CREST CCT (recognised globally), GIAC GPEN (SANS), and equivalent Australian-recognised offensive security credentials. A consultant who actually runs penetration tests holds at least one. If the answer is vague, you are talking to a salesperson who will subcontract the work to someone you have not vetted.

  3. Does the methodology section of the sample report name the tools or the tester?

    If the methodology lists "Nessus, Qualys, Acunetix" and stops there, the engagement is a scanner. If it names techniques (BloodHound enumeration, manual fuzzing, custom payload chaining, business-logic exploitation), the engagement is real. The methodology section is the single most diagnostic part of any pen test report. Read it carefully.

Why it matters

Three concrete consequences of buying the wrong thing

The compliance audit fails

Your QSA, ISO certification body, or SOC 2 CPA reads the methodology section and recognises it as automated scanning, not manual testing. The framework specifically distinguishes the two. You either pay again for a real engagement, or lose the certification cycle and the customer contracts that depend on it.

The insurance claim gets denied

After an incident, your cyber insurer's claim adjuster reviews the testing evidence you submitted at policy renewal. A scanner output filed as a manual pen test gives the insurer grounds to deny the claim: the policy you thought responded to ransomware does not, because the underwriting prerequisite was never actually satisfied. More on cyber insurance prerequisites.

The real vulnerabilities stay live

A scanner-only engagement does not find the chained exploits that ransomware operators and initial-access brokers use in 2026. The "clean report" you got is not a clean environment. The first time you find out is when the SOC alerts at 3am, or when your finance team pays a fraudulent invoice routed through an account-takeover the scanner could not detect.

Cyber Node commitment

What you get when you scope an engagement with us

  • A named senior tester from kick-off through retest

    One human, accountable. CV available before contract.

  • Manual testing methodology disclosed in the report

    Named techniques. Findings rated by real-world impact in your environment, not raw CVSS.

  • Audit-ready reporting

    Formatted for QSAs, SOC 2 auditors, ISO certification bodies, and APRA-regulated entities.

  • Free retest within 60 days

    Validates that fixes hold against the original attack chains.

  • 100% find rate across 54 manual engagements

    May 2024 to December 2025. Every engagement produces findings worth fixing. See the dataset.

Scope a real engagement

Free scoping call. We tell you up front what your money buys.