Certified Web Application Pentester
Task 1
Methodology & Ideology
1. The Penetration Testing Lifecycle
+------------------+     +------------------+     +------------------+
| 1. Pre-Engagement| --> | 2. Reconnaissance| --> | 3. Scanning &    |
|    & Scoping     |     |    (OSINT)       |     |    Enumeration   |
+------------------+     +------------------+     +------------------+
                                                           |
+------------------+     +------------------+     +------------------+
|   6. Reporting   | <-- | 5. Post-         | <-- | 4. Exploitation  |
|                  |     |    Exploitation  |     |                  |
+------------------+     +------------------+     +------------------+
2. Phase 1: Pre-Engagement
Scope Definition
# Define clearly:
- Target domains and subdomains (*.target.com)
- IP ranges (10.0.0.0/24)
- Specific applications (app.target.com, api.target.com)
- Out-of-scope systems (payments.target.com, production DB)
- Testing window (dates and hours)
- Testing type (black box, white box, gray box)
- Contact information (emergency contacts, technical POCs)
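A scope list like the one above can be enforced mechanically before any tool is run against a host. A minimal bash sketch, where `scope.txt` and `in_scope` are illustrative names rather than standard tooling:

```shell
#!/usr/bin/env bash
# Illustrative in-scope guard: a host passes if it matches a line in
# scope.txt, either exactly or via a *.domain wildcard entry.

in_scope() {
  local host="$1" entry
  while IFS= read -r entry; do
    case "$entry" in
      \*.*)  # wildcard entry such as *.target.com
        [[ "$host" == *".${entry#\*.}" || "$host" == "${entry#\*.}" ]] && return 0 ;;
      *)     # exact host entry
        [[ "$host" == "$entry" ]] && return 0 ;;
    esac
  done < scope.txt
  return 1
}

# Example scope file matching the checklist above:
printf '%s\n' '*.target.com' 'api.target.com' > scope.txt

in_scope app.target.com      && echo "app.target.com: in scope"
in_scope payments.other.com  || echo "payments.other.com: OUT of scope"
```

Calling this guard before every scanner invocation is one cheap way to avoid the "going out of scope" mistake listed at the end of these notes.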
Rules of Engagement
# Document:
- Allowed testing techniques (automated scanning, manual testing)
- Disallowed actions (DoS, social engineering, physical access)
- Data handling requirements (PII, credentials found)
- Communication protocols (reporting critical findings immediately)
- Legal authorization (signed agreement, NDA)
- Credential provision (for authenticated testing)
3. Phase 2: Reconnaissance
Passive Recon (No direct interaction with target)
# DNS enumeration
dig target.com ANY   # ANY is often refused (RFC 8482); fall back to A, MX, TXT, NS
whois target.com

# Subdomain discovery
subfinder -d target.com -o subs.txt
amass enum -passive -d target.com -o amass_subs.txt

# Technology fingerprinting
# Wappalyzer, BuiltWith, Netcraft

# Google dorking
site:target.com filetype:pdf
site:target.com inurl:admin
site:target.com ext:sql | ext:env | ext:log

# Certificate transparency
curl -s "https://crt.sh/?q=%25.target.com&output=json" | jq -r '.[].name_value' | sort -u

# Wayback machine
waybackurls target.com | sort -u > wayback_urls.txt
gau target.com >> wayback_urls.txt
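Each passive source above writes its own list; merging and de-duplicating them before moving to active probing avoids scanning the same host twice. A small sketch with inline sample data standing in for real tool output:

```shell
# Illustrative inputs standing in for subfinder/amass output files:
printf '%s\n' app.target.com API.target.com  > subs.txt
printf '%s\n' api.target.com dev.target.com  > amass_subs.txt

# Merge, lowercase, and de-duplicate into a single candidate list:
cat subs.txt amass_subs.txt | tr 'A-Z' 'a-z' | sort -u > all_subs.txt

cat all_subs.txt
```

`all_subs.txt` is then the natural input for `httpx -l` in the active-recon step.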
Active Recon (Direct interaction with target)
# Port scanning
nmap -sV -sC -p- -oA nmap_full target.com
masscan -p1-65535 10.0.0.0/24 --rate=1000   # masscan takes IPs/CIDRs, not hostnames

# Web technology detection
whatweb https://target.com
httpx -l subs.txt -title -tech-detect -status-code -o httpx_results.txt

# Directory bruteforcing
gobuster dir -u https://target.com -w /usr/share/seclists/Discovery/Web-Content/common.txt
ffuf -u https://target.com/FUZZ -w /usr/share/wordlists/dirb/common.txt
4. Phase 3: Scanning & Enumeration
# Vulnerability scanning
nuclei -l live_hosts.txt -t nuclei-templates/ -o nuclei_results.txt
nikto -h https://target.com

# Parameter discovery
arjun -u https://target.com/page
# Burp Suite Param Miner

# API enumeration
# Check /swagger, /api-docs, /openapi.json, /graphql

# CMS-specific scanning
wpscan --url https://target.com -e ap,at,u
droopescan scan drupal -u https://target.com
joomscan -u https://target.com
5. Phase 4: Exploitation
# Prioritize based on:
# 1. Critical: RCE, authentication bypass, SQLi with data access
# 2. High: Stored XSS, SSRF to internal services, privilege escalation
# 3. Medium: Reflected XSS, CSRF, IDOR (read-only)
# 4. Low: Information disclosure, missing security headers

# For each vulnerability:
# 1. Verify the vulnerability manually
# 2. Determine the full impact
# 3. Document the exploitation steps
# 4. Capture evidence (screenshots, requests/responses)
# 5. Attempt to chain it with other vulnerabilities
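The severity ranking above can drive a simple triage order for a findings list. A minimal sketch, where `findings.txt`, `triaged.txt`, and `rank` are illustrative names and the findings are sample data:

```shell
#!/usr/bin/env bash
# Sort a "Severity:Title" findings list so Critical items are worked first.
rank() {
  case "$1" in
    Critical) echo 1 ;;
    High)     echo 2 ;;
    Medium)   echo 3 ;;
    Low)      echo 4 ;;
    *)        echo 9 ;;  # unknown severities sink to the bottom
  esac
}

printf '%s\n' 'Medium:IDOR (read-only)' 'Critical:SQLi with data access' \
  'Low:Missing headers' 'High:Stored XSS' > findings.txt

# Prefix each line with its numeric rank, sort, then strip the rank again.
while IFS=: read -r sev title; do
  printf '%s %s:%s\n' "$(rank "$sev")" "$sev" "$title"
done < findings.txt | sort -n | cut -d' ' -f2- > triaged.txt

cat triaged.txt
```

The same ordering also maps cleanly onto the findings-summary risk matrix in the report.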
6. Phase 5: Post-Exploitation
# After gaining access:
# - Identify what data is accessible
# - Determine the scope of compromise
# - Check for lateral movement opportunities
# - Document the business impact
#
# - DO NOT exfiltrate real sensitive data
# - DO NOT modify production data
# - DO NOT install persistent backdoors (unless explicitly authorized)
7. Phase 6: Reporting
# Report structure:
# 1. Executive Summary (for management)
# 2. Scope and Methodology
# 3. Findings Summary (risk matrix)
# 4. Detailed Findings
#    - Title and Severity (CVSS score)
#    - Description
#    - Impact
#    - Steps to Reproduce
#    - Evidence (screenshots, requests)
#    - Remediation Recommendation
# 5. Appendices (tools used, additional data)
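The per-finding fields listed above can be kept consistent with a reusable skeleton. A minimal markdown sketch, in which every bracketed value is a placeholder to fill in:

```markdown
## VULN-001: <Title>

**Severity:** <Critical | High | Medium | Low> (CVSS 3.1: <score> / <vector>)

### Description
<What the vulnerability is and where it lives.>

### Impact
<What an attacker gains; business impact.>

### Steps to Reproduce
1. <Request or action>
2. <Observed result>

### Evidence
<Screenshots, raw requests/responses.>

### Remediation
<Concrete fix recommendation.>
```

Filling this in as each finding is verified (rather than at report time) directly addresses the "not documenting findings as you go" mistake listed below.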
8. Testing Approaches
Black Box Testing
- No prior knowledge of the application
- Simulates an external attacker
- Focus: reconnaissance, discovery, exploitation
- Most realistic but most time-consuming
- Common in: external pentests, bug bounty
White Box Testing
- Full access to source code, architecture docs, credentials
- Simulates an insider or developer review
- Focus: code review, architectural analysis, comprehensive testing
- Most thorough coverage
- Common in: secure code review, compliance testing
Gray Box Testing
- Partial knowledge (user credentials, API docs, architecture overview)
- Simulates an authenticated user or partner
- Focus: authenticated testing, privilege escalation, business logic
- Best balance of coverage and efficiency
- Common in: internal pentests, API testing
9. Note-Taking and Organization
Recommended Tools
# CherryTree: hierarchical note-taking, great for pentests
# Obsidian:   Markdown-based, linking between notes
# Notion:     web-based, templates, collaboration
# Joplin:     open source, encrypted, syncing

# Structure your notes:
# /Project_Name
#   /Recon
#     subdomains.md
#     technologies.md
#     endpoints.md
#   /Vulnerabilities
#     vuln_001_sqli.md
#     vuln_002_xss.md
#   /Evidence
#     screenshots/
#     requests/
#   /Report
#     draft.md
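The directory layout above can be scaffolded in a couple of commands at the start of each engagement. A bash sketch, where `Project_Name` is a placeholder for the real engagement name:

```shell
#!/usr/bin/env bash
# Scaffold the note-taking skeleton described above (bash brace expansion).
proj="Project_Name"

mkdir -p "$proj"/{Recon,Vulnerabilities,Evidence/{screenshots,requests},Report}
touch "$proj"/Recon/{subdomains,technologies,endpoints}.md \
      "$proj"/Report/draft.md

# Show what was created:
find "$proj" | sort
```

Per-vulnerability files (`vuln_001_sqli.md`, ...) are then added under `/Vulnerabilities` as findings come in.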
Evidence Collection
# Terminal recording
script -a pentest_session.log
# or
asciinema rec pentest_session.cast

# Screenshots
# Use Flameshot, Greenshot, or the Snipping Tool
# Always include: URL bar, timestamp, relevant response

# Burp Suite evidence
# Right-click request → Copy to file
# Right-click → Save item
# Export Proxy history as XML

# Video recording for complex chains
# OBS Studio for screen recording
10. Common Mistakes to Avoid
1. Testing without authorization (legal issues)
2. Going out of scope (testing unauthorized systems)
3. Not documenting findings as you go (forgetting details)
4. Spending too much time on a single vulnerability
5. Ignoring business logic flaws (focusing only on technical vulns)
6. Not testing with different user roles
7. Skipping authenticated testing
8. Running aggressive scans without permission
9. Not verifying automated scanner results manually
10. Writing reports that lack reproduction steps
11. Forgetting to test API endpoints
12. Ignoring client-side vulnerabilities
13. Not checking for default credentials
14. Overlooking HTTP security headers
15. Not testing error handling and edge cases