  1. DNS
    1. A Records
      1. Nslookup
        1. export TARGET="app.com"
        2. nslookup $TARGET
      2. dig
        1. dig app.com @<nameserver/IP>
    2. A Records for a Subdomain
      1. Nslookup
        1. export TARGET=sub.app.com
        2. nslookup -query=A $TARGET
      2. dig
        1. dig a sub.app.com @<nameserver/IP>
    3. PTR Records for an IP Address
      1. Nslookup
        1. nslookup -query=PTR <ip address>
      2. dig
        1. dig -x <ip address> @<nameserver/IP>
    4. ANY Existing Records
      1. Nslookup
        1. export TARGET="app.com"
        2. nslookup -query=ANY $TARGET
      2. dig
        1. dig any app.com @<nameserver/IP>
    5. TXT Records
      1. Nslookup
        1. export TARGET="app.com"
        2. nslookup -query=TXT $TARGET
      2. dig
        1. dig txt app.com @<nameserver/IP>
    6. MX Records
      1. Nslookup
        1. export TARGET="app.com"
        2. nslookup -query=MX $TARGET
      2. dig
        1. dig mx app.com @<nameserver/IP>
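    7. Multiple Record Types
      1. optional convenience loop to query several record types in one pass (a minimal sketch assuming bash, dig, and the $TARGET variable set above)
      2. for t in A AAAA MX NS TXT SOA; do echo "== ${t} =="; dig +noall +answer "${TARGET}" "${t}" @<nameserver/IP>; done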
  2. WHOIS
    1. Online
      1. https://whois.domaintools.com/
    2. Linux
      1. export TARGET="app.com"
      2. whois $TARGET
    3. Windows
      1. whois.exe app.com
  3. Subdomain
    1. PASSIVE
      1. VirusTotal
        1. "Relations" tab
      2. Certificates
        1. Online
          1. https://censys.io
          2. https://crt.sh/
        2. Command Line
          1. export TARGET="app.com"
          2. curl -s "https://crt.sh/?q=${TARGET}&output=json" | jq -r '.[] | "\(.name_value)\n\(.common_name)"' | sort -u > "${TARGET}_crt.sh.txt"
          3. head -n20 app.com_crt.sh.txt
      3. Automation
        1. TheHarvester
          1. gathering information from the data sources listed in sources.txt (an example sources.txt is shown at the end of this list)
          2. export TARGET="facebook.com"
          3. cat sources.txt | while read source; do theHarvester -d "${TARGET}" -b $source -f "${source}_${TARGET}";done
          4. extract all the subdomains found and sort them
          5. cat *.json | jq -r '.hosts[]' 2>/dev/null | cut -d':' -f 1 | sort -u > "${TARGET}_theHarvester.txt"
          6. merge all the passive reconnaissance files
          7. cat facebook.com_*.txt | sort -u > facebook.com_subdomains_passive.txt
          8. cat facebook.com_subdomains_passive.txt | wc -l
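          9. sources.txt holds one source name per line; a possible starting set (available source names vary by theHarvester version)
          10. printf "baidu\nbufferoverun\ncrtsh\nhackertarget\notx\nrapiddns\nsublist3r\nurlscan\nvirustotal\nzoomeye\n" > sources.txt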
    2. ACTIVE
      1. ZoneTransfers
        1. Online
          1. https://hackertarget.com/zone-transfer/
        2. Command Line
          1. Identifying Nameservers
          2. nslookup -type=NS <domain name>
          3. Testing for ANY and AXFR Zone Transfer
          4. nslookup -type=any -query=AXFR <domain name> <nameserver>
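          5. dig equivalent for requesting a full zone transfer from a given nameserver
          6. dig axfr <domain name> @<nameserver/IP>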
      2. Gobuster
        1. export TARGET="facebook.com"
        2. export NS="d.ns.facebook.com"
        3. export WORDLIST="numbers.txt"
        4. gobuster dns -q -r "${NS}" -d "${TARGET}" -w "${WORDLIST}" -p ./patterns.txt -o "gobuster_${TARGET}.txt"
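        5. patterns.txt contains one pattern per line and gobuster substitutes each {GOBUSTER} placeholder with the current wordlist entry; a minimal sketch (adapt the patterns to the naming scheme seen on the target)
        6. printf "{GOBUSTER}\nmail-{GOBUSTER}\n{GOBUSTER}-api\n" > patterns.txt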
  4. Infrastructure
    1. PASSIVE
      1. Netcraft
        1. https://sitereport.netcraft.com
      2. Wayback Machine
        1. http://web.archive.org/
        2. waybackurls
          1. waybackurls -dates https://facebook.com > waybackurls.txt
          2. cat waybackurls.txt
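          3. if the tool is not installed, it can typically be fetched with a Go toolchain (assumes Go is installed and $GOPATH/bin is in PATH)
          4. go install github.com/tomnomnom/waybackurls@latest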
    2. ACTIVE
      1. HTTP Headers
        1. identify the web server version from the response headers
        2. curl -I http://${TARGET}
      2. WhatWeb tool
        1. recognizes web technologies
        2. whatweb -a3 https://www.facebook.com -v
      3. Wappalyzer 
        1. identifies what websites are built with
        2. https://www.wappalyzer.com/
      4. WafW00f Tool
        1. sends requests and analyses the responses to determine whether a web application firewall (WAF) is in place
        2. -a to check all possible WAFs in place instead of stopping scanning at the first match
        3. -i flag to read targets from an input file
        4. -p option to proxy the requests
        5. wafw00f -v https://www.tesla.com
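        6. the flags can be combined, e.g. checking every host from a list through a local proxy (targets.txt and the proxy address are placeholders)
        7. wafw00f -a -i targets.txt -p http://127.0.0.1:8080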
      5. Aquatone Tool
        1. provides an overview of the HTTP-based attack surface by taking screenshots of discovered web services
        2. reads a list of hosts from stdin (here facebook_aquatone.txt, the previously gathered subdomains)
        3. cat facebook_aquatone.txt | aquatone -out ./aquatone -screenshot-timeout 1000
        4. the results are summarised in a file called aquatone_report.html inside the output directory
  5. VHost
    1. test subdomains that resolve to the same IP address; they can either be virtual hosts on one server or separate servers
    2. Manual
      1. if a web server has been identified
        1. make a cURL request to it, sending a previously identified domain in the Host header
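          1. e.g., requesting the server's IP directly while supplying a known domain (app.com stands in for any domain found earlier)
          2. curl -s http://targetIP -H "Host: app.com"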
        2. vHost Fuzzing using a dictionary file of possible vhost names
          1. cat /opt/useful/SecLists/Discovery/DNS/namelist.txt | while read vhost;do echo -e "\n********\nFUZZING: ${vhost}\n********";curl -s -I http://targetIP -H "Host: ${vhost}.randomtarget.com" | grep "Content-Length: ";done
        3. if a virtual host has been successfully identified
          1. curl -s http://targetIP -H "Host: vhost.randomtarget.com"
    3. Automatic
      1. Using ffuf
        1. ffuf -w /opt/useful/SecLists/Discovery/DNS/namelist.txt -u http://targetIP -H "Host: FUZZ.randomtarget.com" -fs xxx
        2. replace xxx with the response size (Content-Length) of the default response so that false positives are filtered out
  6. Crawling
    1. find as many pages and subdirectories of a website as possible
      1. Using ZAP
        1. built-in Fuzzer and Manual Request Editor
          1. enter the website URL in the address bar and add it to the scope
          2. then use the Spider submenu
      2. Using FFuF
        1. ffuf -recursion -recursion-depth 1 -u http://targetIP/FUZZ -w /opt/useful/SecLists/Discovery/Web-Content/raft-small-directories-lowercase.txt
      3. Sensitive Information Disclosure
        1. find backup or unreferenced files that may contain important information or credentials
          1. create a file with the folder names found so far
          2. save it as folders.txt
          3. use CeWL to extract keywords from the website
          4. instruct the tool to use a minimum word length of 5 characters (-m5) and to convert the words to lowercase (--lowercase)
          5. cewl -m5 --lowercase -w wordlist.txt http://targetIP
          6. combine folders.txt, wordlist.txt, and extensions.txt in ffuf (an example extensions.txt is shown at the end of this list)
          7. ffuf -w ./folders.txt:FOLDERS,./wordlist.txt:WORDLIST,./extensions.txt:EXTENSIONS -u http://targetIP/FOLDERS/WORDLISTEXTENSIONS
          8. ex: curl http://targetIP/wp-content/secret~
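          9. extensions.txt lists the suffixes to try, one per line; a possible starting set (adapt it to the technologies identified earlier)
          10. printf "~\n.bak\n.old\n.txt\n.php\n" > extensions.txt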