Katana recon pipeline
One-liner:

```bash
katana -u https://url -hl -nos -jc -silent -aff -kf all,robotstxt,sitemapxml -c 150 -fs fqdn | subjs | python3 ./jsa.py | goverview probe -N -c 500 | sort -u -t';' -k2,14 | cut -d ';' -f1
```

Running on WSL Kali:
```bash
katana -u https://url -hl -nos -jc -silent -aff -kf all,robotstxt,sitemapxml -c 150 -fs fqdn -sc | ~/go/bin/subjs | python ./JSA/jsa.py | ~/go/bin/goverview probe -N -c 1 | sort -u -t';' -k2,14
```

Steps:
- Katana
- gau (getallurls – includes Wayback)
- subjs
- trufflehog
- secretfinder
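subjs extracts JavaScript URLs from a URL list (and from the pages behind those URLs). As a rough offline illustration of the first part only, a grep filter can pull `.js` entries out of a URL list — this is not a replacement for subjs, which also fetches pages and parses script tags; the sample URLs are made up:

```shell
# Keep only URLs whose path ends in .js, optionally followed by a query string.
# Purely illustrative stand-in for part of what subjs does.
printf '%s\n' \
  'https://example.com/' \
  'https://example.com/static/app.js' \
  'https://example.com/bundle.js?v=3' \
  | grep -Ei '\.js(\?|$)'
```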
Install
- Install gau (getAllUrls)
- Install Katana
- https://github.com/projectdiscovery/katana
- On Kali via WSL I needed to install Chromium separately:
sudo apt install chromium
- Install subjs
- Install goverview
- Install secretfinder
- Add Go binaries to PATH
- Optional: install JSA
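Go installs binaries to `$HOME/go/bin` by default; putting that on PATH lets you drop the `~/go/bin/` prefixes used in the WSL one-liner above. A minimal sketch, assuming bash and the default Go layout (adjust if GOBIN/GOPATH are customized):

```shell
# Default Go binary location; adjust if GOBIN or GOPATH are set differently.
export PATH="$PATH:$HOME/go/bin"
# To persist for new shells, append the same line to ~/.bashrc, e.g.:
# echo 'export PATH="$PATH:$HOME/go/bin"' >> ~/.bashrc
```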
Scan
Create a file with the target domains (e.g., from Burp target, subfinder, ...):
```powershell
https://www.syslifters.com
https://handbook.syslifters.com
https://sysleaks.com
https://sysreptor.com
...
```

Scan:
```bash
# Get URLs from Wayback, etc. with gau
cat domains.txt | ../../go/bin/gau > urls.txt

# Get more URLs with a live crawl using katana
# I needed -sc to use the locally installed Chromium on WSL Kali
katana -list domains.txt -hl -nos -jc -kf all -c 150 -fs fqdn -sc >> urls.txt

# Deduplicate the URL list
awk '!seen[$0]++' urls.txt > dedupedurls.txt

# Find JS files (and further JS files referenced within them) via subjs,
# then filter to accessible files with goverview
cat dedupedurls.txt | subjs | goverview probe -N -c 500 | sort -u -t';' -k2,14 | cut -d ';' -f1 > goverview.txt

# Download the JS files
mkdir files
cd files
wget -i ../goverview.txt

# Scan the files for secrets with trufflehog (scan the mounted path inside the container)
sudo docker run --rm -it -v "$PWD:/pwd" trufflesecurity/trufflehog:latest filesystem /pwd

# Scan the files for secrets with SecretFinder
# I ran into issues here and had to remove JS files that caused SecretFinder to crash, then re-run.
python3 SecretFinder.py -i './*' -o ../secrets.html
```
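Two shell idioms in the script above are worth unpacking. `awk '!seen[$0]++'` drops repeated lines while preserving input order (unlike `sort -u`, which reorders), and `sort -u -t';' -k2,14` deduplicates goverview's semicolon-separated probe output on the response fields only, so different URLs that return identical responses collapse to one line before `cut` keeps just the URL. A small demonstration with made-up data (the semicolon fields are illustrative, not goverview's exact layout):

```shell
# Order-preserving dedup: the first occurrence of each line wins.
printf '%s\n' 'https://x.example/a' 'https://x.example/b' 'https://x.example/a' \
  | awk '!seen[$0]++'
# → https://x.example/a, then https://x.example/b

# Key-based dedup on fields 2+: two URLs with identical probe data collapse
# to one line, and cut keeps only field 1 (the URL). Sample lines are hypothetical.
printf '%s\n' \
  'https://x.example/app.js;200;1234;text/javascript' \
  'https://cdn.example/app.js;200;1234;text/javascript' \
  'https://x.example/missing.js;404;0;text/html' \
  | sort -u -t';' -k2,14 | cut -d ';' -f1
```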