How to Trick Hackers & Web Crawlers with Spidertrap
Jump into Pay What You Can training for Active Defense & Cyber Deception -- at whatever cost makes sense for you! j-h.io/pwyc
🔥 YOUTUBE ALGORITHM ➡ Like, Comment, & Subscribe!
🙏 SUPPORT THE CHANNEL ➡ jh.live/patreon
🤝 SPONSOR THE CHANNEL ➡ jh.live/sponsor
🌎 FOLLOW ME EVERYWHERE ➡ jh.live/discord ↔ jh.live/twitter ↔ jh.live/linkedin ↔ jh.live/instagram ↔ jh.live/tiktok
💥 SEND ME MALWARE ➡ jh.live/malware
Comments: 56
I'm adding this to my honeypot.
The hacker tears😢
Love the simplicity of this script :D. I think this idea is like an April 1 gift for script kiddies. Nice content, by the way. :)
I'm still holding out for the
The PoC seems okay, but keep in mind that a crawler stuck in your domain might cause other issues, like an unintentional DoS, especially on low-resource machines.
Maybe bypassable by matching only pages that have a specific word in the page source code (you'll definitely find a word that appears on all real pages and not on the fake ones). It could also be bypassed by response-time filtering (fake pages load much faster).
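The comment's crawler-side bypass could be sketched as a simple heuristic. This is a hypothetical helper, not part of any real crawler; the `marker` string and `min_seconds` threshold are assumptions an attacker would tune per target:

```python
def looks_real(body: str, elapsed_seconds: float,
               marker: str = "site-footer",
               min_seconds: float = 0.05) -> bool:
    """Guess whether a fetched page is real or a Spidertrap-style fake.

    Trap pages are generated instantly and lack site-specific markup,
    so require both a marker string found on real pages and a
    plausible response time before following the page's links.
    """
    return marker in body and elapsed_seconds >= min_seconds
```

A crawler would call this after each fetch and skip link extraction on pages that fail the check, which is exactly why the delay suggested in the next comment would weaken the timing half of this heuristic.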
I think it would make sense to add a delay for the fake pages, such as 1 second to load, which would make crawling them way slower.
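The delay idea above could be sketched like this. This is a minimal illustration of a tarpit page generator, not Spidertrap's actual code; the function name and defaults are made up:

```python
import random
import string
import time

def fake_page(n_links: int = 10, delay_seconds: float = 1.0) -> str:
    """Build an HTML page of random dead-end links, sleeping first so
    each crawler request is slowed down (a classic tarpit)."""
    time.sleep(delay_seconds)
    links = []
    for _ in range(n_links):
        slug = "".join(random.choices(string.ascii_lowercase, k=8))
        links.append(f'<a href="/{slug}">{slug}</a>')
    return "<html><body>\n" + "\n".join(links) + "\n</body></html>"
```

Serving this from a request handler means a single-threaded crawler can only pull one page per `delay_seconds`, though as other comments note, each sleeping connection also ties up resources on the defending server.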
Thanks John, I really appreciate all your videos, highly informative.
This was neat, Thank you!
Cool one
I understand the idea, and I will investigate how much power my server needs to do this stuff.
Loving the content 🎉 good show
I had an idea, but too many ongoing projects right now. Basically, the idea was that if it detected someone crawling, it would start injecting hidden links to an endpoint that returns a Location header, sending malicious GET requests to internal IPs, e.g. known router exploits. I was thinking of making it a Flask module just as a fun project. I'm not sure about the legality of writing and publishing it anyway, since it would be illegal to deploy.
In Burp you can see the Page Length column and quickly notice that you're in a tool. Pages this light stand out, especially a page that contains only anchors.
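One way to blunt the page-length fingerprinting described above is to pad fake pages with a variable amount of filler so responses don't all share one suspicious size. This is a hypothetical sketch, not a Spidertrap feature; the word list and bounds are assumptions:

```python
import random

# Throwaway filler vocabulary; a real deployment might sample text
# scraped from the site's genuine pages instead.
WORDS = "lorem ipsum dolor sit amet consectetur adipiscing elit".split()

def filler(min_words: int = 50, max_words: int = 400) -> str:
    """Return a randomly sized run of filler words so that fake pages
    vary in length like real pages do."""
    n = random.randint(min_words, max_words)
    return " ".join(random.choice(WORDS) for _ in range(n))
```

Inserting the result into each generated page spreads response sizes across a range, so sorting by length in an intercepting proxy no longer cleanly separates trap pages from real ones.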
This may be cool for deterring indiscriminate site scrapers, but last week wget helped me grab a PHP script for a website service that slices images online, and I really needed to see how the tool was scripted. General website cloners struggle to retrieve most PHP files, but wget saved my day. Yay!
Great concept; however, would running Spidertrap not be open to abuse by an attacker via a DoS attack? Constantly creating new pages, multiplied by however many threads, would use up a lot of resources.
Good stuff
I assume wget actually has a "maximum" setting of some sort; dynamic vulnerability scanners like ZAP certainly do, precisely because of tools like Spidertrap (and also because of site designs that can produce loops the spider doesn't detect). (Never let beginning developers build a spider; there are just so many ways it can go wrong.)
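The commenter's assumption holds for GNU wget: recursive retrieval is depth-limited by default, and the limit and total download volume can both be set explicitly, so a link maze can only waste a bounded amount of time and disk:

```shell
# -r / --recursive enables recursion (depth is capped at 5 by default);
# -l / --level sets that cap explicitly, and -Q / --quota bounds the
# total bytes downloaded during a recursive run.
wget --recursive --level=3 --quota=10m http://example.com/
```

Spidertrap-style defenses therefore mostly punish crawlers that were configured (or written) without such limits.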
Thank you for the amazing content. Note: unfortunately, the On-Demand courses are not available as Pay-What-You-Can offerings, it says.
I wonder if Shodan crawlers would get stuck.