Web Scraping + Reverse Engineering APIs
Science & Technology
Web scraping 101! Dive into the world of web scraping with Scott and Wes as they explore everything from tooling setup and navigating protected routes to effective data management. In this Tasty Treat episode, you'll gain invaluable insights and techniques to scrape (almost) any website with ease.
Show Notes
00:00 Welcome to Syntax!
03:13 Brought to you by Sentry.io.
05:00 What is scraping?
08:01 Examples of past scrapers.
10:06 Cloud app downloader.
16:13 Other use cases.
16:58 Scraping 101.
17:28 Client Side.
19:08 Private API.
22:40 Server rendered.
23:27 Initial state.
24:57 What format is the data in?
27:08 Working with the DOM.
27:12 Linkedom npm package.
29:02 querySelector everything.
31:28 How to find the elements without classes.
34:08 Use XPath selectors to select by word.
34:53 Make them as flexible as you can. Classes change!
35:10 AI is good at this!
36:26 File downloading.
38:20 Working with protected routes.
40:41 Programmatically retrieve authentication keys because they are short-lived.
43:20 Deal-breakers.
44:58 What happened with Amazon?
46:42 Wes' portable refrigerator utopia.
47:25 Sick Picks & Shameless Plugs.
All links available at syntax.fm/763
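A minimal sketch of the "initial state" approach from the show notes (23:27): many server-rendered apps embed their data as JSON in a script tag, so you can often parse that directly instead of scraping the DOM. The `__NEXT_DATA__` id below is the Next.js convention; this is an illustrative helper, and other apps use different conventions such as `window.__INITIAL_STATE__`.

```javascript
// Sketch: pull embedded "initial state" JSON out of server-rendered HTML.
// Assumes the Next.js convention of a <script id="__NEXT_DATA__"> tag.

function extractNextData(html) {
  // Lazily capture everything between the script tag and its close.
  const match = html.match(
    /<script id="__NEXT_DATA__"[^>]*>([\s\S]*?)<\/script>/
  );
  return match ? JSON.parse(match[1]) : null;
}

// Usage with a tiny fake page:
const page = `<html><body>
  <script id="__NEXT_DATA__" type="application/json">{"props":{"pageProps":{"title":"Hello"}}}</script>
</body></html>`;

console.log(extractNextData(page).props.pageProps.title); // "Hello"
```

When the data isn't embedded, the fallback is the episode's other two routes: hitting the private API directly, or parsing the DOM with something like linkedom.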
------------------------------------------------------------------------------
Hit us up on Socials!
Scott: @stolinski
Wes: @wesbos
Randy: @randyrektor
Syntax: @syntaxfm
www.syntax.fm
Brought to you by Sentry.io
#webdevelopment #webdeveloper #javascript
Comments: 18
Finally a talk on web scraping! Good to see you again wesbos and scott!
Awesome! On the same line, I’d love an episode on reverse engineering scrambled or minified webapps 😏
@WesBos
27 days ago
good idea - I think there is also one on how to find objects of data in the JS heap
Love you both from Sri Lanka...🇱🇰 ❤
Awesome! I was using Puppeteer to scrape a site and converted it to pinging their API directly. So much faster, and no random errors when an element fails to load. Where would you host your scraping scripts that run every day, hour, or minute? I used a package to run it as a service on Windows.
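The scheduling question above can be sketched as a plain Node loop (the `job` callback standing in for whatever scrape you run). This is a minimal sketch, not a deployment recommendation; for hosting, common options are cron on a VPS, a scheduled GitHub Actions workflow, or a hosted cron service, rather than keeping a process alive yourself.

```javascript
// Sketch: run a scraping job on a fixed interval, swallowing per-run
// errors so one failed scrape doesn't kill the whole loop.
// maxRuns defaults to Infinity; it's here so the loop can be bounded.

async function runEvery(intervalMs, job, maxRuns = Infinity) {
  for (let run = 0; run < maxRuns; run++) {
    try {
      await job();
    } catch (err) {
      console.error('scrape failed:', err.message); // log and keep going
    }
    if (run < maxRuns - 1) {
      await new Promise((resolve) => setTimeout(resolve, intervalMs));
    }
  }
}

// e.g. runEvery(60 * 60 * 1000, scrapeSite) for an hourly scrape,
// where scrapeSite is your own async function.
```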
I never thought I'd hear XPath mentioned on a podcast. It's really too bad XML became a four-letter word. There were actually some cool things you could do with it that you can't do with JSON. It also has a DOM, for one thing.
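The "select by word" trick from the show notes (34:08) is one of those XPath abilities CSS selectors lack. A rough sketch, with the expression builder as plain JS and the browser-only `document.evaluate` call left commented (the tag and word here are just examples):

```javascript
// Sketch: build an XPath expression that matches elements by their text,
// something CSS selectors can't do. Useful when classes are unstable.

function xpathForText(tag, word) {
  // e.g. xpathForText('button', 'Download') -> //button[contains(text(), "Download")]
  return `//${tag}[contains(text(), "${word}")]`;
}

// In a browser (or any DOM exposing document.evaluate):
// const result = document.evaluate(
//   xpathForText('a', 'Download'),
//   document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null
// );
// result.singleNodeValue?.click();
```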
How would you alert if something was available? I want instant, attention-ambushing feedback if my scraper finds something. If I run a Cypress script headless to check a site for tickets, say, and it finds one, I want a desktop alert somehow. Browser alerts work if I run it manually, but if I schedule it on a Mac, then it runs in the background and I don't get any alerts.
Love this podcast and this episode, since I'm also a scraping OG / automation panda :) Side question: will the video format of the podcast ever pan into visual snapshots? When something like the console is mentioned, pan into a snapshot of it, or if a website is mentioned, show a screenshot of it, like Wes did once during this video. I know this would add more work during editing, but it would be extra coolness if it was included as a standard. Thanks, keep up the awesomeness 🎉👍
@jayfiled
26 days ago
Yeah, I jumped off the audio version and onto YouTube hoping to see something in action. But I think that would slow down the time to upload; CJ probably has something in the mix, no doubt.
Is there a course you recommend for this?
Lol I've been watching every episode since CJ joined and yet I'm not subscribed 😅 Time to change that
@WesBos
27 days ago
yeahhh buddy
Working on a scraper rn.
@jayfiled
26 days ago
Public repo? Link us up
@jayfiled
26 days ago
Oh it's you Scott, hahah. I had a rush of enthusiasm to work on it with a fellow listener but now I feel silly.
If someone scrapes for indexing and links to your site so people consume it there, I'm totally cool with it, but if someone scrapes to bypass the site, I'm not.
Ok