Selenium in Google Colab Tutorial For Beginners: Web scraping To Google Sheets

Science & Technology

Comments: 28

  • @LeitordeMapa
    2 months ago

    Perfect, you saved my day!

  • @ScrapingNinja
    2 months ago

    Glad I could help! Don't forget to subscribe.

  • @LeitordeMapa
    2 months ago

    @@ScrapingNinja You earned it, for sure

  • @aidenstyle8604
    2 months ago

    I am from Thailand. I love you, Guy

  • @ScrapingNinja
    2 months ago

    Thank you so much dear.. ❤️

  • @mariana_anst
    1 month ago

    Unfortunately I couldn't get past the step at 5:18; the command driver = web_driver() didn't work. I'm giving up on trying to use Selenium on Colab. VS Code has been better for me…

  • @ScrapingNinja
    1 month ago

    Can you share the error?

  • @tomhanksact
    5 months ago

    I want to ask, brother: how do I bypass Cloudflare's anti-bot detection? undetected_chromedriver doesn't work; I still get blocked.

  • @ScrapingNinja
    5 months ago

    Use Playwright with playwright_stealth.
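
    A minimal sketch of that suggestion, assuming the playwright and playwright-stealth packages are installed (pip install playwright playwright-stealth, then playwright install chromium); stealth_sync is the helper exposed by the 1.x releases of playwright-stealth, and the target URL below is only a placeholder:

      from playwright.sync_api import sync_playwright
      from playwright_stealth import stealth_sync

      with sync_playwright() as p:
          browser = p.chromium.launch(headless=True)
          page = browser.new_page()
          stealth_sync(page)                 # patch fingerprints that commonly reveal headless browsers
          page.goto("https://example.com")   # placeholder URL for the protected site
          print(page.title())
          browser.close()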

  • @ilvhimekazain
    1 month ago

    Bro, try using this method again to scrape all the product review components on the Lazada website. When I tried it like in the video and just adjusted what I wanted to scrape, copying the inspection results and then running the code, the results didn't appear like at minute 08:21.

  • @ScrapingNinja
    1 month ago

    Make sure you are using the correct XPath, and also try printing the page source to see whether the correct page is being loaded.
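
    A minimal sketch of that check, assuming driver is an already-created Selenium WebDriver and //div[@class='review'] stands in for whatever XPath was copied from the inspector:

      from selenium.webdriver.common.by import By

      print(driver.current_url)            # confirm which page actually loaded
      print(driver.page_source[:2000])     # peek at the start of the loaded HTML

      # Count how many nodes the XPath really matches on the loaded page
      reviews = driver.find_elements(By.XPATH, "//div[@class='review']")   # hypothetical XPath
      print(len(reviews))
      for review in reviews:
          print(review.text)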

  • @JP-vo8ik
    1 month ago

    SessionNotCreatedException: Message: session not created: This version of ChromeDriver only supports Chrome version 114. Current browser version is 126.0.6478.126 with binary path /usr/bin/google-chrome error

  • @ScrapingNinja
    1 month ago

    You need to update ChromeDriver using apt-get.
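
    A minimal sketch of a Colab cell for that, assuming the runtime still offers the chromium-chromedriver package through apt and ships the browser at /usr/bin/google-chrome; package names and paths change between Colab images:

      # Colab notebook cell: refresh the package index, (re)install the distro's
      # ChromeDriver, then confirm the browser and driver major versions match.
      !apt-get update -qq
      !apt-get install -y -qq chromium-chromedriver
      !google-chrome --version
      !chromedriver --version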

  • @josemariomonteiro3628
    7 days ago

    SessionNotCreatedException: Message: session not created exception: Missing or invalid capabilities (Driver info: chromedriver=2.41.578700 (2f1ed5f9343c13f73144538f15c00b370eda6706),platform=Linux 6.1.85+ x86_64)

  • @ScrapingNinja
    6 days ago

    Check your code to make sure you have set the capabilities correctly.

  • @ScrapingNinja
    6 days ago

    Options
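
    A minimal sketch of what that one-word reply likely points to: build the capabilities through ChromeOptions instead of a hand-written dict. The chromedriver path is an assumption for a Colab-style setup:

      from selenium import webdriver
      from selenium.webdriver.chrome.options import Options
      from selenium.webdriver.chrome.service import Service

      def web_driver():
          # Pass flags through Options so chromedriver receives a well-formed
          # capabilities payload rather than "missing or invalid capabilities".
          options = Options()
          options.add_argument("--headless=new")
          options.add_argument("--no-sandbox")
          options.add_argument("--disable-dev-shm-usage")
          options.add_argument("--window-size=1920,1080")
          return webdriver.Chrome(service=Service("/usr/bin/chromedriver"), options=options)

      driver = web_driver()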

  • @aounzaidi9248
    5 months ago

    Salam Sir, I want to extract data like URLs and titles from the Google search page. Can you guide me, or just give me a roadmap? Waiting for your reply.

  • @ScrapingNinja
    5 months ago

    You can do it following the same steps: install Selenium, open the Google search link, and start extracting data like I did in this video.
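
    A minimal sketch of those steps, assuming driver comes from a web_driver() helper like the one above and that Google still wraps each organic result title in an h3 inside an a tag (the selector is an assumption and breaks whenever Google changes its markup):

      from urllib.parse import quote_plus
      from selenium.webdriver.common.by import By

      query = "web scraping tutorial"        # hypothetical search query
      driver.get("https://www.google.com/search?q=" + quote_plus(query))

      # Each organic result is typically an <a> whose child <h3> holds the title
      for link in driver.find_elements(By.XPATH, "//a[h3]"):
          title = link.find_element(By.TAG_NAME, "h3").text
          url = link.get_attribute("href")
          print(title, url)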

  • @aounzaidi9248
    5 months ago

    @@ScrapingNinja Sir, in my project there are some queries which I want to search on Google, and then I want to scrape 300 results from each query. Is it possible?

  • @ScrapingNinja
    4 months ago

    Yes. You can edit your link to get up to 40 results per page and also include the query inside the link; this way you keep requests to Google to a minimum and get more results.
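
    A minimal sketch of building such links, assuming Google still honours the q, num, and start query parameters (their behaviour is not guaranteed, and pulling 300 results per query can trigger blocking):

      from urllib.parse import urlencode

      query = "web scraping tutorial"   # one of your search queries
      per_page = 40                     # results requested per page, as suggested above
      wanted = 300                      # total results wanted for this query

      for start in range(0, wanted, per_page):
          params = {"q": query, "num": per_page, "start": start}
          driver.get("https://www.google.com/search?" + urlencode(params))
          # ...extract titles and URLs here with the same //a[h3] loop as above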

  • @OnlineEarning-ql1ls
    3 months ago

    Thank you so much, sir. This is the method I had been searching for for weeks: how to use Selenium and get the data into Google Sheets. Really, thank you so much. But in my case, the website I want to get data from is behind a login page, so I need a little more help with how to log in to a website with Selenium.

  • @ScrapingNinja
    3 months ago

    You are most welcome. Login can be automated the same as any other task: locate the username and password input fields, fill them in, and submit.
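
    A minimal sketch of that flow; the URL, field names, and button XPath are placeholders to adapt to the real login page:

      from selenium.webdriver.common.by import By

      driver.get("https://example.com/login")        # placeholder login URL

      # Fill in the credentials (field names are assumptions about the form)
      driver.find_element(By.NAME, "username").send_keys("my_user")
      driver.find_element(By.NAME, "password").send_keys("my_password")

      # Submit the form, then keep scraping inside the authenticated session
      driver.find_element(By.XPATH, "//button[@type='submit']").click()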

  • @OnlineEarning-ql1ls
    3 months ago

    @@ScrapingNinja Yes sir, I have done the login to the website by finding the XPaths and then using the .send_keys() command. That is done. But now I want to concatenate some phrases to build an XPath, and then apply a loop to get the IDs at each resulting path. Can you please write an example of concatenating an XPath and then looping over some steps to scrape IDs (say, up to 5 rows)? Thank you for your time and reply, sir.
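
    A minimal sketch of concatenating a row number into an XPath inside a loop, using the //tbody/tr[i]/td[8] table structure described in the next comment and the 5-row limit asked for here:

      from selenium.webdriver.common.by import By

      # Build the XPath for each row by concatenating the row number into the path
      for i in range(1, 6):                               # rows 1..5
          xpath = "//tbody/tr[" + str(i) + "]/td[8]"
          row_id = driver.find_element(By.XPATH, xpath).text
          print(row_id)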

  • @OnlineEarning-ql1ls
    3 months ago

    @@ScrapingNinja Sir, actually there are 96 IDs on the webpage whose XPath is "//tbody/tr/td[8]". This is a valid XPath, as I checked it with SelectorsHub, and it shows exactly 96 matches for those IDs. But the problem is that when I use it in this code:

      for container in containers:
          ID = container.find_element(By.XPATH, "//tbody/tr/td[8]").text
          print(ID)

    I suppose it should return all the IDs on this page, since all rows have the same XPath, but it returns just the single ID of the first row, whose XPath is //tbody/tr[1]/td[8]. When I change the tr number, it returns that row's ID. Can you please tell me what I am doing wrong here? Otherwise I have to make a loop from 1 to 96 to pull all the IDs. 😢

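    A minimal sketch of one likely fix: find_element returns only the first match, and an XPath that starts with // searches the whole document even when called on a container element. Use find_elements (plural), or a dot-prefixed XPath when searching inside a container:

      from selenium.webdriver.common.by import By

      # find_elements returns every matching node, not just the first one
      cells = driver.find_elements(By.XPATH, "//tbody/tr/td[8]")
      print(len(cells))            # should report 96 on the page described above
      for cell in cells:
          print(cell.text)

      # To search only inside one container element, make the XPath relative:
      # container.find_elements(By.XPATH, ".//td[8]")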

Next