This tutorial shows how to catch (and effectively ignore) exceptions in Selenium so that processing can continue.
The problem: you are scraping a website and the crawler stops as soon as an exception is raised.
Solution
You can wrap the lookup in a try-except block to ignore the error and continue scraping in Selenium. Below is an example:
```python
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

try:
    price = browser.find_element(By.ID, id_).text
except NoSuchElementException:
    print("Price is not found.")
    price = "-"  # placeholder value for the dataframe
```
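The same try-except pattern can be factored into a small reusable helper that wraps any lookup and returns a fallback on failure. This is a minimal sketch: `safe_get` and the two stand-in functions below are illustrative names, not part of Selenium's API; in real code the getter would be something like `lambda: browser.find_element(By.ID, id_).text`.

```python
# Generic "ignore and continue" helper: call a zero-argument getter,
# fall back to a default value when it raises any exception.
def safe_get(getter, default="-"):
    try:
        return getter()
    except Exception:
        return default

# Stand-ins for real Selenium calls (hypothetical, for illustration only):
def found_price():
    return "19.99"

def missing_price():
    raise KeyError("element not found")

prices = [safe_get(found_price), safe_get(missing_price)]
print(prices)  # the failing lookup becomes "-" instead of crashing the crawler
```

Catching a specific exception (such as `NoSuchElementException`) is usually better than the broad `except Exception` used here, so that genuine bugs are not silently swallowed.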
Alternatively, you can write a function that checks whether the element exists before reading it, then continue processing either way. Below is another example:
```python
import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.by import By

def check_if_exists(browser, id_):
    # find_elements returns an empty list (instead of raising) when nothing matches
    return len(browser.find_elements(By.CSS_SELECTOR, "#{}".format(id_))) > 0

browser = webdriver.Chrome()
browser.get('https://www.yourwebsite.com')

id_ = 'priceblock_ourprice'
price = browser.find_element(By.ID, id_).text if check_if_exists(browser, id_) else "-"

df = pd.DataFrame([["info", "info", price]], columns=["Product", "Firm", "Price"])
df.to_csv('info.csv', encoding="utf-8", index=False, header=False)

# header=None because the CSV was written without a header row
df_final = pd.read_csv('info.csv', header=None)
df_final.head()

browser.quit()
```