I'm trying to re-launch chromedriver after quitting it in a Python script:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# set driver options
chrome_options = Options()
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--window-size=1420,1080')
chrome_options.add_argument('--headless')
chrome_options.add_argument('--disable-dev-shm-usage')
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument("--disable-notifications")
chrome_options.add_argument("--remote-debugging-port=9222")
chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"])
chrome_options.add_experimental_option('useAutomationExtension', False)
chrome_options.binary_location='/usr/bin/google-chrome-stable'
chrome_driver_binary = "/usr/bin/chromedriver"
driver = webdriver.Chrome(executable_path=chrome_driver_binary, chrome_options=chrome_options)
# Set base url
base_url = 'https://www.example.com&page='
events = []
eventContainerBucket = []
for i in range(1, 40):
    # cycle through pages in range
    driver.get(base_url + str(i))
    pageURL = base_url + str(i)
    # do some stuff............
driver.quit()
# Want to re-open chrome driver here to scrape a new url
# set driver options
chrome_options = Options()
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--window-size=1420,1080')
chrome_options.add_argument('--headless')
chrome_options.add_argument('--disable-dev-shm-usage')
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument("--disable-notifications")
chrome_options.add_argument("--remote-debugging-port=9222")
chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"])
chrome_options.add_experimental_option('useAutomationExtension', False)
chrome_options.binary_location='/usr/bin/google-chrome-stable'
chrome_driver_binary = "/usr/bin/chromedriver"
driver = webdriver.Chrome(executable_path=chrome_driver_binary, chrome_options=chrome_options)
# Set base url
base_url = 'https://www.example2.com&page='
events = []
eventContainerBucket = []
for i in range(1, 40):
    # cycle through pages in range
    driver.get(base_url + str(i))
    pageURL = base_url + str(i)
The first part of the script runs fine and the driver quits, but on the second URL scrape the driver won't work again (driver.get is failing). It gives me this error:
Traceback (most recent call last):
File "scraper.py", line 462, in <module>
driver.get(base_url + str(i))
TypeError: 'WebElement' object is not callable
How can I fix this? Thanks.
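For what it's worth, the traceback points at `driver.get(...)`, and `TypeError: 'WebElement' object is not callable` means the thing being called is a WebElement, not a method. That would happen if the elided `#do some stuff............` section rebinds `driver.get` (or `driver` itself) to a WebElement. A minimal selenium-free sketch of that failure mode, using hypothetical stand-in classes rather than the real selenium types:

```python
# Stand-ins for selenium's WebElement and WebDriver, just to reproduce
# the failure mode without launching a browser.
class WebElement:
    pass

class Driver:
    def get(self, url):
        return "loaded " + url

driver = Driver()
driver.get("https://www.example.com&page=1")  # works: calls the method

# If intervening code rebinds the attribute to an element, e.g.:
driver.get = WebElement()

# ...then the next page fetch calls the WebElement, not the method:
try:
    driver.get("https://www.example.com&page=2")
except TypeError as e:
    print(e)  # 'WebElement' object is not callable
```

If that's the cause, renaming whatever variable shadows `driver` (or `driver.get`) inside the scraping loop should let the second `webdriver.Chrome(...)` instance work unchanged.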