Why is selenium getting only an empty result set from Google search results?

ct3nt3jp · posted 2023-01-26 · in: Other

I have been working in Google Colab on a script that scrapes Google search results. It worked for a long time without any problems, but now it doesn't. The page source appears to have changed, and the CSS class I was using is now different. I am using selenium and BeautifulSoup, and the code is as follows:

# Installing Selenium after new Ubuntu update
%%shell
cat > /etc/apt/sources.list.d/debian.list <<'EOF'
deb [arch=amd64 signed-by=/usr/share/keyrings/debian-buster.gpg] http://deb.debian.org/debian buster main
deb [arch=amd64 signed-by=/usr/share/keyrings/debian-buster-updates.gpg] http://deb.debian.org/debian buster-updates main
deb [arch=amd64 signed-by=/usr/share/keyrings/debian-security-buster.gpg] http://deb.debian.org/debian-security buster/updates main
EOF

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys DCC9EFBF77E11517
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 648ACFD622F3D138
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 112695A0E562B32A

apt-key export 77E11517 | gpg --dearmour -o /usr/share/keyrings/debian-buster.gpg
apt-key export 22F3D138 | gpg --dearmour -o /usr/share/keyrings/debian-buster-updates.gpg
apt-key export E562B32A | gpg --dearmour -o /usr/share/keyrings/debian-security-buster.gpg

cat > /etc/apt/preferences.d/chromium.pref << 'EOF'
Package: *
Pin: release a=eoan
Pin-Priority: 500

Package: *
Pin: origin "deb.debian.org"
Pin-Priority: 300

Package: chromium*
Pin: origin "deb.debian.org"
Pin-Priority: 700
EOF

apt-get update
apt-get install -y chromium chromium-driver

pip install selenium fake-useragent  # fake-useragent provides UserAgent() used below

from fake_useragent import UserAgent  # was missing; UserAgent() is used below
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup

# Parameters to use Selenium and Chromedriver
ua = UserAgent()
userAgent = ua.random
options = Options()
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
options.add_argument(f'--user-agent={userAgent}')  # embedded quotes would end up in the UA string

#options.headless = True

driver = webdriver.Chrome('chromedriver', options=options)

# Trying to scrape Google Search Results
links = [] 
url = "https://www.google.es/search?q=alergia"

driver.get(url)
soup = BeautifulSoup(driver.page_source, 'html.parser')

#This doesn't return anything
search = soup.find_all('div', class_='yuRUbf')
for h in search:
  links.append(h.a.get('href'))
print(links)
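As an aside, pasting the query into the URL by hand breaks for search terms with spaces or accents. A small sketch of the safer approach using urllib.parse.urlencode (the helper name google_search_url is made up for illustration):

```python
from urllib.parse import urlencode

# Hypothetical helper: urlencode escapes spaces and non-ASCII characters
# in the query, so any search term produces a valid URL.
def google_search_url(query, domain='www.google.es'):
    return f'https://{domain}/search?{urlencode({"q": query})}'

print(google_search_url('alergia al polen'))
# -> https://www.google.es/search?q=alergia+al+polen
```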

Why does the yuRUbf class no longer work for scraping the search results? It always worked for me.
Trying to scrape the href links from Google search results with Selenium and BeautifulSoup.


6jjcrrmo1#

There could be several different issues, and at this point your question is not specific enough to pin one down - so always, and first of all, take a look at your soup and check whether all the expected ingredients are in place.

  • Check whether you run into a consent-banner redirect, and handle it with selenium by clicking the button or by sending the corresponding cookies/headers.
  • Classes are highly dynamic, so change your selection strategy and key off more static features such as ids or the HTML structure - here with the css selector soup.select('a:has(h3)')
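A minimal, runnable sketch of that structural selector - the HTML below is a simplified stand-in for a results page, not real Google markup:

```python
from bs4 import BeautifulSoup

# Stand-in HTML: on a results page, only organic results wrap their
# visible <h3> title inside the link's <a> tag.
html = """
<div class="xyz123"><a href="https://example.com/page"><h3>Title</h3></a></div>
<div class="xyz123"><a href="/settings">Settings</a></div>
"""
soup = BeautifulSoup(html, 'html.parser')
# ':has(h3)' requires bs4 >= 4.7 (soupsieve backend)
links = [a.get('href') for a in soup.select('a:has(h3)')]
print(links)
```

Because the selection keys on structure rather than on generated class names, it survives class-name churn like the yuRUbf change.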
Example:

Since selenium isn't actually needed here, this is a stripped-down version using requests:

import requests
from bs4 import BeautifulSoup

# The CONSENT cookie skips Google's consent-banner redirect
resp = requests.get('https://www.google.es/search?q=alergia',
                    headers={'User-Agent': 'Mozilla/5.0'}, cookies={'CONSENT': 'YES+'})
soup = BeautifulSoup(resp.text, 'html.parser')  # explicit parser avoids bs4's guessing warning
links = [a.get('href').removeprefix('/url?q=') for a in soup.select('a:has(h3)')]
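One caveat about cleaning those hrefs: str.strip('/url?q=') removes a *character set* from both ends of the string, not a literal prefix, so it can eat valid URL characters. Parsing the query string is more robust (clean_google_href is a hypothetical helper for illustration):

```python
from urllib.parse import urlparse, parse_qs

raw = '/url?q=https://example.com/quu'
bad = raw.strip('/url?q=')  # strips the chars {/, u, r, l, ?, q, =} from both ends
# bad == 'https://example.com' - the '/quu' path segment is gone

def clean_google_href(href):
    # Hypothetical helper: pull the real target out of a '/url?q=...' redirect,
    # ignoring any extra parameters (&sa=, &ved=, ...) that may follow it.
    if href.startswith('/url?'):
        return parse_qs(urlparse(href).query).get('q', [href])[0]
    return href
```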
