I need to scrape multiple URLs at the same time using Scrapy and Splash. I tried writing the following code, but still no luck.
I have attached the URLs here:
'Next page',
'California',
'Boston'
So I need to loop over these URLs and scrape each one with Scrapy.
I can't fetch data when using multiple URLs; it throws an error. Please help.
My question is: how do I iterate over this list of URLs and scrape them?
import scrapy
from scrapy_splash import SplashRequest
import scrapy_proxies

class WundergroundSpider(scrapy.Spider):
    name = 'wunderground'
    # allowed_domains = ['www.wunderground.com/forecast/us/ny/brooklyn']
    start_urls = []

    script = '''
    function main(splash, args)
        splash.private_mode_enabled = false
        assert(splash:go(args.url))
        assert(splash:wait(10))
        return splash:html()
    end
    '''

    def start_requests(self):
        urls = [
            'https://wunderground.com/forecast/us/ny/brooklyn/',
            'https://www.wunderground.com/forecast/us/pa/california/',
            'https://www.wunderground.com/forecast/us/ny/boston'
        ]
        for url in urls:
            yield SplashRequest(url, self.parse, args={'wait': 8})

    def parse(self, response):
        tmps = {
            'tempHigh': response.xpath("//div[@class='forecast']/a[@class='navigate-to ng-star-inserted']/div[@class='obs-forecast']/span/span[@class='temp-hi']/text()")[0],
            'templow': response.xpath("//div[@class='forecast']/a[@class='navigate-to ng-star-inserted']/div[@class='obs-forecast']/span/span[@class='temp-lo']/text()")[0],
            'obsphs': response.xpath("//div[@class='forecast']/a[@class='navigate-to ng-star-inserted']/div[@class='obs-forecast']/div[@class='obs-phrase']/text()")[0]
        }
        yield tmps
1 Answer
You created the Lua script but never use it. Try the following instead:

yield SplashRequest(url=url, callback=self.parse, endpoint='execute', args={'lua_source': self.script})