I've built a pipeline that stores all the data in a SQLite database, but my spider never finishes the pagination. This is what I get when the spider closes. I should be getting roughly 45k results, yet I only get 420. Why is that?
2021-12-06 14:47:55 [scrapy.core.engine] INFO: Closing spider (finished)
2021-12-06 14:47:55 [selenium.webdriver.remote.remote_connection] DEBUG: DELETE http://127.0.0.1:60891/session/d441b41f-b62b-4c64-a5ef-68329c18dd4e {}
2021-12-06 14:47:56 [urllib3.connectionpool] DEBUG: http://127.0.0.1:60891 "DELETE /session/d441b41f-b62b-4c64-a5ef-68329c18dd4e HTTP/1.1" 200 14
2021-12-06 14:47:56 [selenium.webdriver.remote.remote_connection] DEBUG: Finished Request
2021-12-06 14:47:56 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/response_bytes': 7510132,
'downloader/response_count': 15,
'downloader/response_status_count/200': 15,
'elapsed_time_seconds': 89.469538,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2021, 12, 6, 20, 47, 55, 551566),
'item_scraped_count': 420,
'log_count/DEBUG': 577,
'log_count/INFO': 11,
'request_depth_max': 14,
'response_received_count': 15,
'scheduler/dequeued': 15,
'scheduler/dequeued/memory': 15,
'scheduler/enqueued': 15,
'scheduler/enqueued/memory': 15,
'start_time': datetime.datetime(2021, 12, 6, 20, 46, 26, 82028)}
2021-12-06 14:47:56 [scrapy.core.engine] INFO: Spider closed (finished)
Here is my spider:
import scrapy
from scrapy_selenium import SeleniumRequest


class HomesSpider(scrapy.Spider):
    name = 'homes'

    def remove_characters(self, value):
        # Guard against None: .get() returns None when the node is missing
        return value.strip(' m²') if value else value

    def start_requests(self):
        yield SeleniumRequest(
            url='https://www.vivanuncios.com.mx/s-venta-inmuebles/queretaro/v1c1097l1021p1',
            wait_time=3,
            callback=self.parse
        )

    def parse(self, response):
        homes = response.xpath("//div[@id='tileRedesign']/div")
        for home in homes:
            yield {
                'price': home.xpath("normalize-space(.//span[@class='ad-price']/text())").get(),
                'location': home.xpath(".//div[@class='tile-location one-liner']/b/text()").get(),
                'description': home.xpath(".//div[@class='tile-desc one-liner']/a/text()").get(),
                # Note the leading dot: a bare //div here would match against
                # the whole page, not the current listing
                'bathrooms': home.xpath(".//div[@class='chiplets-inline-block re-bathroom']/text()").get(),
                'bedrooms': home.xpath(".//div[@class='chiplets-inline-block re-bedroom']/text()").get(),
                'm2': self.remove_characters(home.xpath("normalize-space(.//div[@class='chiplets-inline-block surface-area']/text())").get()),
                'link': home.xpath(".//div[@class='tile-desc one-liner']/a/@href").get(),
            }

        next_page = response.xpath("//a[@class='icon-pagination-right']/@href").get()
        if next_page:
            absolute_url = f"https://www.vivanuncios.com.mx/s-venta-inmuebles/queretaro/v1c1097l1021p1{next_page}"
            yield SeleniumRequest(
                url=absolute_url,
                wait_time=3,
                callback=self.parse,
                dont_filter=True
            )
Is this related to my user_agent, which I've set explicitly in settings.py, or have I been banned from this page? The HTML of the page hasn't changed either.
1 Answer
Your code is working fine as far as it goes; the problem is in the pagination part. I built the pagination into the start URLs instead. That kind of pagination is always accurate, and it's more than twice as fast as following the "next page" link. There are 50 pages, for a total item_scraped_count of 1400.
Script:
Output:
... and so on
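The answerer's actual script and output were not preserved in this copy of the thread. The start-URL pagination approach the answer describes can be sketched as follows (a sketch, not the answerer's script: the p{n} suffix pattern is inferred from the start URL in the question, and the page count of 50 comes from the answer text):

```python
# Generate one request per results page up front instead of chasing the
# "next page" link. The p{n} suffix pattern is an assumption based on the
# start URL in the question; 50 pages is the count the answer reports.
BASE = "https://www.vivanuncios.com.mx/s-venta-inmuebles/queretaro/v1c1097l1021p{}"

start_urls = [BASE.format(page) for page in range(1, 51)]

print(len(start_urls))   # 50
print(start_urls[0])
```

In the spider, start_requests would then yield one SeleniumRequest(url=..., callback=self.parse) per entry in start_urls, and parse no longer needs the next-page branch at all. Because every page URL is known up front, a single page that fails to expose its "next" link can no longer cut the crawl short.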