Scrapy web scraper not working - field shows no data

Asked by 6jygbczu on 2022-11-09 in Other

I'm trying to build a web scraper for Stack Overflow questions, but the third column doesn't download any data. Can you help me?

from scrapy.item import Field
from scrapy.item import Item
from scrapy.spiders import Spider
from scrapy.selector import Selector
from scrapy.loader import ItemLoader

class Question(Item):
    a_id = Field()
    b_question = Field()
    c_desc = Field()

class StackOverflowSpider(Spider):
    name = "MyFirstSpider"
    custom_settings = {
        'USER-AGENT': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36"
    }
    start_urls = ['https://stackoverflow.com/questions']

    def parse(self, response):
        sel = Selector(response)
        questions = sel.xpath('//div[@id="questions"]//div[@class="s-post-summary--content"]')
        i = 1
        for quest in questions:
            item = ItemLoader(Question(), quest)
            item.add_xpath('b_question', './/h3/a/text()')
            item.add_xpath('c_desc', './/div[@class="s-post-summary--content-excerpt"]/text()')
            item.add_value('a_id', i)
            i = i+1
            yield item.load_item()

[screenshot: CSV file output]
[screenshot: the website and its HTML code]


oyxsuwqo (answer #1)

Try this; I've added some inline comments explaining the changes:

from scrapy.spiders import Spider

class StackOverflowSpider(Spider):
    name = "MyFirstSpider"
    # Scrapy setting names use underscores: 'USER-AGENT' is silently ignored,
    # 'USER_AGENT' is the key Scrapy actually reads
    custom_settings = {
        'USER_AGENT': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36"
    }
    start_urls = ['https://stackoverflow.com/questions']

    def parse(self, response):
        # iterate through each question as a selector object;
        # start=1 keeps the ids consistent with your original counter
        for i, question in enumerate(response.xpath("//div[@class='s-post-summary--content']"), start=1):
            # use the get method to grab the first matching text node
            title = question.xpath('.//h3/a/text()').get()
            content = question.xpath('.//div[@class="s-post-summary--content-excerpt"]/text()').get()
            # yielding a plain dictionary works the same as yielding an Item here;
            # guard against None before stripping the surrounding whitespace
            yield {
                "b_question": (title or "").strip(),
                "c_desc": (content or "").strip(),
                "a_id": i,
            }
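Two details make the excerpt column come back empty-looking: the excerpt text on the question list page is padded with newlines and indentation, and `.get()` returns `None` when an XPath matches nothing. A minimal sketch of that cleanup step (the `clean` helper is illustrative, not part of Scrapy):

```python
def clean(text):
    """Normalize a text node scraped from the question list page.

    The excerpt divs are padded with newlines and indentation, and an
    XPath query that matches nothing yields None instead of "".
    """
    return text.strip() if text else ""

print(clean("\n        Does anyone know why...\n    "))  # padded excerpt text
print(clean(None))  # missing node: empty string instead of an AttributeError
```

Calling `.strip()` directly on the raw value, as in `title.strip()`, would raise `AttributeError` on pages where the XPath finds nothing, so the `None` guard is worth keeping.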
