How do I extract links from web pages with Scrapy?

dzjeubhm · asked 2023-10-20

I'm trying to extract links from web pages that match a certain pattern. I tried using Scrapy with the following code:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.http import Request

class MagazineCrawler(CrawlSpider):
    name = "MagazineCrawler"
    allowed_domains = ["eu-startups.com"]
    start_urls = ["https://www.eu-startups.com"]

    rules = (
        Rule(LinkExtractor(allow=["category/interviews"]), callback="parse_category"),
    )

    def parse_category(self, response):
        xpath_links = "//div[@class='td_block_inner tdb-block-inner td-fix-index']//a[@class='td-image-wrap ']/@href"
        subpage_links = response.xpath(xpath_links).extract()

        # Follow each subpage link and yield requests to crawl them
        for link in subpage_links:
            yield Request(link)

The problem is that it only extracts links from the first page matching the pattern and then stops. If I remove the parse_category callback option, it crawls through all the pages containing "category/interviews" as expected. Why does this happen?


xhv8bpkk · Answer #1

This happens because you need to set the follow parameter on the Rule if you plan to use it together with a callback.
From the Scrapy documentation for the Rule class:
class scrapy.spiders.Rule(link_extractor=None, callback=None, cb_kwargs=None, follow=None, process_links=None, process_request=None, errback=None)
follow is a boolean which specifies whether links should be followed from each response extracted with this rule. If callback is None, follow defaults to True; otherwise it defaults to False.
So if you want the spider to keep following links and run the callback for every matching response, simply set follow=True in the spider rule.
For example:

class MagazineCrawler(CrawlSpider):
    name = "MagazineCrawler"
    allowed_domains = ["eu-startups.com"]
    start_urls = ["https://www.eu-startups.com"]

    rules = (
        Rule(LinkExtractor(allow=["category/interviews"]),
             callback="parse_category", 
             follow=True),
    )

    def parse_category(self, response):
        xpath_links = "//div[@class='td_block_inner tdb-block-inner td-fix-index']//a[@class='td-image-wrap ']/@href"
        subpage_links = response.xpath(xpath_links).extract()

        # Follow each subpage link and yield requests to crawl them
        for link in subpage_links:
            yield Request(link)
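
As a side note, yield Request(link) without an explicit callback is handled by CrawlSpider's built-in parse method, which just re-applies the rules to those pages. If the goal is to actually scrape data from each interview article, it is usually cleaner to route those requests to a dedicated callback. Here is a minimal sketch of that variant (the parse_article callback and the fields it yields are assumptions for illustration, not part of the original spider):

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.http import Request


class MagazineCrawler(CrawlSpider):
    name = "MagazineCrawler"
    allowed_domains = ["eu-startups.com"]
    start_urls = ["https://www.eu-startups.com"]

    rules = (
        Rule(
            LinkExtractor(allow=["category/interviews"]),
            callback="parse_category",
            follow=True,
        ),
    )

    def parse_category(self, response):
        xpath_links = (
            "//div[@class='td_block_inner tdb-block-inner td-fix-index']"
            "//a[@class='td-image-wrap ']/@href"
        )
        # Send each article request to its own callback instead of the
        # default one, so CrawlSpider's rule handling is not re-applied to it.
        for link in response.xpath(xpath_links).getall():
            yield Request(link, callback=self.parse_article)

    def parse_article(self, response):
        # Hypothetical item; adjust the selectors to the fields you need.
        yield {
            "url": response.url,
            "title": response.xpath("//h1/text()").get(),
        }

You can then run it with scrapy crawl MagazineCrawler -o interviews.json to write the collected items to a file.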
