How do I build my own middleware in Scrapy?

6ioyuze2  asked on 2022-11-09  in  Other

I have just started learning Scrapy and I have a question. For my spider, I need to get the list of URLs (start_urls) from a Google Sheets spreadsheet, and I have this code:

import gspread
from oauth2client.service_account import ServiceAccountCredentials

# authorize against the Sheets/Drive APIs with a service-account key file
scope = ['https://spreadsheets.google.com/feeds','https://www.googleapis.com/auth/drive']
creds = ServiceAccountCredentials.from_json_keyfile_name('token.json', scope)

client = gspread.authorize(creds)
sheet = client.open('Sheet_1')
sheet_instance = sheet.get_worksheet(0)
# the URLs are stored in column 2 of the first worksheet
records_data = sheet_instance.col_values(col=2)

for link in records_data:
    print(link)
    ........

How do I configure a middleware so that when the spider is started (scrapy crawl my_spider), the links produced by this code are automatically used as start_urls? Maybe I need to create a class in middlewares.py? Any help with an example would be appreciated. The rule has to apply to every new spider, and building the list from a file in start_requests (e.g. start_urls = [l.strip() for l in open('urls.txt').readlines()]) is not convenient...

a64a0gku

Please read about spider middlewares in the Scrapy documentation.
spider.py:

import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'example'

    custom_settings = {
        'SPIDER_MIDDLEWARES': {
            'tempbuffer.middlewares.ExampleMiddleware': 543,
        }
    }

    def parse(self, response):
        print(response.url)

middlewares.py:

import scrapy

class ExampleMiddleware(object):
    def process_start_requests(self, start_requests, spider):
        # change this to your needs:
        # replace the original start requests with one request per line in urls.txt
        with open('urls.txt', 'r') as f:
            for url in f:
                yield scrapy.Request(url=url.strip())
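
The custom_settings block in spider.py enables the middleware for that one spider only. Since the rule should apply to every new spider, one option (a sketch, assuming the project package is called tempbuffer, as in the path above) is to register it project-wide in settings.py instead:

settings.py:

# enable the middleware for every spider in the project
SPIDER_MIDDLEWARES = {
    'tempbuffer.middlewares.ExampleMiddleware': 543,
}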

urls.txt:

https://example.com
https://example1.com
https://example2.org

Output:

[scrapy.core.engine] DEBUG: Crawled (200) <GET https://example2.org> (referer: None)
[scrapy.core.engine] DEBUG: Crawled (200) <GET https://example.com> (referer: None)
[scrapy.core.engine] DEBUG: Crawled (200) <GET https://example1.com> (referer: None)
https://example2.org
https://example.com
https://example1.com
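
To pull the start URLs straight from the Google Sheet instead of urls.txt, the same process_start_requests hook can wrap the gspread code from the question. A minimal sketch, assuming the credentials file token.json, the spreadsheet name 'Sheet_1' and the URL column (column 2) from that snippet; the class name GoogleSheetsMiddleware is just an example:

import gspread
import scrapy
from oauth2client.service_account import ServiceAccountCredentials


class GoogleSheetsMiddleware(object):
    def process_start_requests(self, start_requests, spider):
        # authorize with the service-account credentials from the question
        scope = ['https://spreadsheets.google.com/feeds',
                 'https://www.googleapis.com/auth/drive']
        creds = ServiceAccountCredentials.from_json_keyfile_name('token.json', scope)
        client = gspread.authorize(creds)
        # 'Sheet_1' and column 2 are taken from the question's snippet
        sheet_instance = client.open('Sheet_1').get_worksheet(0)
        for url in sheet_instance.col_values(2):
            if url.strip():
                yield scrapy.Request(url=url.strip())

Registering this class (e.g. 'tempbuffer.middlewares.GoogleSheetsMiddleware') in SPIDER_MIDDLEWARES in settings.py, as shown above, makes every spider in the project start from the sheet's URLs without touching the spider code.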
