Locally run all of the spiders in a Scrapy project

rpppsulh · asked on 2023-01-09

Is there a way to run all of the spiders in a Scrapy project without using the Scrapy daemon? There used to be a way to run multiple spiders with scrapy crawl, but that syntax was removed and Scrapy's code has changed quite a bit.
I tried creating my own command:

from scrapy.command import ScrapyCommand
from scrapy.utils.misc import load_object
from scrapy.conf import settings

class Command(ScrapyCommand):
    requires_project = True

    def syntax(self):
        return '[options]'

    def short_desc(self):
        return 'Runs all of the spiders'

    def run(self, args, opts):
        spman_cls = load_object(settings['SPIDER_MANAGER_CLASS'])
        spiders = spman_cls.from_settings(settings)

        for spider_name in spiders.list():
            spider = self.crawler.spiders.create(spider_name)
            self.crawler.crawl(spider)

        self.crawler.start()

However, as soon as one spider is registered with self.crawler.crawl(), I get assertion errors for all of the other spiders:

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/scrapy/cmdline.py", line 138, in _run_command
    cmd.run(args, opts)
  File "/home/blender/Projects/scrapers/store_crawler/store_crawler/commands/crawlall.py", line 22, in run
    self.crawler.crawl(spider)
  File "/usr/lib/python2.7/site-packages/scrapy/crawler.py", line 47, in crawl
    return self.engine.open_spider(spider, requests)
  File "/usr/lib/python2.7/site-packages/twisted/internet/defer.py", line 1214, in unwindGenerator
    return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
  File "/usr/lib/python2.7/site-packages/twisted/internet/defer.py", line 1071, in _inlineCallbacks
    result = g.send(result)
  File "/usr/lib/python2.7/site-packages/scrapy/core/engine.py", line 215, in open_spider
    spider.name
exceptions.AssertionError: No free spider slots when opening 'spidername'

Is there some way to do this? I'd rather not start subclassing core Scrapy components just to run all of my spiders like this.

c7rzv4ha #1

Why not just use something like:

scrapy list|xargs -n 1 scrapy crawl
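
If you prefer to drive the same idea from Python rather than the shell, here is a minimal sketch (assuming Python 3.7+ and that the scrapy executable is on your PATH, run from the project directory):

import subprocess

# `scrapy list` prints one spider name per line
names = subprocess.run(
    ["scrapy", "list"], capture_output=True, text=True, check=True
).stdout.split()

# run each spider in its own process, one after another;
# a fresh process per spider also sidesteps any reactor-restart issues
for name in names:
    subprocess.run(["scrapy", "crawl", name], check=True)

Because every crawl gets its own process, the spiders cannot interfere with each other's settings or with the Twisted reactor.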

eh57zj3b #2

Here is an example that does not run inside a custom command, but runs the Twisted reactor manually and creates a new Crawler for each spider:

from twisted.internet import reactor
from scrapy.crawler import Crawler
# the scrapy.conf.settings singleton was deprecated last year
from scrapy.utils.project import get_project_settings
from scrapy import log

def setup_crawler(spider_name):
    crawler = Crawler(settings)
    crawler.configure()
    spider = crawler.spiders.create(spider_name)
    crawler.crawl(spider)
    crawler.start()

log.start()
settings = get_project_settings()
crawler = Crawler(settings)
crawler.configure()

for spider_name in crawler.spiders.list():
    setup_crawler(spider_name)

reactor.run()

You will have to design some signal system to stop the reactor when all spiders are finished.
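For the pre-1.0 API used above, one way to do that (a sketch only, assuming a Scrapy version around 0.18–0.24 where Crawler exposes a signals manager) is to count spider_closed signals and stop the reactor when the last crawler reports in:

from twisted.internet import reactor
from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
running = 0

def spider_closed(spider):
    # stop the event loop once the last spider has closed
    global running
    running -= 1
    if running == 0:
        reactor.stop()

def setup_crawler(spider_name):
    global running
    crawler = Crawler(settings)
    crawler.signals.connect(spider_closed, signal=signals.spider_closed)
    crawler.configure()
    crawler.crawl(crawler.spiders.create(spider_name))
    crawler.start()
    running += 1

log.start()
lister = Crawler(settings)
lister.configure()
for spider_name in lister.spiders.list():
    setup_crawler(spider_name)
reactor.run()
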
EDIT: And here is how to run multiple spiders in a custom command:

from scrapy.command import ScrapyCommand
from scrapy.utils.project import get_project_settings
from scrapy.crawler import Crawler

class Command(ScrapyCommand):

    requires_project = True

    def syntax(self):
        return '[options]'

    def short_desc(self):
        return 'Runs all of the spiders'

    def run(self, args, opts):
        settings = get_project_settings()

        for spider_name in self.crawler.spiders.list():
            crawler = Crawler(settings)
            crawler.configure()
            spider = crawler.spiders.create(spider_name)
            crawler.crawl(spider)
            crawler.start()

        self.crawler.start()

34gzjxbg #3

@Steven Almeroth's answer (the custom command above) will fail in Scrapy 1.0; you should edit the script like this:

from scrapy.commands import ScrapyCommand
from scrapy.utils.project import get_project_settings
from scrapy.crawler import CrawlerProcess

class Command(ScrapyCommand):

    requires_project = True
    excludes = ['spider1']

    def syntax(self):
        return '[options]'

    def short_desc(self):
        return 'Runs all of the spiders'

    def run(self, args, opts):
        settings = get_project_settings()
        crawler_process = CrawlerProcess(settings) 

        for spider_name in crawler_process.spider_loader.list():
            if spider_name in self.excludes:
                continue
            spider_cls = crawler_process.spider_loader.load(spider_name) 
            crawler_process.crawl(spider_cls)
        crawler_process.start()
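
For Scrapy to discover a custom command like this, you also need to point the COMMANDS_MODULE setting at the package that contains it. Assuming the file is saved as yourproject/commands/crawlall.py (the module name is hypothetical; the command name comes from the file name, and the commands directory needs an __init__.py), something like:

# in yourproject/settings.py
COMMANDS_MODULE = "yourproject.commands"

After that, `scrapy crawlall` runs every spider except the ones listed in excludes.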

bweufnob #4

This code works for me with Scrapy version 1.3.3 (save it in the same directory as scrapy.cfg):

from scrapy.utils.project import get_project_settings
from scrapy.crawler import CrawlerProcess

setting = get_project_settings()
process = CrawlerProcess(setting)

for spider_name in process.spiders.list():
    print ("Running spider %s" % (spider_name))
    process.crawl(spider_name, query="dvh")  # query="dvh" is a custom argument passed to your spider

process.start()

For Scrapy 1.5.x (so you don't get the deprecation warning):

from scrapy.utils.project import get_project_settings
from scrapy.crawler import CrawlerProcess

setting = get_project_settings()
process = CrawlerProcess(setting)

for spider_name in process.spider_loader.list():
    print ("Running spider %s" % (spider_name))
    process.crawl(spider_name, query="dvh")  # query="dvh" is a custom argument passed to your spider

process.start()
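
The query="dvh" keyword is passed through as a spider argument; with the default Spider.__init__ it simply ends up as self.query on each spider. A hypothetical spider consuming it might look like this:

import scrapy

class ExampleSpider(scrapy.Spider):
    # hypothetical spider, shown only to illustrate how the
    # query="dvh" argument from process.crawl() is received
    name = "example"

    def __init__(self, query=None, *args, **kwargs):
        super(ExampleSpider, self).__init__(*args, **kwargs)
        self.query = query

    def start_requests(self):
        yield scrapy.Request(
            "https://example.com/search?q=%s" % self.query,
            callback=self.parse,
        )

    def parse(self, response):
        yield {"url": response.url, "query": self.query}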

jobtbby3 #5

A Linux shell script:

#!/bin/bash
for spider in $(scrapy list)
do
scrapy crawl "$spider" -o "$spider".json
done
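
If you want the same one-output-file-per-spider result from a single Python process instead, newer Scrapy versions (2.1+, where the FEEDS setting exists) support the %(name)s placeholder in feed URIs; a sketch:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
# %(name)s expands to each spider's name, so every spider
# writes its items to its own file, e.g. myspider.json
settings.set("FEEDS", {"%(name)s.json": {"format": "json"}})

process = CrawlerProcess(settings)
for spider_name in process.spider_loader.list():
    process.crawl(spider_name)
process.start()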

jucafojl #6

Run all of the spiders in the project using Python:

# Run all of the spiders in the project (implemented with Scrapy 2.7.0)

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

def main():
    settings = get_project_settings()
    process = CrawlerProcess(settings)
    spiders_names = process.spider_loader.list()
    for s in spiders_names:
        process.crawl(s)
    process.start()

if __name__ == '__main__':
    main()
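
Note that CrawlerProcess schedules all of those spiders to run concurrently in the same process. If you want them to run one after another instead, the documented pattern is CrawlerRunner with chained deferreds; a sketch along those lines:

from twisted.internet import defer, reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
configure_logging(settings)
runner = CrawlerRunner(settings)

@defer.inlineCallbacks
def crawl_sequentially():
    # wait for each crawl to finish before starting the next one
    for spider_name in runner.spider_loader.list():
        yield runner.crawl(spider_name)
    reactor.stop()

crawl_sequentially()
reactor.run()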
