I have the following task: in the database we have ~2k URLs. For each URL we need to run the spider until all URLs have been processed. I run the spider against the URLs in batches (10 at a time).
I used the following code:
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

URLs = crawler_table.find(crawl_timestamp=None)
settings = get_project_settings()

for i in range(len(URLs) // 10):
    process = CrawlerProcess(settings)
    limit = 10
    kount = 0
    for crawl in crawler_table.find(crawl_timestamp=None):
        if kount < limit:
            kount += 1
            process.crawl(
                MySpider,
                start_urls=[crawl['crawl_url']]
            )
    process = CrawlerProcess(settings)
    process.start()
But it only runs the first iteration of the loop. On the second iteration I get this error:
File "C:\Program Files\Python310\lib\site-packages\scrapy\crawler.py", line 327, in start
reactor.run(installSignalHandlers=False) # blocking call
File "C:\Program Files\Python310\lib\site-packages\twisted\internet\base.py", line 1314, in run
self.startRunning(installSignalHandlers=installSignalHandlers)
File "C:\Program Files\Python310\lib\site-packages\twisted\internet\base.py", line 1296, in startRunning
ReactorBase.startRunning(cast(ReactorBase, self))
File "C:\Program Files\Python310\lib\site-packages\twisted\internet\base.py", line 840, in startRunning
raise error.ReactorNotRestartable()
twisted.internet.error.ReactorNotRestartable
Is there a way to avoid this error and run the spider for all 2k URLs?
1 Answer
That's because you cannot start the Twisted reactor twice within the same process. What you need to do is create the CrawlerProcess outside the loop, queue all the crawls on it, and call start() only once.
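A minimal sketch of that restructuring, reusing the crawler_table and MySpider names from the question (both assumed to be defined/imported elsewhere):

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
process = CrawlerProcess(settings)  # created once, outside any loop

# Queue one crawl per pending URL; nothing runs yet.
for crawl in crawler_table.find(crawl_timestamp=None):
    process.crawl(MySpider, start_urls=[crawl['crawl_url']])

# Start the reactor exactly once; it blocks until all queued crawls finish.
process.start()

With this layout Scrapy's own concurrency settings (CONCURRENT_REQUESTS and friends) decide how much runs in parallel, so there is no need to slice the URLs into groups of 10 by hand. If queuing ~2k spider instances at once is too heavy, see the batched CrawlerRunner alternative below.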
You can also check the example provided in the documentation (the "Running multiple spiders in the same process" section of Scrapy's common practices page).
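If you do want to keep processing the URLs in batches of 10, a possible alternative, loosely based on the sequential-crawl pattern from those docs and untested against your schema (crawler_table, MySpider, and the batch size are assumptions carried over from the question), is to drive the reactor yourself with CrawlerRunner and chain the batches with Deferreds:

from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings

configure_logging()
runner = CrawlerRunner(get_project_settings())

# All pending URLs, split into batches of 10.
urls = [row['crawl_url'] for row in crawler_table.find(crawl_timestamp=None)]
batches = [urls[i:i + 10] for i in range(0, len(urls), 10)]

@defer.inlineCallbacks
def crawl_batches():
    for batch in batches:
        # Start one spider per URL in this batch and wait for all of them.
        yield defer.DeferredList(
            [runner.crawl(MySpider, start_urls=[url]) for url in batch]
        )
    reactor.stop()  # shut the reactor down after the last batch

crawl_batches()
reactor.run()  # started exactly once; blocks until reactor.stop()

Here the reactor is started once and stays running; runner.crawl() only schedules work on it, which is why ReactorNotRestartable never comes up.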