How do you handle invalid URLs with scrapy-redis when running it in a subprocess?

zvms9eto · posted 2021-06-09 in Redis

I want to request the HTML for the URLs pushed to Redis, and that part works well.
If I run the crawl from the command line, the ValueError is easy to handle and the process continues.
However, when I run it from Python (via subprocess), as soon as it hits a ValueError from an invalid "url" (e.g. javascript:void(0), mailto links), it finishes immediately and the remaining URLs never run.
How do you handle invalid URLs when running scrapy-redis under Python's subprocess?


    # request html (or other info)
    import subprocess

    import redis

    # TODO: dupefilter implemented from scratch
    # push the URLs to redis (url_list is built earlier)
    r = redis.Redis(host='localhost', port=6379, db=0)
    r.delete('MySpider:start_urls')
    r.delete('MySpider:items')
    for url in url_list:
        print(url)  # debug: show each URL as it is pushed
        r.lpush('MySpider:start_urls', url)

    # scrape info from the URLs
    urls_from_scrapy = []
    html_strs_from_scrapy = []

    worker = subprocess.Popen("scrapy crawl MySpider".split())  # errors when a LOG arg is added
    worker.wait(timeout=None)
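
One way to avoid the crash entirely is to validate each URL before pushing it to Redis, so mailto: and javascript: links never reach the spider. A minimal sketch of such a pre-filter, assuming a URL only needs an http/https scheme to be crawlable (the is_crawlable helper below is hypothetical, not part of scrapy or scrapy-redis):

    from urllib.parse import urlparse

    def is_crawlable(url):
        # hypothetical helper: Scrapy's Request requires a URL with a scheme,
        # so keep only http/https links and drop mailto:, javascript:, etc.
        return urlparse(url).scheme in ('http', 'https')

    for url in filter(is_crawlable, url_list):
        r.lpush('MySpider:start_urls', url)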

Here is the error I see:

2020-08-11 09:26:21 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method RedisMixin.spider_idle of <MySpider 'MySpider' at 0x201fde4ef48>>
Traceback (most recent call last):
  File "c:\programdata\anaconda3\lib\site-packages\scrapy\utils\signal.py", line 32, in send_catch_log
    response = robustApply(receiver, signal=signal, sender=sender, *arguments,**named)
  File "c:\programdata\anaconda3\lib\site-packages\pydispatch\robustapply.py", line 55, in robustApply
    return receiver(*arguments,**named)
  File "D:\PycharmProjects\genetic_algo_crawl\demo\scrapy_redis\spiders.py", line 128, in spider_idle
    self.schedule_next_requests()
  File "D:\PycharmProjects\genetic_algo_crawl\demo\scrapy_redis\spiders.py", line 122, in schedule_next_requests
    for req in self.next_requests():
  File "D:\PycharmProjects\genetic_algo_crawl\demo\scrapy_redis\spiders.py", line 91, in next_requests
    req = self.make_request_from_data(data)
  File "D:\PycharmProjects\genetic_algo_crawl\demo\scrapy_redis\spiders.py", line 117, in make_request_from_data
    return self.make_requests_from_url(url)
  File "c:\programdata\anaconda3\lib\site-packages\scrapy\spiders\__init__.py", line 87, in make_requests_from_url
    return Request(url, dont_filter=True)
  File "c:\programdata\anaconda3\lib\site-packages\scrapy\http\request\__init__.py", line 25, in __init__
    self._set_url(url)
  File "c:\programdata\anaconda3\lib\site-packages\scrapy\http\request\__init__.py", line 69, in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: mailto:sjxx@xidian.edu.cn
2020-08-11 09:26:21 [scrapy.core.engine] INFO: Closing spider (finished)
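
The traceback shows the failure point: spider_idle pops the next value from Redis, make_request_from_data builds a Request from it, and Request raises ValueError for URLs without an http scheme, which escapes the idle handler and closes the spider. A hedged alternative is to override make_request_from_data in the spider and skip bad entries there; this is only a sketch, assuming a scrapy-redis version like the one in the traceback, whose next_requests logs and skips a falsy return value:

    from scrapy_redis.spiders import RedisSpider
    from scrapy_redis.utils import bytes_to_str

    class MySpider(RedisSpider):
        name = 'MySpider'

        def make_request_from_data(self, data):
            # data is the raw bytes value popped from MySpider:start_urls
            url = bytes_to_str(data, self.redis_encoding)
            try:
                # delegate to the default behaviour for well-formed URLs
                return self.make_requests_from_url(url)
            except ValueError:
                # mailto:, javascript:void(0), etc. have no http scheme;
                # log and skip instead of letting the idle handler die
                self.logger.warning('Skipping invalid url: %r', url)
                return None

With the bad URLs swallowed inside the spider, the subprocess run should no longer exit early, matching the behaviour seen when handling the ValueError from the command line.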
