I'm trying to parse posts from this site and collect their text for sentiment analysis. Below is the code I'm using.
First,
and in the terminal,
cd dcscraper
scrapy crawl dcscraper -o ~/dcscraper/result/result.csv
Here is the log.
2022-11-22 15:57:53 [scrapy.utils.log] INFO: Scrapy 2.7.0 started (bot: dcscraper)
2022-11-22 15:57:53 [scrapy.utils.log] INFO: Versions: lxml 4.9.1.0, libxml2 2.10.3, cssselect 1.2.0, parsel 1.7.0, w3lib 2.0.1, Twisted 22.10.0, Python 3.10.8 (main, Nov 1 2022, 14:18:21) [GCC 12.2.0], pyOpenSSL 22.1.0 (OpenSSL 3.0.7 1 Nov 2022), cryptography 38.0.3, Platform Linux-5.15.78-1-MANJARO-x86_64-with-glibc2.36
2022-11-22 15:57:53 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'dcscraper',
'EDITOR': '/usr/bin/nano',
'NEWSPIDER_MODULE': 'dcscraper.spiders',
'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'ROBOTSTXT_OBEY': True,
'SPIDER_MODULES': ['dcscraper.spiders'],
'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor',
'USER_AGENT': 'Googlebot/2.1 (+http://www.google.com/bot.html)'}
2022-11-22 15:57:53 [asyncio] DEBUG: Using selector: EpollSelector
2022-11-22 15:57:53 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2022-11-22 15:57:53 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.unix_events._UnixSelectorEventLoop
2022-11-22 15:57:53 [scrapy.extensions.telnet] INFO: Telnet Password: 0ceb3c2ae12e2e05
2022-11-22 15:57:53 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.logstats.LogStats']
2022-11-22 15:57:53 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2022-11-22 15:57:53 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2022-11-22 15:57:53 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2022-11-22 15:57:53 [scrapy.core.engine] INFO: Spider opened
2022-11-22 15:57:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-11-22 15:57:53 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-11-22 15:57:53 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://gall.dcinside.com/robots.txt> (referer: None)
2022-11-22 15:57:53 [filelock] DEBUG: Attempting to acquire lock 140598032389680 on /home/luxiant/.cache/python-tldextract/3.10.8.final__usr__7d8fdf__tldextract-3.4.0/publicsuffix.org-tlds/de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-11-22 15:57:53 [filelock] DEBUG: Lock 140598032389680 acquired on /home/luxiant/.cache/python-tldextract/3.10.8.final__usr__7d8fdf__tldextract-3.4.0/publicsuffix.org-tlds/de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-11-22 15:57:53 [filelock] DEBUG: Attempting to release lock 140598032389680 on /home/luxiant/.cache/python-tldextract/3.10.8.final__usr__7d8fdf__tldextract-3.4.0/publicsuffix.org-tlds/de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-11-22 15:57:53 [filelock] DEBUG: Lock 140598032389680 released on /home/luxiant/.cache/python-tldextract/3.10.8.final__usr__7d8fdf__tldextract-3.4.0/publicsuffix.org-tlds/de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-11-22 15:57:53 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://gall.dcinside.com/board/lists?id=bitcoins_new1&page=1> (referer: None)
2022-11-22 15:57:53 [scrapy.core.scraper] ERROR: Spider error processing <GET https://gall.dcinside.com/board/lists?id=bitcoins_new1&page=1> (referer: None)
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/parsel/selector.py", line 423, in xpath
result = xpathev(
File "src/lxml/etree.pyx", line 1599, in lxml.etree._Element.xpath
File "src/lxml/xpath.pxi", line 305, in lxml.etree.XPathElementEvaluator.__call__
File "src/lxml/xpath.pxi", line 225, in lxml.etree._XPathEvaluatorBase._handle_result
lxml.etree.XPathEvalError: Invalid predicate
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/scrapy/utils/defer.py", line 240, in iter_errback
yield next(it)
File "/usr/lib/python3.10/site-packages/scrapy/utils/python.py", line 338, in __next__
return next(self.data)
File "/usr/lib/python3.10/site-packages/scrapy/utils/python.py", line 338, in __next__
return next(self.data)
File "/usr/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 79, in process_sync
for r in iterable:
File "/usr/lib/python3.10/site-packages/scrapy/spidermiddlewares/offsite.py", line 29, in <genexpr>
return (r for r in result or () if self._filter(r, spider))
File "/usr/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 79, in process_sync
for r in iterable:
File "/usr/lib/python3.10/site-packages/scrapy/spidermiddlewares/referer.py", line 336, in <genexpr>
return (self._set_referer(r, response) for r in result or ())
File "/usr/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 79, in process_sync
for r in iterable:
File "/usr/lib/python3.10/site-packages/scrapy/spidermiddlewares/urllength.py", line 28, in <genexpr>
return (r for r in result or () if self._filter(r, spider))
File "/usr/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 79, in process_sync
for r in iterable:
File "/usr/lib/python3.10/site-packages/scrapy/spidermiddlewares/depth.py", line 32, in <genexpr>
return (r for r in result or () if self._filter(r, response, spider))
File "/usr/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 79, in process_sync
for r in iterable:
File "/home/luxiant/dcscraper/dcscraper/spiders/spider.py", line 14, in parse
for link in response.xpath('//*[@id="container"]/section/article/div/table/tbody/tr/td/a[contains(@href, "/board/view")'):
File "/usr/lib/python3.10/site-packages/scrapy/http/response/text.py", line 138, in xpath
return self.selector.xpath(query, **kwargs)
File "/usr/lib/python3.10/site-packages/parsel/selector.py", line 430, in xpath
raise ValueError(f"XPath error: {exc} in {query}")
ValueError: XPath error: Invalid predicate in //*[@id="container"]/section/article/div/table/tbody/tr/td/a[contains(@href, "/board/view")
2022-11-22 15:57:53 [scrapy.core.engine] INFO: Closing spider (finished)
2022-11-22 15:57:53 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 505,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 33699,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'elapsed_time_seconds': 0.36847,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2022, 11, 22, 6, 57, 53, 838005),
'httpcompression/response_bytes': 169467,
'httpcompression/response_count': 2,
'log_count/DEBUG': 9,
'log_count/ERROR': 1,
'log_count/INFO': 10,
'memusage/max': 117219328,
'memusage/startup': 117219328,
'response_received_count': 2,
'robotstxt/request_count': 1,
'robotstxt/response_count': 1,
'robotstxt/response_status_count/200': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'spider_exceptions/ValueError': 1,
'start_time': datetime.datetime(2022, 11, 22, 6, 57, 53, 469535)}
2022-11-22 15:57:53 [scrapy.core.engine] INFO: Spider closed (finished)
What should I check to troubleshoot this?
At first I thought the problem was with the element I was targeting, so I put in the xpath of the elements I want to collect, which produced the code shown above. Checking the debug log, I see that the parser isn't reading the elements properly (referer: None). I think this may be one of the causes, but I'm still trying to work it out.
1 Answer
The log tells you exactly what the problem is:
File "/home/luxiant/dcscraper/dcscraper/spiders/spider.py", line 14, in parse for link in response.xpath('//[@id="container"]/section/article/div/table/tbody/tr/td/a[contains(@href, "/board/view")'): File "/usr/lib/python3.10/site-packages/scrapy/http/response/text.py", line 138, in xpath return self.selector.xpath(query,**kwargs) ... raise ValueError(f"XPath error: {exc} in {query}") ValueError: XPath error: Invalid predicate in //[@id="container"]/section/article/div/table/tbody/tr/td/a[contains(@href, "/board/view") 2022-11-22 15:57:53 [scrapy.core.engine] INFO: Closing spider (finished)
This means that at line 14, in your parse method, you are not using valid XPath selector syntax.
The problem is that you never close the final set of square brackets [] in your expression. It should look like this:
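A minimal sketch of the corrected line 14, assuming the rest of the parse method is unchanged; the only edit is the closing ] on the contains() predicate:

for link in response.xpath('//*[@id="container"]/section/article/div/table/tbody/tr/td/a[contains(@href, "/board/view")]'):

To confirm the selector before re-running the crawl, you can also test it interactively with scrapy shell 'https://gall.dcinside.com/board/lists?id=bitcoins_new1&page=1' and call response.xpath(...) there; a valid expression returns a SelectorList instead of raising ValueError.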