Scrapy: "Missing scheme in request url" when downloading CSV files from a webpage

Asked by 2ul0zpep on 2022-11-09

I'm fairly new to Scrapy and have been trying to download .csv files directly from a website. I managed to fix my last problem with an edit, but I'm getting a new error when trying to download the files:

raise ValueError(f'Missing scheme in request url: {self._url}')
ValueError: Missing scheme in request url: h

I'm not sure what triggers this error, since the link is followed correctly into the next callback. Here is what I have tried:

import scrapy
from nhs.items import DownfilesItem

class NhsScapeSpider(scrapy.Spider):
    name = 'nhs_scape'
    #allowed_domains = ['nh']
    start_urls = ['https://www.england.nhs.uk/statistics/statistical-work-areas/ae-waiting-times-and-activity/ae-attendances-and-emergency-admissions-2021-22/']

    custom_settings = {
        'USER_AGENT':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36'
    }

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(
                url = url,
                callback = self.parse
            )

    def parse(self, response):
        side_panel = response.xpath("//aside[@class='subnav group minimal_nav desktop-only']//ul[@class='children']//li")
        for years in side_panel:
            year_links = years.xpath('.//a/@href').get()
            yield response.follow(year_links, callback = self.download_files)

    def download_files(self, response):
        test_files = response.xpath("//article[@class='rich-text']//p")
        month_files = response.xpath("//article[@class='rich-text']//h3")

        for files, mn in zip(test_files, month_files):
            all_files = files.xpath('.//a//@href').getall()
            all_file_names = files.xpath('.//a//text()').getall()
            month_year = mn.xpath('.//text()').get()

            for ind_files,ind_text in zip(all_files, all_file_names):
                item = DownfilesItem()

                if '.xls' in ind_files and 'Monthly' in ind_text:
                    item['file_urls'] = ind_files
                    item['original_file_name'] = ind_text
                    yield item

                elif '.xls' in ind_files and 'Week' in ind_text:
                    item['file_urls'] = ind_files
                    item['original_file_name'] = ind_text
                    yield item

Items.py:

import scrapy
class DownfilesItem(scrapy.Item):

    # define the fields for your item here like:
    file_urls = scrapy.Field()
    original_file_name = scrapy.Field()

Pipelines.py:

from scrapy.pipelines.files import FilesPipeline
class DownfilesPipeline(FilesPipeline):
    def file_path(self, request, response=None, info=None):
        file_name: str = request.url.split("/")[1]
        return file_name

Settings.py:

ITEM_PIPELINES = {'nhs.pipelines.DownfilesPipeline': 150}
FILES_STORE = "Files"

Update: new error after trying the answer below:

IsADirectoryError: [Errno 21] Is a directory: 'Files/'

This seems to be caused by FILES_STORE = "Files": if I remove that setting the error goes away, but no files are downloaded either.

Answer 1, by oug3syen:

item['file_urls'] should be a list:

if '.xls' in ind_files and 'Monthly' in ind_text:
    item['file_urls'] = [ind_files]
    item['original_file_name'] = ind_text
    yield item

elif '.xls' in ind_files and 'Week' in ind_text:
    item['file_urls'] = [ind_files]
    item['original_file_name'] = ind_text
    yield item
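
This is also what produced the original error: FilesPipeline iterates over the file_urls field, so when it holds a bare string it is iterated character by character, and the first "URL" Scrapy sees is the single letter h from https://... . A quick interpreter check (illustrative URL, not one of the real NHS links):

>>> file_urls = 'https://www.england.nhs.uk/some-file.xls'
>>> [u for u in file_urls][:3]   # iterating a string yields characters, not URLs
['h', 't', 't']

Wrapping the string in a list gives the pipeline one well-formed URL instead.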

Edit:

The second error is caused by the pipeline: file_name ends up as an empty string. You can change it, for example, to:

file_name: str = request.url.split("/")[-1]
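
To see why the original index produced the IsADirectoryError: splitting an absolute URL on '/' leaves an empty string at index 1, between the scheme and the host, so file_path returned '' and Scrapy tried to write to the bare Files/ directory. With an illustrative URL:

>>> 'https://www.england.nhs.uk/statistics/report.xls'.split('/')
['https:', '', 'www.england.nhs.uk', 'statistics', 'report.xls']

Index -1 picks out 'report.xls', a usable file name.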

Edit 2:

I think the problem is with the XPath selectors. Try this and adjust it to your needs:

import scrapy
from tempbuffer.items import DownfilesItem

class NhsScapeSpider(scrapy.Spider):
    name = 'nhs_scape'
    #allowed_domains = ['nh']
    start_urls = ['https://www.england.nhs.uk/statistics/statistical-work-areas/ae-waiting-times-and-activity/ae-attendances-and-emergency-admissions-2021-22/']

    custom_settings = {
        'USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36',
    }

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(
                url=url,
                callback=self.parse
            )

    def parse(self, response):
        side_panel = response.xpath("//aside[@class='subnav group minimal_nav desktop-only']//ul[@class='children']//li")
        for years in side_panel:
            year_links = years.xpath('.//a/@href').get()
            yield response.follow(year_links, callback=self.download_files)

    def download_files(self, response):
        # test_files = response.xpath("//article[@class='rich-text']//p")
        test_files = response.xpath("//article[@class='rich-text']//p[a[contains(@href, '.xls')]]")
        # month_files = response.xpath("//article[@class='rich-text']//h3")
        # couldn't make a prettier xpath selector
        month_files = response.xpath("//article[@class='rich-text']//h3[starts-with(text(), 'January') or starts-with(text(), 'February') or starts-with(text(), 'March') or starts-with(text(), 'April') or starts-with(text(), 'May') or starts-with(text(), 'June') or starts-with(text(), 'July') or starts-with(text(), 'August') or starts-with(text(), 'September') or starts-with(text(), 'October') or starts-with(text(), 'November') or starts-with(text(), 'December')]")

        for files, mn in zip(test_files, month_files):
            all_files = files.xpath('.//a//@href').getall()
            all_file_names = files.xpath('.//a//text()').getall()
            month_year = mn.xpath('.//text()').get()

            for ind_files, ind_text in zip(all_files, all_file_names):
                item = DownfilesItem()

                if '.xls' in ind_files and 'Monthly' in ind_text:
                    item['file_urls'] = [ind_files]
                    item['original_file_name'] = ind_text
                    yield item

                elif '.xls' in ind_files and 'Week' in ind_text:
                    item['file_urls'] = [ind_files]
                    item['original_file_name'] = ind_text
                    yield item
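
As a side note, if that long h3 predicate is too unwieldy, a possible tidier variant (my own sketch, not part of the tested answer) builds the same expression from Python's calendar module inside download_files:

import calendar

# Build "starts-with(text(), 'January') or ... or starts-with(text(), 'December')"
months = [calendar.month_name[i] for i in range(1, 13)]
predicate = ' or '.join(f"starts-with(text(), '{m}')" for m in months)
month_files = response.xpath(f"//article[@class='rich-text']//h3[{predicate}]")

The generated XPath is identical to the hand-written one above.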
