Crawler does not recognize the sqlite file

bpsygsoo · posted 2021-08-20 in Java

I have run this code on my own computer and it works fine, but something goes wrong after deploying it to Zyte (Scrapinghub).
I use scrapy-crawl-once to prevent duplicate crawling. It works correctly on my machine, but after uploading the project to Zyte it no longer detects duplicates.
All the relevant files are listed below.
zyte

```
[scrapy_crawl_once.middlewares] Opened crawl database '/scrapinghub/.scrapy/crawl_once/gumtree.sqlite' with 0 existing records
```

My computer

```
INFO: Opened crawl database 'E:\python\my projects\GT\final\GT\New GT\.scrapy\crawl_once\gumtree.sqlite' with 20 existing records
```
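The difference between the two log lines can be checked directly with a short stdlib-only script that opens the sqlite file the middleware reports and counts the rows in each table. It makes no assumption about scrapy-crawl-once's internal schema (it just lists whatever tables exist); the path at the bottom is illustrative:

```python
import os
import sqlite3

def inspect_crawl_once_db(path):
    """Return {table_name: row_count} for a sqlite file, or None if it is missing."""
    if not os.path.exists(path):
        return None  # a fresh container starts with no database at all
    con = sqlite3.connect(path)
    try:
        tables = [row[0] for row in
                  con.execute("SELECT name FROM sqlite_master WHERE type='table'")]
        return {t: con.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
                for t in tables}
    finally:
        con.close()

# Locally this points at the project's .scrapy directory; on Zyte the
# middleware reports /scrapinghub/.scrapy/crawl_once/gumtree.sqlite instead.
print(inspect_crawl_once_db(".scrapy/crawl_once/gumtree.sqlite"))
```

Running this in both environments shows whether the database file actually survives between jobs, which is what the "0 existing records" line suggests it does not.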

setup.py

```
# Automatically created by: shub deploy

from setuptools import setup, find_packages

setup(
    name         = 'project',
    version      = '1.0',
    packages     = find_packages(),
    entry_points = {'scrapy': ['settings = gumtree.settings']},
)
```

Directory layout

```
New/
    .scrapy/crawl_once/gumtree.sqlite
    gumtree/
        __init__.py
        items.py
        middlewares.py
        models.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            example.py
    templates/
        base.html
        results.html
    __init__.py
    requirements.txt
    scrapinghub.yml
```

settings.py

```
SPIDER_MIDDLEWARES = {
    'scrapy_crawl_once.CrawlOnceMiddleware': 100,
}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    'scrapy_crawl_once.CrawlOnceMiddleware': 50,
}
```
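Beyond the middleware entries, scrapy-crawl-once also reads a few settings that control where its database lives and which requests it tracks. The values below are the library's documented defaults as far as I recall them, so treat this fragment as a hedged sketch rather than a definitive reference:

```python
# settings.py (sketch) — scrapy-crawl-once defaults, per its README
CRAWL_ONCE_ENABLED = True                 # enable/disable the middleware globally
CRAWL_ONCE_PATH = '.scrapy/crawl_once/'   # directory holding <spider_name>.sqlite
CRAWL_ONCE_DEFAULT = False                # only requests with meta['crawl_once']=True are recorded
```

Note that `CRAWL_ONCE_PATH` is relative to the project directory, which is why the database lands under `/scrapinghub/.scrapy/` on Zyte and under the project folder locally.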

scrapinghub.yml

```
project: 111111
requirements:
    file: requirements.txt
```

requirements.txt

```
SQLAlchemy==1.4.20
PyMySQL==1.0.2
scrapy-crawl-once==0.1.1
itemadapter==0.2.0
```

No answers yet.
