Scrapy gets a 403 response due to Cloudflare (clutch.co)

b1zrtrql · posted 2023-02-22 · in Other
Follow (0) | Answers (1) | Views (863)

I am trying to scrape some information about different agencies from clutch.co. Everything works fine when I open the URL in a browser, but with Scrapy I get a 403 response. From the related questions I have read, I believe it comes from Cloudflare. Is there any way to bypass these protections? Here is my Scrapy code:

import scrapy
from datetime import datetime


class ClutchSpider(scrapy.Spider):
    name = "clutch"
    allowed_domains = ["clutch.co"]
    start_urls = ["http://clutch.co/"]
    
    custom_settings = {
        'DOWNLOAD_DELAY': 1,
        'CONCURRENT_REQUESTS': 5,
        'RETRY_ENABLED': True,
        'RETRY_TIMES': 5,
        'ROBOTSTXT_OBEY': False,
        'FEED_URI': f'output/output{datetime.timestamp(datetime.now())}.json',
        'FEED_FORMAT': 'json',
    }

    def __init__(self, *args, **kwargs) -> None:
        super().__init__(*args, **kwargs)
        self.input_urls = ['https://clutch.co/directory/mobile-application-developers']
        self.headers = {
                        'accept': '*/*', 
                        'accept-encoding': 'gzip, deflate, br', 
                        'accept-language': 'en-US,en;q=0.9,fa;q=0.8', 
                        # 'cookie': 'shortlist_prompts=true; FPID=FPID2.2.iqvavTK2dqTJ7yLsgWqoL8fYmkFoX3pzUlG6mTVjfi0%3D.1673247154; CookieConsent={stamp:%27zejzt8TIN2JRypvuDr+oPX/PjYUsuVCNii4qWhJvCxxtOxEXcb5hMg==%27%2Cnecessary:true%2Cpreferences:true%2Cstatistics:true%2Cmarketing:true%2Cmethod:%27explicit%27%2Cver:1%2Cutc:1673247163647%2Cregion:%27nl%27}; _gcl_au=1.1.1124048711.1676796982; _gid=GA1.2.316079371.1676796983; ab.storage.deviceId.c7739970-c490-4772-aa67-2b5c1403137e=%7B%22g%22%3A%22d2822ae5-4bac-73ae-cfc0-86adeaeb1add%22%2C%22c%22%3A1676797005041%2C%22l%22%3A1676797005041%7D; ln_or=eyIyMTU0NjAyIjoiZCJ9; hubspotutk=f019384cf677064ee212b1891e67181c; FPLC=o62q7Cwf0JP12iF73tjxOelgvID3ocGZrxnLxzHlB%2F9In25%2BL7oYAwvSxOTnaZWDYH7G2iMkQ03VUW%2BJgWsv7i7StDXSdFnQr6Dpj6VC%2F2Ya4ZptNbWzzRcJUv00JA%3D%3D; __hssrc=1; shortlist_prompts=true; __hstc=238368351.f019384cf677064ee212b1891e67181c.1676798584729.1676873409297.1676879456609.3; __cf_bm=Pn4xsZ2pgyFdB0bdi9t0xTpqxVzY9t5vhySYN6uRpAQ-1676881063-0-AT8uJ+ux6Tmu0WU+bsJovJ1CubUhs+C0JBulUr1i2aQLY28rn7T23PVuGWffSrCaNjeeYPzSDN42NJ46j10jKEPjPO3mS4P8uMx9dDmA7wTqz5NCdil5W5uGQJs2pMbcjbQSfNTjQLh5umYER6hhhLx8qrRFHDnTTJ1vkORfc0eSqBe0rjqaHeR4HFINZOp1UQ==; _ga=GA1.2.298895719.1676796981; _gat_gtag_UA_2814589_5=1; __hssc=238368351.3.1676879456609; _ga_D0WFGX8X3V=GS1.1.1676879405.3.1.1676881074.46.0.0', 
                        'referer': 'https://google.com', 
                        'sec-ch-ua': '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"', 
                        'sec-ch-ua-mobile': '?0', 
                        'sec-ch-ua-platform': '"Windows"', 
                        'sec-fetch-dest': 'empty', 
                        'sec-fetch-mode': 'cors', 
                        'sec-fetch-site': 'same-origin', 
                        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.50'
                    }

    def start_requests(self):
        for url in self.input_urls:
            yield scrapy.Request(url=url, callback=self.parse, headers=self.headers)

    def parse(self, response):
        agencies = response.xpath(".//div[@class='company col-md-12 prompt-target sponsor']/a/@href").extract()
        for agency in agencies:
            yield response.follow(agency, callback=self.parse_agency, headers=self.headers)

PS: I would rather not use tools like Selenium, because they slow everything down too much. But if there is no other way to solve this, how could I benefit from Selenium? (Even though it also gives me a 403.)

jaql4c8m #1

The cloudscraper project is designed to bypass Cloudflare's protection:

import cloudscraper

# returns a CloudScraper instance
scraper = cloudscraper.create_scraper()

# CloudScraper inherits from requests.Session
# Or: scraper = cloudscraper.CloudScraper()  

page = scraper.get("http://somesite.com")

# 200
print(page.status_code)

Installation:

Just run `pip install cloudscraper`; the PyPI package is at https://pypi.python.org/pypi/cloudscraper/
