python-3.x  Why is my program that scrapes the NSE website blocked on a server but works locally?

siv3szwd  asked on 2023-01-10  in Python

This Python code runs on my local machine, but not on:
1. DigitalOcean
2. Amazon AWS
3. Google Colab
4. Heroku
and many other VPS providers. It shows different errors at different times.

import requests

headers = {
    'authority': 'beta.nseindia.com',
    'cache-control': 'max-age=0',
    'dnt': '1',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36',
    'sec-fetch-user': '?1',
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'sec-fetch-site': 'none',
    'sec-fetch-mode': 'navigate',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9,hi;q=0.8',
}

params = (
    ('symbol', 'BANKNIFTY'),
)

response = requests.get('https://nseindia.com/api/quote-derivative', headers=headers, params=params)

#NB. Original query string below. It seems impossible to parse and
#reproduce query strings 100% accurately so the one below is given
#in case the reproduced version is not "correct".
# response = requests.get('https://nseindia.com/api/quote-derivative?symbol=BANKNIFTY', headers=headers)

Is there anything wrong with the code above? What am I missing? I copied the header data from Chrome Developer Tools > Network in incognito mode and generated the Python code from the curl command using https://curl.trillworks.com/.
The curl command itself runs fine and its output is also fine:

curl "https://nseindia.com/api/quote-derivative?symbol=BANKNIFTY" -H "authority: beta.nseindia.com" -H "cache-control: max-age=0" -H "dnt: 1" -H "upgrade-insecure-requests: 1" -H "user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36" -H "sec-fetch-user: ?1" -H "accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9" -H "sec-fetch-site: none" -H "sec-fetch-mode: navigate" -H "accept-encoding: gzip, deflate, br" -H "accept-language: en-US,en;q=0.9,hi;q=0.8"  --compressed

Why does the curl command work, but the Python code generated from it does not?


uwopmtnx1#

There are two things to note.
1. The request headers need to include "Host" and "User-Agent":

__request_headers = {
    'Host': 'www.nseindia.com',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:82.0) Gecko/20100101 Firefox/82.0',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.5',
    'Accept-Encoding': 'gzip, deflate, br',
    'DNT': '1',
    'Connection': 'keep-alive',
    'Upgrade-Insecure-Requests': '1',
    'Pragma': 'no-cache',
    'Cache-Control': 'no-cache',
}

2. The following cookies are set dynamically and need to be extracted and set at runtime:

'nsit',
'nseappid',
'ak_bmsc'

These are set by NSE depending on the feature being used. In this example I fetch the top gainers and losers list; without these cookies the request is blocked.

import logging
import requests

logger = logging.getLogger(__name__)

try:
    nse_url = 'https://www.nseindia.com/market-data/top-gainers-loosers'
    url = 'https://www.nseindia.com/api/live-analysis-variations?index=gainers'
    # First request: load the page so NSE sets the nsit/nseappid/ak_bmsc cookies
    resp = requests.get(url=nse_url, headers=__request_headers)
    if resp.ok:
        req_cookies = dict(nsit=resp.cookies['nsit'],
                           nseappid=resp.cookies['nseappid'],
                           ak_bmsc=resp.cookies['ak_bmsc'])
        # Second request: call the API with the freshly extracted cookies
        tresp = requests.get(url=url, headers=__request_headers, cookies=req_cookies)
        result = tresp.json()
        res_data = result["NIFTY"]["data"] if "NIFTY" in result and "data" in result["NIFTY"] else []
        if res_data:
            __top_list = res_data
except OSError as err:
    logger.error('Unable to fetch data: %s', err)
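
The same two-step flow can also be written with requests.Session, which stores the cookies from the first request and sends them automatically on the API call. The following is only a minimal sketch of that variant, reusing the URLs and headers shown above:

import requests

# Sketch: a Session keeps the nsit/nseappid/ak_bmsc cookies set by the first
# page load and attaches them automatically to the second request.
headers = {
    'Host': 'www.nseindia.com',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:82.0) Gecko/20100101 Firefox/82.0',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.5',
}

with requests.Session() as session:
    session.headers.update(headers)
    # Warm-up request: NSE sets the session cookies on this page load
    session.get('https://www.nseindia.com/market-data/top-gainers-loosers', timeout=10)
    # The API call reuses the cookies stored in the session
    resp = session.get('https://www.nseindia.com/api/live-analysis-variations?index=gainers', timeout=10)
    print(resp.json())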

Another thing to note is that NSE blocks these requests from most cloud VMs (such as AWS and GCP). I was able to make these requests from my personal Windows machine, but not from AWS or GCP.


svmlkihl2#

I ran into the same problem. I don't know the proper Pythonic solution with the python-requests module; NSE most likely blocks it.
So here is a workaround that works well. It looks hacky, but I have been using it without digging deeper:

import os
import subprocess

# Run from the script's directory so maxpain.txt is written next to this script
os.chdir(os.path.dirname(os.path.abspath(__file__)))

# subprocess.run waits for curl to finish, so the file is complete before it is read back
subprocess.run('curl "https://www.nseindia.com/api/quote-derivative?symbol=BANKNIFTY" -H "authority: beta.nseindia.com" -H "cache-control: max-age=0" -H "dnt: 1" -H "upgrade-insecure-requests: 1" -H "user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36" -H "sec-fetch-user: ?1" -H "accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9" -H "sec-fetch-site: none" -H "sec-fetch-mode: navigate" -H "accept-encoding: gzip, deflate, br" -H "accept-language: en-US,en;q=0.9,hi;q=0.8" --compressed -o maxpain.txt', shell=True, check=True)

with open("maxpain.txt", "r") as f:
    var = f.read()
print(var)

It basically runs curl, sends the output to a file, and then reads it back in. That's all.
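
If you prefer to avoid the temporary file, the same workaround can capture curl's stdout directly with subprocess.run. This is only a sketch of that variant, with the header list trimmed to the essentials:

import json
import subprocess

# Sketch: capture curl's stdout instead of round-tripping through maxpain.txt
cmd = [
    'curl', '-s', '--compressed',
    'https://www.nseindia.com/api/quote-derivative?symbol=BANKNIFTY',
    '-H', 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36',
    '-H', 'accept: */*',
    '-H', 'accept-language: en-US,en;q=0.9,hi;q=0.8',
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
data = json.loads(result.stdout)  # the endpoint returns JSON
print(data)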
