How do I fix a proxy error when reading and writing HDFS with Python?

xqnpmsa8 · posted 2021-05-29 in Hadoop
Follow (0) | Answers (1) | Views (466)

I have an HDFS cluster that I want to read from and write to with a Python script.

import requests
import json
import os
import kerberos
import sys

node = os.getenv("namenode").split(",")
print (node)

local_file_path = sys.argv[1]
remote_file_path = sys.argv[2]
read_or_write = sys.argv[3]
print (local_file_path,remote_file_path)

def check_node_status(node):
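    # Probe each NameNode's JMX status page and stop at the first active one.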
    for name in node:
        print (name)
        request = requests.get("%s/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"%name,
                               verify=False).json()
        status = request["beans"][0]["State"]
        if status =="active":
            nnhost = request["beans"][0]["HostAndPort"]
            splitaddr = nnhost.split(":")
            nnaddress = splitaddr[0]
            print(nnaddress)
            break
    return status,name,nnaddress

def kerberos_auth(nnaddress):
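    # Build a SPNEGO/Negotiate Authorization header from the local Kerberos ticket cache.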
    __, krb_context = kerberos.authGSSClientInit("HTTP@%s"%nnaddress)
    kerberos.authGSSClientStep(krb_context, "")
    negotiate_details = kerberos.authGSSClientResponse(krb_context)
    headers = {"Authorization": "Negotiate " + negotiate_details,
                "Content-Type":"application/binary"}
    return headers

def kerberos_hdfs_upload(status,name,headers):
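    # PUT the local file to WebHDFS CREATE; requests follows the redirect to a datanode.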
    print("running upload function")
    if status =="active":
        print("if function")
        with open(local_file_path, 'rb') as f:
            data = f.read()
        write_req = requests.put("%s/webhdfs/v1%s?op=CREATE&overwrite=true"%(name,remote_file_path),
                                 headers=headers,
                                 verify=False, 
                                 allow_redirects=True,
                                 data=data)
        print(write_req.text)

def kerberos_hdfs_read(status,name,headers):
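    # Fetch the remote file via WebHDFS OPEN and save it locally.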
    if status == "active":
        read = requests.get("%s/webhdfs/v1%s?op=OPEN"%(name,remote_file_path),
                            headers=headers,
                            verify=False,
                            allow_redirects=True)

        if read.status_code == 200:
            with open(local_file_path, 'wb') as f:
                f.write(read.content)
        else:
            print(read.content)

status, name, nnaddress= check_node_status(node)
headers = kerberos_auth(nnaddress)
if read_or_write == "write":
    kerberos_hdfs_upload(status,name,headers)
elif read_or_write == "read":
    print("fun")
    kerberos_hdfs_read(status,name,headers)

The code works on my own machine, which is not behind any proxy. But when run on the office machine, which sits behind a proxy, it fails with the following proxy error:

$ python3 python_hdfs.py ./1.png /user/testuser/2018-02-07_1.png write
['https://<servername>:50470', 'https:// <servername>:50470']
./1.png /user/testuser/2018-02-07_1.png
https://<servername>:50470
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 555, in urlopen
    self._prepare_proxy(conn)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 753, in _prepare_proxy
    conn.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 230, in connect
    self._tunnel()
  File "/usr/lib/python3.5/http/client.py", line 832, in _tunnel
    message.strip()))
OSError: Tunnel connection failed: 504 Unknown Host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 376, in send
    timeout=timeout
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 610, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 273, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='<servername>', port=50470): Max retries exceeded with url: /jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 504 Unknown Host',)))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "python_hdfs.py", line 68, in <module>
    status, name, nnaddress= check_node_status(node)
  File "python_hdfs.py", line 23, in check_node_status
    verify=False).json()
  File "/usr/lib/python3/dist-packages/requests/api.py", line 67, in get
    return request('get', url, params=params,**kwargs)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 53, in request
    return session.request(method=method, url=url,**kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 468, in request
    resp = self.send(prep,**send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request,**kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 437, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='<server_name>', port=50470): Max retries exceeded with url: /jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 504 Unknown Host',)))
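The tracebacks show requests trying to tunnel the JMX call through the office proxy, which then cannot resolve the internal NameNode hostname ("Tunnel connection failed: 504 Unknown Host"). A quick way to see which proxies requests will pick up from the environment for a given URL is the helper below (a sketch; the hostname is a placeholder):

import requests

# Show the proxy mapping requests derives from http_proxy/https_proxy/no_proxy
# for this URL; an empty dict means the request would go direct.
url = "https://<servername>:50470"
print(requests.utils.get_environ_proxies(url))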

I tried supplying the proxy information in the code, like this:

proxies = {
"http": "<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>",
"https": "<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>",
}

node = os.getenv("namenode").split(",")
print (node)

local_file_path = sys.argv[1]
remote_file_path = sys.argv[2]
read_or_write = sys.argv[3]
print (local_file_path,remote_file_path)

def check_node_status(node):
    for name in node:
        print (name)
        request = requests.get("%s/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"%name,
                               proxies=proxies,
                               verify=False).json()
        status = request["beans"][0]["State"]
        if status =="active":
            nnhost = request["beans"][0]["HostAndPort"]
            splitaddr = nnhost.split(":")
            nnaddress = splitaddr[0]
            print(nnaddress)
            break
    return status,name,nnaddress

### Rest of the code is the same

Now it gives the following error:

$ python3 python_hdfs.py ./1.png /user/testuser/2018-02-07_1.png write
['https://<servername>:50470', 'https:// <servername>:50470']
./1.png /user/testuser/2018-02-07_1.png
https://<servername>:50470
Traceback (most recent call last):
  File "python_hdfs.py", line 73, in <module>
    status, name, nnaddress= check_node_status(node)
  File "python_hdfs.py", line 28, in check_node_status
    verify=False).json()
  File "/usr/lib/python3/dist-packages/requests/api.py", line 67, in get
    return request('get', url, params=params,**kwargs)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 53, in request
    return session.request(method=method, url=url,**kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 468, in request
    resp = self.send(prep,**send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request,**kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 343, in send
    conn = self.get_connection(request.url, proxies)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 254, in get_connection
    proxy_manager = self.proxy_manager_for(proxy)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 160, in proxy_manager_for
  **proxy_kwargs)
  File "/usr/lib/python3/dist-packages/urllib3/poolmanager.py", line 281, in proxy_from_url
    return ProxyManager(proxy_url=url,**kw)
  File "/usr/lib/python3/dist-packages/urllib3/poolmanager.py", line 232, in __init__
    raise ProxySchemeUnknown(proxy.scheme)
requests.packages.urllib3.exceptions.ProxySchemeUnknown: Not supported proxy scheme <proxy_username>
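For what it is worth, the ProxySchemeUnknown error is informative: the proxy URLs above have no scheme prefix, so urllib3 parses <proxy_username> as the scheme. A mapping that requests accepts would look like the sketch below (same placeholders; an http:// proxy URL is normally used for both keys, with HTTPS traffic tunnelled through it via CONNECT):

proxies = {
    # The scheme prefix on the proxy URL itself is required.
    "http": "http://<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>",
    "https": "http://<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>",
}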

So, my question is: do I need to configure a proxy somewhere in Kerberos for this to work? If so, how? I am not very familiar with Kerberos. I run kinit before running the Python code to get into the Kerberos realm, and that works fine and connects to the right HDFS server without any proxy. So I do not understand why this error appears when reading from and writing to the same HDFS server. Any help is appreciated.
I also have the proxy set in /etc/apt/apt.conf like this:

Acquire::http::proxy  "http://<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>/";
Acquire::https::proxy "https://<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>/";

I also tried the following:

$ export http_proxy="http://<user>:<pass>@<proxy>:<port>"
$ export HTTP_PROXY="http://<user>:<pass>@<proxy>:<port>"

$ export https_proxy="http://<user>:<pass>@<proxy>:<port>"
$ export HTTPS_PROXY="http://<user>:<pass>@<proxy>:<port>"

import os

proxy = 'http://<user>:<pass>@<proxy>:<port>'

os.environ['http_proxy'] = proxy 
os.environ['HTTP_PROXY'] = proxy
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy

# rest of the code is same

But the error persists.
UPDATE: I also tried the following.
Someone suggested that the proxy in /etc/apt/apt.conf was set up to reach the web, but that we may not need a proxy to reach HDFS at all. So: comment out the proxy settings in /etc/apt/apt.conf, then run the Python script again. I did that:
$ env | grep proxy
http_proxy=http://hfli:test6969@192.168.44.217:8080
https_proxy=https://hfli:test6969@192.168.44.217:8080
$ unset http_proxy
$ unset https_proxy
$ env | grep proxy
$
Then I ran the Python script again, (i) without proxies defined in the script, and (ii) with the proxies defined in the script. In both cases I got the same original proxy error.
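Since the NameNodes are on the local network, another option would be to keep requests off the proxy entirely without touching the environment, by using a Session with trust_env disabled, roughly like this (a sketch; name is one of the NameNode URLs from above):

import requests

session = requests.Session()
# Ignore http_proxy/https_proxy/no_proxy from the environment entirely
# and connect to the NameNode directly.
session.trust_env = False

request = session.get("%s/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus" % name,
                      verify=False).json()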
I found the following Java program, which reportedly allows a Java program to run against HDFS:
import com.sun.security.auth.callback.TextCallbackHandler;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import javax.security.auth.Subject;
import javax.security.auth.login.LoginContext;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class HDFS_RW_Secure {
    public static void main(String[] args) throws Exception {
        System.setProperty("java.security.auth.login.config", "/tmp/sc3_temp/hadoop_kdc.txt");
        System.setProperty("java.security.krb5.conf", "/tmp/sc3_temp/hadoop_krb.txt");

        Configuration hadoopConf = new Configuration();
        // This example logs in with a password; it can be changed to log in with a keytab.
        LoginContext lc = new LoginContext("JaasSample", new TextCallbackHandler());
        lc.login();
        System.out.println("login");

        Subject subject = lc.getSubject();
        UserGroupInformation.setConfiguration(hadoopConf);
        UserGroupInformation ugi = UserGroupInformation.getUGIFromSubject(subject);
        UserGroupInformation.setLoginUser(ugi);

        Path pt = new Path("hdfs://edhcluster" + args[0]);
        FileSystem fs = FileSystem.get(hadoopConf);

        // write
        FSDataOutputStream fin = fs.create(pt);
        fin.writeUTF("Hello!");
        fin.close();

        // read back
        BufferedReader br = new BufferedReader(new InputStreamReader(fs.open(pt)));
        String line = br.readLine();
        while (line != null) {
            System.out.println(line);
            line = br.readLine();
        }
        fs.close();
        System.out.println("This is the end.");
    }
}
We need to take its jar file, HDFS.jar, and run the following shell script so that the Java program can run against HDFS.

nano run.sh

# contents of the run.sh file:

/tmp/sc3_temp/jre1.8.0_161/bin/java -Djavax.net.ssl.trustStore=/tmp/sc3_temp/cacerts -Djavax.net.ssl.trustStorePassword=changeit -jar /tmp/sc3_temp/HDFS.jar $1

So, I can give a path under /user/testuser as the argument, which lets the Java program run against HDFS:

./run.sh /user/testuser/test2

Its output is as follows:

Debug is  true storeKey false useTicketCache false useKeyTab false doNotPrompt false ticketCache is null isInitiator true KeyTab is null refreshKrb5Config is false principal is null tryFirstPass is false useFirstPass is false storePass is false clearPass is false
Kerberos username [testuser]: testuser
Kerberos password for testuser: 
        [Krb5LoginModule] user entered username: testuser

principal is testuser@KRB.REALM
Commit Succeeded 

login
2018-02-08 14:09:30,020 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Hello!
This is the end.

So, I think this works. But how do I write an equivalent shell script that runs the Python code?
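Presumably such a script only has to recreate the environment that python_hdfs.py expects (plus a valid Kerberos ticket from kinit) before invoking it. A sketch of that idea, written in Python for consistency with the rest of this post (the namenode value, the script path, and the PEM export of the cacerts truststore are all assumptions):

import os
import subprocess
import sys

# Recreate the environment python_hdfs.py expects, then invoke it with
# the same three positional arguments (local path, remote path, read|write).
env = dict(os.environ)
env["namenode"] = "https://<servername>:50470,https://<servername>:50470"
# requests reads trusted CAs from this variable; this assumes the JKS
# truststore used by run.sh has been exported to a PEM bundle.
env["REQUESTS_CA_BUNDLE"] = "/tmp/sc3_temp/cacerts.pem"

subprocess.run(["python3", "/tmp/sc3_temp/python_hdfs.py"] + sys.argv[1:4],
               env=env, check=True)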


7rfyedvj1#

I found the solution. It turned out I was looking in the wrong place: the user account had been set up incorrectly. I tried something simpler, like downloading a web page to the server, and noticed that it was downloading the page but did not have permission to save it. Digging further, I found that when the user account was created, it had not been assigned the proper ownership. Once I assigned the correct owner to the user account, the proxy error disappeared. (Alas, so much time wasted.)
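For anyone debugging the same symptom, a check along these lines would have exposed the misconfigured account (a sketch; the home directory path and expected owner are assumptions):

import os
import pwd

# Report who actually owns the account's home directory; a mismatch here
# was the root cause described above.
home = "/home/testuser"                      # assumed path
st = os.stat(home)
print(pwd.getpwuid(st.st_uid).pw_name)       # expected: testuser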
I have written it up in more detail here.
