How do I access files on a Hadoop filesystem (HDFS) that is on a different server from my local machine?

dba5bblo · published 2021-05-29 · in Hadoop

I have a local machine ( local_user@local_machine ). There is a Hadoop filesystem on another server ( some_user@another_server ). One of the users on the Hadoop server is named target_user . How do I access target_user 's files from local_user@local_machine ? More precisely, suppose a file /user/target_user/test.txt exists in the HDFS on some_user@another_server . What is the correct file path to use when accessing /user/target_user/test.txt from local_user@local_machine ?
I can SSH into the server and view the file with hdfs dfs -cat /user/target_user/test.txt . But I cannot access the file from my local machine using a Python script I wrote for reading from and writing to HDFS (it takes three arguments: the local file path, the remote file path, and read/write), most likely because I am not giving it the correct path.
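(For context: an absolute HDFS path such as /user/target_user/test.txt is resolved by the cluster's NameNode, not by an SSH account on the server. A fully qualified HDFS URI combines a NameNode host and RPC port with that path; the host and port below are assumptions, a minimal sketch — the real value lives in fs.defaultFS in core-site.xml.)

```python
# Sketch: composing a fully qualified HDFS URI. The NameNode host and RPC
# port are assumptions (8020 is a common default; check fs.defaultFS in
# core-site.xml for the real value).
namenode_host = "another_server"
namenode_port = 8020
hdfs_path = "/user/target_user/test.txt"

full_uri = f"hdfs://{namenode_host}:{namenode_port}{hdfs_path}"
# e.g. hdfs://another_server:8020/user/target_user/test.txt
```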
I have tried the following, but none of it works:

$ # local_user@local_machine
$ python3 rw_hdfs.py ./to_local_test.txt /user/target_user/test.txt read
$ python3 rw_hdfs.py ./to_local_test.txt some_user@another_server/user/target_user/test.txt read

Both give exactly the same error:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 377, in _make_request
    httplib_response = conn.getresponse(buffering=True)
TypeError: getresponse() got an unexpected keyword argument 'buffering'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 560, in urlopen
    body=body, headers=headers)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 379, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/lib/python3.5/http/client.py", line 1197, in getresponse
    response.begin()
  File "/usr/lib/python3.5/http/client.py", line 297, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python3.5/http/client.py", line 279, in _read_status
    raise BadStatusLine(line)
http.client.BadStatusLine: 

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 376, in send
    timeout=timeout
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 610, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 247, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/lib/python3/dist-packages/six.py", line 685, in reraise
    raise value.with_traceback(tb)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 560, in urlopen
    body=body, headers=headers)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 379, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/lib/python3.5/http/client.py", line 1197, in getresponse
    response.begin()
  File "/usr/lib/python3.5/http/client.py", line 297, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python3.5/http/client.py", line 279, in _read_status
    raise BadStatusLine(line)
requests.packages.urllib3.exceptions.ProtocolError: ('Connection aborted.', BadStatusLine('\x15\x03\x03\x00\x02\x02\n',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "python_hdfs.py", line 63, in <module>
    status, name, nnaddress= check_node_status(node)
  File "python_hdfs.py", line 18, in check_node_status
    request = requests.get("%s/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"%name,verify=False).json()
  File "/usr/lib/python3/dist-packages/requests/api.py", line 67, in get
    return request('get', url, params=params,**kwargs)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 53, in request
    return session.request(method=method, url=url,**kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 468, in request
    resp = self.send(prep,**send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request,**kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 426, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', BadStatusLine('\x15\x03\x03\x00\x02\x02\n',))
m528fe3b1#

More precisely, suppose a file /user/target_user/test.txt exists in the HDFS on some_user@another_server
First, HDFS is not a single directory on a single machine, so it makes no sense to try to access it that way.
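To make that concrete: a client addresses the NameNode service over the network, not a Unix account on one machine. With WebHDFS, for instance, the HDFS path is embedded in an HTTP URL; a minimal sketch, where the NameNode host and port are assumptions (50070 is the common Hadoop 2.x NameNode HTTP port):

```python
# Build the WebHDFS URL for the OPEN (read) operation on a file.
# "namenode.example.com:50070" is a placeholder, not a value from the question.
def webhdfs_open_url(namenode, path, user):
    """Return the WebHDFS URL that reads `path` as HDFS user `user`."""
    return f"http://{namenode}/webhdfs/v1{path}?op=OPEN&user.name={user}"

url = webhdfs_open_url("namenode.example.com:50070",
                       "/user/target_user/test.txt", "target_user")
# The actual read would then be, for example:
#   import requests
#   resp = requests.get(url, allow_redirects=True)  # follows the DataNode redirect
```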
Second, whatever Python library you are using, it is trying to communicate over WebHDFS, and you must explicitly enable WebHDFS for the cluster.
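WebHDFS is toggled in hdfs-site.xml; a sketch of the relevant property (it defaults to true in recent Hadoop releases, but it is worth confirming on your cluster):

```xml
<!-- hdfs-site.xml on the NameNode and DataNodes -->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
```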
https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/webhdfs.html
Additionally, the BadStatusLine error may indicate that you are dealing with a Kerberized, secure cluster, so you may need a different way to read the file, for example PySpark or the Ibis project.
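One further observation, as a hedged aside: the bytes in that BadStatusLine ( \x15\x03\x03... ) parse as a TLS record header (content type 21 = alert, version bytes 3,3 = TLS 1.2), which would mean the server answered with TLS while the client spoke plain HTTP, i.e. the endpoint may be HTTPS-only. A small check:

```python
# Interpret the bytes from the BadStatusLine as a TLS record.
raw = b"\x15\x03\x03\x00\x02\x02\n"   # exactly the bytes in the traceback

content_type = raw[0]                      # 21 -> TLS "alert" record
version = (raw[1], raw[2])                 # (3, 3) -> TLS 1.2
length = int.from_bytes(raw[3:5], "big")   # 2-byte alert body follows
level, description = raw[5], raw[6]        # 2 -> fatal, 10 -> unexpected_message
```

If that reading is right, pointing the script at the https:// URL of the NameNode (or the correct HTTPS port) would be the first thing to try.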
