Unable to access S3 bucket with Hadoop

pgky5nke asked on 2021-05-29 in Hadoop

I'm trying to access my S3 bucket with Hadoop (2.7.3) and I'm getting the following:
ubuntu@aws:~/prototype/hadoop$ bin/hadoop fs -ls s3://[bucket]/
17/03/24 15:33:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-ls: Fatal internal error
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: 1fa2318a386330c0, AWS Error Code: null, AWS Error Message: Bad Request, S3 Extended Request ID: 1s7eq6s9yxub9bpwyhp73cljvd619lz2ooje8vklmaa9jrkxpbvt7cg6nh0zeulugrzybipbgrq=
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
    at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
    at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
    at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
    at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
    at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
ubuntu@aws:~/prototype/hadoop$
core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>s3://[ Bucket ]</value>
    </property>

    <property>
            <name>fs.s3a.endpoint</name>
            <value>s3.eu-central-1.amazonaws.com</value>
    </property>

    <property>
        <name>fs.s3a.access.key</name>
        <value>[ Access Key Id ]</value>
    </property>

    <property>
        <name>fs.s3a.secret.key</name>
        <value>[ Secret Access Key ]</value>
    </property>

    <property>
        <name>fs.s3.awsAccessKeyId</name>
        <value>[ Access Key Id ]</value>
    </property>

    <property>
        <name>fs.s3.awsSecretAccessKey</name>
        <value>[ Secret Access Key ]</value>
    </property>

    <property>
        <name>fs.s3n.awsAccessKeyId</name>
        <value>[ Access Key Id ]</value>
    </property>

    <property>
        <name>fs.s3n.awsSecretAccessKey</name>
        <value>[ Secret Access Key ]</value>
    </property>

    <property>
        <name>fs.s3.impl</name>
        <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
    </property>

    <!-- Comma separated list of local directories used to buffer
         large results prior to transmitting them to S3. -->
    <property>
        <name>fs.s3.buffer.dir</name>
        <value>/tmp</value>
    </property>
</configuration>
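
(Aside: because fs.s3.impl maps the s3:// scheme to S3AFileSystem, the connector only reads the fs.s3a.* keys; the fs.s3.* and fs.s3n.* credential entries above are ignored by it. A minimal equivalent configuration, sketched with the same placeholder values, would be:)

<configuration>
    <!-- Sketch: S3A-only equivalent of the configuration above;
         bracketed values are placeholders -->
    <property>
        <name>fs.defaultFS</name>
        <value>s3a://[ Bucket ]</value>
    </property>

    <property>
        <name>fs.s3a.endpoint</name>
        <value>s3.eu-central-1.amazonaws.com</value>
    </property>

    <property>
        <name>fs.s3a.access.key</name>
        <value>[ Access Key Id ]</value>
    </property>

    <property>
        <name>fs.s3a.secret.key</name>
        <value>[ Secret Access Key ]</value>
    </property>
</configuration>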

Does anyone know what the problem is?
Edit: The bucket and the VM accessing it are in Frankfurt. It seemed related to https://docs.hortonworks.com/hdpdocuments/hdcloudaws/hdcloudaws-1.8.0/bk_hdcloud-aws/content/s3-trouble/index.html, but even after adding the endpoint it still doesn't work.

jk9hmnmh


Sounds like a V4 authentication problem, which the fs.s3a.endpoint property should have sorted out.
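
A quick way to rule out the configuration file simply not being read is to pass the endpoint on the command line; FsShell runs through ToolRunner (visible in the stack trace above), so the generic -D options are honored. A sketch, with [bucket] as a placeholder:

bin/hadoop fs -D fs.s3a.endpoint=s3.eu-central-1.amazonaws.com -ls s3a://[bucket]/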
Clock problems can also cause trouble. Check Joda Time, and make sure all your machines have caught up with this weekend's clock change.
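
A rough skew check is to compare the machine's UTC clock against the Date header the S3 endpoint itself returns; a difference of more than a few minutes can make request signing fail. A sketch, assuming curl is installed:

# local clock, in UTC
date -u
# Date header as reported by the S3 endpoint
curl -sI https://s3.eu-central-1.amazonaws.com | grep -i '^date'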
Try grabbing Hadoop 2.8.0 RC3 and see if the problem has gone away. If it's still there, that's the version to ask for help with on the Apache lists.
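
A sketch of trying the same listing against that release (the download URL and paths are illustrative; the 2.8.0 final is on the Apache archive, and the hadoop-aws plus AWS SDK jars ship under share/hadoop/tools/lib, so they need to be put on the classpath):

wget https://archive.apache.org/dist/hadoop/common/hadoop-2.8.0/hadoop-2.8.0.tar.gz
tar xzf hadoop-2.8.0.tar.gz && cd hadoop-2.8.0
# hadoop-aws and the aws-java-sdk jars are not on the shell classpath by default
export HADOOP_CLASSPATH="$PWD/share/hadoop/tools/lib/*"
bin/hadoop fs -D fs.s3a.endpoint=s3.eu-central-1.amazonaws.com -ls s3a://[bucket]/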
