Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden) (Hadoop + S3)

af7jpaap · posted 2021-06-01 in Hadoop

I am trying to access S3 files through the Hadoop shell, and I get the error below when I run the command. My setup: a single-node Hadoop install (hadoop-2.6.1), with the hadoop-aws JAR and the aws-java-sdk JAR added to the classpath.
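A minimal sketch of one way the two JARs can end up on the Hadoop classpath (the paths and version numbers here are illustrative assumptions, not taken from the question, and need to match the actual install):

# illustrative paths; Hadoop 2.6.x ships these under share/hadoop/tools/lib
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HADOOP_HOME/share/hadoop/tools/lib/hadoop-aws-2.6.1.jar:$HADOOP_HOME/share/hadoop/tools/lib/aws-java-sdk-1.7.4.jar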
The command I ran:

hdfs dfs -ls s3a://s3-us-west-2.amazonaws.com/azpoc1/

The error:

ubuntu@ip-172-31-2-211:~/hadoop-2.6.1$ hdfs dfs -ls s3a://s3-us-west-2.amazonaws.com/azpoc1/
-ls: Fatal internal error
com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: FC80B14D00C2FBE0; S3 Extended Request ID: TAHwxzqjMF8CD3bTnyaRGwpAgQnu0DsUFWL/E1llrXDfS+CqEMq6K735Koh7QkpSwEe8jzIOIX0=), S3 Extended Request ID: TAHwxzqjMF8CD3bTnyaRGwpAgQnu0DsUFWL/E1llrXDfS+CqEMq6K735Koh7QkpSwEe8jzIOIX0=
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1632)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4365)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4312)
        at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1270)
        at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1245)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:688)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:71)
        at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
        at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
        at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1625)
        at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
        at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:224)
        at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:207)
        at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)

My core-site.xml file:

<configuration>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:50000</value>
  </property>

  <property>
    <name>fs.s3a.access.key</name>
    <value>*****</value>
  </property>

  <property>
    <name>fs.s3a.secret.key</name>
    <value>*****</value>
  </property>

  <property>
    <name>fs.s3a.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  </property>

</configuration>

bsxbgnwa #1

There is an entire troubleshooting document to work through: start there.
I also provide a diagnostics module which tries to debug connection problems without printing any secrets: storediag. Grab the latest release and see what it says.
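A sketch of how storediag is typically invoked, assuming the cloudstore JAR has been downloaded (the JAR filename/version and the target URI below are illustrative):

hadoop jar cloudstore-1.0.jar storediag s3a://azpoc1/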


yc0p9oo0 #2

First of all: don't post your access key and secret key. That is a major security risk.
What permissions are associated with your IAM user? My guess is that it doesn't have the appropriate permissions on the bucket. I would temporarily give it overly broad permissions (e.g. s3:*) and see if that works. If it does, it's a permissions problem.
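A minimal sketch of such a temporary, deliberately over-broad IAM policy (the bucket name azpoc1 is assumed from the URI in the question; IAM policies do not allow comments, so all caveats live here):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::azpoc1",
        "arn:aws:s3:::azpoc1/*"
      ]
    }
  ]
}

Once listing works with this attached, scope it back down to the specific s3:Get*/s3:List* actions the workload actually needs.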
