AmazonS3Exception: 400 Bad Request when trying to use S3AFileSystem

wnavrhmk · posted 2021-05-29 in Hadoop

I have a piece of Java code that tries to initialize a remote file system on S3 from a Configuration (this previously ran against HDFS, and I am trying to move it to S3 without modifying the code too much).
This is the configuration:

fs.s3a.aws.credentials.provider=com.amazonaws.auth.DefaultAWSCredentialsProviderChain
fs.defaultFS=s3a://mybucket.devrun.algo-resources/

Then, in my setup, I call:

hdfsFileSystem = FileSystem.get(conf);

This results in the following exception:

org.apache.hadoop.fs.s3a.AWSS3IOException: doesBucketExist on mybucket.devrun.algo-resources: com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 1E91F85FA3751C44), S3 Extended Request ID: 5KDgH7lsaIX7l5DQcdBdUjeg/qxYgOEU4WJBOL0p090kqNNlYOAie31zuYUQw+R3LN4CvavdVJk=: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 1E91F85FA3751C44)
    at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:178)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:282)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:236)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2811)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2848)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2830)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:181)
    at pipeline.HdfsSyncUtils.<init>(HdfsSyncUtils.java:32)
    at pipeline.QueriesForArticles$QFAMapper.setup(QueriesForArticles.java:158)
    at AWSPipeline.C2SIndexSearchingAlgo.<init>(C2SIndexSearchingAlgo.java:41)
    at AWSPipeline.ABTestMainRunner.main(ABTestMainRunner.java:27)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 1E91F85FA3751C44), S3 Extended Request ID: 5KDgH7lsaIX7l5DQcdBdUjeg/qxYgOEU4WJBOL0p090kqNNlYOAie31zuYUQw+R3LN4CvavdVJk=
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
    at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1107)
    at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1070)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:276)
    ... 11 more
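
For reference, here is a minimal, self-contained sketch of the initialization path that hits this exception, assuming the two properties above are the only non-default settings and that hadoop-aws and the AWS SDK are on the classpath (the class name is just for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class S3AInitSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.s3a.aws.credentials.provider",
                 "com.amazonaws.auth.DefaultAWSCredentialsProviderChain");
        conf.set("fs.defaultFS", "s3a://mybucket.devrun.algo-resources/");

        // S3AFileSystem.initialize() calls verifyBucketExists(), which is where
        // the 400 Bad Request in the stack trace above is raised.
        FileSystem hdfsFileSystem = FileSystem.get(conf);
        System.out.println("Initialized: " + hdfsFileSystem.getUri());
    }
}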

I am using hadoop-aws.jar version 2.8.0, and I used awsfed to create the credentials file under ~\.aws, so I assume my credentials are correct.
Does anyone know what this error means? There is no detailed error message...
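
To rule out the credentials file itself, a small check like this should show whether the same provider chain S3A is configured with can resolve credentials at all (it uses the AWS SDK v1 classes that ship alongside hadoop-aws 2.8.0; the class name is just for illustration):

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;

public class CredentialsCheck {
    public static void main(String[] args) {
        // Throws if no credentials can be found in the environment, system
        // properties, ~/.aws/credentials, or the instance profile.
        AWSCredentials creds = new DefaultAWSCredentialsProviderChain().getCredentials();
        System.out.println("Resolved access key id: " + creds.getAWSAccessKeyId());
    }
}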
EDIT:
For anyone interested, I solved it: based on this answer, I concluded that the problem is region-related. My bucket was in us-east-2. I tried creating a bucket in another region, and it worked!
This is probably related to what the documentation says: S3 in us-east-2 only supports Signature Version 4 requests, while my code (hadoop-aws.jar 2.8.0) is presumably signing with an older version.
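
If the bucket has to stay in us-east-2, a workaround that is commonly suggested (I have not verified it against hadoop-aws 2.8.0) is to point S3A at the region-specific endpoint and force the AWS SDK to sign requests with Version 4; the class name is again just for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class S3ASigV4Sketch {
    public static void main(String[] args) throws Exception {
        // Tell the AWS SDK (v1) to use Signature Version 4 for S3 requests.
        System.setProperty("com.amazonaws.services.s3.enableV4", "true");

        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "s3a://mybucket.devrun.algo-resources/");
        conf.set("fs.s3a.aws.credentials.provider",
                 "com.amazonaws.auth.DefaultAWSCredentialsProviderChain");
        // Region-specific endpoint; us-east-2 only accepts SigV4-signed requests.
        conf.set("fs.s3a.endpoint", "s3.us-east-2.amazonaws.com");

        FileSystem fs = FileSystem.get(conf);
        System.out.println("Connected to " + fs.getUri());
    }
}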
