PySpark: read all JSON files from a subdirectory of an S3 bucket

cetgtptt · asked 2021-05-27 · Hadoop

I am trying to read all the JSON files from a directory named world inside an S3 bucket named hello. When I list all the objects under that directory with boto3, I can see several part files (probably created by a Spark job), as shown below.

world/
world/_SUCCESS
world/part-r-00000-....json
world/part-r-00001-....json
world/part-r-00002-....json
world/part-r-00003-....json
world/part-r-00004-....json
world/part-r-00005-....json
world/part-r-00006-....json
world/part-r-00007-....json
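(For reference, a listing like the above can be produced with a boto3 snippet along these lines; this is a minimal sketch assuming credentials from the default boto3 chain, not the exact script I used.)

import boto3

# List everything under the world/ prefix of the hello bucket.
s3 = boto3.client("s3")
response = s3.list_objects_v2(Bucket="hello", Prefix="world/")
for obj in response.get("Contents", []):
    print(obj["Key"])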

I wrote the following code to read all of these files.

from pyspark import SparkConf
from pyspark.sql import SparkSession

# spark_config and app_name are defined elsewhere in my application
spark_session = (
    SparkSession.builder
    .config(conf=SparkConf().setAll(spark_config).setAppName(app_name))
    .getOrCreate()
)

hadoop_conf = spark_session._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3a.server-side-encryption-algorithm", "AES256")
hadoop_conf.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider")
hadoop_conf.set("fs.s3a.access.key", "my-aws-access-key")
hadoop_conf.set("fs.s3a.secret.key", "my-aws-secret-key")
hadoop_conf.set("com.amazonaws.services.s3a.enableV4", "true")

df = spark_session.read.json("s3a://hello/world/")

But I get the following error:

py4j.protocol.Py4JJavaError: An error occurred while calling o98.json.
: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: , AWS Error Code: null, AWS Error Message: Forbidden, S3 Extended Request ID: 
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
    at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:976)
    at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:956)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:892)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:557)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:545)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    at scala.collection.immutable.List.flatMap(List.scala:355)
    at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:545)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:359)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
    at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:392)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.base/java.lang.Thread.run(Thread.java:834)

I have also tried "s3a://hello/world/*" and "s3a://hello/world/*.json", but I still get the same error.
FYI, I am using the following tool versions:

pyspark 2.4.5
com.amazonaws:aws-java-sdk:1.7.4
org.apache.hadoop:hadoop-aws:2.7.1
org.apache.hadoop:hadoop-common:2.7.1

Can someone help me figure out what is wrong?

pgvzfuti #1

It looks like the credentials you are using to access the bucket/folder do not have the required permissions.
Please check the following:
The credentials or role specified in your application code (a quick identity check with STS is sketched after this list)
The policy attached to the Amazon Elastic Compute Cloud (Amazon EC2) instance profile role
The Amazon S3 VPC endpoint policy
The Amazon S3 source and destination bucket policies
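One way to confirm which IAM identity your access/secret key pair actually resolves to is an STS call made with the same credentials. This is a minimal boto3 sketch (the placeholder key values mirror the ones in the question), run from anywhere the keys are available:

import boto3

# Check which IAM identity the access/secret key pair resolves to.
# Use the same values passed to fs.s3a.access.key / fs.s3a.secret.key.
sts = boto3.client(
    "sts",
    aws_access_key_id="my-aws-access-key",
    aws_secret_access_key="my-aws-secret-key",
)
print(sts.get_caller_identity()["Arn"])

The ARN printed here is the principal that your IAM policies and the bucket policy need to allow (for example s3:ListBucket on the bucket and s3:GetObject on the objects under world/).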
Another quick check you can run from the cluster's master node:

aws s3 ls s3://hello/world/

If that throws an error, work through the access-control troubleshooting guide at https://aws.amazon.com/premiumsupport/knowledge-center/emr-s3-403-access-denied/
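Since the stack trace fails inside getObjectMetadata (an S3 HEAD request), you can also reproduce that kind of request directly with boto3 to confirm the 403 is a pure permissions issue and not something Spark-specific. A minimal sketch, using the world/_SUCCESS key from the listing in the question and the same placeholder keys:

import boto3

# Issue a HEAD request against one of the objects with the same credentials
# the Spark job uses; a 403 here confirms the permissions problem is
# independent of the Spark/Hadoop configuration.
s3 = boto3.client(
    "s3",
    aws_access_key_id="my-aws-access-key",
    aws_secret_access_key="my-aws-secret-key",
)
print(s3.head_object(Bucket="hello", Key="world/_SUCCESS"))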
