Apache Spark + Parquet not respecting configuration to use "partitioned" staging S3A committer

7rtdyuoh  posted 2021-05-31  in Hadoop

I am using Apache Spark (3.0) to write partitioned data (Parquet files) from my local machine to AWS S3, without Hadoop installed on the machine. When I have many files to write across about 50 partitions (partitionBy = date), I get a FileNotFoundException while writing to S3.
Then I came across the new S3A committers, so I tried to configure the "partitioned" staging committer. But I can still see that Spark uses ParquetOutputCommitter instead of PartitionedStagingCommitter when the file format is "parquet", and I still get the FileNotFoundException when I have a lot of data to write.
My configuration:

sparkSession.conf().set("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", 2);
        sparkSession.conf().set("spark.hadoop.fs.s3a.committer.name", "partitioned");
        sparkSession.conf().set("spark.hadoop.fs.s3a.committer.magic.enabled ", false);
        sparkSession.conf().set("spark.hadoop.fs.s3a.committer.staging.conflict-mode", "append");
        sparkSession.conf().set("spark.hadoop.fs.s3a.committer.staging.unique-filenames", true);
        sparkSession.conf().set("spark.hadoop.fs.s3a.committer.staging.abort.pending.uploads", true);
        sparkSession.conf().set("spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a", "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory");
        sparkSession.conf().set("spark.sql.sources.commitProtocolClass", "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol");
        sparkSession.conf().set("spark.sql.parquet.output.committer.class", "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter");
        sparkSession.conf().set("spark.hadoop.fs.s3a.committer.staging.tmp.path", "tmp/staging");

What am I doing wrong here? Can anyone help?
Note: I have created a JIRA in Spark, but there has been no help so far: SPARK-31072

I tried @Rajadayalan's answer below, but it still uses FileOutputCommitter. I also tried downgrading the Spark version to 2.4.5, without any luck:

20/04/06 12:44:52 INFO ParquetFileFormat: Using user defined output committer for Parquet: org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
20/04/06 12:44:52 WARN AbstractS3ACommitterFactory: Using standard FileOutputCommitter to commit work. This is slow and potentially unsafe.
20/04/06 12:44:52 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
20/04/06 12:44:52 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20/04/06 12:44:52 INFO AbstractS3ACommitterFactory: Using Commmitter FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_20200406124452_0000}; taskId=attempt_20200406124452_0000_m_000000_0, status=''}; org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter@61deb03f}; outputPath=s3a://******/observation, workPath=s3a://******/observation/_temporary/0/_temporary/attempt_20200406124452_0000_m_000000_0, algorithmVersion=2, skipCleanup=false, ignoreCleanupFailures=false} for s3a://********/observation
20/04/06 12:44:53 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
20/04/06 12:44:54 INFO CodeGenerator: Code generated in 81.077046 ms
20/04/06 12:44:54 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
20/04/06 12:44:54 INFO CodeGenerator: Code generated in 31.993775 ms
20/04/06 12:44:54 INFO CodeGenerator: Code generated in 9.967359 ms
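For what it's worth, whether the settings actually reached Hadoop can be checked by reading them back from the SparkContext's Hadoop configuration; a key set only through sparkSession.conf().set() after startup comes back null here. A small diagnostic sketch (assumes the sparkSession from the question):

import org.apache.hadoop.conf.Configuration;
import org.apache.spark.sql.SparkSession;

final class CommitterCheck {
    // Prints the committer-related keys exactly as the S3A committer factory
    // will see them; null means the setting never left the Spark session conf.
    static void printCommitterConf(SparkSession sparkSession) {
        Configuration hadoopConf = sparkSession.sparkContext().hadoopConfiguration();
        for (String key : new String[] {
                "fs.s3a.committer.name",
                "fs.s3a.committer.staging.conflict-mode",
                "mapreduce.outputcommitter.factory.scheme.s3a"}) {
            System.out.println(key + " = " + hadoopConf.get(key));
        }
    }
}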

Note: I don't have Spark installed on my local machine, so I added spark-hadoop-cloud_2.11 as a compile-time dependency. My build.gradle looks like this:

    compile group: 'org.apache.spark', name: 'spark-hadoop-cloud_2.11', version: '2.4.2.3.1.3.0-79'
    compile group: 'org.apache.spark', name: 'spark-sql_2.11', version: '2.4.5'
    // https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-databind
    compile group: 'com.fasterxml.jackson.core', name: 'jackson-databind', version: '2.10.0'
    // https://mvnrepository.com/artifact/org.apache.parquet/parquet-column
    compile group: 'org.apache.parquet', name: 'parquet-column', version: '1.10.1'
    // https://mvnrepository.com/artifact/org.apache.parquet/parquet-hadoop
    compile group: 'org.apache.parquet', name: 'parquet-hadoop', version: '1.10.1'
    compile group: 'org.apache.parquet', name: 'parquet-avro', version: '1.10.1'
    // https://mvnrepository.com/artifact/org.apache.spark/spark-sketch
    compile group: 'org.apache.spark', name: 'spark-sketch_2.11', version: '2.4.5'
    // https://mvnrepository.com/artifact/org.apache.spark/spark-core
    compile group: 'org.apache.spark', name: 'spark-core_2.11', version: '2.4.5'
    // https://mvnrepository.com/artifact/org.apache.spark/spark-catalyst
    compile group: 'org.apache.spark', name: 'spark-catalyst_2.11', version: '2.4.5'
    // https://mvnrepository.com/artifact/org.apache.spark/spark-tags
    compile group: 'org.apache.spark', name: 'spark-tags_2.11', version: '2.4.5'
    compile group: 'org.apache.spark', name: 'spark-avro_2.11', version: '2.4.5'
    // https://mvnrepository.com/artifact/org.apache.spark/spark-hive
    compile group: 'org.apache.spark', name: 'spark-hive_2.11', version: '2.4.5'
    // https://mvnrepository.com/artifact/org.apache.xbean/xbean-asm6-shaded
    compile group: 'org.apache.xbean', name: 'xbean-asm7-shaded', version: '4.15'
    compile group: 'org.apache.hadoop', name: 'hadoop-common', version: '3.2.1'
//    compile group: 'org.apache.hadoop', name: 'hadoop-s3guard', version: '3.2.1'
    compile group: 'org.apache.hadoop', name: 'hadoop-aws', version: '3.2.1'
    compile group: 'org.apache.hadoop', name: 'hadoop-client', version: '3.2.1'
    compile group: 'com.amazonaws', name: 'aws-java-sdk-bundle', version: '1.11.271'
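A quick sanity check of this dependency setup, as a sketch: the cloud committer can only bind if the relevant classes from spark-hadoop-cloud and hadoop-aws are actually on the runtime classpath, which can be verified by resolving them by name:

public class ClasspathCheck {
    public static void main(String[] args) throws ClassNotFoundException {
        // From spark-hadoop-cloud:
        Class.forName("org.apache.spark.internal.io.cloud.PathOutputCommitProtocol");
        Class.forName("org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter");
        // From hadoop-aws:
        Class.forName("org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory");
        System.out.println("All committer classes resolved on the classpath.");
    }
}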

ao218c7q 1#

I got this working with a small change from @Rajadayalan's suggestion. In addition to the sparkSession.conf().set() calls from the initial question, I added the option() parameters on the DataFrame while writing the Parquet files:

df.distinct()
               .withColumn("date", date_format(col(EFFECTIVE_PERIOD_START), "yyyy-MM-dd"))
               .repartition(col("date"))
               .write()
               .format(fileFormat)
               .partitionBy("date")
               .mode(SaveMode.Append)
               .option("fs.s3a.committer.name", "partitioned")
               .option("fs.s3a.committer.staging.conflict-mode", "append")
               .option("spark.sql.sources.commitProtocolClass", "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
               .option("spark.sql.parquet.output.committer.class", "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
               .option("compression", compressionCodecName.name().toLowerCase())
               .save(DOWNLOADS_NON_COMPACT_PATH);

This made the difference: the log output below shows that PartitionedStagingCommitter is used.
I can also see that the _SUCCESS file in S3 is a JSON file rather than an empty touch file (_SUCCESS).
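A plausible explanation for why the option() calls made the difference: DataFrameWriter options are copied, un-prefixed, into the Hadoop configuration used for that particular write, so fs.s3a.committer.name reaches the committer factory even when the spark.hadoop.-prefixed session setting did not. The JSON _SUCCESS marker is a useful confirmation signal, since the S3A committers write a JSON summary there while the classic FileOutputCommitter leaves a zero-byte file. A hedged sketch for inspecting it (bucket and prefix are placeholders; S3 credentials are assumed to come from the usual provider chain):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SuccessMarkerCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical output location; substitute your own bucket/prefix.
        Path success = new Path("s3a://my-bucket/observation/_SUCCESS");
        FileSystem fs = success.getFileSystem(new Configuration());
        long len = fs.getFileStatus(success).getLen();
        System.out.println("_SUCCESS length: " + len
                + (len > 0 ? " (JSON summary: an S3A committer ran)"
                           : " (empty marker: classic FileOutputCommitter)"));
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(success), StandardCharsets.UTF_8))) {
            reader.lines().limit(10).forEach(System.out::println);
        }
    }
}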

20/04/06 14:27:26 INFO ParquetFileFormat: Using user defined output committer for Parquet: org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
20/04/06 14:27:26 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
20/04/06 14:27:26 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20/04/06 14:27:26 INFO AbstractS3ACommitterFactory: Using committer partitioned to output data to s3a://************/observation
20/04/06 14:27:26 INFO AbstractS3ACommitterFactory: Using Commmitter PartitionedStagingCommitter{StagingCommitter{AbstractS3ACommitter{role=Task committer attempt_20200406142726_0000_m_000000_0, name=partitioned, outputPath=s3a://*********/observation, workPath=file:/tmp/hadoop-**********/s3a/local-1586197641397/_temporary/0/_temporary/attempt_20200406142726_0000_m_000000_0}, conflictResolution=APPEND, wrappedCommitter=FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_20200406142726_0000}; taskId=attempt_20200406142726_0000_m_000000_0, status=''}; org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter@4494e88a}; outputPath=file:/Users/**********/Downloads/SparkParquetSample/tmp/staging/**********/local-1586197641397/staging-uploads, workPath=null, algorithmVersion=1, skipCleanup=false, ignoreCleanupFailures=false}}} for s3a://parquet-uuid-test/device-metric-observation6
20/04/06 14:27:27 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
20/04/06 14:27:27 INFO CodeGenerator: Code generated in 52.744811 ms
20/04/06 14:27:27 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
20/04/06 14:27:27 INFO CodeGenerator: Code generated in 48.78277 ms

cclgggtu 2#

I had the same problem. The solution from "How to get local Spark on AWS to write to S3" loads the PartitionedStagingCommitter. You also have to download the spark-hadoop-cloud jar mentioned in that solution.
I also use Spark 3.0, and this version of the jar worked fine: https://repo.hortonworks.com/content/repositories/releases/org/apache/spark/spark-hadoop-cloud_2.11/2.4.2.3.1.3.0-79/
The settings in my spark-defaults.conf:

spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version  2
spark.hadoop.fs.s3a.committer.name                            partitioned
spark.hadoop.fs.s3a.committer.magic.enabled                   false
spark.hadoop.fs.s3a.committer.staging.conflict-mode           append
spark.hadoop.fs.s3a.committer.staging.unique-filenames        true
spark.hadoop.fs.s3a.committer.staging.abort.pending.uploads   true
spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a     org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
spark.sql.sources.commitProtocolClass                         org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
spark.sql.parquet.output.committer.class                      org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
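Because these entries sit in spark-defaults.conf, they are present when the SparkConf is built, so the spark.hadoop. prefix is stripped into the Hadoop configuration as intended. As a sketch, an application can also fail fast if the committer did not take effect ("file" is the S3A default committer name):

import org.apache.spark.sql.SparkSession;

public class AssertPartitionedCommitter {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("assert-committer")
                .getOrCreate();
        // spark-defaults.conf entries prefixed with "spark.hadoop." land,
        // prefix-stripped, in the SparkContext Hadoop configuration.
        String name = spark.sparkContext().hadoopConfiguration()
                .get("fs.s3a.committer.name", "file");
        if (!"partitioned".equals(name)) {
            throw new IllegalStateException(
                    "Expected the partitioned committer, got: " + name);
        }
        spark.stop();
    }
}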
