UnsatisfiedLinkError in Apache Spark when writing Parquet to AWS S3 with the staging S3A committer

hgqdbh6s · asked 2021-05-27 · Spark

I am trying to write Parquet data to an AWS S3 directory with Apache Spark. I work on my local machine with Windows 10 without Spark and Hadoop installed, adding them as sbt dependencies instead (Hadoop 3.2.1, Spark 2.4.5). My build.sbt is below:

scalaVersion := "2.11.11"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql" % "2.4.5",
  "org.apache.spark" %% "spark-hadoop-cloud" % "2.3.2.3.1.0.6-1",

  "org.apache.hadoop" % "hadoop-client" % "3.2.1",
  "org.apache.hadoop" % "hadoop-common" % "3.2.1",
  "org.apache.hadoop" % "hadoop-aws" % "3.2.1",

  "com.amazonaws" % "aws-java-sdk-bundle" % "1.11.704"
)

dependencyOverrides ++= Seq(
  "com.fasterxml.jackson.core" % "jackson-core" % "2.11.0",
  "com.fasterxml.jackson.core" % "jackson-databind" % "2.11.0",
  "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.11.0"
)

resolvers ++= Seq(
  "apache" at "https://repo.maven.apache.org/maven2",
  "hortonworks" at "https://repo.hortonworks.com/content/repositories/releases/",
)

I use the S3A staging directory committer as described in the Hadoop and Cloudera documentation. I am also aware of these two questions on StackOverflow and used them to get the configuration right:
Apache Spark + Parquet not respecting configuration to use "partitioned" staging S3A committer
How to get local Spark on AWS to write to S3
I have added all the required configuration (to my understanding), including the last two, which are Parquet-specific:

val spark = SparkSession.builder()
      .appName("test-run-s3a-commiters")
      .master("local[*]")

      .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
      .config("spark.hadoop.fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com")
      .config("spark.hadoop.fs.s3a.aws.credentials.provider", "com.amazonaws.auth.profile.ProfileCredentialsProvider")
      .config("spark.hadoop.fs.s3a.connection.maximum", "100")

      .config("spark.hadoop.fs.s3a.committer.name", "directory")
      .config("spark.hadoop.fs.s3a.committer.magic.enabled", "false")
      .config("spark.hadoop.fs.s3a.committer.staging.conflict-mode", "append")
      .config("spark.hadoop.fs.s3a.committer.staging.unique-filenames", "true")
      .config("spark.hadoop.fs.s3a.committer.staging.abort.pending.uploads", "true")
      .config("spark.hadoop.fs.s3a.buffer.dir", "tmp/")
      .config("spark.hadoop.fs.s3a.committer.staging.tmp.path", "hdfs_tmp/")
      .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
      .config("spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a", "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory")

      .config("spark.sql.sources.commitProtocolClass", "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
      .config("spark.sql.parquet.output.committer.class", "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
      .getOrCreate()

spark.sparkContext.setLogLevel("info")

From the logs I can see that the StagingCommitter is actually applied (I can also see intermediate data under the specified path on my local file system, and no temporary directories in S3 during execution, as there would be with the default FileOutputCommitter).
Then I run simple code to write test data to an S3 bucket:

import spark.implicits._

val sourceDF = spark
  .range(0, 10000)
  .map(id => {
    Thread.sleep(10)
    id
  })

sourceDF
  .write
  .format("parquet")
  .save("s3a://my/test/bucket/")

(Here I use Thread.sleep to simulate some processing and leave myself a little time to check the intermediate contents of the local temp directory and the S3 bucket.)
However, I get a java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat during the commit task attempt. Below is a piece of the log (reduced to 1 executor) and the error stack trace.

20/05/09 15:13:18 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 15000
20/05/09 15:13:18 INFO StagingCommitter: Starting: Task committer attempt_20200509151301_0000_m_000000_0: needsTaskCommit() Task attempt_20200509151301_0000_m_000000_0
20/05/09 15:13:18 INFO StagingCommitter: Task committer attempt_20200509151301_0000_m_000000_0: needsTaskCommit() Task attempt_20200509151301_0000_m_000000_0: duration 0:00.005s
20/05/09 15:13:18 INFO StagingCommitter: Starting: Task committer attempt_20200509151301_0000_m_000000_0: commit task attempt_20200509151301_0000_m_000000_0
20/05/09 15:13:18 INFO StagingCommitter: Task committer attempt_20200509151301_0000_m_000000_0: commit task attempt_20200509151301_0000_m_000000_0: duration 0:00.019s
20/05/09 15:13:18 ERROR Utils: Aborting task
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Ljava/lang/String;)Lorg/apache/hadoop/io/nativeio/NativeIO$POSIX$Stat;
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.getStat(NativeIO.java:460)
    at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfoByNativeIO(RawLocalFileSystem.java:821)
    at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:735)
    at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:703)
    at org.apache.hadoop.fs.LocatedFileStatus.<init>(LocatedFileStatus.java:52)
    at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:2091)
    at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:2071)
    at org.apache.hadoop.fs.FileSystem$5.hasNext(FileSystem.java:2190)
    at org.apache.hadoop.fs.s3a.S3AUtils.applyLocatedFiles(S3AUtils.java:1295)
    at org.apache.hadoop.fs.s3a.S3AUtils.flatmapLocatedFiles(S3AUtils.java:1333)
    at org.apache.hadoop.fs.s3a.S3AUtils.listAndFilter(S3AUtils.java:1350)
    at org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter.getTaskOutput(StagingCommitter.java:385)
    at org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter.commitTask(StagingCommitter.java:641)
    at org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:50)
    at org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:77)
    at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitTask(HadoopMapReduceCommitProtocol.scala:225)
    at org.apache.spark.internal.io.cloud.PathOutputCommitProtocol.commitTask(PathOutputCommitProtocol.scala:220)
    at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:78)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:248)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/05/09 15:13:18 ERROR Utils: Aborting task

According to my current understanding, the configuration is correct. The error is probably caused by some version incompatibility or by my local environment setup.
The same code works as expected for ORC and CSV without any errors, but not for Parquet.
Question: What could cause this error, and how do I fix it?

gojuced7 · answer 1

For everyone coming here: I found the solution. As expected, the problem is not related to the S3A output committers or to the library dependencies.
The UnsatisfiedLinkError on the Java native method was thrown because of a version incompatibility between the Hadoop version in the sbt dependencies and the winutils.exe (HDFS wrapper) on my Windows machine.
I downloaded the corresponding version from cdarlint/winutils and everything works now. LOL
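As a minimal sketch of how that fix can be wired up locally (assuming the Hadoop 3.2.1 binaries from cdarlint/winutils, including winutils.exe and hadoop.dll, were unpacked to C:\hadoop\bin; the path and the test write location are just illustrative):

// Must point to the folder that CONTAINS bin\winutils.exe and bin\hadoop.dll,
// and must be set before any Hadoop/Spark class touches the local file system.
System.setProperty("hadoop.home.dir", "C:\\hadoop")
// Alternatively, set the HADOOP_HOME environment variable and add %HADOOP_HOME%\bin
// to PATH so that hadoop.dll (the native NativeIO implementation on Windows)
// can be found and loaded.

val spark = org.apache.spark.sql.SparkSession.builder()
  .appName("winutils-check")
  .master("local[*]")
  .getOrCreate()

// A local Parquet write exercises RawLocalFileSystem and the native stat() call
// that was failing before; if this succeeds, winutils/hadoop.dll match the Hadoop jars.
spark.range(10).write.mode("overwrite").parquet("file:///C:/tmp/winutils-check")
spark.stop()

The important part is that the winutils/hadoop.dll build matches the hadoop-client version from build.sbt (3.2.1 here): an older hadoop.dll does not provide the NativeIO$POSIX.stat native method that Hadoop 3.2.x calls, which is exactly what the UnsatisfiedLinkError above points to.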
