error AzureNativeFileSystemStore: directory is not empty

gk7wooem posted on 2021-05-27 in Hadoop

I am trying to run this code in Azure HDInsight. I have a Spark cluster attached to Data Lake Storage.

import org.apache.spark.sql.functions.col
import spark.implicits._

// SAS key for the "data" container (key redacted)
spark.conf.set(
  "fs.azure.sas.data.spmdevsharedstorage.blob.core.windows.net",
  "xxxxxxxxxxx key xxxxxxxxxxx"
)

val shared_data = "wasbs://data@spmdevsharedstorage.blob.core.windows.net/"

// Read CSV
val dfCsv = spark.read.option("inferSchema", "true").option("header", true).csv(shared_data + "/test/4G-pixel.csv")
val dfCsv_final_withcolumn = dfCsv.select($"latitude", $"longitude")
val dfCsv_final = dfCsv_final_withcolumn.withColumn("new_latitude", col("latitude") * 100)

// Write
dfCsv_final.coalesce(1).write.format("com.databricks.spark.csv").option("header", "true").mode("overwrite").save(shared_data + "/test/4G-pixel_edit.csv")

The code reads the CSV file without problems. However, when it writes the new CSV file, I see the following error:

20/04/03 14:58:12 ERROR AzureNativeFileSystemStore: Encountered Storage Exception for delete on Blob: https://spmdevsharedstorage.blob.core.windows.net/data/test/4G-pixel_edit.csv/_temporary/0, Exception Details: This operation is not permitted on a non-empty directory. Error Code: DirectoryIsNotEmpty
org.apache.hadoop.fs.azure.AzureException: com.microsoft.azure.storage.StorageException: This operation is not permitted on a non-empty directory.
  at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.delete(AzureNativeFileSystemStore.java:2627)
  at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.delete(AzureNativeFileSystemStore.java:2637)

The new CSV file does get written to the Data Lake (reading it back works, see the check below), but the job then stops with this error. I need the error to go away. How can I fix it?
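
A quick sanity check that the output actually landed, using the same paths as above (illustrative only):

// Read the freshly written output back and peek at a few rows
val dfCheck = spark.read.option("header", "true").csv(shared_data + "/test/4G-pixel_edit.csv")
dfCheck.show(5)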


3zwjbxry 1#

I faced a similar issue. As the log shows, the write itself succeeds; what fails is the output committer's cleanup, when it tries to delete the _temporary directory on the blob store and hits DirectoryIsNotEmpty. I worked around it by setting the following configuration to true, which skips that cleanup step:

--conf spark.hadoop.mapreduce.fileoutputcommitter.cleanup.skipped=true

or, from within the code:

spark.conf.set("spark.hadoop.mapreduce.fileoutputcommitter.cleanup.skipped","true")
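
If you would rather bake the flag into the session itself, here is a minimal sketch (Spark 2.x style; the app name is illustrative):

import org.apache.spark.sql.SparkSession

// Build the session with committer cleanup disabled, so the failing
// delete of the _temporary directory is skipped on WASB
val spark = SparkSession.builder()
  .appName("wasb-csv-write") // illustrative name
  .config("spark.hadoop.mapreduce.fileoutputcommitter.cleanup.skipped", "true")
  .getOrCreate()

Note that with cleanup skipped, the _temporary directory may be left behind under the output path, so you might want to remove it separately if leftover files matter to you.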
