Error executing the bulk import API in Cosmos DB Spark

kx1ctssn · posted on 2021-05-18 in Spark

I am trying to do a bulk import in a Spark job using the Cosmos DB bulk configuration:

import com.microsoft.azure.cosmosdb.spark.config.CosmosDBConfig

// Connector write settings, driven by the app config and system properties
val writeConfigMap = Map(
  CosmosDBConfig.Endpoint -> config.getString("cosmosdb.endpoint"),
  CosmosDBConfig.Masterkey -> sys.props.get("masterkey").getOrElse("No env"),
  CosmosDBConfig.Database -> config.getString("cosmosdb.database"),
  CosmosDBConfig.Collection -> config.getString("cosmosdb.collection"),
  CosmosDBConfig.Upsert -> config.getString("cosmosdb.upsert"),
  CosmosDBConfig.PreferredRegionsList -> config.getString("cosmosdb.preferredregion"),
  CosmosDBConfig.BulkImport -> "true"
)
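
For context, this is roughly how the configuration drives the write; the DataFrame name df is a placeholder for whatever is built upstream, while Config and CosmosDBSpark.save come from the azure-cosmosdb-spark connector:

import org.apache.spark.sql.DataFrame
import com.microsoft.azure.cosmosdb.spark.CosmosDBSpark
import com.microsoft.azure.cosmosdb.spark.config.Config

// df is the DataFrame being bulk-imported (built elsewhere in the job)
val df: DataFrame = ???

// Wrap the raw settings map in the connector's Config and save;
// with BulkImport -> "true" the connector goes through the bulk import API.
CosmosDBSpark.save(df, Config(writeConfigMap))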

When retrieving/writing a document larger than the usual size (2.9 MB), the following exception is thrown (my collection has a partition key defined):

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 8.0 failed 4 times, most recent failure: Lost task 0.3 in stage 8.0 (TID 246, 10.10.42.5, executor 1): java.lang.Exception: Errors encountered in bulk import API execution. PartitionKeyDefinition: {"paths":["/key/businessUnit/id"],"kind":"Hash"}, Number of failures corresponding to exception of type: com.microsoft.azure.documentdb.DocumentClientException = 1. The failed import docs are:
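
For reference, Cosmos DB enforces a maximum item size of about 2 MB, so a 2.9 MB document would plausibly be rejected by the bulk import API. Below is a rough sketch for spotting oversized rows before the write; the to_json-based size estimate and the exact threshold are assumptions on my part:

import org.apache.spark.sql.functions.{col, length, struct, to_json}

// Approximate each row's serialized size as the character count of its JSON
// form, then flag anything near the ~2 MB Cosmos DB item limit.
val maxItemChars = 2 * 1024 * 1024
val oversized = df
  .withColumn("jsonSize", length(to_json(struct(df.columns.map(col): _*))))
  .filter(col("jsonSize") > maxItemChars)

oversized.show(false)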

Thanks in advance for your help.

No answers yet.

