spark-bigquery connector: Error getting access token from metadata server

Asked by dfty9e19 on 2021-07-14 in Spark

I am hitting this error when trying to write to BigQuery using the spark-bigquery connector. The application runs on a Hadoop cluster (not Dataproc).
java.io.IOException: Error getting access token from metadata server at: http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token
    at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.util.CredentialFactory.getCredentialFromMetadataServiceAccount(CredentialFactory.java:236)
    at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.util.CredentialConfiguration.getCredential(CredentialConfiguration.java:91)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.getCredential(GoogleHadoopFileSystemBase.java:1533)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1554)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:654)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:617)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
    at com.google.cloud.spark.bigquery.BigQueryWriteHelper.<init>(BigQueryWriteHelper.scala:62)
    at com.google.cloud.spark.bigquery.BigQueryInsertableRelation.insert(BigQueryInsertableRelation.scala:42)
    at com.google.cloud.spark.bigquery.BigQueryRelationProvider.createRelation(BigQueryRelationProvider.scala:112)
    at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:664)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:664)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:664)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
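Reading the trace: the indirect write stages data to the temporaryGcsBucket through the Hadoop GCS connector (GoogleHadoopFileSystemBase), which resolves its own credentials from the Hadoop configuration. When no key is configured there, it falls back to the GCE metadata endpoint at 169.254.169.254, which only exists on Google Cloud VMs, so the lookup fails on a cluster outside GCP. Below is a minimal sketch of pointing the GCS connector at an explicit service-account keyfile instead; the fs.gs.auth.* property names are an assumption tied to GCS connector 2.x (1.x builds use the google.cloud.auth.* prefix), so verify them against the version installed on the cluster.

    import org.apache.spark.sql.SparkSession;

    // Sketch only: give the GCS connector an explicit keyfile so it does not
    // fall back to the GCE metadata server. Property names assume a 2.x
    // GCS connector; adjust for the version actually installed.
    SparkSession spark = SparkSession.builder()
            .appName("bigquery-write")
            .config("spark.hadoop.fs.gs.auth.service.account.enable", "true")
            .config("spark.hadoop.fs.gs.auth.service.account.json.keyfile",
                    "/path/to/service-account.json") // must be readable on every node
            .getOrCreate();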
Here is the code:

dataset.write().format("bigquery")
            .option("temporaryGcsBucket", tempGcsBucket)
            //.option("table", databaseName + "." + tableName)
            .option("project", projectId)
            .option("parentProject", parentProjectId)
            .option("credentials", credentials)
            .mode(saveMode).save(projectId + "." + databaseName + "." + tableName);
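Note that the "credentials" option above is consumed by the connector's BigQuery client, while the staging write to the bucket goes through org.apache.hadoop.fs.FileSystem and is authenticated separately. A hypothetical probe that initializes the GCS filesystem the same way the writer does (Path.getFileSystem, as in the trace) can confirm this split without involving BigQuery at all:

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical diagnostic: if this throws the same metadata-server
    // IOException, the failure is GCS-connector auth, not the BigQuery
    // "credentials" option. `spark` and `tempGcsBucket` are as above.
    Path probe = new Path("gs://" + tempGcsBucket + "/_probe");
    FileSystem fs = probe.getFileSystem(spark.sparkContext().hadoopConfiguration());
    fs.exists(probe); // triggers GoogleHadoopFileSystemBase.initialize()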

I am able to read from the same table I am trying to write to, using the same credentials (a Base64-encoded service account key). I am using version spark-bigquery-with-dependencies_2.11-0.19.1.jar of the connector.
The same code works fine in lower environments, where the project and parent project are the same. In prod they are different.
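That reads succeed while writes fail is consistent with the trace: the connector reads through the BigQuery Storage API directly and never touches the GCS filesystem, whereas the indirect write must first stage files in the temporary bucket. For contrast, a reconstruction of the working read call (an assumption reusing the variable names above, not the poster's exact code):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    // Reconstructed read (assumed): succeeds because it bypasses the GCS connector.
    Dataset<Row> df = spark.read().format("bigquery")
            .option("parentProject", parentProjectId)
            .option("credentials", credentials)
            .load(projectId + "." + databaseName + "." + tableName);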

No answers yet.
