Problems creating/loading Parquet tables from Spark
Environment details:
Hortonworks HDP 3.0
Spark 2.3.1
Hive 3.1
1#. When trying to create a Parquet table in Hive 3.1 from Spark 2.3, Spark throws the following error.
df.write.format("parquet").mode("overwrite").saveAsTable("database_name.test1")
pyspark.sql.utils.AnalysisException: u'org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Table datamart.test1 failed strict managed table checks due to the following reason: Table is marked as a managed table but is not transactional.);'
2#. Inserting data into an existing Parquet table, and reading it back through Spark, both succeed.
df.write.format("parquet").mode("overwrite").insertInto("database_name.test2")
spark.sql("select * from database_name.test2").show()
spark.read.parquet("/path-to-table-dir/part-00000.snappy.parquet").show()
But when I try to read the same table through Hive, the Hive session disconnects and throws the error below.
SELECT * FROM database_name.test2
org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.transport.TSaslTransport.readLength(TSaslTransport.java:376)
at org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:453)
at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:435)
at org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:37)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_FetchResults(TCLIService.java:567)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.FetchResults(TCLIService.java:554)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1572)
at com.sun.proxy.$Proxy22.FetchResults(Unknown Source)
at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:373)
at org.apache.hive.beeline.BufferedRows.<init>(BufferedRows.java:56)
at org.apache.hive.beeline.IncrementalRowsWithNormalization.<init>(IncrementalRowsWithNormalization.java:50)
at org.apache.hive.beeline.BeeLine.print(BeeLine.java:2250)
at org.apache.hive.beeline.Commands.executeInternal(Commands.java:1026)
at org.apache.hive.beeline.Commands.execute(Commands.java:1201)
at org.apache.hive.beeline.Commands.sql(Commands.java:1130)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1425)
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1287)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1071)
at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:538)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:520)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Unknown HS2 problem when communicating with Thrift server.
Error: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe (Write failed) (state=08S01,code=0)
After this error the Hive session is disconnected and I have to reconnect. All other queries work fine; only this query produces the error above and drops the connection.
1 Answer
This problem occurs because the Hive table is being accessed without the Hive Warehouse Connector.
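Access through the connector then looks roughly like the sketch below, based on the HDP 3 HWC documentation. This only runs inside a PySpark session on an HDP 3 cluster, and the jar/zip paths and HiveServer2 URL shown are assumptions to adapt to your install, not values from this post:

```python
# Sketch, not verified here: PySpark must be launched with the HWC
# artifacts, e.g. (paths below are assumptions for a typical HDP 3 layout):
#   pyspark --jars /usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-<version>.jar \
#           --py-files /usr/hdp/current/hive_warehouse_connector/pyspark_hwc-<version>.zip \
#           --conf spark.sql.hive.hiveserver2.jdbc.url="jdbc:hive2://<hs2-host>:10000/"
from pyspark_llap import HiveWarehouseSession

# Build an HWC session on top of the existing SparkSession.
hive = HiveWarehouseSession.session(spark).build()

# Read the Hive-catalog table through HiveServer2, instead of reading
# its files directly with spark.sql() / spark.read.
hive.executeQuery("SELECT * FROM database_name.test2").show()
```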
By default, Spark uses its own catalog; the article below explains how to access Apache Hive tables from Spark.
Integrating Apache Hive with Apache Spark — Hive Warehouse Connector
As of HDP 3.0, the catalogs for Apache Hive and Apache Spark are separated, and each uses its own catalog; that is, they are mutually exclusive — the Apache Hive catalog can only be accessed by Apache Hive or this library, and the Apache Spark catalog can only be accessed by the existing APIs in Apache Spark. In other words, some features, such as ACID tables or Apache Ranger with Apache Hive tables, are only available via this library in Apache Spark. Those tables in Hive should not be accessed directly through the Apache Spark APIs.