I changed my Hive execution engine to Tez and want to run my queries on Tez, but queries only succeed for the hadoop and hive users. When I switch to another user (user51) in Beeline or Hue, the query fails. With the engine set to mr, the same query runs fine as user51.
Below are the scenarios, followed by the error debug log.
Works for all users:
SET hive.execution.engine=mr;
SELECT count(*) FROM db.mytable;
Works only for the hadoop and hive users:
SET hive.execution.engine=tez;
SELECT count(*) FROM db.mytable;
Fails for other users (e.g. user51):
SET hive.execution.engine=tez;
SELECT count(*) FROM db.mytable;
Error log:
INFO [HiveServer2-Background-Pool: Thread-643([])]: client.TezClientUtils (TezClientUtils.java:setupTezJarsLocalResources(178)) - Using tez.lib.uris value from configuration: hdfs:///apps/tez/tez.tar.gz
INFO [HiveServer2-Background-Pool: Thread-643([])]: client.TezClientUtils (TezClientUtils.java:setupTezJarsLocalResources(180)) - Using tez.lib.uris.classpath value from configuration: null
DEBUG [IPC Parameter Sending Thread #13([])]: ipc.Client (Client.java:run(1117)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 sending #952 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
DEBUG [IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51([])]: ipc.Client (Client.java:receiveRpcResponse(1171)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 got value #952
DEBUG [HiveServer2-Background-Pool: Thread-643([])]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(248)) - Call: getFileInfo took 1ms
DEBUG [IPC Parameter Sending Thread #13([])]: ipc.Client (Client.java:run(1117)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 sending #953 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
DEBUG [IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51([])]: ipc.Client (Client.java:receiveRpcResponse(1171)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 got value #953
DEBUG [HiveServer2-Background-Pool: Thread-643([])]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(248)) - Call: getFileInfo took 1ms
DEBUG [IPC Parameter Sending Thread #13([])]: ipc.Client (Client.java:run(1117)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 sending #954 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
DEBUG [IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51([])]: ipc.Client (Client.java:receiveRpcResponse(1171)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 got value #954
DEBUG [HiveServer2-Background-Pool: Thread-643([])]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(248)) - Call: getFileInfo took 0ms
DEBUG [IPC Parameter Sending Thread #13([])]: ipc.Client (Client.java:run(1117)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 sending #955 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
DEBUG [IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51([])]: ipc.Client (Client.java:receiveRpcResponse(1171)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 got value #955
DEBUG [HiveServer2-Background-Pool: Thread-643([])]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(248)) - Call: getFileInfo took 1ms
DEBUG [IPC Parameter Sending Thread #13([])]: ipc.Client (Client.java:run(1117)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 sending #956 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
DEBUG [IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51([])]: ipc.Client (Client.java:receiveRpcResponse(1171)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 got value #956
DEBUG [HiveServer2-Background-Pool: Thread-643([])]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(248)) - Call: getFileInfo took 1ms
DEBUG [IPC Parameter Sending Thread #13([])]: ipc.Client (Client.java:run(1117)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 sending #957 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
DEBUG [IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51([])]: ipc.Client (Client.java:receiveRpcResponse(1171)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 got value #957
DEBUG [HiveServer2-Background-Pool: Thread-643([])]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(248)) - Call: getFileInfo took 0ms
DEBUG [IPC Parameter Sending Thread #13([])]: ipc.Client (Client.java:run(1117)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 sending #958 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
DEBUG [IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51([])]: ipc.Client (Client.java:receiveRpcResponse(1171)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 got value #958
DEBUG [HiveServer2-Background-Pool: Thread-643([])]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(248)) - Call: getFileInfo took 1ms
DEBUG [IPC Parameter Sending Thread #13([])]: ipc.Client (Client.java:run(1117)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 sending #959 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
DEBUG [IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51([])]: ipc.Client (Client.java:receiveRpcResponse(1171)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 got value #959
DEBUG [HiveServer2-Background-Pool: Thread-643([])]: retry.RetryInvocationHandler (RetryInvocationHandler.java:handleException(366)) - Exception while invoking call #959 ClientNamenodeProtocolTranslatorPB.getFileInfo over null. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException: java.lang.NullPointerException
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1435) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1345) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at com.sun.proxy.$Proxy31.getFileInfo(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:796) ~[hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_222]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_222]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409) [hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) [hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346) [hadoop-common-2.8.3-amzn-1.jar:?]
at com.sun.proxy.$Proxy32.getFileInfo(Unknown Source) [?:?]
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1717) [hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1437) [hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1434) [hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1449) [hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at org.apache.tez.client.TezClientUtils.checkAncestorPermissionsForAllUsers(TezClientUtils.java:1036) [tez-api-0.8.4.jar:0.8.4]
at org.apache.tez.client.TezClientUtils.addLocalResources(TezClientUtils.java:275) [tez-api-0.8.4.jar:0.8.4]
at org.apache.tez.client.TezClientUtils.setupTezJarsLocalResources(TezClientUtils.java:183) [tez-api-0.8.4.jar:0.8.4]
at org.apache.tez.client.TezClient.getTezJarResources(TezClient.java:1057) [tez-api-0.8.4.jar:0.8.4]
at org.apache.tez.client.TezClient.start(TezClient.java:447) [tez-api-0.8.4.jar:0.8.4]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.startSessionAndContainers(TezSessionState.java:376) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:323) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:703) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:196) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:303) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:168) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1232) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:255) [hive-service-2.3.2-amzn-2.jar:2.3.2-amzn-2]
at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91) [hive-service-2.3.2-amzn-2.jar:2.3.2-amzn-2]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348) [hive-service-2.3.2-amzn-2.jar:2.3.2-amzn-2]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_222]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_222]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836) [hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362) [hive-service-2.3.2-amzn-2.jar:2.3.2-amzn-2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_222]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]
ERROR [HiveServer2-Background-Pool: Thread-643([])]: exec.Task (TezTask.java:execute(230)) - Failed to execute tez graph.
org.apache.hadoop.ipc.RemoteException: java.lang.NullPointerException
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1435) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1345) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at com.sun.proxy.$Proxy31.getFileInfo(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:796) ~[hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_222]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_222]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at com.sun.proxy.$Proxy32.getFileInfo(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1717) ~[hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1437) ~[hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1434) ~[hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1449) ~[hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at org.apache.tez.client.TezClientUtils.checkAncestorPermissionsForAllUsers(TezClientUtils.java:1036) ~[tez-api-0.8.4.jar:0.8.4]
at org.apache.tez.client.TezClientUtils.addLocalResources(TezClientUtils.java:275) ~[tez-api-0.8.4.jar:0.8.4]
at org.apache.tez.client.TezClientUtils.setupTezJarsLocalResources(TezClientUtils.java:183) ~[tez-api-0.8.4.jar:0.8.4]
at org.apache.tez.client.TezClient.getTezJarResources(TezClient.java:1057) ~[tez-api-0.8.4.jar:0.8.4]
at org.apache.tez.client.TezClient.start(TezClient.java:447) ~[tez-api-0.8.4.jar:0.8.4]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.startSessionAndContainers(TezSessionState.java:376) ~[hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:323) ~[hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:703) ~[hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:196) ~[hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:303) ~[hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:168) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1232) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:255) [hive-service-2.3.2-amzn-2.jar:2.3.2-amzn-2]
at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91) [hive-service-2.3.2-amzn-2.jar:2.3.2-amzn-2]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348) [hive-service-2.3.2-amzn-2.jar:2.3.2-amzn-2]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_222]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_222]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836) [hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362) [hive-service-2.3.2-amzn-2.jar:2.3.2-amzn-2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_222]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]
ERROR [HiveServer2-Background-Pool: Thread-643([])]: ql.Driver (SessionState.java:printError(1126)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
I don't know what is going on here. Can anyone help?
1 Answer
I finally found the solution. We had added some HDFS authorization properties to hdfs-site.xml. When a query runs on the Tez engine, Tez creates some temporary files and directories in HDFS, and these extra properties were interfering with that. So I removed the additional properties below from hdfs-site.xml and restarted the Hadoop services.
Additional properties:
Hope this helps someone.
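For anyone hitting the same stack trace: the failure originates in `TezClientUtils.checkAncestorPermissionsForAllUsers`, which walks the ancestor directories of `tez.lib.uris` (`hdfs:///apps/tez/tez.tar.gz` in the log above). Before touching hdfs-site.xml, it is worth verifying those ancestors are reachable by all users. A minimal check — paths taken from this log, adjust for your cluster — might look like:

```shell
# Inspect the Tez archive and each ancestor directory. For the Tez
# session to start for every user, each ancestor should be
# world-executable (o+x) so any user can traverse down to the archive.
hdfs dfs -ls /apps/tez/tez.tar.gz
hdfs dfs -ls -d / /apps /apps/tez

# If an ancestor lacks world execute permission, one way to open it up
# (run as the HDFS superuser) is:
# hdfs dfs -chmod o+x /apps /apps/tez
```

This did not turn out to be the root cause in my case (the custom authorization properties were), but it is a quick way to rule out the permission check that appears in the trace.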