My Spark Hive Java program is stuck

fnx2tebb · posted 2021-06-02 in Hadoop

I am trying to run Maven-built Java code that queries a Hive table with Spark SQL. Here is my program:

import org.apache.spark.sql.AnalysisException;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession
        .builder()
        .appName("Java Spark Hive Example")
        .master("local[*]")
        .config("hive.metastore.uris", "thrift://localhost:9083/")
        .enableHiveSupport()
        .getOrCreate();

try {
    spark.sql("select count(*) from health").show();
} catch (AnalysisException e) {
    // AnalysisException is thrown when the table cannot be resolved
    System.out.print("\nTable is not found\n");
}

This is where it gets stuck:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/02/18 20:17:49 INFO SparkContext: Running Spark version 2.1.0
17/02/18 20:17:50 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/02/18 20:17:50 WARN Utils: Your hostname, aims1 resolves to a loopback address: 127.0.1.1; using 10.0.0.8 instead (on interface wlxa42b8c61853d)
17/02/18 20:17:50 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
17/02/18 20:17:50 INFO SecurityManager: Changing view acls to: aims1
17/02/18 20:17:50 INFO SecurityManager: Changing modify acls to: aims1
17/02/18 20:17:50 INFO SecurityManager: Changing view acls groups to: 
17/02/18 20:17:50 INFO SecurityManager: Changing modify acls groups to: 
17/02/18 20:17:50 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(aims1); groups with view permissions: Set(); users  with modify permissions: Set(aims1); groups with modify permissions: Set()
17/02/18 20:17:50 INFO Utils: Successfully started service 'sparkDriver' on port 37935.
17/02/18 20:17:50 INFO SparkEnv: Registering MapOutputTracker
17/02/18 20:17:50 INFO SparkEnv: Registering BlockManagerMaster
17/02/18 20:17:50 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/02/18 20:17:50 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/02/18 20:17:50 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-9f693b47-ab63-472b-b23b-82f30bb10127
17/02/18 20:17:50 INFO MemoryStore: MemoryStore started with capacity 1942.8 MB
17/02/18 20:17:50 INFO SparkEnv: Registering OutputCommitCoordinator
17/02/18 20:17:50 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/02/18 20:17:50 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.0.0.8:4040
17/02/18 20:17:50 INFO Executor: Starting executor ID driver on host localhost
17/02/18 20:17:50 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 37931.
17/02/18 20:17:50 INFO NettyBlockTransferService: Server created on 10.0.0.8:37931
17/02/18 20:17:50 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/02/18 20:17:50 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.0.0.8, 37931, None)
17/02/18 20:17:50 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.0.8:37931 with 1942.8 MB RAM, BlockManagerId(driver, 10.0.0.8, 37931, None)
17/02/18 20:17:50 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.0.0.8, 37931, None)
17/02/18 20:17:50 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.0.0.8, 37931, None)
17/02/18 20:17:51 INFO SharedState: Warehouse path is 'file:/home/aims1/Downloads/sparkhive1/spark-warehouse'.
17/02/18 20:17:51 INFO HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
17/02/18 20:17:51 INFO metastore: Trying to connect to metastore with URI thrift://localhost:9083/

After a long time, an exception is thrown saying the connection timed out.
How can I handle this situation in my program? Please help.
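The log stops right after "Trying to connect to metastore with URI thrift://localhost:9083/", which suggests the client is blocked waiting on the Thrift connection, so it is worth confirming the metastore service is actually listening on that port (e.g. started with `hive --service metastore`). One way to make the failure surface quickly instead of hanging for the full default wait is to bound the client-side socket timeout. This is a hedged sketch, not a confirmed fix: it assumes the standard Hive property `hive.metastore.client.socket.timeout` applies to this setup (values are in seconds in Hive 1.2):

```xml
<!-- hive-site.xml: bound how long the metastore client waits on the Thrift socket -->
<!-- 60 is an illustrative value; tune it for your environment -->
<property>
  <name>hive.metastore.client.socket.timeout</name>
  <value>60</value>
</property>
```

The same property can alternatively be passed on the `SparkSession` builder via `.config("hive.metastore.client.socket.timeout", "60")` before `enableHiveSupport()`.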

