Spark Python script not writing to HBase

dzjeubhm · posted 2021-06-09 in HBase

I am trying to run the script from this blog post:

import sys
import json
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

def SaveRecord(rdd):
    host = 'sparkmaster.example.com'
    table = 'cats'
    # Converters from the Spark examples jar: they turn the Python key into an
    # ImmutableBytesWritable and the [rowkey, family, qualifier, value] list into a Put.
    keyConv = "org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter"
    valueConv = "org.apache.spark.examples.pythonconverters.StringListToPutConverter"
    conf = {"hbase.zookeeper.quorum": host,
            "hbase.mapred.outputtable": table,
            "mapreduce.outputformat.class": "org.apache.hadoop.hbase.mapreduce.TableOutputFormat",
            "mapreduce.job.output.key.class": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
            "mapreduce.job.output.value.class": "org.apache.hadoop.io.Writable"}
    # Each JSON line becomes (rowkey, [rowkey, column_family, qualifier, raw_json]).
    datamap = rdd.map(lambda x: (str(json.loads(x)["id"]),
                                 [str(json.loads(x)["id"]), "cfamily", "cats_json", x]))
    datamap.saveAsNewAPIHadoopDataset(conf=conf, keyConverter=keyConv, valueConverter=valueConv)

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: StreamCatsToHBase.py <hostname> <port>")
        sys.exit(-1)

    sc = SparkContext(appName="StreamCatsToHBase")
    ssc = StreamingContext(sc, 1)  # 1-second batch interval
    lines = ssc.socketTextStream(sys.argv[1], int(sys.argv[2]))
    lines.foreachRDD(SaveRecord)

    ssc.start()             # Start the computation
    ssc.awaitTermination()  # Wait for the computation to terminate

I cannot get it to run. I tried three different command-line options, but none of them produced any output or wrote any data to the HBase table.
Here are the command-line options I tried:

spark-submit --jars /usr/local/spark/lib/spark-examples-1.5.2-hadoop2.4.0.jar --jars /usr/local/hbase/lib/hbase-examples-1.1.2.jar sp_json.py localhost 2389 > sp_json.log

spark-submit --driver-class-path /usr/local/spark/lib/spark-examples-1.5.2-hadoop2.4.0.jar sp_json.py localhost 2389 > sp_json.log

spark-submit --driver-class-path /usr/local/spark/lib/spark-examples-1.5.2-hadoop2.4.0.jar --jars /usr/local/hbase/lib/hbase-examples-1.1.2.jar sp_json.py localhost 2389 > sp_json.log

Here is the log file. It is extremely verbose, which is one of the things that makes Apache Spark hard to debug: it emits far too much information.
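To narrow down where it fails, the streaming part can be taken out of the picture and the HBase write tested on a hard-coded RDD with the same converters and configuration. The sketch below is modeled on the hbase_outputformat.py example that ships with the Spark examples jar; the quorum host, table name ('cats'), column family ('cfamily'), and the sample record are simply values assumed from the script above.

# Standalone smoke test: write one hard-coded row to HBase, no streaming involved.
# Host, table, and column family are assumptions taken from the script above.
import json
from pyspark import SparkContext

host = 'sparkmaster.example.com'
table = 'cats'
keyConv = "org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter"
valueConv = "org.apache.spark.examples.pythonconverters.StringListToPutConverter"
conf = {"hbase.zookeeper.quorum": host,
        "hbase.mapred.outputtable": table,
        "mapreduce.outputformat.class": "org.apache.hadoop.hbase.mapreduce.TableOutputFormat",
        "mapreduce.job.output.key.class": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
        "mapreduce.job.output.value.class": "org.apache.hadoop.io.Writable"}

sc = SparkContext(appName="HBaseWriteSmokeTest")
record = json.dumps({"id": "1", "name": "whiskers"})  # hypothetical sample record
# One element shaped as (rowkey, [rowkey, column_family, qualifier, value]).
rdd = sc.parallelize([("1", ["1", "cfamily", "cats_json", record])])
rdd.saveAsNewAPIHadoopDataset(conf=conf, keyConverter=keyConv, valueConverter=valueConv)
sc.stop()

If this standalone job also writes nothing, the problem lies in the classpath/converter setup rather than in the streaming code.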


wlzqhblo1#

Finally got it working with the following command syntax, passing both jars to a single --jars flag as a comma-separated list:

spark-submit --jars /usr/local/spark/lib/spark-examples-1.5.2-hadoop2.4.0.jar,/usr/local/hbase/lib/hbase-examples-1.1.2.jar sp_json.py localhost 2399 > sp_json.log
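Repeating --jars, as in the first attempt, does not accumulate jars; only the last flag takes effect, which is why both jars need to go in one comma-separated list. Once the job runs, one way to confirm the rows actually landed is to read the table back with the companion converters from the same examples jar. This is only a sketch, based on the hbase_inputformat.py example bundled with Spark, assuming the 'cats' table and quorum host from the question:

# Read the table back to confirm the Puts reached HBase.
# Follows the hbase_inputformat.py example in the Spark examples jar; the
# quorum host and table name are the ones assumed in the question.
from pyspark import SparkContext

host = 'sparkmaster.example.com'
table = 'cats'
conf = {"hbase.zookeeper.quorum": host, "hbase.mapreduce.inputtable": table}
keyConv = "org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter"
valueConv = "org.apache.spark.examples.pythonconverters.HBaseResultToStringConverter"

sc = SparkContext(appName="HBaseReadCheck")
hbase_rdd = sc.newAPIHadoopRDD(
    "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
    "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
    "org.apache.hadoop.hbase.client.Result",
    keyConverter=keyConv,
    valueConverter=valueConv,
    conf=conf)
for rowkey, cell_json in hbase_rdd.collect():
    print(rowkey, cell_json)
sc.stop()

A quick scan 'cats' in the hbase shell gives the same confirmation without Spark.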
