We keep running into this error:
Caused by: net.opentsdb.uid.NoSuchUniqueName: No such name for 'metrics': 'test'
    at net.opentsdb.uid.UniqueId$1GetIdCB.call(UniqueId.java:450) ~[tsdb-2.4.0.jar:]
    at net.opentsdb.uid.UniqueId$1GetIdCB.call(UniqueId.java:447) ~[tsdb-2.4.0.jar:]
    ... 34 common frames omitted
ERROR [AsyncHBase I/O Worker #13] UniqueId: Failed attempt #1 to assign an UID for metrics:test at step #2
org.hbase.async.RemoteException: org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil$ClassLoaderHolder
The common explanation we find online is that this error appears when the following three parameters are missing (a manual alternative is sketched right after these settings):
tsd.core.auto_create_metrics = true
tsd.core.auto_create_tagks = true
tsd.core.auto_create_tagvs = true
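Even with those flags set, one way to take auto-creation out of the picture is to assign the UID for the metric by hand before writing. This is only a sketch, assuming the commands are run from the OpenTSDB build directory and that the tsdb script picks up the same configuration file as the running TSD:

./tsdb mkmetric test
# equivalent form using the uid tool
./tsdb uid assign metrics test

If the manual assignment fails with the same error, the auto-create flags are unlikely to be the cause.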
We send data to OpenTSDB like this:
echo "put test 1548838714 1 tag1=1" | nc 192.168.150.101 4243
We also noticed that the echo command sometimes fails only when OpenTSDB is started with build/tsdb tsd rather than through /etc/init.d/opentsdb (i.e. with service opentsdb start).
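One difference between the two startup paths can be which configuration file gets loaded, so it may be worth starting the TSD manually with an explicit config to rule that out. A minimal sketch, assuming /etc/opentsdb/opentsdb.conf is the file the init script uses (adjust the path to wherever the file below actually lives):

./build/tsdb tsd --config=/etc/opentsdb/opentsdb.conf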
Here is the configuration file (a quick UID check is sketched right after it):
# --------- NETWORK ----------
# The TCP port TSD should use for communications
# ***REQUIRED***
tsd.network.port = 4243
# The IPv4 network address to bind to, defaults to all addresses
# tsd.network.bind = 0.0.0.0
# Sets the TCP_NODELAY socket option, which disables Nagle's algorithm so that
# small packets are sent without waiting to be batched, default is True
# tsd.network.tcpnodelay = true
# Determines whether or not to send keepalive packets to peers, default
# is True
# tsd.network.keepalive = true
# Determines if the same socket should be used for new connections, default
# is True
# tsd.network.reuseaddress = true
# Number of worker threads dedicated to Netty, defaults to # of CPUs * 2
# tsd.network.worker_threads = 8
# Whether or not to use NIO or traditional blocking IO, defaults to True
# tsd.network.async_io = true
# ----------- HTTP -----------
# The location of static files for the HTTP GUI interface.
# ***REQUIRED***
tsd.http.staticroot = /opt/opentsdb-2.4.0/build/staticroot/
# Where TSD should write its cache files to
# ***REQUIRED***
tsd.http.cachedir = /opt/opentsdb-2.4.0/build/CACHE
# --------- CORE ----------
# Whether or not to automatically create UIDs for new metric types, default
# is False
tsd.core.auto_create_metrics = true
# --------- STORAGE ----------
# Whether or not to enable data compaction in HBase, default is True
# tsd.storage.enable_compaction = true
# How often, in milliseconds, to flush the data point queue to storage,
# default is 1,000
# tsd.storage.flush_interval = 1000
# Name of the HBase table where data points are stored, default is "tsdb"
tsd.storage.hbase.data_table = tsdb
# Name of the HBase table where UID information is stored, default is "tsdb-uid"
tsd.storage.hbase.uid_table = tsdb-uid
# Path under which the znode for the -ROOT- region is located, default is "/hbase"
tsd.storage.hbase.zk_basedir = /hbase-unsecure
# A comma separated list of Zookeeper hosts to connect to, with or without
# port specifiers, default is "localhost"
# tsd.storage.hbase.zk_quorum = localhost
tsd.storage.hbase.zk_quorum = namenode1.local,namenode2.local
# Enable incoming chunked HTTP requests, default is False
tsd.http.request.enable_chunked = true
# Maximum request body size, in bytes, for chunked requests
tsd.http.request.max_chunk = 16000
# Accept and fix duplicate data points instead of throwing an exception, default is False
tsd.storage.fix_duplicates = true
# Maximum number of tags allowed per data point, default is 8
tsd.storage.max_tags = 45
# Width, in bytes, of assigned UIDs, default is 3; must not be changed after
# data has been written with a different width
tsd.storage.uid.width.metric = 4
tsd.storage.uid.width.tagk = 4
tsd.storage.uid.width.tagv = 4
# Randomly assign UIDs to new metrics instead of incrementing, default is False
tsd.core.uid.random_metrics = true
# Whether or not to automatically create UIDs for new tag keys and tag values
tsd.core.auto_create_tagks = true
tsd.core.auto_create_tagvs = true
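Given the error above, it may also help to check whether the 'test' metric UID was ever created in the tsdb-uid table. A rough sketch using the bundled uid tool and the HTTP UID assignment endpoint (same host and port as before; run the tsdb script from the build directory):

# search the UID table for metric names matching "test"
./tsdb uid grep metrics test
# ask the TSD to assign the UID explicitly and report any error in the response
curl -X POST http://192.168.150.101:4243/api/uid/assign -H "Content-Type: application/json" -d '{"metric":["test"]}'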