Hive 1.2 metastore service does not start after being configured for S3 storage instead of HDFS

b5buobof · posted 2021-06-21 · in: Mysql

I have an Apache Spark cluster (2.2.0) in standalone mode. Up to now we have been using HDFS to store the Parquet files. I run the Hive Metastore service of Apache Hive 1.2 and access Spark over JDBC through the Thrift server.
Now I want to use S3 object storage instead of HDFS. I have added the following configuration to my hive-site.xml:

<property>
  <name>fs.s3a.access.key</name>
  <value>access_key</value>
  <description>Profitbricks Access Key</description>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>secret_key</value>
  <description>Profitbricks Secret Key</description>
</property>
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3-de-central.profitbricks.com</value>
  <description>ProfitBricks S3 Object Storage Endpoint</description>
</property>
<property>
  <name>fs.s3a.endpoint.http.port</name>
  <value>80</value>
  <description>ProfitBricks S3 Object Storage Endpoint HTTP Port</description>
</property>
<property>
  <name>fs.s3a.endpoint.https.port</name>
  <value>443</value>
  <description>ProfitBricks S3 Object Storage Endpoint HTTPS Port</description>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>s3a://dev.spark.my_bucket/parquet/</value>
  <description>Profitbricks S3 Object Storage Hive Warehouse Location</description>
</property>

I have the Hive metastore in a MySQL 5.7 database. I have added the following jar files to the Hive lib folder (copied in as sketched after the list):
aws-java-sdk-1.7.4.jar
hadoop-aws-2.7.3.jar
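For reference, a minimal sketch of how this jar pairing is usually obtained, assuming the standard $HADOOP_HOME and $HIVE_HOME layout of the Hadoop 2.7.3 binary distribution (the paths are assumptions, not from the original post):

# hadoop-aws and the aws-java-sdk build it was compiled against ship together
ls $HADOOP_HOME/share/hadoop/tools/lib/ | grep -E 'hadoop-aws|aws-java-sdk'
# copy the matching pair onto the Hive classpath
cp $HADOOP_HOME/share/hadoop/tools/lib/hadoop-aws-2.7.3.jar \
   $HADOOP_HOME/share/hadoop/tools/lib/aws-java-sdk-1.7.4.jar \
   $HIVE_HOME/lib/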
I dropped the old Hive metastore schema on MySQL and then started the metastore service with the following command: hive --service metastore & — and I got the following error:

java.lang.NoClassDefFoundError: com/fasterxml/jackson/databind/ObjectMapper
        at com.amazonaws.util.json.Jackson.<clinit>(Jackson.java:27)
        at com.amazonaws.internal.config.InternalConfig.loadfrom(InternalConfig.java:182)
        at com.amazonaws.internal.config.InternalConfig.load(InternalConfig.java:199)
        at com.amazonaws.internal.config.InternalConfig$Factory.<clinit>(InternalConfig.java:232)
        at com.amazonaws.ServiceNameFactory.getServiceName(ServiceNameFactory.java:34)
        at com.amazonaws.AmazonWebServiceClient.computeServiceName(AmazonWebServiceClient.java:703)
        at com.amazonaws.AmazonWebServiceClient.getServiceNameIntern(AmazonWebServiceClient.java:676)
        at com.amazonaws.AmazonWebServiceClient.computeSignerByURI(AmazonWebServiceClient.java:278)
        at com.amazonaws.AmazonWebServiceClient.setEndpoint(AmazonWebServiceClient.java:160)
        at com.amazonaws.services.s3.AmazonS3Client.setEndpoint(AmazonS3Client.java:475)
        at com.amazonaws.services.s3.AmazonS3Client.init(AmazonS3Client.java:447)
        at com.amazonaws.services.s3.AmazonS3Client.<init>(AmazonS3Client.java:391)
        at com.amazonaws.services.s3.AmazonS3Client.<init>(AmazonS3Client.java:371)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:235)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2811)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2848)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2830)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
        at org.apache.hadoop.hive.metastore.Warehouse.getFs(Warehouse.java:104)
        at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:140)
        at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:146)
        at org.apache.hadoop.hive.metastore.Warehouse.getWhRoot(Warehouse.java:159)
        at org.apache.hadoop.hive.metastore.Warehouse.getDefaultDatabasePath(Warehouse.java:177)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:601)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5757)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:5990)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5915)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.lang.ClassNotFoundException: com.fasterxml.jackson.databind.ObjectMapper
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

The missing class belongs to the Jackson library, so I copied over the jackson-*.jar files found in the spark-2.2.0-bin-hadoop2.7/jars/ folder (a way to verify which jar provides the class is sketched after this list), namely:
jackson-annotations-2.6.5.jar
jackson-core-2.6.5.jar
jackson-core-asl-1.9.13.jar
jackson-databind-2.6.5.jar
jackson-jaxrs-1.9.13.jar
jackson-mapper-asl-1.9.13.jar
jackson-module-paranamer-2.6.5.jar
jackson-module-scala_2.11-2.6.5.jar
jackson-xc-1.9.13.jar
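To confirm which jar on the Hive classpath actually provides the missing class, one can scan the lib folder (a minimal sketch; assumes unzip is installed and $HIVE_HOME points at the Hive install):

# print every jar in the Hive lib folder that contains the class
for j in "$HIVE_HOME"/lib/*.jar; do
  unzip -l "$j" 2>/dev/null \
    | grep -q 'com/fasterxml/jackson/databind/ObjectMapper.class' \
    && echo "$j"
done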
But then I got the following error:

2018-01-05 17:51:00,819 ERROR [main]: metastore.HiveMetaStore (HiveMetaStore.java:main(5920)) - Metastore Thrift Server threw an exception...
java.lang.NumberFormatException: For input string: "100M"
        at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
        at java.lang.Long.parseLong(Long.java:589)
        at java.lang.Long.parseLong(Long.java:631)
        at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1319)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:248)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2811)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2848)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2830)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
        at org.apache.hadoop.hive.metastore.Warehouse.getFs(Warehouse.java:104)
        at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:140)
        at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:146)
        at org.apache.hadoop.hive.metastore.Warehouse.getWhRoot(Warehouse.java:159)
        at org.apache.hadoop.hive.metastore.Warehouse.getDefaultDatabasePath(Warehouse.java:177)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:601)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5757)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:5990)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5915)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:148)

I think this error is related to some jar version incompatibility, but I cannot figure out the correct versions.
Can anyone help me?


ogq8wdun · answer #1

You absolutely cannot mix hadoop-common, hadoop-aws, aws-s3-sdk and Jackson versions independently of what each of them expects; otherwise you will see stack traces like these.
And it is all open source, so if you download all the source jars locally, your IDE will help you find what is causing the stack trace. That is what we all do. It is not magic; modern IDEs (IntelliJ IDEA) even have special stack-trace debugging.
This particular one is caused by fs.s3a.multipart.size being set to "100M" in the hadoop-common /core-default.xml resource: that default came with HADOOP-13680, whose range parsing handles values like "100M" instead of requiring 104857600. This stack trace effectively says "Hadoop 2.8+ configuration" being read by older jars.
You can try setting the property to that numeric value in your own configs, but it is a warning sign that the jar versions are out of sync; you will probably only get a few lines further before something else breaks.
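For example, the override could be added next to the other s3a properties in hive-site.xml (a sketch of the workaround only, not the real fix):

<property>
  <name>fs.s3a.multipart.size</name>
  <value>104857600</value>
  <description>Plain byte count instead of "100M", which pre-2.8 hadoop-aws cannot parse</description>
</property>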
Fix: make sure that hadoop-common.jar and hadoop-aws.jar are in sync. It looks like you have got the Jackson and AWS ones lined up, although Jackson is complex enough that you can never take that for granted.
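A quick way to check (a sketch; assumes the jars sit in their usual locations):

# both listings should show the same version number, e.g. 2.7.3
ls $HADOOP_HOME/share/hadoop/common/hadoop-common-*.jar
ls $HIVE_HOME/lib/hadoop-aws-*.jar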
