I am trying to publish data in Parquet format to an S3 object store using Hudi with a Flink pipeline. While doing so I ran into the following error:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hudi.common.bootstrap.index.HFileBootstrapIndex
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Unknown Source)
at org.apache.hudi.common.util.ReflectionUtils.getClass(ReflectionUtils.java:55)
at org.apache.hudi.common.util.ReflectionUtils.loadClass(ReflectionUtils.java:79)
at org.apache.hudi.common.bootstrap.index.BootstrapIndex.getBootstrapIndex(BootstrapIndex.java:163)
at org.apache.hudi.common.table.view.AbstractTableFileSystemView.init(AbstractTableFileSystemView.java:118)
at org.apache.hudi.common.table.view.HoodieTableFileSystemView.init(HoodieTableFileSystemView.java:113)
at org.apache.hudi.common.table.view.HoodieTableFileSystemView.<init>(HoodieTableFileSystemView.java:107)
at org.apache.hudi.common.table.view.FileSystemViewManager.createInMemoryFileSystemView(FileSystemViewManager.java:177)
at org.apache.hudi.common.table.view.FileSystemViewManager.lambda$createViewManager$5fcdabfe$1(FileSystemViewManager.java:272)
at org.apache.hudi.common.table.view.FileSystemViewManager.lambda$getFileSystemView$1(FileSystemViewManager.java:115)
at java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(Unknown Source)
at org.apache.hudi.common.table.view.FileSystemViewManager.getFileSystemView(FileSystemViewManager.java:114)
at org.apache.hudi.table.HoodieTable.getSliceView(HoodieTable.java:320)
at org.apache.hudi.sink.partitioner.profile.DeltaWriteProfile.getFileSystemView(DeltaWriteProfile.java:93)
at org.apache.hudi.sink.partitioner.profile.WriteProfile.<init>(WriteProfile.java:120)
at org.apache.hudi.sink.partitioner.profile.DeltaWriteProfile.<init>(DeltaWriteProfile.java:44)
at org.apache.hudi.sink.partitioner.profile.WriteProfiles.getWriteProfile(WriteProfiles.java:75)
at org.apache.hudi.sink.partitioner.profile.WriteProfiles.lambda$singleton$0(WriteProfiles.java:64)
at java.base/java.util.HashMap.computeIfAbsent(Unknown Source)
at org.apache.hudi.sink.partitioner.profile.WriteProfiles.singleton(WriteProfiles.java:63)
at org.apache.hudi.sink.partitioner.BucketAssigners.create(BucketAssigners.java:56)
at org.apache.hudi.sink.partitioner.BucketAssignFunction.open(BucketAssignFunction.java:122)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:101)
at org.apache.flink.streaming.api.operators.KeyedProcessOperator.open(KeyedProcessOperator.java:55)
at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:107)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:734)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:709)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:675)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:921)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:745)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.base/java.lang.Thread.run(Unknown Source)
Suppressed: java.lang.NullPointerException
at org.apache.hudi.sink.partitioner.BucketAssignFunction.close(BucketAssignFunction.java:247)
at org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:41)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.close(AbstractUdfStreamOperator.java:115)
at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:163)
at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.closeAllOperators(RegularOperatorChain.java:125)
at org.apache.flink.streaming.runtime.tasks.StreamTask.closeAllOperators(StreamTask.java:1043)
at org.apache.flink.util.IOUtils.closeAll(IOUtils.java:255)
at org.apache.flink.core.fs.AutoCloseableRegistry.doClose(AutoCloseableRegistry.java:72)
at org.apache.flink.util.AbstractAutoCloseableRegistry.close(AbstractAutoCloseableRegistry.java:127)
at org.apache.flink.streaming.runtime.tasks.StreamTask.cleanUp(StreamTask.java:951)
at org.apache.flink.runtime.taskmanager.Task.lambda$restoreAndInvoke$0(Task.java:934)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:934)
Here is the hoodie.properties file that was created:
hoodie.compaction.payload.class=org.apache.hudi.common.model.EventTimeAvroPayload
hoodie.table.type=MERGE_ON_READ
hoodie.table.partition.fields=eventId,eventName
hoodie.table.cdc.enabled=false
hoodie.archivelog.folder=archived
hoodie.timeline.layout.version=1
hoodie.table.checksum=3942898242
hoodie.datasource.write.drop.partition.columns=false
hoodie.table.recordkey.fields=event_id,event_timestamp
hoodie.table.name=sink_table
hoodie.compaction.record.merger.strategy=eeb8d96f-b1e4-49fd-bbf8-28ac514178e5
hoodie.datasource.write.hive_style_partitioning=false
hoodie.table.keygenerator.class=org.apache.hudi.keygen.ComplexAvroKeyGenerator
hoodie.datasource.write.partitionpath.urlencode=false
hoodie.table.version=5
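For context, a Flink SQL DDL consistent with the properties above might look roughly like this. This is a reconstruction, not the original DDL: the column types, the S3 path, and any options not visible in hoodie.properties are assumptions.

```sql
-- Sketch of a DDL that would yield the hoodie.properties shown above.
-- Column types and the path are assumptions.
CREATE TABLE sink_table (
  event_id STRING,
  event_timestamp BIGINT,
  eventId STRING,
  eventName STRING,
  PRIMARY KEY (event_id, event_timestamp) NOT ENFORCED
) PARTITIONED BY (eventId, eventName) WITH (
  'connector' = 'hudi',
  'path' = 's3a://<bucket>/sink_table',  -- bucket/path is an assumption
  'table.type' = 'MERGE_ON_READ',
  'payload.class' = 'org.apache.hudi.common.model.EventTimeAvroPayload'
);
```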
I tried putting hudi-flink1.16-bundle-0.13.1 on the classpath, but that only produced a few more NoClassDefFoundError failures, so I added the required libraries individually instead. Can anyone help me figure out what I am missing? TIA.
Flink version: 1.17.1, Hudi version: 0.13.1
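One way to narrow down errors like this is to check whether the jar on the classpath actually ships the class named in the message. Note that "Could not initialize class" usually means the class itself was found but its static initialization failed, often because one of its transitive dependencies is missing, so listing the bundle's contents helps locate which pieces are present. The sketch below demonstrates the check on a throwaway jar built in a temp directory; against a real bundle you would simply run the listing step (or the equivalent `unzip -l` / `jar tf`), and the jar name and contents here are assumptions for demonstration only.

```shell
# Build a throwaway "bundle" jar containing only the class the error names,
# purely to demonstrate the check; in practice, list your real bundle jar.
WORK=$(mktemp -d)
mkdir -p "$WORK/org/apache/hudi/common/bootstrap/index"
touch "$WORK/org/apache/hudi/common/bootstrap/index/HFileBootstrapIndex.class"
(cd "$WORK" && python3 -m zipfile -c bundle.jar org)

# The actual check: is the class present in the jar?
# (Equivalent to: unzip -l bundle.jar | grep HFileBootstrapIndex)
python3 -m zipfile -l "$WORK/bundle.jar" | grep HFileBootstrapIndex
```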
1 Answer
I solved this by moving the hudi-flink-bundle jar into Flink's plugins directory, flink_dir/plugins/hudi/<hudi-flink-jar>, and removing it from the lib directory.
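On a standalone setup, the move above can be sketched as follows. The block demonstrates the layout on a throwaway directory; FLINK_DIR, the jar name, and the `hudi` subdirectory name are assumptions to adapt to your installation, and after moving the jar you would restart the cluster so the plugin classloader picks it up.

```shell
# Demonstrated on a throwaway directory standing in for a Flink install.
FLINK_DIR=$(mktemp -d)                 # in reality e.g. /opt/flink
BUNDLE="hudi-flink1.16-bundle-0.13.1.jar"
mkdir -p "$FLINK_DIR/lib"
touch "$FLINK_DIR/lib/$BUNDLE"         # pretend the bundle sat in lib/

# The fix: each plugin gets its own subdirectory under plugins/
mkdir -p "$FLINK_DIR/plugins/hudi"
mv "$FLINK_DIR/lib/$BUNDLE" "$FLINK_DIR/plugins/hudi/$BUNDLE"

ls "$FLINK_DIR/plugins/hudi"           # bundle is now loaded as a plugin
```

Loading the bundle through the plugin mechanism gives it an isolated classloader, which avoids the dependency conflicts that can break static initialization when the jar sits in lib/.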