I'm running Flink 1.7.2 on ECS with Fargate. I've set my job's state backend to RocksDB with a path of s3://...
In my Dockerfile, the base image is 1.7.2-hadoop27-scala_2.11, and I run the following two commands:
RUN echo "fs.s3a.aws.credentials.provider: org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider" >> "$FLINK_CONF_DIR/flink-conf.yaml"
RUN cp /opt/flink/opt/flink-s3-fs-hadoop-1.7.2.jar /opt/flink/lib/flink-s3-fs-hadoop-1.7.2.jar
as suggested in https://issues.apache.org/jira/browse/flink-8439
However, I get the following exception:
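For context, the two commands above fit into a Dockerfile roughly like this (the FROM tag is my assumption, inferred from the base image named above; everything else is taken from the question):

```
# Hypothetical Dockerfile sketch; the FROM tag is assumed, not quoted from the question
FROM flink:1.7.2-hadoop27-scala_2.11
# Point S3A at the (shaded) container credentials provider
RUN echo "fs.s3a.aws.credentials.provider: org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider" >> "$FLINK_CONF_DIR/flink-conf.yaml"
# Move the S3 filesystem jar from opt/ into lib/ so it is on the classpath
RUN cp /opt/flink/opt/flink-s3-fs-hadoop-1.7.2.jar /opt/flink/lib/flink-s3-fs-hadoop-1.7.2.jar
```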
Caused by: java.io.IOException: From option fs.s3a.aws.credentials.provider java.lang.ClassNotFoundException: Class org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider not found
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AUtils.loadAWSProviderClasses(S3AUtils.java:592)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:556)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:256)
at org.apache.flink.fs.s3.common.AbstractS3FileSystemFactory.create(AbstractS3FileSystemFactory.java:125)
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:395)
at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:318)
at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
at org.apache.flink.runtime.state.filesystem.FsCheckpointStorage.<init>(FsCheckpointStorage.java:58)
at org.apache.flink.runtime.state.filesystem.FsStateBackend.createCheckpointStorage(FsStateBackend.java:444)
at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createCheckpointStorage(RocksDBStateBackend.java:407)
at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:249)
... 17 more
Caused by: java.lang.ClassNotFoundException: Class org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider not found
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2375)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.conf.Configuration.getClasses(Configuration.java:2446)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AUtils.loadAWSProviderClasses(S3AUtils.java:589)
... 28 more
Looking inside flink-s3-fs-hadoop-1.7.2.jar, I can see that the ContainerCredentialsProvider class is actually under org.apache.flink.fs.s3base.shaded.com.amazonaws.auth.
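One way to check where the class really lives is to list the jar's entries and grep for it. A reproducible sketch (it builds a stand-in jar with the same package layout, since the real flink-s3-fs-hadoop-1.7.2.jar isn't available here; against the real file you would just run the final listing command on /opt/flink/lib/flink-s3-fs-hadoop-1.7.2.jar):

```shell
# Build a stand-in jar mimicking the shaded package layout seen in the real jar
mkdir -p demo/org/apache/flink/fs/s3base/shaded/com/amazonaws/auth
touch demo/org/apache/flink/fs/s3base/shaded/com/amazonaws/auth/ContainerCredentialsProvider.class
(cd demo && python3 -m zipfile -c ../stand-in.jar org)

# List the jar entries and grep for the class to reveal its actual shading prefix
python3 -m zipfile -l stand-in.jar | grep ContainerCredentialsProvider
```

The listing shows the class under org/apache/flink/fs/s3base/shaded/..., not org/apache/flink/fs/s3hadoop/shaded/..., which is why the configured provider class cannot be found.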
I have already tried:
- adding the aws-sdk-core jar and setting the credentials provider to com.amazonaws.auth.ContainerCredentialsProvider (unshaded), but then I hit the problem described in the JIRA issue linked above;
- setting the credentials provider to org.apache.flink.fs.s3base.shaded.com.amazonaws.auth.ContainerCredentialsProvider, but then the code in S3FileSystemFactory.java prefixes it with org.apache.flink.fs.s3hadoop.shaded. anyway.
Is there any way to get this class found and loaded?
1 Answer
This was resolved in a later Flink release.
I ran the same setup on a Flink 1.9.0 cluster, and the class was found and everything worked.
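The answer doesn't quote the working configuration. A hedged sketch, reusing the question's own Dockerfile line but with the unshaded SDK class name (which the factory's fixed shading prefix should now relocate correctly), might look like:

```
# Assumed configuration for Flink 1.9.0; the provider value is my guess, not quoted from the answer
RUN echo "fs.s3a.aws.credentials.provider: com.amazonaws.auth.ContainerCredentialsProvider" >> "$FLINK_CONF_DIR/flink-conf.yaml"
```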
You can see in https://github.com/apache/flink/blob/master/flink-filesystems/flink-s3-fs-hadoop/src/main/java/org/apache/flink/fs/s3hadoop/s3filesystemfactory.java
that FLINK_SHADING_PREFIX
is now correct.