I am trying to use Alpakka S3 to connect to a MinIO instance for storing files, but I have been running into problems since upgrading the library from version 1.1.2 to 2.0.0.
Below is a simple service class with just two methods for creating a bucket. I tried two approaches: the first loads the Alpakka settings from the local configuration file (application.conf in my case), and the second builds the settings directly via S3Ext.
Both approaches fail, and I am not sure what the problem is. Judging by the errors, the settings do not seem to be loaded correctly, but I cannot see what I am doing wrong here.
What I am using:
- Play Framework 2.8.1
- Scala 2.13.2
- akka-stream-alpakka-s3 2.0.0
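For completeness, the corresponding sbt dependency (assuming the standard Alpakka artifact coordinates) looks roughly like this:

// build.sbt: Alpakka S3 connector, coordinates assumed from the version list above
libraryDependencies += "com.lightbend.akka" %% "akka-stream-alpakka-s3" % "2.0.0"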
Here is the service class:
package services

import akka.actor.ActorSystem
import akka.stream.alpakka.s3._
import akka.stream.alpakka.s3.scaladsl.S3
import akka.stream.scaladsl.Sink
import akka.stream.{Attributes, Materializer}
import javax.inject.{Inject, Singleton}
import software.amazon.awssdk.auth.credentials.{AwsBasicCredentials, AwsCredentials, AwsCredentialsProvider}
import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.regions.providers.AwsRegionProvider

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

@Singleton
class AlpakkaS3PlaygroundService @Inject()(
    materializer: Materializer,
    system: ActorSystem,
) {

  def makeBucket(bucketName: String): Future[String] = {
    S3.makeBucket(bucketName)(materializer) map { _ =>
      "bucket created"
    }
  }

  def makeBucket2(bucketName: String): Future[String] = {
    val s3Host      = "http://localhost:9000"
    val s3AccessKey = "access_key"
    val s3SecretKey = "secret_key"
    val s3Region    = "eu-central-1"

    val credentialsProvider = new AwsCredentialsProvider {
      override def resolveCredentials(): AwsCredentials = AwsBasicCredentials.create(s3AccessKey, s3SecretKey)
    }

    val regionProvider = new AwsRegionProvider {
      override def getRegion: Region = Region.of(s3Region)
    }

    val settings: S3Settings = S3Ext(system).settings
      .withEndpointUrl(s3Host)
      .withBufferType(MemoryBufferType)
      .withCredentialsProvider(credentialsProvider)
      .withListBucketApiVersion(ApiVersion.ListBucketVersion2)
      .withS3RegionProvider(regionProvider)

    val attributes: Attributes = S3Attributes.settings(settings)

    S3.makeBucketSource(bucketName)
      .withAttributes(attributes)
      .runWith(Sink.head)(materializer) map { _ =>
      "bucket created"
    }
  }
}
The configuration in application.conf looks like this:
akka.stream.alpakka.s3 {
  aws {
    credentials {
      provider = static
      access-key-id = "access_key"
      secret-access-key = "secret_key"
    }
    region {
      provider = static
      default-region = "eu-central-1"
    }
  }
  endpoint-url = "http://localhost:9000"
}
If I use the first method of the service (makeBucket(...)), I see this error:
SdkClientException: Unable to load region from any of the providers in the chain software.amazon.awssdk.regions.providers.DefaultAwsRegionProviderChain@34cb16dc:
[software.amazon.awssdk.regions.providers.SystemSettingsRegionProvider@804e08b: Unable to load region from system settings. Region must be specified either via environment variable (AWS_REGION) or system property (aws.region)., software.amazon.awssdk.regions.providers.AwsProfileRegionProvider@4d5f4b4d: No region provided in profile: default, software.amazon.awssdk.regions.providers.InstanceProfileRegionProvider@557feb58: Unable to contact EC2 metadata service.]
The error message is precise enough that I understand what the problem is, but I do not know what to do about it, since I specified the settings as outlined in the documentation. Any ideas?
In the second method of the service (makeBucket2(...)) I tried to set the S3 settings explicitly, but that does not seem to work either. The error looks like this:
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[S3Exception: 404 page not found
]]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:335)
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:253)
at play.core.server.AkkaHttpServer$$anonfun$2.applyOrElse(AkkaHttpServer.scala:424)
at play.core.server.AkkaHttpServer$$anonfun$2.applyOrElse(AkkaHttpServer.scala:420)
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:453)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:92)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:92)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:47)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:47)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
Caused by: akka.stream.alpakka.s3.S3Exception: 404 page not found
Here it looks as if the defined settings are not taken into account at all, since the service apparently cannot be found. This is actually the approach I used in the previous version of my software with akka-stream-alpakka-s3 1.1.2, where it worked as expected.
Of course I want to use Alpakka S3 for more than just creating buckets, but to demonstrate and outline my problem I kept the example this simple. I assume that once this issue is solved, all the other methods Alpakka provides will work as well.
I have really read the documentation several times, but I still cannot figure this out, so I hope someone here can help me.
2 Answers

Answer 1 (qcbq4gxm):
As of 2.0.0 at the latest, the configuration path for Alpakka S3 is now alpakka.s3, not akka.stream.alpakka.s3.
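With that change, the values from the question would presumably just move under the new root, roughly:

alpakka.s3 {
  aws {
    credentials {
      provider = static
      access-key-id = "access_key"
      secret-access-key = "secret_key"
    }
    region {
      provider = static
      default-region = "eu-central-1"
    }
  }
  endpoint-url = "http://localhost:9000"
}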
Answer 2 (epggiuax):

I got help on the Lightbend forum here.
The problem was solved by setting the following parameter:
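For a MinIO endpoint addressed as a plain host and port, the parameter in question is presumably path-style access; a minimal application.conf sketch, assuming that is the setting meant here:

alpakka.s3 {
  # assumption: MinIO needs the bucket in the URL path rather than as a virtual-host subdomain
  path-style-access = true
}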
Because the documentation says this value is going to be deprecated, I had not considered specifying it.
In my original post I outlined two ways to set the parameters: one via application.conf and one programmatically via S3Ext. The last line there is the crucial one, even though I get a deprecation warning for it.
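A rough sketch of that programmatic variant, assuming the crucial last line is the (deprecated) withPathStyleAccess call on S3Settings:

val settings: S3Settings = S3Ext(system).settings
  .withEndpointUrl("http://localhost:9000")
  .withCredentialsProvider(credentialsProvider)
  .withS3RegionProvider(regionProvider)
  .withPathStyleAccess(true) // assumption: this deprecated flag is the crucial last line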
But in the end, that is what solved the problem.