firebase: How to create a thumbnail or JPEG image server-side in Java from a video in Google Cloud Storage

jvidinwx · published 2023-08-07 in Go
Follow (0) | Answers (2) | Views (91)

How can I fetch a video from Google Cloud Storage and generate a JPEG image from one of its frames?
That frame could serve as the video's thumbnail. Or, by grabbing frames at regular intervals, the frames could be used as preview images when scrubbing through the video.
I would like to do this server-side in Java (on Google App Engine). Here is the code I can use to fetch the video's Blob from Google Cloud Storage. What are my options once I have done that?

import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

// Fetch the video object from Cloud Storage
Storage storage = StorageOptions.getDefaultInstance().getService();
BlobId blobId = BlobId.of(BUCKET, OBJECT_NAME);
Blob blob = storage.get(blobId);

I am looking for a fast and lightweight solution, since this will run server-side on Google App Engine.

oug3syen · Answer 1

While @VonC's answer is very good, I would suggest using the functionality provided by the Transcoder API rather than a custom solution.
Specifically, the Transcoder API gives you the ability to generate a spritesheet of video frames.
As indicated in the documentation, you have two options for generating a spritesheet:

  • Generate a set number of thumbnail images spread evenly across the input video timeline.
  • Generate thumbnail images periodically across the input video timeline, i.e. every n seconds (see the sketch below).
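
The full sample further down uses the first option; for the second, the SpriteSheet config takes an interval instead of row/column/total counts. A minimal sketch of just that builder (the file prefix and sprite dimensions are placeholders):

import com.google.cloud.video.transcoder.v1.SpriteSheet;
import com.google.protobuf.Duration;

// Periodic option: one small thumbnail every 10 seconds of the input timeline.
SpriteSheet periodicSpriteSheet =
    SpriteSheet.newBuilder()
        .setFilePrefix("periodic-sprite-sheet")
        .setSpriteHeightPixels(32)
        .setSpriteWidthPixels(64)
        .setInterval(Duration.newBuilder().setSeconds(10).build())
        .build();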

The API provides SDKs for several programming languages, including Java. This related example may also help. For reference:

import com.google.cloud.video.transcoder.v1.AudioStream;
import com.google.cloud.video.transcoder.v1.CreateJobRequest;
import com.google.cloud.video.transcoder.v1.ElementaryStream;
import com.google.cloud.video.transcoder.v1.Input;
import com.google.cloud.video.transcoder.v1.Job;
import com.google.cloud.video.transcoder.v1.JobConfig;
import com.google.cloud.video.transcoder.v1.LocationName;
import com.google.cloud.video.transcoder.v1.MuxStream;
import com.google.cloud.video.transcoder.v1.Output;
import com.google.cloud.video.transcoder.v1.SpriteSheet;
import com.google.cloud.video.transcoder.v1.TranscoderServiceClient;
import com.google.cloud.video.transcoder.v1.VideoStream;
import java.io.IOException;

public class CreateJobWithSetNumberImagesSpritesheet {

  public static final String smallSpritesheetFilePrefix = "small-sprite-sheet";
  public static final String largeSpritesheetFilePrefix = "large-sprite-sheet";
  public static final String spritesheetFileSuffix = "0000000000.jpeg";

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "my-project-id";
    String location = "us-central1";
    String inputUri = "gs://my-bucket/my-video-file";
    String outputUri = "gs://my-bucket/my-output-folder/";

    createJobWithSetNumberImagesSpritesheet(projectId, location, inputUri, outputUri);
  }

  // Creates a job from an ad-hoc configuration and generates two spritesheets from the input video.
  // Each spritesheet contains a set number of images.
  public static void createJobWithSetNumberImagesSpritesheet(
      String projectId, String location, String inputUri, String outputUri) throws IOException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (TranscoderServiceClient transcoderServiceClient = TranscoderServiceClient.create()) {

      VideoStream videoStream0 =
          VideoStream.newBuilder()
              .setH264(
                  VideoStream.H264CodecSettings.newBuilder()
                      .setBitrateBps(550000)
                      .setFrameRate(60)
                      .setHeightPixels(360)
                      .setWidthPixels(640))
              .build();

      AudioStream audioStream0 =
          AudioStream.newBuilder().setCodec("aac").setBitrateBps(64000).build();

      // Generates a 10x10 spritesheet of small images from the input video. To preserve the source
      // aspect ratio, you should set the spriteWidthPixels field or the spriteHeightPixels
      // field, but not both.
      SpriteSheet smallSpriteSheet =
          SpriteSheet.newBuilder()
              .setFilePrefix(smallSpritesheetFilePrefix)
              .setSpriteHeightPixels(32)
              .setSpriteWidthPixels(64)
              .setColumnCount(10)
              .setRowCount(10)
              .setTotalCount(100)
              .build();

      // Generates a 10x10 spritesheet of larger images from the input video.
      SpriteSheet largeSpriteSheet =
          SpriteSheet.newBuilder()
              .setFilePrefix(largeSpritesheetFilePrefix)
              .setSpriteHeightPixels(72)
              .setSpriteWidthPixels(128)
              .setColumnCount(10)
              .setRowCount(10)
              .setTotalCount(100)
              .build();

      JobConfig config =
          JobConfig.newBuilder()
              .addInputs(Input.newBuilder().setKey("input0").setUri(inputUri))
              .setOutput(Output.newBuilder().setUri(outputUri))
              .addElementaryStreams(
                  ElementaryStream.newBuilder()
                      .setKey("video_stream0")
                      .setVideoStream(videoStream0))
              .addElementaryStreams(
                  ElementaryStream.newBuilder()
                      .setKey("audio_stream0")
                      .setAudioStream(audioStream0))
              .addMuxStreams(
                  MuxStream.newBuilder()
                      .setKey("sd")
                      .setContainer("mp4")
                      .addElementaryStreams("video_stream0")
                      .addElementaryStreams("audio_stream0")
                      .build())
              .addSpriteSheets(smallSpriteSheet) // Add the spritesheet config to the job config
              .addSpriteSheets(largeSpriteSheet) // Add the spritesheet config to the job config
              .build();

      var createJobRequest =
          CreateJobRequest.newBuilder()
              .setJob(
                  Job.newBuilder()
                      .setInputUri(inputUri)
                      .setOutputUri(outputUri)
                      .setConfig(config)
                      .build())
              .setParent(LocationName.of(projectId, location).toString())
              .build();

      // Send the job creation request and process the response.
      Job job = transcoderServiceClient.createJob(createJobRequest);
      System.out.println("Job: " + job.getName());
    }
  }
}

The input of a transcoding job is obtained from Cloud Storage.
So, to launch the transcoding process with this code, you may want to define a Cloud Function that responds to the creation of a video, i.e. to the object finalize event (https://cloud.google.com/functions/docs/calling/storage).
You can find an example of a function that handles this kind of event in the GCP documentation:

import com.google.cloud.functions.CloudEventsFunction;
import com.google.events.cloud.storage.v1.StorageObjectData;
import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.util.JsonFormat;
import io.cloudevents.CloudEvent;
import java.nio.charset.StandardCharsets;
import java.util.logging.Logger;

public class HelloGcs implements CloudEventsFunction {
  private static final Logger logger = Logger.getLogger(HelloGcs.class.getName());

  @Override
  public void accept(CloudEvent event) throws InvalidProtocolBufferException {
    logger.info("Event: " + event.getId());
    logger.info("Event Type: " + event.getType());

    if (event.getData() == null) {
      logger.warning("No data found in cloud event payload!");
      return;
    }

    String cloudEventData = new String(event.getData().toBytes(), StandardCharsets.UTF_8);
    StorageObjectData.Builder builder = StorageObjectData.newBuilder();
    JsonFormat.parser().merge(cloudEventData, builder);
    StorageObjectData data = builder.build();

    logger.info("Bucket: " + data.getBucket());
    logger.info("File: " + data.getName());
    logger.info("Metageneration: " + data.getMetageneration());
    logger.info("Created: " + data.getTimeCreated());
    logger.info("Updated: " + data.getUpdated());
  }
}


The final code in the function could look similar to this (please forgive any inaccuracies, I am just trying to combine the two examples):

import com.google.cloud.functions.CloudEventsFunction;
import com.google.events.cloud.storage.v1.StorageObjectData;
import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.util.JsonFormat;
import io.cloudevents.CloudEvent;
import java.nio.charset.StandardCharsets;
import java.util.logging.Logger;

import com.google.cloud.video.transcoder.v1.AudioStream;
import com.google.cloud.video.transcoder.v1.CreateJobRequest;
import com.google.cloud.video.transcoder.v1.ElementaryStream;
import com.google.cloud.video.transcoder.v1.Input;
import com.google.cloud.video.transcoder.v1.Job;
import com.google.cloud.video.transcoder.v1.JobConfig;
import com.google.cloud.video.transcoder.v1.LocationName;
import com.google.cloud.video.transcoder.v1.MuxStream;
import com.google.cloud.video.transcoder.v1.Output;
import com.google.cloud.video.transcoder.v1.SpriteSheet;
import com.google.cloud.video.transcoder.v1.TranscoderServiceClient;
import com.google.cloud.video.transcoder.v1.VideoStream;
import java.io.IOException;

public class TranscodingFunction implements CloudEventsFunction {
  private static final Logger logger = Logger.getLogger(TranscodingFunction.class.getName());

  public static final String smallSpritesheetFilePrefix = "small-sprite-sheet";
  public static final String largeSpritesheetFilePrefix = "large-sprite-sheet";
  public static final String spritesheetFileSuffix = "0000000000.jpeg";

  String projectId = "my-project-id";
  String location = "us-central1";

  @Override
  public void accept(CloudEvent event) throws InvalidProtocolBufferException, IOException {
    logger.info("Event: " + event.getId());
    logger.info("Event Type: " + event.getType());

    if (event.getData() == null) {
      logger.warning("No data found in cloud event payload!");
      return;
    }

    String cloudEventData = new String(event.getData().toBytes(), StandardCharsets.UTF_8);
    StorageObjectData.Builder builder = StorageObjectData.newBuilder();
    JsonFormat.parser().merge(cloudEventData, builder);
    StorageObjectData data = builder.build();

    logger.info("Bucket: " + data.getBucket());
    logger.info("File: " + data.getName());
    logger.info("Metageneration: " + data.getMetageneration());
    logger.info("Created: " + data.getTimeCreated());
    logger.info("Updated: " + data.getUpdated());

    String inputUri = "gs://" + data.getBucket() + "/" + data.getName();
    String outputUri = "gs://my-bucket/my-output-folder/";

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (TranscoderServiceClient transcoderServiceClient = TranscoderServiceClient.create()) {

      VideoStream videoStream0 =
          VideoStream.newBuilder()
              .setH264(
                  VideoStream.H264CodecSettings.newBuilder()
                      .setBitrateBps(550000)
                      .setFrameRate(60)
                      .setHeightPixels(360)
                      .setWidthPixels(640))
              .build();

      AudioStream audioStream0 =
          AudioStream.newBuilder().setCodec("aac").setBitrateBps(64000).build();

      // Generates a 10x10 spritesheet of small images from the input video. To preserve the source
      // aspect ratio, you should set the spriteWidthPixels field or the spriteHeightPixels
      // field, but not both.
      SpriteSheet smallSpriteSheet =
          SpriteSheet.newBuilder()
              .setFilePrefix(smallSpritesheetFilePrefix)
              .setSpriteHeightPixels(32)
              .setSpriteWidthPixels(64)
              .setColumnCount(10)
              .setRowCount(10)
              .setTotalCount(100)
              .build();

      // Generates a 10x10 spritesheet of larger images from the input video.
      SpriteSheet largeSpriteSheet =
          SpriteSheet.newBuilder()
              .setFilePrefix(largeSpritesheetFilePrefix)
              .setSpriteHeightPixels(72)
              .setSpriteWidthPixels(128)
              .setColumnCount(10)
              .setRowCount(10)
              .setTotalCount(100)
              .build();

      JobConfig config =
          JobConfig.newBuilder()
              .addInputs(Input.newBuilder().setKey("input0").setUri(inputUri))
              .setOutput(Output.newBuilder().setUri(outputUri))
              .addElementaryStreams(
                  ElementaryStream.newBuilder()
                      .setKey("video_stream0")
                      .setVideoStream(videoStream0))
              .addElementaryStreams(
                  ElementaryStream.newBuilder()
                      .setKey("audio_stream0")
                      .setAudioStream(audioStream0))
              .addMuxStreams(
                  MuxStream.newBuilder()
                      .setKey("sd")
                      .setContainer("mp4")
                      .addElementaryStreams("video_stream0")
                      .addElementaryStreams("audio_stream0")
                      .build())
              .addSpriteSheets(smallSpriteSheet) // Add the spritesheet config to the job config
              .addSpriteSheets(largeSpriteSheet) // Add the spritesheet config to the job config
              .build();

      var createJobRequest =
          CreateJobRequest.newBuilder()
              .setJob(
                  Job.newBuilder()
                      .setInputUri(inputUri)
                      .setOutputUri(outputUri)
                      .setConfig(config)
                      .build())
              .setParent(LocationName.of(projectId, location).toString())
              .build();

      // Send the job creation request and process the response.
      Job job = transcoderServiceClient.createJob(createJobRequest);
      System.out.println("Job: " + job.getName());
    }
  }
}


The event trigger and the function should be configured properly.
Note that transcoding jobs are asynchronous, so you may need some additional work to read the results.
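For instance, a minimal polling sketch (the fully qualified job name comes from the createJob() response; a plain sketch, not a definitive implementation):

import com.google.cloud.video.transcoder.v1.Job;
import com.google.cloud.video.transcoder.v1.TranscoderServiceClient;
import java.io.IOException;

public class PollJobState {

  // Polls the transcoding job until it leaves the PENDING/RUNNING states.
  // jobName is the fully qualified name returned by createJob(), e.g.
  // "projects/PROJECT/locations/LOCATION/jobs/JOB_ID".
  public static Job.ProcessingState waitForJob(String jobName)
      throws IOException, InterruptedException {
    try (TranscoderServiceClient client = TranscoderServiceClient.create()) {
      while (true) {
        Job job = client.getJob(jobName);
        Job.ProcessingState state = job.getState();
        if (state != Job.ProcessingState.PENDING
            && state != Job.ProcessingState.RUNNING) {
          return state; // SUCCEEDED or FAILED
        }
        Thread.sleep(10_000); // wait 10 seconds between polls
      }
    }
  }
}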
Although probably more complex, this blog post and its companion one, which build on the same idea, may also be helpful.

vwoqyblh · Answer 2

Creating thumbnails or JPEG images from videos stored in Google Cloud Storage takes several steps, and may require more than just Java to accomplish. Typically, you would use media-processing software such as FFmpeg to generate frames from the video.
However, using FFmpeg or similar tools directly in App Engine can be tricky, because installing and executing third-party binaries is restricted in the App Engine environment. So the typical solution involves using Google Cloud Functions or Google Cloud Run to execute the FFmpeg commands, with App Engine triggering those functions.
That means:

  • Upload the video to Google Cloud Storage. This can be done by the application running on App Engine.
  • Create a Google Cloud Function or Cloud Run instance that listens for the upload event. The function would use a tool like FFmpeg to generate an image from the video, and save the image back to a Google Cloud Storage bucket.
  • When a video is uploaded, trigger the Cloud Function / Cloud Run instance.

Below is a Python example of a Google Cloud Function that could do this (it is not Java, but it gives you an idea of what the function needs to do).
(Adapted from brown-mida/elvo/etl/processed_dag.py#to_public_png())
The function is triggered when a new video is uploaded; it uses FFmpeg to grab an image from the video, then stores the image in the bucket:

import os
import tempfile
from google.cloud import storage
import subprocess

def generate_thumbnail(data, context):
    file_data = data
    file_name = file_data['name']
    bucket_name = file_data['bucket']

    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(file_name)

    _, temp_local_filename = tempfile.mkstemp()
    
    blob.download_to_filename(temp_local_filename)
    print(f"Video downloaded to {temp_local_filename}.")

    thumbnail_file = temp_local_filename + '.png'
    
    command = f"ffmpeg -i {temp_local_filename} -ss 00:00:01 -vframes 1 {thumbnail_file}"
    subprocess.call(command, shell=True)

    print(f"Thumbnail created at {thumbnail_file}.")

    thumbnail_blob = bucket.blob(file_name + '.png')
    thumbnail_blob.upload_from_filename(thumbnail_file)

    print(f"Thumbnail uploaded to {bucket_name}/{file_name}.png")

    os.remove(temp_local_filename)
    os.remove(thumbnail_file)

With that setup, the work of generating the thumbnail is offloaded from the Java App Engine service to a Cloud Function. This provides a scalable solution for creating video thumbnails without consuming resources of the App Engine service.
You can create Cloud Functions with various runtimes, such as Python, Node.js, Java, Go, and .NET. Python is used here because the FFmpeg interaction is simpler.
Warning: such operations may incur additional costs, due to the use of Cloud Functions / Cloud Run and the extra storage needed for the generated images.
Finally, if you need to stick with Java and use only App Engine, consider the App Engine flexible environment, which would allow you to use FFmpeg or other tools.
Creating a Cloud Function or Cloud Run instance in Java for this task is more involved than with a scripting language like Python, because Java does not lend itself as naturally to invoking shell commands like FFmpeg.
However, it is perfectly possible to create a Cloud Run instance with FFmpeg if you use a custom Dockerfile and run the shell command from Java. This is typically done with Cloud Run rather than a Cloud Function, because Cloud Run lets you use a custom Dockerfile to include FFmpeg in the runtime.
For example, here is a simple Java application that extracts a frame from a video using FFmpeg. It could serve as the basis for a Cloud Run instance.

import java.io.IOException;

public class Main {
    public static void main(String[] args) {
        String videoPath = "/path/to/video.mp4";
        String imagePath = "/path/to/output.png";

        try {
            // Extract one frame at the 1-second mark of the video.
            // ProcessBuilder passes the arguments directly, with no shell involved.
            Process process = new ProcessBuilder(
                    "ffmpeg", "-i", videoPath, "-ss", "00:00:01", "-vframes", "1", imagePath)
                .inheritIO()
                .start();
            int exitCode = process.waitFor();
            if (exitCode == 0) {
                System.out.println("Frame extracted successfully!");
            } else {
                System.err.println("FFmpeg exited with code " + exitCode);
            }
        } catch (IOException | InterruptedException e) {
            e.printStackTrace();
        }
    }
}


(Note: the Java process will need the permissions required to execute the FFmpeg command and to access the files.)
In a real application, this should be wrapped in a web server (such as Spring Boot) to handle HTTP requests and responses. The video would be downloaded from Google Cloud Storage to a temporary file, the FFmpeg command run to generate the frame, and the image file then uploaded back to Google Cloud Storage. You will also want error handling and logging, to make sure the process runs smoothly and any problem can be diagnosed.
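
A hedged sketch of that round trip, reusing the Cloud Storage client from the question (the bucket/object names and the FFmpeg options are placeholders, not a definitive implementation):

import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.nio.file.Files;
import java.nio.file.Path;

public class ThumbnailRoundTrip {

    public static void createThumbnail(String bucket, String objectName) throws Exception {
        Storage storage = StorageOptions.getDefaultInstance().getService();

        // 1. Download the video object to a temporary file.
        Path video = Files.createTempFile("video", ".mp4");
        storage.get(BlobId.of(bucket, objectName)).downloadTo(video);

        // 2. Extract one frame at the 1-second mark with FFmpeg
        //    (-y overwrites the empty temp file created above).
        Path frame = Files.createTempFile("frame", ".png");
        int exitCode = new ProcessBuilder(
                "ffmpeg", "-y", "-i", video.toString(),
                "-ss", "00:00:01", "-vframes", "1", frame.toString())
            .inheritIO().start().waitFor();
        if (exitCode != 0) {
            throw new IllegalStateException("FFmpeg exited with code " + exitCode);
        }

        // 3. Upload the frame next to the original object.
        BlobId thumbId = BlobId.of(bucket, objectName + ".png");
        storage.create(
            BlobInfo.newBuilder(thumbId).setContentType("image/png").build(),
            Files.readAllBytes(frame));

        // Clean up the temporary files.
        Files.delete(video);
        Files.delete(frame);
    }
}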
As mentioned in the comments, you also have JCodec, a pure-Java implementation of video encoders and decoders that includes the ability to extract frames from a video. You can use it on the Google App Engine standard environment.
Here is how you could extract a frame from a video with JCodec:

import org.jcodec.api.JCodecException;
import org.jcodec.api.awt.AWTFrameGrab;

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class FrameGrabber {

    // Extracts the frame at the given position (in seconds) and saves it as a PNG.
    public static void getFrame(String videoPath, String framePath, double second)
            throws IOException, JCodecException {
        BufferedImage frame = AWTFrameGrab.getFrame(new File(videoPath), second);
        ImageIO.write(frame, "png", new File(framePath));
    }

    public static void main(String[] args) {
        try {
            String videoPath = "path/to/your/video.mp4";
            String framePath = "path/to/save/frame.png";
            getFrame(videoPath, framePath, 1.0);
            System.out.println("Frame extracted successfully!");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}


You can use the getFrame method, which opens the video file, grabs the frame at the specified position (a double giving the number of seconds into the video), and saves the frame to a PNG file.
You will need to adapt this code to work with your Blob object: probably first download the blob to a temporary file in the App Engine environment, then extract the frame with JCodec, and finally upload the resulting image file back to Google Cloud Storage. Note that Google App Engine restricts file-system access, but you can write files to the temporary directory (/tmp).
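For instance, a sketch of that adaptation under those constraints (bucket and object names are placeholders; it reuses the FrameGrabber class above):

import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class BlobFrameGrabber {

    // Downloads the blob to /tmp, grabs the frame at ~1s with JCodec,
    // and uploads the resulting PNG next to the original object.
    public static void thumbnailFromBlob(String bucket, String objectName) throws Exception {
        Storage storage = StorageOptions.getDefaultInstance().getService();

        Path video = Paths.get("/tmp", "input-video.mp4");
        storage.get(BlobId.of(bucket, objectName)).downloadTo(video);

        Path framePath = Paths.get("/tmp", "frame.png");
        FrameGrabber.getFrame(video.toString(), framePath.toString(), 1.0);

        BlobId thumbId = BlobId.of(bucket, objectName + ".png");
        storage.create(
            BlobInfo.newBuilder(thumbId).setContentType("image/png").build(),
            Files.readAllBytes(framePath));

        Files.delete(video);
        Files.delete(framePath);
    }
}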
Since decoding video frames is a CPU-intensive task that may take some time depending on the size and length of the video, be careful with the resource and execution-time limits of your App Engine instances.
