iOS: How to set a separate transform per video when merging with AVFoundation?

pdkcd3nj · posted 12 months ago · in iOS

I want to merge several videos (all from different sources) into one in Swift with AVFoundation. The resulting video should be in portrait format.

The function I wrote merges the videos into a single video. However, videos recorded on a mobile phone (such as an iPhone) appear to be exported in landscape, while the rest come out in portrait. The landscape clips are then stretched vertically to fit the portrait aspect ratio. It seems the iPhone stores the video as landscape (even when shot in portrait) and relies on metadata so the system displays it as portrait.

To work around this, I tried detecting whether a video is landscape (or otherwise rotated) and manually transforming it to portrait. But when I do that, the transform seems to apply to the whole track, so the entire composition renders in landscape, with some clips in landscape and others in portrait. I don't know how to apply a transform to just one video. I also tried using multiple tracks, but then only one video is shown and the remaining tracks are ignored. Below is an example of the exported video (it should render at 9:16, but after the transform it renders at 16:9; note that the second clip is distorted even though it was originally recorded in portrait).
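
For reference, the orientation check I use boils down to comparing the translation of the track's preferredTransform with its naturalSize; here is a trimmed-down sketch of that logic, pulled out of the full function below:

import AVFoundation
import ImageIO

func orientation(of track: AVAssetTrack) -> CGImagePropertyOrientation {
    // The translation part of the preferred transform, compared with the
    // natural size, indicates how the clip was recorded.
    let t = track.preferredTransform
    let size = track.naturalSize
    if size.width == t.tx && size.height == t.ty { return .down }
    if t.tx == 0 && t.ty == 0 { return .up }
    if t.tx == 0 && t.ty == size.width { return .left }
    return .right
}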

Here is my code:

private static func mergeVideos(
    videoPaths: [URL],
    outputURL: URL,
    handler: @escaping (_ path: URL)-> Void
  ) {
    let videoComposition = AVMutableComposition()
    var lastTime: CMTime = .zero
    
    guard let videoCompositionTrack = videoComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
    
    for path in videoPaths {
      let assetVideo = AVAsset(url: path)
      
      getTracks(assetVideo, .video) { videoTracks in
        // Add video track
        do {
          try videoCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: assetVideo.duration), of: videoTracks[0], at: lastTime)
          
          // Apply the original transform
          if let assetVideoTrack = assetVideo.tracks(withMediaType: AVMediaType.video).last {
            let t = assetVideoTrack.preferredTransform
            let size = assetVideoTrack.naturalSize
            
            let videoAssetOrientation: CGImagePropertyOrientation

            if size.width == t.tx && size.height == t.ty {
              print("down")
              
              videoAssetOrientation = .down
              videoCompositionTrack.preferredTransform = CGAffineTransform(rotationAngle: .pi) // 180 degrees
            } else if t.tx == 0 && t.ty == 0 {
              print("up")
              
              videoCompositionTrack.preferredTransform = assetVideoTrack.preferredTransform
              videoAssetOrientation = .up
            } else if t.tx == 0 && t.ty == size.width {
              print("left")
              
              videoAssetOrientation = .left
              videoCompositionTrack.preferredTransform = CGAffineTransform(rotationAngle: .pi / 2) // 90 degrees to the right

            } else {
              print("right")
              
              videoAssetOrientation = .right
              videoCompositionTrack.preferredTransform = CGAffineTransform(rotationAngle: -.pi / 2) // 90 degrees to the left
            }
          }
          
        } catch {
          print("Failed to insert video track")
          return
        }
        
        self.getTracks(assetVideo, .audio) { audioTracks in
          // Add audio track only if it exists
          if !audioTracks.isEmpty {
            do {
              try videoCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: assetVideo.duration), of: audioTracks[0], at: lastTime)
            } catch {
              print("Failed to insert audio track")
              return
            }
          }
          
          // Update time
          lastTime = CMTimeAdd(lastTime, assetVideo.duration)
        }
      }
    }
        
    guard let exporter = AVAssetExportSession(asset: videoComposition, presetName: AVAssetExportPresetHighestQuality) else { return }
    exporter.outputURL = outputURL
    exporter.outputFileType = AVFileType.mp4
    exporter.shouldOptimizeForNetworkUse = true
    exporter.exportAsynchronously(completionHandler: {
      switch exporter.status {
      case .failed:
        print("Export failed \(exporter.error!)")
      case .completed:
        print("completed export")
        handler(outputURL)
      default:
        break
      }
    })
  }

Does anyone know what I'm missing here? Any help is greatly appreciated.

92vpleto1#

A transform set on the AVMutableCompositionTrack affects the whole track. What you want can be done with an AVVideoComposition, which uses AVVideoCompositionInstruction objects to apply video processing over specific time ranges.
Here is the code with the unimportant parts left out, and with videoComposition renamed to mainComposition to avoid confusion:

private static func mergeVideos(
    videoPaths: [URL],
    outputURL: URL,
    handler: @escaping (_ path: URL)-> Void
) {
    let mainComposition = AVMutableComposition()
    var lastTime: CMTime = .zero
    
    guard let videoCompositionTrack = mainComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoCompositionTrack)
    
    for path in videoPaths {
      let assetVideo = AVAsset(url: path)
      
      getTracks(assetVideo, .video) { videoTracks in
        // Add video track
        do {
          try videoCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: assetVideo.duration), of: videoTracks[0], at: lastTime)
            
          // Apply the original transform
          if let assetVideoTrack = assetVideo.tracks(withMediaType: AVMediaType.video).last {
              let t = assetVideoTrack.preferredTransform
              layerInstruction.setTransform(t, at: lastTime) // apply transform to the track at this time.
          }
          
        } catch {
          print("Failed to insert video track")
          return
        }
        
        // deal with audio part ...
      }
    }
    
    let videoComposition = AVMutableVideoComposition()
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, end: lastTime)
    // Attach the layer instruction that carries the per-clip transforms.
    instruction.layerInstructions = [layerInstruction]
    videoComposition.instructions = [instruction]
    
    guard let exporter = AVAssetExportSession(asset: mainComposition, presetName: AVAssetExportPresetHighestQuality) else { return }
    
    // assign videoComposition to exporter
    exporter.videoComposition = videoComposition
    
    // other export part ...
}
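
For the export to actually succeed, the AVMutableVideoComposition also needs a render size and frame duration, which the snippet above omits. A minimal addition (the size here is only an example for the 9:16 portrait output described in the question):

// Both properties are required before handing the composition to the exporter.
videoComposition.renderSize = CGSize(width: 1080, height: 1920) // example 9:16 portrait size
videoComposition.frameDuration = CMTime(value: 1, timescale: 30) // 30 fps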

PS. You should also add the getTracks(_:_:) method to complete the code.
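
In case it helps, here is one possible (hypothetical) implementation of that helper, inferred from the call sites in the question. The merge loop updates lastTime inside the completion closure and then creates the exporter immediately afterwards, so the closure is assumed to be called synchronously:

private static func getTracks(
    _ asset: AVAsset,
    _ mediaType: AVMediaType,
    completion: ([AVAssetTrack]) -> Void
) {
    // Load the tracks of the requested media type and hand them back synchronously.
    completion(asset.tracks(withMediaType: mediaType))
}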
