How to add Picture in Picture (PiP) to a WebRTC video call in iOS (Swift)

qnakjoqk · posted on 2023-08-08 · iOS
Follow (0) | Answers (2) | Views (560)

We used the following steps to integrate PiP (Picture in Picture) for a WebRTC video call:
1. We enabled the Audio, AirPlay, and Picture in Picture background mode capability in the project.
2. We added the entitlement for accessing the camera while multitasking (see Accessing the Camera While Multitasking).
3. From that documentation link, we followed:

Provision Your App

After your account has permission to use the entitlement, you can create a new provisioning profile that uses it by following these steps:
1. Log in to your Apple Developer account.
2. Go to Certificates, Identifiers & Profiles.
3. Generate a new provisioning profile for your app.
4. Select the Multitasking Camera Access entitlement from the additional entitlements for your account.
4. We also integrated the link below, but we found no concrete guidance on how to add a video rendering layer view inside this SampleBufferVideoCallView: https://developer.apple.com/documentation/avkit/adopting_picture_in_picture_for_video_calls?changes=__8
5. Also, RTCMTLVideoView creates an MTKView, which is not supported, so we used WebRTC's default video rendering view, RTCEAGLVideoView, which is backed by a GLKView.
PiP integration code with WebRTC in iOS (Swift):

class SampleBufferVideoCallView: UIView {
    override class var layerClass: AnyClass {
        get { return AVSampleBufferDisplayLayer.self }
    }
    
    var sampleBufferDisplayLayer: AVSampleBufferDisplayLayer {
        return layer as! AVSampleBufferDisplayLayer
    }
}

func startPIP() {
    if #available(iOS 15.0, *) {
        let sampleBufferVideoCallView = SampleBufferVideoCallView()
        let pipVideoCallViewController = AVPictureInPictureVideoCallViewController()
        pipVideoCallViewController.preferredContentSize = CGSize(width: 1080, height: 1920)
        pipVideoCallViewController.view.addSubview(sampleBufferVideoCallView)
        
        let remoteVideoRenderer = RTCEAGLVideoView()
        remoteVideoRenderer.contentMode = .scaleAspectFill
        remoteVideoRenderer.frame = viewUser.frame
        viewUser.addSubview(remoteVideoRenderer)
        
        let pipContentSource = AVPictureInPictureController.ContentSource(
            activeVideoCallSourceView: self.viewUser,
            contentViewController: pipVideoCallViewController)
        
        let pipController = AVPictureInPictureController(contentSource: pipContentSource)
        pipController.canStartPictureInPictureAutomaticallyFromInline = true
        pipController.delegate = self
        
    } else {
        // Fallback on earlier versions
    }
}

How can we add the viewUser GLKView to the pipContentSource, and how can we integrate the remote video buffer view into the SampleBufferVideoCallView?
Is it possible to render the video buffer layer view in an AVSampleBufferDisplayLayer this way, or in some other way?


f0brbegy1#

When asked about this issue, Apple gave the following advice:
To make a recommendation, we would need to know more about the code you are using to try to render the video.
As discussed in the article you mentioned, to provide PiP support you must first provide a source view to display in the video-call view controller: you need to add a UIView to the AVPictureInPictureVideoCallViewController. The system supports displaying content from either an AVPlayerLayer or an AVSampleBufferDisplayLayer, depending on your needs. MTKView/GLKView is not supported. Video-calling apps need to display the remote view, so use an AVSampleBufferDisplayLayer to do that.
To handle drawing in the source view, you can access the buffer stream before it is turned into the GLKView and feed it to the content of the AVPictureInPictureViewController. For example, you can create CVPixelBuffers from the video-feed frames and then create CMSampleBuffers from those. Once you have CMSampleBuffers, you can start enqueuing those frames onto an AVSampleBufferDisplayLayer for display. Look at the methods defined there to see how this is done. There is some archived Objective-C sample code, AVGreenScreenPlayer, that can help you get started with AVSampleBufferDisplayLayer (note: it is Mac code, but the AVSampleBufferDisplayLayer APIs are the same across platforms).
In addition, to implement PiP support you need to provide delegate methods for AVPictureInPictureControllerDelegate and, for the AVSampleBufferDisplayLayer, AVPictureInPictureSampleBufferPlaybackDelegate. For more on the AVPictureInPictureSampleBufferPlaybackDelegate delegate, see the recent WWDC video "What's new in AVKit".
But I'm not sure whether this actually solves the problem.
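
For illustration, minimal conformances to the two delegate protocols mentioned above might look like the sketch below (this is not Apple's sample code; CallViewController is an assumed name for the view controller that owns the AVPictureInPictureController). Note that AVPictureInPictureSampleBufferPlaybackDelegate is only wired up when the ContentSource is created with init(sampleBufferDisplayLayer:playbackDelegate:); the video-call ContentSource used in the question does not take a playback delegate.

import AVKit
import CoreMedia

extension CallViewController: AVPictureInPictureControllerDelegate {
    func pictureInPictureControllerWillStartPictureInPicture(_ pictureInPictureController: AVPictureInPictureController) {
        // Start (or keep) feeding remote frames into the AVSampleBufferDisplayLayer.
    }

    func pictureInPictureControllerDidStopPictureInPicture(_ pictureInPictureController: AVPictureInPictureController) {
        // Resume rendering in the full-screen in-app view.
    }

    func pictureInPictureController(_ pictureInPictureController: AVPictureInPictureController,
                                    failedToStartPictureInPictureWithError error: Error) {
        print("PiP failed to start: \(error)")
    }
}

@available(iOS 15.0, *)
extension CallViewController: AVPictureInPictureSampleBufferPlaybackDelegate {
    func pictureInPictureController(_ pictureInPictureController: AVPictureInPictureController, setPlaying playing: Bool) {
        // A live call cannot be paused; mute/unmute could be handled here instead.
    }

    func pictureInPictureControllerTimeRangeForPlayback(_ pictureInPictureController: AVPictureInPictureController) -> CMTimeRange {
        // An indefinite range tells the system this is live content.
        return CMTimeRange(start: .negativeInfinity, duration: .positiveInfinity)
    }

    func pictureInPictureControllerIsPlaybackPaused(_ pictureInPictureController: AVPictureInPictureController) -> Bool {
        return false
    }

    func pictureInPictureController(_ pictureInPictureController: AVPictureInPictureController, didTransitionToRenderSize newRenderSize: CMVideoDimensions) {
        // Optionally adapt the rendered video to the new PiP size.
    }

    func pictureInPictureController(_ pictureInPictureController: AVPictureInPictureController, skipByInterval skipInterval: CMTime, completion completionHandler: @escaping () -> Void) {
        completionHandler() // Seeking does not apply to a live call.
    }
}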


wn9m85ua2#

To show Picture in Picture (PiP) in a WebRTC video call using the code provided, follow these steps:
Step 1: Initialize the WebRTC video call. Make sure you have already set up the WebRTC video call and established the necessary signaling and peer connection. This code assumes you already have a remoteVideoTrack representing the video stream received from the remote user.
Step 2: Create a FrameRenderer object. Instantiate a FrameRenderer object, which will be responsible for rendering the video frames received from the remote user for the PiP display.
// Add this where you initialize the video call (before rendering starts)

var frameRenderer: FrameRenderer?

Step 3: Render the remote video into the FrameRenderer. In the renderRemoteVideo function, attach the FrameRenderer object to the remoteVideoTrack so its video frames are rendered for the PiP view.

func renderRemoteVideo(to renderer: RTCVideoRenderer) {
    // Make sure you have already initialized the remoteVideoTrack from the WebRTC video call.

    if frameRenderer == nil {
        frameRenderer = FrameRenderer(uID: recUserID)
    }

    self.remoteVideoTrack?.add(frameRenderer!)
}


Step 4: Remove the FrameRenderer from the remote video rendering. In the removeRenderRemoteVideo function, detach the FrameRenderer object from the video-frame rendering when you want to stop the PiP display.

func removeRenderRemoteVideo(to renderer: RTCVideoRenderer) {
    if frameRenderer != nil {
        self.remoteVideoTrack?.remove(frameRenderer!)
    }
}


Step 5: Define the FrameRenderer class. The FrameRenderer class is responsible for rendering the video frames received from WebRTC into the PiP view.

// Import required frameworks
import Foundation
import UIKit        // UIImage
import CoreImage    // CIContext / CIImage
import CoreMedia    // CMSampleBuffer / CMSampleTimingInfo
import ImageIO      // CGImagePropertyOrientation
import WebRTC
import AVKit
import VideoToolbox
import Accelerate
import libwebp

// Define closure type for handling CMSampleBuffer, orientation, scaleFactor, and userID
typealias CMSampleBufferRenderer = (CMSampleBuffer, CGImagePropertyOrientation, CGFloat, Int) -> ()

// Define closure variables for handling CMSampleBuffer from FrameRenderer
var getCMSampleBufferFromFrameRenderer: CMSampleBufferRenderer = { _,_,_,_ in }
var getCMSampleBufferFromFrameRendererForPIP: CMSampleBufferRenderer = { _,_,_,_ in }
var getLocalVideoCMSampleBufferFromFrameRenderer: CMSampleBufferRenderer = { _,_,_,_ in }

// Define the FrameRenderer class responsible for rendering video frames
class FrameRenderer: NSObject, RTCVideoRenderer {
// VARIABLES
var scaleFactor: CGFloat?
var recUserID: Int = 0
var frameImage = UIImage()
var videoFormatDescription: CMFormatDescription?
var didGetFrame: ((CMSampleBuffer) -> ())?
private var ciContext = CIContext()

init(uID: Int) {
    super.init()
    recUserID = uID
}

// Set the aspect ratio based on the size
func setSize(_ size: CGSize) {
    self.scaleFactor = size.height > size.width ? size.height / size.width : size.width / size.height
}

// Render a video frame received from WebRTC
func renderFrame(_ frame: RTCVideoFrame?) {
    guard let pixelBuffer = self.getCVPixelBuffer(frame: frame) else {
        return
    }

    // Extract timing information from the frame and create a CMSampleBuffer
    let timingInfo = covertFrameTimestampToTimingInfo(frame: frame)!
    let cmSampleBuffer = self.createSampleBufferFrom(pixelBuffer: pixelBuffer, timingInfo: timingInfo)!

    // Determine the video orientation and handle the CMSampleBuffer accordingly
    let oriented: CGImagePropertyOrientation?
    switch frame!.rotation.rawValue {
    case RTCVideoRotation._0.rawValue:
        oriented = .right
    case RTCVideoRotation._90.rawValue:
        oriented = .right
    case RTCVideoRotation._180.rawValue:
        oriented = .right
    case RTCVideoRotation._270.rawValue:
        oriented = .left
    default:
        oriented = .right
    }

    // Pass the CMSampleBuffer to the appropriate closure based on the user ID
    if objNewUserDM?.userId == self.recUserID {
        getLocalVideoCMSampleBufferFromFrameRenderer(cmSampleBuffer, oriented!, self.scaleFactor!, self.recUserID)
    } else {
        getCMSampleBufferFromFrameRenderer(cmSampleBuffer, oriented!, self.scaleFactor!, self.recUserID)
        getCMSampleBufferFromFrameRendererForPIP(cmSampleBuffer, oriented!, self.scaleFactor!, self.recUserID)
    }

    // Call the didGetFrame closure if it exists
    if let closure = didGetFrame {
        closure(cmSampleBuffer)
    }
}

// Function to create a CVPixelBuffer from a CIImage
func createPixelBufferFrom(image: CIImage) -> CVPixelBuffer? {
    let attrs = [
        kCVPixelBufferCGImageCompatibilityKey: false,
        kCVPixelBufferCGBitmapContextCompatibilityKey: false,
        kCVPixelBufferWidthKey: Int(image.extent.width),
        kCVPixelBufferHeightKey: Int(image.extent.height)
    ] as CFDictionary

    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(image.extent.width), Int(image.extent.height), kCVPixelFormatType_32BGRA, attrs, &pixelBuffer)

    if status == kCVReturnSuccess {
        self.ciContext.render(image, to: pixelBuffer!)
        return pixelBuffer
    } else {
        // Failed to create a CVPixelBuffer
        portalPrint("Error creating CVPixelBuffer.")
        return nil
    }
}

// Function to create a CVPixelBuffer from a CIImage using an existing CVPixelBuffer
func buffer(from image: CIImage, oldCVPixelBuffer: CVPixelBuffer) -> CVPixelBuffer? {
    let attrs = [
        kCVPixelBufferMetalCompatibilityKey: kCFBooleanTrue,
        kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
        kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue
    ] as CFDictionary

    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(image.extent.width), Int(image.extent.height), kCVPixelFormatType_32BGRA, attrs, &pixelBuffer)

    if status == kCVReturnSuccess {
        oldCVPixelBuffer.propagateAttachments(to: pixelBuffer!)
        return pixelBuffer
    } else {
        // Failed to create a CVPixelBuffer
        portalPrint("Error creating CVPixelBuffer.")
        return nil
    }
}
} // end of class FrameRenderer
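
The renderFrame(_:) method above calls three helpers that are not included in the answer: getCVPixelBuffer(frame:), covertFrameTimestampToTimingInfo(frame:) and createSampleBufferFrom(pixelBuffer:timingInfo:). Below is a minimal, hedged sketch of what they might look like (not the answerer's original code); it only handles frames whose buffer is already an RTCCVPixelBuffer and leaves the I420-to-CVPixelBuffer conversion out.

extension FrameRenderer {

    // Extract a CVPixelBuffer from the incoming RTCVideoFrame.
    func getCVPixelBuffer(frame: RTCVideoFrame?) -> CVPixelBuffer? {
        guard let buffer = frame?.buffer else { return nil }
        if let cvBuffer = buffer as? RTCCVPixelBuffer {
            return cvBuffer.pixelBuffer
        }
        // For RTCI420Buffer you would convert the Y/U/V planes into a CVPixelBuffer
        // here (e.g. with vImage/libyuv); omitted in this sketch.
        return nil
    }

    // Build CMSampleTimingInfo from the frame's capture timestamp (nanoseconds).
    func covertFrameTimestampToTimingInfo(frame: RTCVideoFrame?) -> CMSampleTimingInfo? {
        guard let frame = frame else { return nil }
        let presentationTime = CMTime(value: frame.timeStampNs, timescale: 1_000_000_000)
        return CMSampleTimingInfo(duration: .invalid,
                                  presentationTimeStamp: presentationTime,
                                  decodeTimeStamp: .invalid)
    }

    // Wrap a CVPixelBuffer plus timing info into a CMSampleBuffer that an
    // AVSampleBufferDisplayLayer can enqueue.
    func createSampleBufferFrom(pixelBuffer: CVPixelBuffer, timingInfo: CMSampleTimingInfo) -> CMSampleBuffer? {
        var formatDescription: CMVideoFormatDescription?
        let fdStatus = CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                                    imageBuffer: pixelBuffer,
                                                                    formatDescriptionOut: &formatDescription)
        guard fdStatus == noErr, let format = formatDescription else { return nil }

        var timing = timingInfo
        var sampleBuffer: CMSampleBuffer?
        let sbStatus = CMSampleBufferCreateReadyWithImageBuffer(allocator: kCFAllocatorDefault,
                                                                imageBuffer: pixelBuffer,
                                                                formatDescription: format,
                                                                sampleTiming: &timing,
                                                                sampleBufferOut: &sampleBuffer)
        guard sbStatus == noErr, let buffer = sampleBuffer else { return nil }

        // Mark the frame for immediate display, since this is a live call feed.
        if let attachments = CMSampleBufferGetSampleAttachmentsArray(buffer, createIfNecessary: true),
           CFArrayGetCount(attachments) > 0 {
            let dict = unsafeBitCast(CFArrayGetValueAtIndex(attachments, 0), to: CFMutableDictionary.self)
            CFDictionarySetValue(dict,
                                 Unmanaged.passUnretained(kCMSampleAttachmentKey_DisplayImmediately).toOpaque(),
                                 Unmanaged.passUnretained(kCFBooleanTrue).toOpaque())
        }
        return buffer
    }
}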


Step 6: Implement the PiP functionality. Based on the code provided, you have apparently already implemented PiP using AVPictureInPictureController. Make sure the startPIP function is called when PiP is enabled during the video call. The SampleBufferVideoCallView is used to display the PiP video frames received from the frameRenderer.

/// start PIP Method
fileprivate func startPIP() {
    runOnMainThread() {
        if #available(iOS 15.0, *) {
            if AVPictureInPictureController.isPictureInPictureSupported() {
                let sampleBufferVideoCallView = SampleBufferVideoCallView()
                
                getCMSampleBufferFromFrameRendererForPIP = { [weak self] cmSampleBuffer, videosOrientation, scalef, userId  in
                    guard let weakself = self else {
                        return
                    }
                    if weakself.viewModel != nil {
                        if objNewUserDM?.userId != userId && weakself.viewModel.pipUserId == userId {
                            runOnMainThread {
                                sampleBufferVideoCallView.sampleBufferDisplayLayer.enqueue(cmSampleBuffer)
                            }
                        }
                    }
                }
                
                sampleBufferVideoCallView.contentMode = .scaleAspectFit
                
                self.pipVideoCallViewController = AVPictureInPictureVideoCallViewController()
                
                // Pretty much just for aspect ratio, normally used for pop-over
                self.pipVideoCallViewController.preferredContentSize = CGSize(width: 1080, height: 1920)
                
                self.pipVideoCallViewController.view.addSubview(sampleBufferVideoCallView)
                
                sampleBufferVideoCallView.translatesAutoresizingMaskIntoConstraints = false
                let constraints = [
                    sampleBufferVideoCallView.leadingAnchor.constraint(equalTo: self.pipVideoCallViewController.view.leadingAnchor),
                    sampleBufferVideoCallView.trailingAnchor.constraint(equalTo: self.pipVideoCallViewController.view.trailingAnchor),
                    sampleBufferVideoCallView.topAnchor.constraint(equalTo: self.pipVideoCallViewController.view.topAnchor),
                    sampleBufferVideoCallView.bottomAnchor.constraint(equalTo: self.pipVideoCallViewController.view.bottomAnchor)
                ]
                NSLayoutConstraint.activate(constraints)
                
                sampleBufferVideoCallView.bounds = self.pipVideoCallViewController.view.frame
                
                let pipContentSource = AVPictureInPictureController.ContentSource(
                    activeVideoCallSourceView: self.view,
                    contentViewController: self.pipVideoCallViewController
                )
                
                self.pipController = AVPictureInPictureController(contentSource: pipContentSource)
                self.pipController.canStartPictureInPictureAutomaticallyFromInline = true
                self.pipController.delegate = self
                
                print("Is pip supported: \(AVPictureInPictureController.isPictureInPictureSupported())")
                print("Is pip possible: \(self.pipController.isPictureInPicturePossible)")
            }
        } else {
            // Fallback on earlier versions
            print("PIP is not supported in this device")
        }
    }
}
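
Since pipController is retained on the view controller and canStartPictureInPictureAutomaticallyFromInline is set, PiP starts automatically when the app moves to the background. If you also want to toggle it manually (for example from a button), a small sketch using the standard AVPictureInPictureController API follows; togglePIP() is an assumed name, and pipController is assumed to be an optional or implicitly-unwrapped property as in startPIP() above.

@objc fileprivate func togglePIP() {
    guard let pip = self.pipController else { return }
    if pip.isPictureInPictureActive {
        pip.stopPictureInPicture()
    } else if pip.isPictureInPicturePossible {
        pip.startPictureInPicture()
    }
}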


Note: the FrameRenderer object should be defined in your app, and you should make sure the position and size of the PiP view are set correctly to achieve the desired PiP effect. Also, remember to handle the call-end scenario and release the frameRenderer and the WebRTC connection gracefully (a teardown sketch follows below).
Keep in mind that the provided code assumes you already have the necessary WebRTC setup in place and focuses only on the PiP rendering side. Also, PiP is supported from iOS 15.0 onwards, so make sure to handle devices running earlier versions appropriately.
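
A hedged sketch of that call teardown, tying the pieces above together (endCallCleanup() is an assumed name, and pipController/frameRenderer/remoteVideoTrack are assumed to be the properties used earlier):

func endCallCleanup() {
    // Leave PiP if it is still showing.
    if pipController?.isPictureInPictureActive == true {
        pipController?.stopPictureInPicture()
    }

    // Detach the frame renderer from the remote track and release it.
    if let renderer = frameRenderer {
        remoteVideoTrack?.remove(renderer)
    }
    frameRenderer = nil

    // ...then close the peer connection and tear down signaling as usual.
}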
