Swift: SNAudioStreamAnalyzer doesn't stop the sound classification request

brccelvz · posted 2023-01-19 in Swift

I'm a student learning iOS development, currently working on a simple AI project that uses SNAudioStreamAnalyzer to classify an audio stream from the device's microphone. I can start the stream and analyze the audio without any problem, but I've noticed that I can't seem to get my app to stop analyzing and close the audio input stream once it's done. At startup, I initialize the audio engine and create the classification request like this:

private func startAudioEngine() {
        do {
            // start the stream of audio data
            try audioEngine.start()
            let snoreClassifier = try? SnoringClassifier2_0().model
            let classifySoundRequest = try audioAnalyzer.makeRequest(snoreClassifier)
            try streamAnalyzer.add(classifySoundRequest,
                                   withObserver: self.audioAnalyzer)
        } catch {
            print("Unable to start AVAudioEngine: \(error.localizedDescription)")
        }
    }
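
For context, the analyzer is presumably fed from an input-node tap along these lines (this wiring isn't shown in the question; the bus and buffer size here are illustrative assumptions):

private func installAnalysisTap() {
    let inputNode = audioEngine.inputNode
    let format = inputNode.outputFormat(forBus: 0)
    // Forward each microphone buffer to the stream analyzer.
    inputNode.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, time in
        self.streamAnalyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
    }
}

A teardown typically also removes this tap with inputNode.removeTap(onBus: 0) to fully release the microphone.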

After the audio stream has been classified, I try to stop the audio engine and close the stream like this:

private func terminateNight() {
        streamAnalyzer.removeAllRequests()
        audioEngine.stop()
        stopAndSaveNight()
        do {
            let session = AVAudioSession.sharedInstance()
            try session.setActive(false)
        } catch {
            print("unable to terminate audio session")
        }
        nightSummary = true
    }

However, after calling the terminateNight() function, the app keeps using the microphone and classifying incoming audio. Here is my SNResultsObserving implementation:

import CoreML
import SoundAnalysis

class AudioAnalyzer: NSObject, SNResultsObserving {
    var prediction: String?
    var confidence: Double?
    let snoringEventManager: SnoringEventManager
    
    internal init(prediction: String? = nil, confidence: Double? = nil, snoringEventManager: SnoringEventManager) {
        self.prediction = prediction
        self.confidence = confidence
        self.snoringEventManager = snoringEventManager
    }
    
    func makeRequest(_ customModel: MLModel? = nil) throws -> SNClassifySoundRequest {
        if let model = customModel {
            return try SNClassifySoundRequest(mlModel: model)
        } else {
            throw AudioAnalysisErrors.ModelInterpretationError
        }
    }
    
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let classificationResult = result as? SNClassificationResult else { return }
        let topClassification = classificationResult.classifications.first
        self.prediction = topClassification?.identifier
        self.confidence = topClassification?.confidence
        // Compare the optional directly instead of force-unwrapping it.
        if self.prediction == "snoring" {
            self.snoringEventManager.snoringDetected()
        } else {
            self.snoringEventManager.nonSnoringDetected()
        }
    }
    
    func request(_ request: SNRequest, didFailWithError error: Error) {
        print("ended with error \(error)")
    }
    
    func requestDidComplete(_ request: SNRequest) {
        print("request finished")
    }
}

As I understand it, once streamAnalyzer.removeAllRequests() and audioEngine.stop() are called, the app should stop streaming from the microphone and the requestDidComplete function should be called, but that is not the behavior I'm getting.


368yc8dk · Answer #1

From the OP's update:

So I realized this was a SwiftUI problem. I was calling startAudioEngine() in the initializer of the view it was declared in. I thought this would be fine, but because that view is embedded in a parent view, SwiftUI re-initialized it whenever the parent updated, and so called startAudioEngine() again each time. The solution was to call the function in an onAppear block instead, so the audio engine is activated only when the view actually appears, not every time SwiftUI initializes it.
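
A minimal sketch of that pattern, using a hypothetical NightRecorder wrapper around the question's audioEngine/streamAnalyzer code (the type and view names are illustrative, not from the original post):

import SwiftUI

// Hypothetical ObservableObject wrapping the startAudioEngine() /
// terminateNight() methods shown in the question.
final class NightRecorder: ObservableObject {
    func startAudioEngine() { /* start AVAudioEngine, add SNClassifySoundRequest */ }
    func terminateNight() { /* complete analysis, remove requests, stop engine */ }
}

struct NightView: View {
    // @StateObject survives SwiftUI re-creating the view value while
    // diffing, so the recorder is built once, not on every parent update.
    @StateObject private var recorder = NightRecorder()

    var body: some View {
        Text("Analyzing audio…")
            // .onAppear fires when the view appears on screen, not each
            // time SwiftUI initializes the view struct.
            .onAppear { recorder.startAudioEngine() }
            .onDisappear { recorder.terminateNight() }
    }
}

Pairing teardown with .onDisappear ties the engine's lifetime to the view being on screen rather than to SwiftUI's struct lifecycle.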


monwx1rj · Answer #2

I don't think you should expect to receive requestDidComplete just because you removed the request; you should expect it when completeAnalysis is called, as sketched below.
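
A minimal sketch of that ordering, reusing the question's terminateNight() and its streamAnalyzer/audioEngine properties (completeAnalysis() is the SNAudioStreamAnalyzer method that signals the stream is finished):

private func terminateNight() {
    // Signal that no more buffers are coming; this is what drives
    // requestDidComplete(_:) on the observer.
    streamAnalyzer.completeAnalysis()
    streamAnalyzer.removeAllRequests()
    audioEngine.stop()
    stopAndSaveNight()
    do {
        let session = AVAudioSession.sharedInstance()
        try session.setActive(false)
    } catch {
        print("unable to terminate audio session")
    }
    nightSummary = true
}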
