NodeJS: Inserting silence into a Discord.js stream

eufgjt7s asked on 2022-12-18 in Node.js

I'm making a Discord bot with Discord.js v14 that records users' audio both as separate files and as one collective file. Since Discord.js streams don't interpolate silence, my question is how to insert the silence into the streams myself.
My code is based on the Discord.js recording example. Essentially, a privileged user joins a voice channel (or stage) and runs /record, and every user in that channel is recorded until they run /leave.
I've tried Node packages such as combined-stream, audio-mixer, multistream and multipipe, but I'm not familiar enough with Node streams to use each package's strengths to cover the others' weaknesses for this problem (it might require the streams to be continuous, or the sink stream to be applied on top of the silence), or to swap between piping the stream and a silence buffer through some kind of "multi-stream". I would also need to overlay the audio files afterwards (e.g. with ffmpeg).
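For that overlay step I have in mind something along the lines of ffmpeg's amix filter spawned from Node; the sketch below is only an illustration, and the file names are placeholders:

const { spawn } = require('node:child_process');

// Rough sketch: mix the per-user recordings into one collective file
// using ffmpeg's amix filter (file names are placeholders)
const ffmpeg = spawn('ffmpeg', [
    '-i', 'user1.ogg',
    '-i', 'user2.ogg',
    '-filter_complex', 'amix=inputs=2:duration=longest',
    'collective.ogg',
]);
ffmpeg.on('close', code => console.log(`ffmpeg exited with code ${code}`));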
Is it possible for a Readable to wait for audio chunks and push a silence chunk if none arrives within a certain time frame? Below is my attempt at doing so (again, based on the Discord.js recorder example):

const { Readable } = require('node:stream');
const { EndBehaviorType } = require('@discordjs/voice');

// Opus silence frame
// CREDIT TO: https://stackoverflow.com/a/69328242/8387760
const SILENCE = Buffer.from([0xf8, 0xff, 0xfe]);

async function createListeningStream(connection, userId) {
    // Creating manually terminated stream
    let receiverStream = connection.receiver.subscribe(userId, {
        end: {
            behavior: EndBehaviorType.Manual
        },
    });
    
    // Interpolating silence
    // TODO Increases file length over tenfold by stretching audio?
    let userStream = new Readable({
        read() {
            receiverStream.on('data', chunk => {
                if (chunk) {
                    this.push(chunk);
                }
                else {
                    // Never occurs
                    this.push(SILENCE);
                }
            });
        }
    });
    
    /* Piping userStream to file at 48kHz sample rate */
}

As an unnecessary bonus, it would also help to be able to check whether a user ever spoke at all, so that empty recordings aren't created. Thanks in advance.

rm5edbpk 1#

After reading a lot about Node streams, the solution I arrived at was unexpectedly simple.
1. Create a boolean variable recording that is true while recording should continue and false when it should stop.
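A minimal sketch of how this flag might be wired up, assuming it is flipped by the /record and /leave command handlers (the handler names here are only illustrative):

// Shared flag read by the Readable below; toggled by the command handlers
let recording = false;

// Illustrative /record handler
function onRecordCommand() {
    recording = true;
}

// Illustrative /leave handler
function onLeaveCommand() {
    recording = false;
}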
2. Create a buffer to handle backpressure (i.e. when data comes in faster than it is read out):

let buffer = [];

3. Create a Readable stream that the received user audio stream will be piped into:

// New audio stream (with silence)
let userStream = new Readable({
    // ...
});

// User audio stream (without silence)
let receiverStream = connection.receiver.subscribe(userId, {
    end: {
        behavior: EndBehaviorType.Manual,
    },
});
receiverStream.on('data', chunk => buffer.push(chunk));

4. In the stream's read method, drive the recording with a timer that pushes one chunk every 20 ms, matching the frame rate of the 48 kHz user audio stream:

read() {
   if (recording) {
        let delay = new NanoTimer();
        delay.setTimeout(() => {
            if (buffer.length > 0) {
                this.push(buffer.shift());
            }
            else {
                this.push(SILENCE);
            }
        }, '', '20m');
    }
    // ...
}


5. In the same method, also handle the end of the stream:

// ...
        else if (buffer.length > 0) {
            // Stream is ending: sending buffered audio ASAP
            this.push(buffer.shift());
        }
        else {
            // Ending stream
            this.push(null);
        }

Putting it all together:

const NanoTimer = require('nanotimer'); // node
/* import NanoTimer from 'nanotimer'; */ // es6
const { Readable } = require('node:stream');
const { EndBehaviorType } = require('@discordjs/voice');

// Opus silence frame
const SILENCE = Buffer.from([0xf8, 0xff, 0xfe]);

// Toggled elsewhere, e.g. by the /record and /leave commands
let recording = true;

async function createListeningStream(connection, userId) {
    // Accumulates very, very slowly, but only when user is speaking: reduces buffer size otherwise
    let buffer = [];
    
    // Interpolating silence into user audio stream
    let userStream = new Readable({
        read() {
            if (recording) {
                // Pushing audio at the same rate as the receiver
                // (Could probably be replaced with standard, less precise timer)
                let delay = new NanoTimer();
                delay.setTimeout(() => {
                    if (buffer.length > 0) {
                        this.push(buffer.shift());
                    }
                    else {
                        this.push(SILENCE);
                    }
                    // delay.clearTimeout();
                }, '', '20m'); // One push per 20 ms, the length of an Opus frame at 48 kHz
            }
            else if (buffer.length > 0) {
                // Sending buffered audio ASAP
                this.push(buffer.shift());
            }
            else {
                // Ending stream
                this.push(null);
            }
        }
    });
    
    // Redirecting user audio to userStream to have silence interpolated
    let receiverStream = connection.receiver.subscribe(userId, {
        end: {
            behavior: EndBehaviorType.Manual, // Manually closed elsewhere
        },
        // mode: 'pcm',
    });
    receiverStream.on('data', chunk => buffer.push(chunk));
    
    // pipeline(userStream, ...), etc.
}

From here, you can pipe this stream into a file write stream (fileWriteStream, etc.) to suit your purposes. Note that whenever recording = false, it's best to also close receiverStream by executing the following:

connection.receiver.subscriptions.delete(userId);

Likewise, userStream should also be closed if it isn't already, e.g. as the first argument of the pipeline method.
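For reference, that piping step might look something like the Ogg writer in the Discord.js recorder example; the sketch below assumes prism-media is installed, and the output path is a placeholder:

const { createWriteStream } = require('node:fs');
const { pipeline } = require('node:stream');
const { opus } = require('prism-media');

// Package the Opus packets (audio plus interpolated silence) into an Ogg file
const oggStream = new opus.OggLogicalBitstream({
    opusHead: new opus.OpusHead({
        channelCount: 2,
        sampleRate: 48000,
    }),
    pageSizeControl: {
        maxPackets: 10,
    },
});

pipeline(userStream, oggStream, createWriteStream(`./recordings/${userId}.ogg`), err => {
    if (err) {
        console.warn(`Error recording file for ${userId}:`, err);
    }
});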
As a side note, although it's beyond the scope of my original question, there are many other modifications you can make. For example, you can prepend silence to the audio before receiverStream's data is piped into userStream, e.g. to make multiple recordings come out the same length:

// let startTime = ...
let creationTime = Date.now();
// Each SILENCE frame covers 20 ms of audio
for (let t = startTime; t < creationTime; t += 20) {
    buffer.push(SILENCE);
}

Happy coding!
