websocket Google Speech-To-Text v2 not accepting audio in Node.JS

m2xkgtsf  posted 12 months ago  in Go
Follow (0) | Answers (1) | Views (112)

For several days I have been trying to migrate to Google STT v2 with Node.JS. In v1 everything worked perfectly. I created a recognizer and built a script based on https://github.com/GoogleCloudPlatform/nodejs-docs-samples/blob/main/speech/transcribeStreaming.v2.js
My goal is to transcribe audio from Twilio phone calls: I use Twilio's websockets to connect over WSS and stream the audio data, which I then pass to Google's streaming recognition. My code looks like this:

const speech = require('@google-cloud/speech').v2;
const fs = require('fs');

const client = new speech.SpeechClient({
  keyFilename: './googlecreds.json',
  apiEndpoint: 'eu-speech.googleapis.com'
});

const recognizerName = "projects/12345678910/locations/eu/recognizers/name";

const recognitionConfig = {
  autoDecodingConfig: {},
};

const streamingConfig = {
  config: recognitionConfig,
};

const configRequest = {
  recognizer: recognizerName,
  streamingConfig: streamingConfig,
};

const express = require('express');
const bodyParser = require('body-parser');
const app = express();
app.use(bodyParser.urlencoded({ extended: true }));

// Load your key and certificate
const privateKey = fs.readFileSync('location', 'utf8');
const certificate = fs.readFileSync('location', 'utf8');
const ca = fs.readFileSync('location', 'utf8');

const credentials = {
  key: privateKey,
  cert: certificate,
  ca: ca
};

//wss
const WebSocket = require('ws');
const https = require('https');
const server = https.createServer(credentials, app);
const wss = new WebSocket.Server({ 
  server: server, 
  path: '/stream',
});

wss.on("connection", async function connection(ws) {
    let recognizeStream = null;
    ws.on("message", function incoming(message) {
        const msg = JSON.parse(message);
        switch (msg.event) {
            case "start":
                recognizeStream = client
                ._streamingRecognize()
                .on('data', response => {
                  const {results} = response;
                  console.log(results[0].alternatives[0].transcript);
                })
                .on('error', err => {
                  console.error(err.message);
                })
                recognizeStream.write(configRequest);
                break;
            case "media":
                // Write the raw media data to the recognize stream
                recognizeStream.write({audio: msg.media.payload});
                break;
            case "stop":
                // Stop the recognize stream
                recognizeStream.end();
                break;
        }
    });
});

app.post('/voice', (req, res) => {
  const twiml = `
<Response>
    <Say>talk now</Say>
    <Connect>
        <Stream url="wss://my.domain.com/stream"/>
    </Connect>
    <Pause length="60"/>
</Response>
`
  res.type('text/xml');
  res.send(twiml);
});

const port = process.env.PORT || 8080;
server.listen(port, '0.0.0.0', () => {
  console.log(`Server running on port ${port}`);
});

The stream connects and the config is written without errors. In the "media" case I can log the msg.media.payload received from Twilio, but writing it to the recognize stream does nothing, and I never get a response. I don't know what else to try.

nhhxz33t1#

I was working on the same feature and managed to solve it. Two fixes:
1. Configuration

const recognitionConfig = {
  explicitDecodingConfig: {
    encoding: 'MULAW',
    sampleRateHertz: 8000,
    audioChannelCount: 1
  }
}
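Assembled into the first request written on the stream, the corrected config would look roughly like this (a sketch; the recognizer path is the placeholder from the question, and the field names follow the v2 StreamingRecognizeRequest shape used above):

```javascript
// Sketch: how the explicit decoding config slots into the first
// request written on the v2 streaming call.
const recognizerName = 'projects/12345678910/locations/eu/recognizers/name';

const recognitionConfig = {
  explicitDecodingConfig: {
    encoding: 'MULAW',        // Twilio <Stream> sends 8 kHz mono mu-law
    sampleRateHertz: 8000,
    audioChannelCount: 1,
  },
};

const configRequest = {
  recognizer: recognizerName,
  streamingConfig: { config: recognitionConfig },
};

console.log(configRequest.streamingConfig.config.explicitDecodingConfig.encoding);
```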

2. Buffer conversion

const buffer = Buffer.from(msg.media.payload, 'base64')
recognizeStream?.write({ audio: buffer })
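The second fix matters because Twilio delivers msg.media.payload as a base64 string, while the recognize stream expects raw audio bytes. A minimal sketch of the difference (the 3-byte payload here is fabricated to stand in for a real mu-law frame):

```javascript
// Fake 3-byte mu-law frame, base64-encoded the way Twilio sends it.
const payload = Buffer.from([0xff, 0x7f, 0x00]).toString('base64');

const audio = Buffer.from(payload, 'base64'); // decoded bytes: what should be written
const wrong = Buffer.from(payload, 'utf8');   // what writing the raw string amounts to

// The decoded buffer restores the original 3 bytes; the undecoded
// string is 4 bytes of base64 text, which the recognizer cannot parse.
console.log(Buffer.isBuffer(audio), audio.length, wrong.length); // true 3 4
```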
