Text-to-speech does not work properly in Android Studio

crcmnpdw asked on 2023-02-24 in Android

I recently created a simple project in Android Studio that uses speech recognition and text-to-speech. The problem is that text-to-speech does not speak the given line the first time the app is launched, but after that it works perfectly. For example, in the code below I added lines that welcome the user when the app starts, but the TTS doesn't speak them; yet when I press the Recognize button, the app recognizes the sentence and also speaks it correctly. Why does this happen? It seems strange. I have included the code below; please check it and tell me as soon as possible if I have made a mistake.
Here is my Java code:

package com.maitreyastudio.ai;

import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

import android.Manifest;
import android.annotation.SuppressLint;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.location.Address;
import android.location.Geocoder;
import android.location.Location;
import android.net.Uri;
import android.os.Build;
import android.os.Bundle;
import android.provider.Settings;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.speech.tts.TextToSpeech;
import android.view.MotionEvent;
import android.view.View;
import android.widget.Button;
import android.widget.EditText;
import android.widget.GridLayout;
import android.widget.TextView;

import com.chaquo.python.PyObject;
import com.chaquo.python.Python;
import com.chaquo.python.android.AndroidPlatform;
import com.google.android.gms.location.FusedLocationProviderClient;
import com.google.android.gms.tasks.OnCompleteListener;
import com.google.android.gms.tasks.OnSuccessListener;
import com.google.android.gms.tasks.Task;

import org.w3c.dom.Text;

import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Locale;

import static android.Manifest.permission.ACCESS_FINE_LOCATION;
import static android.Manifest.permission.READ_EXTERNAL_STORAGE;
import static android.Manifest.permission.RECORD_AUDIO;
import static android.Manifest.permission.WRITE_EXTERNAL_STORAGE;

public class MainActivity extends AppCompatActivity {

    private Button btnRecognize;
    private SpeechRecognizer speechRecognizer;
    private TextToSpeech textToSpeech;
    private EditText ET_ShowRecognized;
    String locality;
    private Intent intent;
    private String ET_ShowRecognizedText;
    private String ProcessingText;
    private ArrayList voices;
    private FusedLocationProviderClient fusedLocationProviderClient;
    Geocoder geocoder;
    /*Python py;
    PyObject pyobj;
    PyObject obj;
    String currentDate;
    String currentTime;*/

    @SuppressLint({"SetTextI18n", "ClickableViewAccessibility", "MissingPermission"})
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        ActivityCompat.requestPermissions(this, new String[]{RECORD_AUDIO, WRITE_EXTERNAL_STORAGE, READ_EXTERNAL_STORAGE, ACCESS_FINE_LOCATION}, PackageManager.PERMISSION_GRANTED);

        ET_ShowRecognized = findViewById(R.id.ET_ShowRecognized);
        btnRecognize = findViewById(R.id.btnRecognize);



        /*fusedLocationProviderClient.getLastLocation().addOnCompleteListener(new OnCompleteListener<Location>() {
            @Override
            public void onComplete(@NonNull Task<Location> task) {

                Location location = task.getResult();
                if(location != null){

                    geocoder = new Geocoder(MainActivity.this, Locale.getDefault());
                    try {

                        List<Address> address = geocoder.getFromLocation(location.getLatitude(), location.getLongitude(), 1);
                        locality = address.get(0).getLocality();

                    } catch (IOException e) {
                        ;
                    }

                }
            }
        });

        if(!Python.isStarted()){

            Python.start(new AndroidPlatform(this));

        }
        py = Python.getInstance();
        pyobj = py.getModule("WolframAlpha");
        obj = pyobj.callAttr("main", locality);*/

        textToSpeech = new TextToSpeech(getApplicationContext(), new TextToSpeech.OnInitListener() {
            @Override
            public void onInit(int i) {
                if (i == TextToSpeech.SUCCESS) {

                    textToSpeech.setLanguage(Locale.ENGLISH);

                }

            }
        });

        textToSpeech.speak("Hi you successfully ran me.", TextToSpeech.QUEUE_FLUSH, null, null);

        //currentDate = new SimpleDateFormat("dd-MM-yyyy", Locale.getDefault()).format(new Date());
        //currentTime = new SimpleDateFormat("HH:mm:ss", Locale.getDefault()).format(new Date());
        //textToSpeech.speak("Hi! I am your personal assistant. Today date is something something ", TextToSpeech.QUEUE_FLUSH, null, null);
        //Speak("Today's weather forecast for the current location is " + obj.toString());

        intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);

        speechRecognizer = SpeechRecognizer.createSpeechRecognizer(this);
        speechRecognizer.setRecognitionListener(new RecognitionListener() {
            @Override
            public void onReadyForSpeech(Bundle bundle) {

            }

            @Override
            public void onBeginningOfSpeech() {

            }

            @Override
            public void onRmsChanged(float v) {

            }

            @Override
            public void onBufferReceived(byte[] bytes) {

            }

            @Override
            public void onEndOfSpeech() {

            }

            @Override
            public void onError(int i) {

            }

            @Override
            public void onResults(Bundle bundle) {
                ArrayList<String> matches = bundle.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);

                if (matches != null) {

                    ET_ShowRecognized.setText(matches.get(0));
                    process();

                }
            }

            @Override
            public void onPartialResults(Bundle bundle) {

            }

            @Override
            public void onEvent(int i, Bundle bundle) {

            }
        });

        btnRecognize.setOnTouchListener(new View.OnTouchListener() {

            @Override
            public boolean onTouch(View view, MotionEvent motionEvent) {

                switch (motionEvent.getAction()) {

                    case MotionEvent.ACTION_UP:
                        speechRecognizer.stopListening();

                        break;

                    case MotionEvent.ACTION_DOWN:
                        ET_ShowRecognized.setText(null);
                        ET_ShowRecognized.setText("Listening...");
                        speechRecognizer.startListening(intent);
                        break;
                    default:
                        break;
                }

                return false;
            }
        });

        textToSpeech.speak("Hi! Seems good to meet you.", TextToSpeech.QUEUE_FLUSH, null, null);
    }

    public void process() {

        ProcessingText = ET_ShowRecognized.getText().toString().toLowerCase();

        switch (ProcessingText) {

            case ("hello"):
                textToSpeech.speak("Hello! Hope all is going fine.", TextToSpeech.QUEUE_FLUSH, null, null);
                break;

            case ("hi"):
                textToSpeech.speak("Hi! I hope all is well.", TextToSpeech.QUEUE_FLUSH, null, null);
                break;

            case ("what is your name"):
                textToSpeech.speak("My name is assistant.", TextToSpeech.QUEUE_FLUSH, null, null);
                break;

            case ("bye"):
                finish();
                System.exit(0);

            default:
                textToSpeech.speak(ProcessingText, TextToSpeech.QUEUE_FLUSH, null, null);
                break;
        }

    }

}

Here is my XML code:

<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <Button
        android:id="@+id/btnRecognize"
        style="@style/Widget.AppCompat.Button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/recognize"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

    <EditText
        android:id="@+id/ET_ShowRecognized"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:ems="10"
        android:inputType="textPersonName"
        android:hint="@string/you_will_see_recognized_text_here"
        app:layout_constraintBottom_toTopOf="@+id/btnRecognize"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>

Please help me as soon as possible.
Thanks

pbwdgjma #1

The reason it didn't work before is what DB377 said: TTS initialization is asynchronous, and onInit is only called once that initialization completes, so execution does not necessarily proceed line by line.
You can change your code as follows:

// THIS RUNS FIRST
textToSpeech = new TextToSpeech(getApplicationContext(), new TextToSpeech.OnInitListener() {

    // THIS RUNS THIRD!
    @Override
    public void onInit(int i) {
        if (i == TextToSpeech.SUCCESS) {

            textToSpeech.setLanguage(Locale.ENGLISH);

            // NEW LOCATION
            textToSpeech.speak("Hi you successfully ran me.", TextToSpeech.QUEUE_FLUSH, null, null);

        }

    }
});

// OLD LOCATION (THIS RUNS SECOND) -- the speak() call that used to be here should be removed
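
If you would rather keep the speak() calls in onCreate, a common alternative pattern (only a sketch, not part of the original answer; the ttsReady, pendingUtterance and speakWhenReady names are made up) is to track whether initialization has finished and defer the utterance until onInit fires:

// Hypothetical fields on MainActivity
private boolean ttsReady = false;
private String pendingUtterance = null;

// Either speaks immediately or remembers the text until the engine is ready
private void speakWhenReady(String text) {
    if (ttsReady) {
        textToSpeech.speak(text, TextToSpeech.QUEUE_FLUSH, null, null);
    } else {
        pendingUtterance = text;
    }
}

Inside onInit(int i), right after setLanguage(...), you would then add:

ttsReady = true;
if (pendingUtterance != null) {
    String text = pendingUtterance;
    pendingUtterance = null;
    speakWhenReady(text);
}

and in onCreate call speakWhenReady("Hi you successfully ran me.") instead of calling speak() directly.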

gpnt7bae #2

You can try creating an Application class and initializing your TextToSpeech engine there, so that it starts initializing as soon as the application launches.

public class MyApp extends Application {
    private static MyApp instance = null;
    private TextToSpeech t1;

    public static MyApp getInstance() {
        return instance;
    }

    @Override
    public void onCreate() {
        super.onCreate();
        instance = this;

        t1 = new TextToSpeech(this, new TextToSpeech.OnInitListener() {
            @Override
            public void onInit(int i) {
                if (i == TextToSpeech.SUCCESS) {
                    // TTS engine is ready to use; you could set a language here if needed
                }
            }
        });
    }

    public TextToSpeech getT1() {
        return t1;
    }
}

Then use it in your activity, for example:
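
A minimal sketch of that (assuming MyApp is registered on the <application> tag in AndroidManifest.xml via android:name=".MyApp"; the greeting text is just a placeholder):

import android.os.Bundle;
import android.speech.tts.TextToSpeech;
import androidx.appcompat.app.AppCompatActivity;

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Reuse the engine that MyApp began initializing at application startup
        TextToSpeech tts = MyApp.getInstance().getT1();
        tts.speak("Hi! Seems good to meet you.", TextToSpeech.QUEUE_FLUSH, null, null);
    }
}

Note that this only narrows the race: if onCreate runs before the engine has finished initializing, the first speak() call can still be dropped, so combining this with the onInit approach from the first answer is the safer option.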

3pmvbmvn #3

// This worked for me:
// Put the first text-to-speech call here, after a short delay

// Create a Handler on the main thread to run some code after a delay
Handler handler = new Handler(Looper.getMainLooper());
handler.postDelayed(new Runnable() {
    @Override
    public void run() {
        // Your code to run after the delay; in my case I posted another Runnable here,
        // but the first speak() call can go here directly, e.g.:
        textToSpeech.speak("Hi you successfully ran me.", TextToSpeech.QUEUE_FLUSH, null, null);
    }
}, 3000); // 3 seconds delay
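
One small addition worth considering (not part of the original answer, just a hedged sketch; it assumes handler and textToSpeech are kept as fields of the activity): if the activity can be destroyed within those 3 seconds, cancel the pending callback in onDestroy so the delayed speak() does not fire against a released engine.

@Override
protected void onDestroy() {
    // Drop the delayed greeting and release the TTS engine
    handler.removeCallbacksAndMessages(null);
    textToSpeech.shutdown();
    super.onDestroy();
}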
