WeChat Mini Program uses simultaneous interpretation to implement speech recognition

I used the Simultaneous Interpretation plugin's speech recognition to implement the voice search feature on the homepage of my WeChat Mini Program. If that's exactly what you need, congratulations: you can Ctrl+C, Ctrl+V and adapt the code below. If not, don't leave just yet — reading this article should still be useful to you!

First, on the WeChat public platform (the Mini Program admin backend), go to Settings in the left menu --> Plugin Management under Third-party Settings --> Add --> search for "Simultaneous Interpretation" --> click Add.

The next step is to add some settings in the code.

If you are using WeChat DevTools to develop your Mini Program, add the following to the app.json file.

// app.json
{
    ...
    "plugins": {
        ...
        "WechatSI": {
            "version": "0.3.4", // Plugin version (you can check the latest version on the WeChat public platform when adding the plugin)
            "provider": "wx069ba97219f66d99" // AppID of the Simultaneous Interpretation plugin
        }
    }
}

If you are using HBuilderX to develop your Mini Program, you need to modify the manifest.json file in its source-code view.

Find mp-weixin in the source-code view and modify it as shown below:

// manifest.json
/* Mini-program related*/
"mp-weixin": {
    "appid": "xxxxxxxxxx", // Your Mini Program's AppID
    ...
    "plugins": {
        "WechatSI": {
            "version": "0.3.4", // Plugin version (you can check the latest version on the WeChat public platform when adding the plugin)
            "provider": "wx069ba97219f66d99" // AppID of the Simultaneous Interpretation plugin
        }
    }
}

After completing the steps above, you can develop against the plugin's official documentation.
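Before the full implementation, it helps to see the shape of the API: the record-recognition manager is event-driven — you register `onStart`/`onStop`/`onError` handlers and then call `start()`. The real manager comes from `requirePlugin('WechatSI').getRecordRecognitionManager()` and only exists inside a Mini Program; the stub below is a hypothetical stand-in, runnable in plain Node, just to illustrate the callback flow.

```javascript
// Stub of the WechatSI record-recognition manager (NOT the real plugin),
// illustrating the event order: start() -> onStart -> onStop with a result.
function makeStubManager() {
  const manager = {
    onStart: null,
    onStop: null,
    onError: null,
    start(options) {
      // The real plugin records for options.duration ms and recognizes
      // speech in options.lang; the stub fires its callbacks synchronously.
      if (manager.onStart) manager.onStart({ msg: 'Ok' })
      if (manager.onStop) manager.onStop({ result: '天气怎么样' })
    }
  }
  return manager
}

const events = []
const manager = makeStubManager()
manager.onStart = (res) => events.push('start:' + res.msg)
manager.onStop = (res) => events.push('stop:' + res.result)
manager.start({ duration: 5000, lang: 'zh_CN' })
console.log(events) // -> [ 'start:Ok', 'stop:天气怎么样' ]
```

Note that the handlers are assigned before `start()` is called — with the real, asynchronous plugin this avoids any chance of missing an early event.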

Below is my implementation:

// index.vue — only the voice button from my page layout is shown here (simplified)
<template>
    <div @click="yuyin" class="yuyin-icon">
        <img :src="baseUrlImg+'/yuyin.png'" alt="" class="img" />
    </div>
</template>
<script>
    export default {
        data() {
            return {
                // Content of the search box
                search_word: ''
            }
        },
        methods: {
            // Voice button click event
            yuyin: function() {
                var that = this
                // Ask the user for authorization
                uni.authorize({
                    scope: 'scope.record', // recording scope, i.e. microphone permission
                    success: (res) => {
                        // Permission granted: call the voice search function
                        that.plugin()
                    },
                    // Permission not granted: run the code below
                    fail(res) {
                        // Show a modal reminding the user to enable the microphone
                        uni.showModal({
                            content: 'It is detected that you have not enabled the microphone permission; please keep it enabled',
                            confirmText: 'Go to settings',
                            showCancel: false,
                            success: (res) => {
                                console.log(res)
                                if (res.confirm) {
                                    // Open the Mini Program settings page and read back the user's choice
                                    uni.openSetting({
                                        success: (res) => {
                                            console.log(res)
                                            if (res.authSetting['scope.record'] == true) {
                                                that.plugin()
                                            }
                                        }
                                    })
                                } else {
                                    uni.navigateBack({
                                        delta: 1
                                    })
                                }
                            }
                        })
                    }
                })
            },
            // Voice search
            plugin() {
                var that = this
                var plugin = requirePlugin('WechatSI')
                var manager = plugin.getRecordRecognitionManager()
                // Recording recognition started
                manager.onStart = function(res) {
                    console.log('Recording recognition started successfully', res)
                    if (res.msg == 'Ok') {
                        // Tell the user that recording is in progress
                        uni.showToast({
                            title: 'Recognizing voice...',
                            duration: 5000,
                            icon: 'loading'
                        })
                    }
                }
                // Recording ended
                manager.onStop = function(res) {
                    // Tell the user we are jumping to the search page
                    // (on my device the jump took 1 to 2 seconds, so I show a toast)
                    uni.showToast({
                        title: 'Redirecting...',
                        duration: 1500,
                        icon: 'success'
                    })
                    // Translate the recognized speech
                    plugin.translate({
                        lfrom: 'en_US',
                        lto: 'zh_CN',
                        content: res.result,
                        success: function(res) {
                            if (res.retcode == 0) {
                                // (Observed on iPhone, not sure about Android:) recognition
                                // sometimes appends a full stop at the end, so strip it
                                var last = res.result.charAt(res.result.length - 1)
                                if (last == '。' || last == '.') {
                                    res.result = res.result.substr(0, res.result.length - 1)
                                }
                                // Put the recognized text into the search box
                                that.search_word = res.result
                                // Run the search
                                that.searchName()
                            } else {
                                console.log('Translation failed', res)
                            }
                        },
                        fail: function(res) {
                            console.log('Network failed', res)
                            // These two codes are returned when the user speaks too softly or not at all
                            if (res.retcode == -10001 || res.retcode == -10002) {
                                uni.showToast({
                                    title: "I didn't hear what you said",
                                    duration: 1000,
                                    icon: 'error'
                                })
                            }
                        }
                    })
                }
                // Log error information
                manager.onError = function(res) {
                    console.error('error msg', res.msg)
                }
                // Start recording (the handlers above must be registered before start)
                manager.start({
                    duration: 5000, // recording duration in ms
                    lang: 'zh_CN' // recognition language
                })
            }
        }
    }
</script>
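Two of the fiddlier details in the handlers above — stripping the trailing full stop and mapping the plugin's failure codes to a toast message — can be factored into pure helpers that are easy to unit-test outside the Mini Program environment. The function names here are mine, not part of the WechatSI API, and the fallback message for other retcodes is my own addition.

```javascript
// Recognition sometimes appends a full stop ('。' or '.') on iPhone;
// strip it before using the text as a search term.
function stripTrailingStop(text) {
  const last = text.charAt(text.length - 1)
  return (last === '。' || last === '.') ? text.slice(0, -1) : text
}

// Map the plugin's failure retcodes to a user-facing toast title.
// -10001 / -10002 are returned when the user spoke too softly or not at all.
function toastTitleFor(retcode) {
  if (retcode === -10001 || retcode === -10002) {
    return "I didn't hear what you said"
  }
  return 'Recognition failed, please try again'
}

console.log(stripTrailingStop('天气怎么样。')) // -> 天气怎么样
console.log(toastTitleFor(-10001)) // -> I didn't hear what you said
```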

This concludes this article on how a WeChat Mini Program can use Simultaneous Interpretation to implement speech recognition. For more Mini Program speech-recognition content, please search previous articles on 123WORDPRESS.COM or continue browsing the related articles below. I hope everyone will continue to support 123WORDPRESS.COM!

You may also be interested in:
  • WeChat applet implementation code for real-time speech recognition via websocket
  • WeChat applet implements voice recognition to text function and the pitfalls encountered
  • Detailed explanation of WeChat applet and Baidu's speech recognition interface
  • Quickly implement the mini program voice recognition function in 30 minutes
