WeChat Mini Program uses simultaneous interpretation to implement speech recognition

I used the speech recognition feature of the Simultaneous Interpretation plugin to implement voice search on the home page of a WeChat mini program. If your use case is the same, congratulations: you can copy, paste, and tweak the code below. If not, don't leave just yet — this article should still be helpful to you!

First, on the WeChat public platform (the mini program admin backend), go to Settings in the left menu bar --> Third-party Settings --> Plugin Management --> Add --> search for "Simultaneous Interpretation" --> click Add.

The next step is to add some settings in the code.

If you are using WeChat developer tools to develop mini-programs, you need to add the following code to the app.json file.

// app.json
{
    ...
    "plugins": {
        ...
        "WechatSI": {
            "version": "0.3.4", // This is the version of simultaneous interpretation (you can also check the latest version of simultaneous interpretation added in WeChat public platform)
            "provider": "wx069ba97219f66d99" // This is the ID of the simultaneous interpreter
        }
    }
}

If you are using HBuilderX to develop the mini program, you need to make the change in the source view of the manifest.json file instead.

Find mp-weixin in the source view, then add the plugin configuration as shown below:

// manifest.json
/* Mini-program related*/
"mp-weixin": {
    "appid": "xxxxxxxxxx", // This is the AppId of your applet
    ...
    "plugins": {
        "WechatSI": {
            "version": "0.3.4", // This is the version of simultaneous interpretation (you can also check the latest version of simultaneous interpretation added in WeChat public platform)
            "provider": "wx069ba97219f66d99" // This is the ID of the simultaneous interpreter
        }
    }
}

After completing the steps above, you can develop against the plugin's official documentation.
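Before wiring it into a page, the core plugin flow can be sketched on its own. Outside the WeChat runtime there is no `requirePlugin`, so the stub manager below is hypothetical — it only mirrors the surface of the object returned by `getRecordRecognitionManager()` (`onStart`/`onStop`/`onError` callbacks, `start`/`stop` methods). In a real mini program you would use the plugin's manager instead of the stub.

```javascript
// Hypothetical stub that mirrors the WechatSI recognition manager's surface,
// so the wiring can be exercised outside the WeChat runtime.
function makeStubManager() {
  return {
    onStart: null,
    onStop: null,
    onError: null,
    // start() begins a recording/recognition session with the given options
    start(options) {
      this._options = options;
      if (this.onStart) this.onStart({ msg: 'Ok' });
    },
    // stop() ends the session; the recognized text arrives via onStop
    stop() {
      if (this.onStop) this.onStop({ result: 'hello world' });
    },
  };
}

// In a real mini program:
//   const plugin = requirePlugin('WechatSI');
//   const manager = plugin.getRecordRecognitionManager();
const manager = makeStubManager();

let recognized = '';
manager.onStart = (res) => {
  // e.g. show a "listening" toast here
};
manager.onStop = (res) => {
  recognized = res.result; // the recognized text
};
manager.onError = (res) => {
  console.error('recognition error', res);
};

// Set the callbacks first, then record for up to 5 s in Chinese.
manager.start({ duration: 5000, lang: 'zh_CN' });
manager.stop();
```

The key point the stub demonstrates: attach all three callbacks before calling `start()`, otherwise an early `onStart` can be missed.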

The following is my implementation:

// index.vue — only the voice button from my page layout is shown (simplified)
<template>
    <div @click="yuyin" class="yuyin-icon">
        <img :src="baseUrlImg + '/yuyin.png'" alt="" class="img" />
    </div>
</template>
<script>
    export default {
        data() {
            return {
                // Content of the search box
                search_word: ''
            }
        },
        methods: {
            // Voice button click event
            yuyin: function() {
                var that = this
                // Ask the user to authorize the recording (microphone) permission
                uni.authorize({
                    scope: 'scope.record',
                    // Permission granted: start the voice search
                    success: (res) => {
                        that.plugin()
                    },
                    // Permission denied: guide the user to the settings page
                    fail(res) {
                        // Modal reminding the user that the microphone permission is off
                        uni.showModal({
                            content: 'It is detected that you have not enabled the microphone permission. Please keep it enabled.',
                            confirmText: 'Open settings',
                            showCancel: false,
                            success: (res) => {
                                console.log(res)
                                if (res.confirm) {
                                    // Open the mini program settings page and read the user's choice
                                    uni.openSetting({
                                        success: (res) => {
                                            console.log(res)
                                            if (res.authSetting['scope.record']) {
                                                that.plugin()
                                            }
                                        }
                                    })
                                } else {
                                    uni.navigateBack({
                                        delta: 1
                                    })
                                }
                            }
                        })
                    }
                })
            },
            // Voice search
            plugin() {
                var that = this
                var plugin = requirePlugin('WechatSI')
                var manager = plugin.getRecordRecognitionManager()
                // Recording recognition started
                manager.onStart = function(res) {
                    console.log('Recording recognition started successfully', res)
                    if (res.msg == 'Ok') {
                        // Tell the user that recording is in progress
                        uni.showToast({
                            title: 'Recognizing voice...',
                            duration: 5000,
                            icon: 'loading'
                        })
                    }
                }
                // Recording ended
                manager.onStop = function(res) {
                    // The jump to the search page took 1–2 seconds for me, so show a toast
                    uni.showToast({
                        title: 'Redirecting...',
                        duration: 1500,
                        icon: 'success'
                    })
                    // Pass the recognized speech through the plugin's translate API
                    plugin.translate({
                        lfrom: 'en_US',
                        lto: 'zh_CN',
                        content: res.result,
                        success: function(res) {
                            if (res.retcode == 0) {
                                // (Observed on iPhone, not verified on Android)
                                // recognition sometimes appends a full stop at the end
                                var last = res.result.charAt(res.result.length - 1)
                                if (last == '。' || last == '.') {
                                    res.result = res.result.substr(0, res.result.length - 1)
                                }
                                // Put the recognized text into the search box
                                that.search_word = res.result
                                // Run the search
                                that.searchName()
                            } else {
                                console.log('Translation failed', res)
                            }
                        },
                        fail: function(res) {
                            console.log('Network failed', res)
                            // Reported when the user speaks too softly or not at all
                            if (res.retcode == -10001 || res.retcode == -10002) {
                                uni.showToast({
                                    title: "I didn't hear what you said",
                                    duration: 1000,
                                    icon: 'error'
                                })
                            }
                        }
                    })
                }
                // Print error information
                manager.onError = function(res) {
                    console.error('error msg', res.msg)
                }
                // Set the callbacks first, then start recording
                manager.start({
                    duration: 5000, // recording duration in ms
                    lang: 'zh_CN'   // recognition language
                })
            }
        }
    }
</script>
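Two pieces of the handler logic above are pure string and number work, so they can be factored out and tested on their own: stripping the full stop that recognition sometimes appends (both the Chinese '。' and the ASCII '.'), and mapping the plugin's failure retcodes to a user-facing message. The helper names below are my own; the retcode values (-10001/-10002 for silent or too-quiet input) are the ones handled in the code above.

```javascript
// Strip a trailing full stop ('。' or '.') that speech recognition
// sometimes appends to the recognized text (observed on iPhone).
function stripTrailingPeriod(text) {
  const last = text.charAt(text.length - 1);
  if (last === '。' || last === '.') {
    return text.substr(0, text.length - 1);
  }
  return text;
}

// Map the plugin's failure retcodes to a user-facing toast message.
// -10001 / -10002 are reported when the user is silent or too quiet.
function retcodeToMessage(retcode) {
  if (retcode === -10001 || retcode === -10002) {
    return "I didn't hear what you said";
  }
  return null; // no toast for other failures; just log them
}
```

Keeping these out of the `onStop`/`fail` callbacks makes the punctuation edge case easy to cover with unit tests, which is hard to do through the WeChat runtime.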

This concludes this article on implementing speech recognition in a WeChat Mini Program with the Simultaneous Interpretation plugin. For more mini program speech recognition content, please search for previous articles on 123WORDPRESS.COM. I hope you will continue to support 123WORDPRESS.COM!

