
ASRT: A Deep-Learning-Based Chinese Speech Recognition System


ReadMe Language: Chinese (中文版) | English

ASRT Project Home Page | Release Downloads | Project Wiki Documentation (Chinese) | Try the Demo | Donate

If you run into any problems while working with this project, feel free to open an issue in this repo and I will respond as soon as possible.

Please check the FAQ page (Chinese) first before asking, to avoid duplicate questions.

An introductory post about ASRT

A guide on how to train and deploy with ASRT

For frequently asked questions about the principles of the statistical language model, see:

For questions about CTC, see:

For more information, please refer to the author's blog: AILemon Blog (Chinese)

Introduction

This project is implemented with TensorFlow (Keras) and is based on deep convolutional neural networks, long short-term memory (LSTM) networks, an attention mechanism, and CTC.

  • Steps

First, clone the project to your computer with Git, then download the datasets needed for training. For the download links, please refer to the end of this document.

$ git clone https://github.com/nl8590687/ASRT_SpeechRecognition.git

Or you can use the "Fork" button to make your own copy of the project and then clone it locally with your own SSH key.

After cloning the repository via git, go to the project root directory; create a subdirectory dataset/ (you can use a soft link instead), and then extract the downloaded datasets directly into it.

Note that in the current version, both the THCHS30 and ST-CMDS datasets must be downloaded and used; using other datasets requires modifying the source code.

$ cd ASRT_SpeechRecognition

$ mkdir dataset

$ tar zxf <dataset archive filename> -C dataset/ 

Then, copy all the files in the datalist directory into the dataset directory, so that they sit alongside the datasets.

$ cp -rf datalist/* dataset/

The currently available models are 24, 25, and 251.

Before running this project, please install the required Python 3 dependencies (see the list below).

To start training this project, please execute:

$ python3 train_mspeech.py

To test this project, please execute:

$ python3 test_mspeech.py

Before testing, make sure the model file path referenced in the code files exists.

To start the ASRT API server, please execute:

$ python3 asrserver.py

Please note that after starting the API server, you need to use the client software corresponding to this ASRT project to perform speech recognition. For details, see the wiki documentation: ASRT Client Demo.
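For reference only, a rough client sketch using the requests library is shown below. The host, port, and payload format here are assumptions for illustration, not the actual protocol, which is defined by asrserver.py and documented on the ASRT Client Demo wiki page.

    # Hypothetical client sketch: the URL, port, and payload below are placeholders,
    # NOT the actual ASRT protocol -- see asrserver.py and the ASRT Client Demo
    # wiki page for the real request format.
    import requests

    with open('speech.wav', 'rb') as f:   # a 16 kHz mono WAV file (assumed)
        wav_bytes = f.read()

    # Assumed host and port; the server's real endpoint may differ.
    resp = requests.post('http://127.0.0.1:20000/', data=wav_bytes, timeout=30)
    print(resp.text)  # expected to contain the recognition result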

If you want to train and use another model (not Model 251), change the corresponding SpeechModel import in the code files, as shown in the example below.
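For example, to switch from model 251 to model 24, the import at the top of train_mspeech.py and test_mspeech.py would be changed along the following lines (the exact name of the imported class may differ in your copy of the code):

    # In train_mspeech.py / test_mspeech.py, swap the model import, e.g.:
    # from SpeechModel251 import ModelSpeech   # default: model 251
    from SpeechModel24 import ModelSpeech      # use model 24 instead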

If any problem arises while running or using the program, please raise it promptly in an issue, and I will reply as soon as possible.

Model

Speech Model

CNN + LSTM/GRU + CTC

The maximum length of the input audio is 16 seconds, and the output is the corresponding Chinese pinyin sequence.
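As a rough illustration of this kind of architecture (and not the exact layer configuration used in SpeechModel251 or the other model files), a minimal Keras sketch of a CNN + GRU acoustic model trained with CTC could look like the following; the input shape, layer sizes, and output size are placeholder assumptions:

    # Minimal sketch of a CNN + GRU + CTC-style acoustic model (illustrative only;
    # the shapes and layer sizes are placeholders, not the exact ASRT configuration).
    from tensorflow.keras import layers, Model

    AUDIO_LENGTH = 1600   # number of feature frames for 16 seconds of audio (assumed)
    FEATURE_DIM = 200     # spectrogram feature dimension per frame (assumed)
    OUTPUT_SIZE = 1428    # number of pinyin labels + 1 CTC blank (assumed)

    def build_acoustic_model():
        # Input: a spectrogram-like feature map of shape (time, frequency, 1)
        inputs = layers.Input(shape=(AUDIO_LENGTH, FEATURE_DIM, 1), name='audio_input')

        x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
        x = layers.Conv2D(32, 3, padding='same', activation='relu')(x)
        x = layers.MaxPooling2D(pool_size=2)(x)   # halves the time resolution

        x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
        x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
        x = layers.MaxPooling2D(pool_size=2)(x)   # time resolution is now 1/4

        # Collapse the frequency axis so each remaining time step becomes one vector
        x = layers.Reshape((AUDIO_LENGTH // 4, (FEATURE_DIM // 4) * 64))(x)

        # Recurrent layer over the time axis (GRU here; LSTM works the same way)
        x = layers.Bidirectional(layers.GRU(128, return_sequences=True))(x)

        # Per-frame softmax over pinyin labels + blank, decoded later with CTC
        outputs = layers.Dense(OUTPUT_SIZE, activation='softmax', name='ctc_softmax')(x)
        return Model(inputs, outputs)

    model = build_acoustic_model()
    model.summary()

During training, a model like this would be wrapped with a CTC loss (for example tf.keras.backend.ctc_batch_cost) and decoded at inference time with ctc_decode.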

  • Questions about downloading trained models

The released software packages, which include trained model weights, can be downloaded from the ASRT download page.

The GitHub Releases page contains archives of the released software versions along with their release notes. Under each version, there is a zip file that includes the trained model weight files.

Language Model

A maximum entropy hidden Markov model based on a probabilistic graph.

The input is a Chinese pinyin sequence, and the output is the corresponding Chinese character text.
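As a toy illustration of the idea (and not the actual implementation in LanguageModel.py), decoding a pinyin sequence into characters can be treated as a search over a probabilistic graph of candidate characters; the dictionary and probabilities below are invented purely for demonstration:

    # Toy illustration of pinyin-to-character decoding over a probabilistic graph.
    # The candidates and probabilities below are made up for demonstration only;
    # they are not the statistics used by the real LanguageModel.py.

    # Candidate characters for each pinyin syllable, with unigram probabilities
    emission = {
        'ni3':  {'你': 0.9, '拟': 0.1},
        'hao3': {'好': 0.8, '号': 0.2},
    }
    # Probability of one character following another (bigram transition)
    transition = {
        ('你', '好'): 0.9,
        ('你', '号'): 0.1,
        ('拟', '好'): 0.5,
        ('拟', '号'): 0.5,
    }

    def decode(pinyin_seq):
        """Viterbi-style search for the most probable character sequence."""
        # paths maps the last character of a partial sentence to (probability, sentence)
        paths = {None: (1.0, '')}
        for syllable in pinyin_seq:
            new_paths = {}
            for prev_char, (prob, sent) in paths.items():
                for char, p_emit in emission[syllable].items():
                    p_trans = 1.0 if prev_char is None else transition.get((prev_char, char), 1e-6)
                    p = prob * p_trans * p_emit
                    if char not in new_paths or p > new_paths[char][0]:
                        new_paths[char] = (p, sent + char)
            paths = new_paths
        return max(paths.values())[1]

    print(decode(['ni3', 'hao3']))  # -> 你好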

About Accuracy

At present, the best model can reach roughly 80% pinyin correct rate on the test set.

However, since leading teams at home and abroad can achieve about 98%, the accuracy still needs further improvement.
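For reference, a pinyin correct rate of this kind is typically computed from the edit distance between the predicted and reference pinyin sequences; the snippet below is only an illustrative sketch, not the project's actual evaluation code:

    # Illustrative edit-distance based pinyin correct rate (not ASRT's exact metric).
    def edit_distance(ref, hyp):
        """Levenshtein distance between two label sequences."""
        dp = list(range(len(hyp) + 1))
        for i, r in enumerate(ref, 1):
            prev, dp[0] = dp[0], i
            for j, h in enumerate(hyp, 1):
                cur = dp[j]
                dp[j] = min(dp[j] + 1,          # deletion
                            dp[j - 1] + 1,      # insertion
                            prev + (r != h))    # substitution (0 if equal)
                prev = cur
        return dp[-1]

    ref = ['ni3', 'hao3', 'shi4', 'jie4']
    hyp = ['ni3', 'hao3', 'si4', 'jie4']
    correct_rate = 1 - edit_distance(ref, hyp) / len(ref)
    print(f'pinyin correct rate: {correct_rate:.0%}')  # 75%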

Python libraries that need to be imported

  • python_speech_features
  • TensorFlow (1.14 - 2.x)
  • Numpy
  • wave
  • matplotlib
  • math
  • Scipy
  • h5py
  • http
  • urllib
  • requests

If you have trouble installing these packages, simply run the following command, provided that you have a GPU and that CUDA 10.0 and cuDNN 7.4 are installed:

$ pip install -r requirements.txt

Dependency Environment Details

Data Sets

Some free Chinese speech datasets (Chinese)

  • Tsinghua University THCHS30 Chinese speech dataset

    data_thchs30.tgz Download

    test-noise.tgz Download

    resource.tgz Download

  • Free ST Chinese Mandarin Corpus

    ST-CMDS-20170001_1-OS.tar.gz Download

  • AIShell-1 Open Source Dataset

    data_aishell.tgz Download

    Note: extract this dataset with:

    $ tar xzf data_aishell.tgz
    $ cd data_aishell/wav
    $ for tar in *.tar.gz;  do tar xvf $tar; done
    
  • Primewords Chinese Corpus Set 1

    primewords_md_2018_set1.tar.gz Download

  • aidatatang_200zh

    aidatatang_200zh.tgz Download

  • MagicData

    train_set.tar.gz Download

    dev_set.tar.gz Download

    test_set.tar.gz Download

    metadata.tar.gz Download

Special thanks to the predecessors who made these speech datasets publicly available.

If the dataset links provided above cannot be opened or downloaded, use this link: OpenSLR

License

GPL v3.0 © nl8590687 Author: ailemon

Contributors

@zw76859420 @madeirak @ZJUGuoShuai @williamchenwl

@nl8590687 (repo owner)