# mandarin-tts

**Repository Path**: BSTester/mandarin-tts

## Basic Information

- **Project Name**: mandarin-tts
- **Description**: Mandarin text-to-speech (中文语音合成, TTS), based on FastSpeech2
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 4
- **Forks**: 1
- **Created**: 2021-12-03
- **Last Updated**: 2024-05-28

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Chinese Mandarin text-to-speech based on FastSpeech2 and UNet

This is a part-time, ongoing project. Star or bookmark it first; I will keep updating it as time allows.

## Updates

- Added erhua (儿化音) support. Run:

  ```
  ./scripts/hz_synth.sh 1.0 500000
  ```

  The checkpoint is here, and audio examples are on this page.

This is a modification and adaptation of FastSpeech2 for Mandarin (普通话). There are many changes relative to the original paper, including:

1. UNet is used instead of the postnet (1-D conv). UNet is good at recovering spectrogram details and is much easier to train than the original postnet.
2. A hanzi (汉字, Chinese character) embedding is added. Pinyin is harder for humans to read than Chinese characters, and feeding characters directly also makes the system more end-to-end.
3. The pitch and energy embeddings, along with their prediction networks, are removed. This makes the model much easier to train, especially on my GTX 1060. I will try to bring them back if I have the time (and hardware resources).
4. Only WaveGlow is used for synthesis, as it is much better than MelGAN and Griffin-Lim.
5. The mel mean is subtracted, which seems to make prediction much easier.
6. The loss weights are changed to mel_postnet_loss × 1.0 + d_loss × 0.01 + mel_loss × 0.1.
7. A linear duration scale is used instead of log, and the duration_mean is subtracted during training.

## Model architecture

![arch](./docs/arch.png)

## Dependencies

All experiments were run under Ubuntu 16.04 + Python 3.7 + torch 1.7.1. Other environments probably work too.

- torch for training and inference
- librosa and ffmpeg for basic audio processing
- pypinyin for converting hanzi (Chinese characters) to pinyin
- jieba for word segmentation
- perf_logger for writing training logs

To install all dependencies, run:

```
sudo apt-get install ffmpeg
pip3 install -r requirements.txt
```

## Synthesis (inference)

First clone the project and install the dependencies. To generate audio samples, download the checkpoint from Google Drive and untar it into `mandarin_tts/`. If you cannot access Google, you can use this link instead, code `5rur`.

- Run the pinyin + hanzi model:

  ```
  python synthesize.py --model_file ./ckpt/hanzi/checkpoint_300000.pth.tar --text_file ./input.txt \
      --channel 2 --duration_control 1.0 --output_dir ./output
  ```

- Or run the pinyin-only model:

  ```
  python synthesize.py --model_file ./ckpt/pinyin/checkpoint_300000.pth.tar --with_hanzi 0 \
      --text_file ./input.txt --channel 2 --duration_control 1.0 --output_dir ./output
  ```

### Audio samples

Audio samples can be found on this page.

![page](./docs/page.png)

## Training (under testing)

Currently I am using the Baker (标贝) dataset, which can be downloaded from baker. The dataset is for non-commercial use only, and so is the pretrained model.

I have already processed the data for this experiment. You can also run

```
python3 preprocess_pinyin.py
python3 preprocess_hanzi.py
```

to generate the required alignments, mels, and vocab for pinyin and hanzi training (a rough sketch of the hanzi-to-pinyin front end is shown below). Everything should be ready under the directory `./data/` (you can change the directory in hparams.py) before training.
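To illustrate the kind of text front end implied by the pypinyin and jieba dependencies above, here is a minimal sketch. This is not the repo's actual preprocess_pinyin.py / preprocess_hanzi.py code, and the exact pinyin style used for training may differ; it only shows how hanzi input can be segmented and converted to tone-numbered pinyin:

```python
# Rough sketch only: segment a hanzi sentence with jieba, then convert each
# word to tone-numbered pinyin with pypinyin. The real preprocessing scripts
# may use a different pinyin style and extra normalization.
import jieba
from pypinyin import lazy_pinyin, Style


def hanzi_to_pinyin(text: str) -> list:
    """Convert a hanzi string to a flat list of tone-numbered pinyin tokens."""
    tokens = []
    for word in jieba.lcut(text):
        tokens.extend(lazy_pinyin(word, style=Style.TONE3))
    return tokens


if __name__ == "__main__":
    print(hanzi_to_pinyin("中文语音合成"))
    # e.g. ['zhong1', 'wen2', 'yu3', 'yin1', 'he2', 'cheng2']
```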
Then start training:

```
python3 train.py
```

You can monitor the logs under `/home/<username>/.perf_logger/`.

Best practice: copy the `./data` folder to `/dev/shm` to avoid hard-disk reads (if you have enough memory).

The following are some spectrograms synthesized at step 300000:

![spect](./docs/data/step_300000_0.png)
![spect](./docs/data/step_300000_2.png)
![spect](./docs/data/step_300000_3.png)

## TODO

- Clean up the training code
- Add a GAN for better spectrogram prediction
- Add AISHELL-3 support

## References

- [FastSpeech2 implementation by ming024](https://github.com/ming024/FastSpeech2)
- [FastSpeech 2: Fast and High-Quality End-to-End Text to Speech](https://arxiv.org/abs/2006.04558), Y. Ren, *et al*.
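As a convenience for the log monitoring mentioned in the training section, here is a minimal sketch. It is not part of this repo and assumes perf_logger writes plain-text log files somewhere under `~/.perf_logger/`; it simply follows the most recently modified log file, like `tail -f`:

```python
# Rough convenience sketch only (not part of this repo): follow the newest
# training log. It assumes perf_logger writes plain-text files somewhere
# under ~/.perf_logger/ -- adjust LOG_ROOT if your logs live elsewhere.
import time
from pathlib import Path

LOG_ROOT = Path.home() / ".perf_logger"


def newest_log(root: Path = LOG_ROOT):
    """Return the most recently modified file under root, or None."""
    files = [p for p in root.rglob("*") if p.is_file()]
    return max(files, key=lambda p: p.stat().st_mtime) if files else None


def follow(path: Path, poll_seconds: float = 2.0) -> None:
    """Print new lines appended to path as they arrive."""
    with open(path, "r", errors="replace") as f:
        f.seek(0, 2)  # jump to the end of the file
        while True:
            line = f.readline()
            if line:
                print(line, end="")
            else:
                time.sleep(poll_seconds)


if __name__ == "__main__":
    log = newest_log()
    if log is None:
        print(f"No log files found under {LOG_ROOT}")
    else:
        print(f"Following {log}")
        follow(log)
```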