# Learning NeMo Toolkit

## Examples

I made a streaming example with `tts_stream.py`.

```
python tts_stream.py
```

It will take some time to load, but eventually you will see:
```
[NeMo I 2022-03-05 14:32:25 common:654] Instantiating model from pre-trained checkpoint
[NeMo I 2022-03-05 14:32:27 features:240] PADDING: 1
[NeMo I 2022-03-05 14:32:27 features:249] STFT using conv
[NeMo I 2022-03-05 14:32:27 features:251] STFT using exact pad
[NeMo I 2022-03-05 14:33:13 modelPT:376] Model SqueezeWaveModel was successfully restored from /home/toor/.cache/torch/NeMo/NeMo_1.0.0rc1/tts_squeezewave/d48f5835ac007ddb0c183bdbbbdace28/tts_squeezewave.nemo.
File tts_fifo_file already exists
Pipe text to tts_fifo_file
```

You can echo text streams to this file and the program will process them and play them on the local speaker:

```
echo "I can't let you do that Dave." > tts_fifo_file
```

The first time it processes a text stream, it will take a lot longer. It should get slightly faster after that first stream.

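For reference, the core loop in `tts_stream.py` looks roughly like this. Treat it as a sketch rather than the exact code: the Tacotron2 front end, the pretrained checkpoint names, and the 22050 Hz sample rate are assumptions on my part; the log above only confirms the SqueezeWave vocoder.

```
import os

import numpy as np
import simpleaudio as sa
import torch
from nemo.collections.tts.models import SqueezeWaveModel, Tacotron2Model

FIFO_PATH = "tts_fifo_file"
SAMPLE_RATE = 22050  # assumed; use the rate the vocoder was trained at

# Loading the pretrained models is the slow part at startup.
spec_gen = Tacotron2Model.from_pretrained("tts_en_tacotron2").eval()  # checkpoint name assumed
vocoder = SqueezeWaveModel.from_pretrained("tts_squeezewave").eval()

# Create the FIFO once; keep using it if it is already there.
try:
    os.mkfifo(FIFO_PATH)
except FileExistsError:
    print(f"File {FIFO_PATH} already exists")
print(f"Pipe text to {FIFO_PATH}")

while True:
    # Opening a FIFO for reading blocks until something writes to it (e.g. echo).
    with open(FIFO_PATH) as fifo:
        for line in fifo:
            text = line.strip()
            if not text:
                continue
            # Text -> spectrogram -> waveform.
            with torch.no_grad():
                tokens = spec_gen.parse(text)
                spectrogram = spec_gen.generate_spectrogram(tokens=tokens)
                audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram)
            # Convert the float waveform to 16-bit PCM and hand the raw bytes to
            # simpleaudio: play_buffer(data, num_channels, bytes_per_sample, rate).
            samples = audio.squeeze().detach().cpu().numpy()
            pcm = (samples * 32767).astype(np.int16)
            sa.play_buffer(pcm, 1, 2, SAMPLE_RATE).wait_done()
```

Opening the FIFO for reading blocks until a writer shows up, which is why the script just sits and waits after printing `Pipe text to tts_fifo_file`.
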
## Dependencies

I had

```
llvmlite-0.36.0-cp38-cp38-linux_aarch64.whl
onnxruntime_gpu-1.7.0-cp38-cp38-linux_aarch64.whl
torch-1.7.0-cp38-cp38-linux_aarch64.whl
```

in a dependencies folder, so I assume I had installed these into the virtualenv manually.
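
If you want to check that those wheels really made it into the virtualenv, a quick version check from Python should do it (the expected versions are just what the wheel filenames say):

```
# Quick sanity check that the manually installed aarch64 wheels are importable.
import llvmlite
import onnxruntime  # the onnxruntime_gpu wheel installs under this module name
import torch

print("llvmlite   ", llvmlite.__version__)     # expect 0.36.0
print("onnxruntime", onnxruntime.__version__)  # expect 1.7.0
print("torch      ", torch.__version__)        # expect 1.7.0
```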