jmvalin
This is a follow-up on the first LPCNet demo. In this new demo, we turn LPCNet into a very low-bitrate neural speech codec (see submitted paper) that's actually usable on current hardware and even on phones. It's the first time a neural vocoder is able to run in real-time using just one CPU core on a phone (as opposed to a high-end GPU). The resulting bitrate — just 1.6 kb/s — is about 10 times less than what wideband codecs typically use. The quality is much better than existing very low bitrate vocoders and comparable to that of more traditional codecs using a higher bitrate.
Re: Possible to parallelize?
Date: 2019-04-04 04:25 pm (UTC)
When it comes to FPGAs, I suspect LPCNet can be implemented quite efficiently, but only if the weights of the sample rate network can fit on the chip and do not have to come from external memory. That means having around 100k-200k of internal storage.
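The on-chip storage constraint can be sanity-checked with a quick back-of-the-envelope estimate. This is an illustrative sketch, not the exact LPCNet configuration: the layer sizes, sparsity density, and quantization below are assumptions chosen only to show how the weight count is computed.

```python
def gru_params(input_dim, hidden):
    """Parameter count for a standard GRU layer:
    3 gates, each with input weights, recurrent weights, and a bias."""
    return 3 * (input_dim * hidden + hidden * hidden + hidden)

# Hypothetical sizes: one large sparse GRU followed by a small dense one.
gru_a = gru_params(input_dim=128, hidden=384)
gru_b = gru_params(input_dim=384, hidden=16)

density = 0.1          # assumed fraction of nonzero weights in the large GRU
bytes_per_weight = 1   # assumed 8-bit quantized weights

total_weights = gru_a * density + gru_b
print(f"~{total_weights * bytes_per_weight / 1024:.0f} KiB of weight storage")
```

With these assumed numbers the sparse network lands in the 100k-200k range mentioned above, which is why keeping the weights in on-chip memory is plausible on a mid-size FPGA.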
Re: Possible to parallelize?
Date: 2019-11-22 12:42 am (UTC)
If this works, it could mean LPCNet would run on a Raspberry Pi in real time.
Best regards
- Martin
Re: Possible to parallelize?
Date: 2020-06-08 05:14 pm (UTC)