jmvalin ([personal profile] jmvalin) wrote, 2019-03-29 09:09 am

A Real-Time Wideband Neural Vocoder at 1.6 kb/s Using LPCNet

This is a follow-up on the first LPCNet demo. In this new demo, we turn LPCNet into a very low-bitrate neural speech codec (see the submitted paper) that's actually usable on current hardware, and even on phones. It's the first time a neural vocoder can run in real time using just one CPU core on a phone (as opposed to a high-end GPU). The resulting bitrate of just 1.6 kb/s is about a tenth of what wideband codecs typically use. The quality is much better than that of existing very low bitrate vocoders and comparable to that of more traditional codecs operating at a higher bitrate.
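As a quick sanity check on the numbers: 1.6 kb/s works out to 64 bits for every 40 ms packet (four 10 ms frames per packet). Here is a minimal C snippet doing that arithmetic; the 16 kb/s figure is used purely as a point of comparison for a typical wideband codec:

    /* Back-of-the-envelope check of the 1.6 kb/s figure, assuming each
     * packet covers 40 ms of speech and fits in 64 bits (8 bytes). */
    #include <stdio.h>

    int main(void) {
        const double packet_ms = 40.0;        /* packet duration in milliseconds */
        const double bits_per_packet = 64.0;  /* quantized features per packet   */

        double packets_per_second = 1000.0 / packet_ms;            /* 25 packets/s */
        double bitrate_bps = packets_per_second * bits_per_packet; /* 1600 bits/s  */

        printf("bitrate: %.0f b/s (%.1f kb/s)\n", bitrate_bps, bitrate_bps / 1000.0);
        printf("ratio vs. a 16 kb/s wideband codec: %.1fx\n", 16000.0 / bitrate_bps);
        return 0;
    }

Twenty-five packets per second at 64 bits each gives exactly 1600 b/s, which is where the roughly 10x gap with a 16 kb/s wideband codec comes from.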


Re: Possible to parallelize?

(Anonymous) 2019-11-22 12:42 am (UTC)
Since it processes the 4 frames in sequence, wouldn't it be easiest to give each 10 ms frame its own core and then merge the results on a fifth core, which would do the rest of the calculation?
If that worked, it could mean the codec would run in real time on a Raspberry Pi.
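A rough sketch of that kind of split, with one worker thread per 10 ms frame and the merge done by the calling core. The synth_frame() helper is purely hypothetical (it just stands in for a per-frame synthesis call), and the sketch simply assumes the four frames could be generated independently:

    /* Hypothetical split of a 40 ms packet across one thread per 10 ms frame. */
    #include <pthread.h>
    #include <string.h>

    #define NB_FRAMES      4
    #define FRAME_SAMPLES  160   /* 10 ms at 16 kHz */

    typedef struct {
        const float *features;        /* per-frame conditioning features */
        short out[FRAME_SAMPLES];     /* synthesized PCM for this frame  */
    } frame_job;

    /* Hypothetical stand-in for the real per-frame synthesis call. */
    static void synth_frame(const float *features, short *pcm) {
        (void)features;
        memset(pcm, 0, FRAME_SAMPLES * sizeof(short));  /* placeholder output */
    }

    static void *worker(void *arg) {
        frame_job *job = (frame_job *)arg;
        synth_frame(job->features, job->out);
        return NULL;
    }

    /* Decode one 40 ms packet: fan the 4 frames out, then merge on this core. */
    void decode_packet_parallel(const float *features[NB_FRAMES],
                                short pcm[NB_FRAMES * FRAME_SAMPLES]) {
        pthread_t threads[NB_FRAMES];
        frame_job jobs[NB_FRAMES];

        for (int i = 0; i < NB_FRAMES; i++) {
            jobs[i].features = features[i];
            pthread_create(&threads[i], NULL, worker, &jobs[i]);
        }
        for (int i = 0; i < NB_FRAMES; i++) {
            pthread_join(threads[i], NULL);
            memcpy(&pcm[i * FRAME_SAMPLES], jobs[i].out,
                   FRAME_SAMPLES * sizeof(short));
        }
    }

Whether the frames really can be synthesized independently is the part that would need checking against how the synthesis actually works.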

Best regards
- Martin

Re: Possible to parallelize?

(Anonymous) 2020-06-08 05:14 pm (UTC)
Is it possible to parallelize the encoding and decoding on the GPU with WebGL?