The problem is that the WAV format uses a lot of bandwidth, so when your connection degrades, the audio becomes delayed or gappy. I tried downsampling to 20 kHz, but that introduced lag comparable to MP3 encoding.
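To put a number on "a lot of bandwidth": here is a rough calculation for raw PCM, the payload inside a WAV stream. The CD-quality parameters below are illustrative assumptions, not from the post.

```python
# Rough bitrate of raw (uncompressed) PCM audio, as carried in a WAV stream.
# Assumed CD-quality parameters; an actual stream's settings may differ.
sample_rate = 44_100   # samples per second
bit_depth = 16         # bits per sample
channels = 2           # stereo

bits_per_second = sample_rate * bit_depth * channels
print(f"{bits_per_second / 1_000_000:.2f} Mbit/s")  # -> 1.41 Mbit/s
```

Roughly 1.4 Mbit/s sustained, which is why any dip in available bandwidth shows up immediately as buffering or gaps, while a codec like Opus delivers comparable quality at well under a tenth of that.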
My idea now is that I can somehow "break" the audio stream into little pieces and stream it over WebRTC to the browser, but I don't understand anything about how audio is streamed in a way that can adapt to changing bandwidth conditions.
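The "break it into little pieces" part is essentially what RTP packetization does: the sender slices the audio into fixed-duration frames (commonly 20 ms) and ships each as its own packet. A minimal sketch of that slicing, with all parameters being illustrative assumptions on my part:

```python
# Sketch of "breaking the audio stream into little pieces": chunking a raw
# PCM byte stream into fixed-duration packets, roughly what an RTP sender
# does before handing each chunk to the network (hypothetical parameters).

def chunk_pcm(pcm: bytes, sample_rate: int = 48_000, channels: int = 2,
              bytes_per_sample: int = 2, packet_ms: int = 20) -> list[bytes]:
    """Split a PCM byte stream into packet_ms-sized chunks."""
    bytes_per_packet = (sample_rate * channels * bytes_per_sample
                        * packet_ms // 1000)
    return [pcm[i:i + bytes_per_packet]
            for i in range(0, len(pcm), bytes_per_packet)]

# One second of silence at 48 kHz, stereo, 16-bit -> fifty 20 ms packets.
one_second = bytes(48_000 * 2 * 2)
packets = chunk_pcm(one_second)
print(len(packets))  # -> 50
```

The adaptation part is separate: small packets alone don't respond to bandwidth changes; that comes from the codec (e.g. Opus changing its bitrate) and WebRTC's congestion-control feedback, not from the packetization itself.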
My plan is to use PulseAudio's RTP source to send the audio over the network and pick it up with WebRTC in the browser, but maybe that's not possible, or wouldn't give me the low-latency streaming I want.
Anyway, I know I simply don't understand enough about this, and I couldn't find any good resources to learn from, so I'm asking here: if anyone can point me in the right direction or give me some advice, that would be awesome.
I think that sounds similar to what you were mentioning, but it seems the lowest-latency solution is to send multiple streams at different bitrates simultaneously and let WebRTC pick up the best one it can handle.
[1] https://bloggeek.me/webrtc-simulcast-and-abr-two-sides-of-th...