HACKER Q&A
📣 amichail

Hearing aids embedded in walls of house that track people's movements?


So instead of having someone wear a hearing aid while at home, you would have speakers in a house that amplify certain frequencies towards only those individuals who need such amplification.

Is such a thing possible? Has it been done?


  👤 sargstuff Accepted Answer ✓
Note: Historically, outside of concert halls, there are room shapes that redirect sound. [1]

Dynamically changing room acoustics would be an interesting modern take [2], but I think that'd have to wait for the equivalent of a 3D version of e-ink and/or readily available dynamic-mass-density and acoustic metamaterials.

[1] From Whispering Galleries to Echo Chambers, These Five Architectural Structures Have Extraordinary Acoustics : https://www.smithsonianmag.com/travel/worlds-weirdest-echoes...

[2] The Effect on Room Acoustical Parameters Using a Combination of Absorbers and Diffusers—An Experimental Study in a Classroom : https://www.mdpi.com/2624-599X/2/3/27


👤 sargstuff
Various experiments have been done using ambient Wi-Fi to track movements (vs. wearing an RFID tag).

There was an HN post about a wearable sleeve that converts sound to text or shifts it into "relevant frequencies".

Bone conduction and/or a cell phone/Bluetooth ear-speaker setup would be cheaper, easier to implement, and more portable than a wall setup. [1]

[1] Canarias DC soundwalk : https://explore.echoes.xyz/collections/FMI9Kr8oteVAqi9K


👤 sargstuff
Instead of a hearing sleeve, perhaps a 'tooth' version of a bone conductor could transmit the 'missing frequencies'. [1]

[1] : https://newatlas.com/hearing-aid-tooth/14042/


👤 jerf
I'd say such a thing is on the verge of possible, but there are a lot of interesting issues that emerge in practice and may trip you up:

1. The sound will "splash" from the target. You mostly can't fire a beam at someone and have them just absorb it. Though keep reading a bit on that.

2. You have latency from the origin of the sound getting to the microphone, latency in processing (you've got some nontrivial math to do here), and then latency in the audio arriving at the listener. The sum of this latency could be a problem. I don't know. Certainly there are going to be phase issues. It would be interesting to take some sample audio, process it with frequency filters into a simulation of bad hearing, process a second copy of the sample into a simulation of the amplified signal, and lay one on top of the other with a ~50-100 millisecond delay on the second. I have no idea if the result would be insanely intolerable or merely an "interesting effect" that leaves most sounds still quite comprehensible... if you do this, please provide a link so I can hear it too. :)

Latency particularly concerns me with the plosive consonants: https://thesoundofenglish.org/plosives/ They cause a big noise burst across all frequencies, which may strain the fancy algorithms trying to put together what's going on in the sonic environment, and the differences between "buh", "duh" and "tuh" come down to not very many milliseconds. If you do this test, be sure to work a lot of plosives into your sample.
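If you want to try the overlay experiment from point 2 without studio tools, here's a rough numpy sketch. Everything numeric here is a made-up stand-in: a one-pole low-pass is a very crude model of high-frequency hearing loss, the 1500 Hz cutoff, 2x gain, and 75 ms delay are arbitrary, and the "speech" is just plosive-like noise bursts.

```python
import numpy as np

SR = 16_000  # sample rate in Hz

def one_pole_lowpass(x, cutoff_hz, sr=SR):
    """Crude first-order low-pass: rough stand-in for high-frequency hearing loss."""
    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)
    y = np.empty_like(x)
    acc = 0.0
    for i in range(len(x)):
        acc = (1.0 - a) * x[i] + a * acc
        y[i] = acc
    return y

def simulate_room_aid(signal, cutoff_hz=1500.0, delay_ms=75.0, gain=2.0, sr=SR):
    """Muffled direct path + delayed, boosted high-frequency 'aid' path."""
    direct = signal_lp = one_pole_lowpass(signal, cutoff_hz, sr)  # what damaged ears hear directly
    residue = gain * (signal - signal_lp)                          # amplified high-frequency remainder
    d = int(sr * delay_ms / 1000.0)                                # system latency in samples
    delayed = np.concatenate([np.zeros(d), residue])
    return np.concatenate([direct, np.zeros(d)]) + delayed

# Demo on a plosive-like click train: short wideband noise bursts in 1 s of silence.
rng = np.random.default_rng(0)
sig = np.zeros(SR)
for start in (1000, 5000, 9000):
    sig[start:start + 80] = rng.standard_normal(80)
mix = simulate_room_aid(sig)  # write this out as a WAV to actually listen to it
```

Swap in real recorded speech for `sig` and sweep `delay_ms` to hear where it stops being tolerable.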

3. In general you have a lot of math to do on the incoming data to process it into your target sounds, especially if you want to be able to localize the audio being amplified (which you probably do), and you've got some non-trivial problems in tracking the listener and feeding audio to their ears.

One interesting thing that could be a net win: ultrasonic speakers (https://www.focusonics.com/ultrasonic-speaker/ ) are generally rejected for common speaker use because they can only reproduce the higher frequencies, but that limitation overlaps nicely with the fact that it's generally the high frequencies hearing aids need to help with. They can also be directional, which would minimize the splash problems. However, as far as I know (and I can absolutely be wrong on this), the existing speakers would need to be pointed at the user to work, like, physically maneuvered in exactly the way you are thinking, a flat panel constantly following the listener around the room. Whether that tech could be expanded into some sort of phased array that could be steered without moving as much, I don't know. But now you need very detailed listener tracking.
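For what it's worth, the delay-steering math for a linear phased array is textbook stuff: each element gets a delay proportional to its position times the sine of the steering angle. A toy sketch (element count, spacing, and angle here are arbitrary, and this ignores beam width, grating lobes, and everything else that makes real arrays hard):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def steering_delays(n_elements, spacing_m, angle_deg):
    """Per-element delays (seconds) to steer a linear array toward angle_deg
    off broadside; shifted so the smallest delay is zero."""
    rad = math.radians(angle_deg)
    raw = [i * spacing_m * math.sin(rad) / SPEED_OF_SOUND
           for i in range(n_elements)]
    base = min(raw)
    return [t - base for t in raw]

# 8 elements at 1 cm pitch, steered 30 degrees off broadside:
delays = steering_delays(8, 0.01, 30.0)  # each step is ~14.6 microseconds
```

The point of the sketch is just that steering is pure per-element timing, no moving parts; the hard part remains knowing where the listener's ears are.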

#2 up there is a cheap way to prove out the idea quickly; I think if the resulting audio is not usable, you can discard the idea. Latency is going to be intrinsic to this process: even at 0 milliseconds of processing, you are absolutely looking at, as a bare minimum, the time for the sound to travel from the source to the microphones (and, quite likely, all the microphones you are using in the system, so, the farthest microphone) plus from the speaker to the listener. So, with a 10 foot distance from source to mic and from speaker to listener (about 9 milliseconds each way) and some super generous 25 milliseconds for processing (probably unrealistic), you get a bare minimum of latency of roughly 40-45 milliseconds. I honestly doubt you could get that in practice; personally I'd guesstimate closer to 100. But you can play with the audio procedure I described above and see if it's even practical at 40-45... like I said, watch those plosives.
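The back-of-the-envelope arithmetic, in code form (343 m/s for the speed of sound at room temperature; the 10-foot legs and the 25 ms processing budget are the figures from the paragraph above):

```python
# Latency floor for a room-scale "remote hearing aid":
# acoustic travel time for both legs plus a fixed processing budget.
SPEED_OF_SOUND_M_S = 343.0
FT_TO_M = 0.3048

def propagation_ms(distance_ft):
    """One-way acoustic travel time in milliseconds."""
    return distance_ft * FT_TO_M / SPEED_OF_SOUND_M_S * 1000.0

source_to_mic_ms = propagation_ms(10.0)        # ~8.9 ms
speaker_to_listener_ms = propagation_ms(10.0)  # ~8.9 ms
processing_ms = 25.0                           # "super generous" DSP budget

floor_ms = source_to_mic_ms + speaker_to_listener_ms + processing_ms
print(f"latency floor ~= {floor_ms:.1f} ms")   # ~42.8 ms
```

Note the two acoustic legs alone cost ~18 ms before any processing happens at all, which is why the floor sits in the low 40s of milliseconds.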