Listen and Learn

While hearing aids have transformed spoken communication for people with hearing loss, the importance of music when fitting these devices has too often been left as background noise in the conversation.

Deep learning, the phenomenon driving the most recent artificial intelligence boom, presents new technological possibilities for hearing aid users who demand better music listening experiences.

Berlin, a major international research hub for both AI and medical technology and famously home to many musicians, is in a unique position to deliver solutions that address this demand.

Music can change our mood, activating the pleasure centres of the brain. It helps us to connect with others.

Hearing impairment presents extra hurdles for appreciating music. Lyrics and melody lines become harder to pick out. Soft and high notes become inaudible - the music loses its brightness, suspense, poignancy. These obstacles in turn exclude many people from music’s health and wellbeing benefits.

The German Association for the Hearing Impaired (Deutscher Schwerhörigenbund e.V.) estimates that, at the end of 2015, more than 5 million people aged between 14 and 19 were hearing impaired. The Association puts the figure at 44.6 million among 20 to 60 year-olds and 22.5 million among the over-60s. A 2017 study by the Institute of Hearing Technology and Audiology at Jade University indicates that around 16 percent of all adults in Germany are hard of hearing.

An early hearing aid marketed at women, in which the microphone and battery were contained in a purse

Hearing aids - a Berlin story

Berlin, summer 1913. Louis Weber is a researcher at the Wernerwerk telephone plant in the city’s Siemensstadt district. It has been known for some decades that many people with hearing loss can follow a conversation on the phone more easily than without an electrical device to amplify signals. On this basis, Weber has been tasked with developing a new electric hearing aid; ‘as small as possible, so it is not very bothersome to the wearer.’ Electric hearing aids are already available on the market, but existing designs are cumbersome. They are more concerned with hiding the user's disability from the public - concealing the devices in hair and headgear, couches, clothing, and accessories - than with helping the user live with their disability.

The client, Carl Kloenne, is a director at Deutsche Bank, a friend of Weber’s boss at the plant, and hard of hearing. Weber’s numerous attempts to engineer a device that meets Kloenne’s needs have so far come to nothing. Presented with Weber’s ‘one last desperate attempt’ at a solution - a pair of carbon microphones built into dual headphones, a small receiver and a three-volt battery - the banker questions the point of even trying it on. He is, after all, completely deaf in one ear. Eventually caving to Weber’s insistence, Kloenne is surprised to find upon fitting the device that he is ‘now able to hear even in the ear he had thought was deaf.’

‘I fondly recall the day when Mr. Kloenne told me, visibly moved,’ Weber later writes, ‘that the new hearing aid had allowed him to participate in a group again for the first time in a long while.’ A later version of this unit will soon be mass-produced and publicly sold as the Esha-Phonophor, with the technology to remain in use for many years.

Music and hearing aids - the problem

In the same way that Weber was overwhelmingly focused on his client’s struggles with participating in conversation, improving speech recognition has always been the main driver of hearing aid research and development. Most human speech sounds are no softer than a sparsely occupied room; no louder than a standard washing machine.

The fundamental frequency of adult speech rarely falls below 80 Hz or rises above 255 Hz. But most music has dynamic and frequency ranges far broader than these parameters.

Modern digital hearing aids allow users to focus on speech by using equalisation to cut out interfering frequencies outside the human speech range, while compression makes soft speech audible and loud speech comfortable. Applied to music, however, these same settings flatten contrast and produce perverse listening effects, making for frustrating experiences for hearing aid wearers.
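The compression idea can be illustrated with a minimal sketch. This toy example (not a real hearing aid algorithm - the threshold, ratio and gain values are arbitrary, and real devices work per frequency band with smoothed level estimates) shows how a compressor boosts quiet sounds while reining in loud ones, shrinking the overall dynamic range:

```python
import math

def compress(sample, threshold_db=-30.0, ratio=3.0, makeup_db=10.0):
    """Toy single-sample dynamic range compressor.

    Levels above `threshold_db` (in dB full scale) are reduced by
    `ratio`; `makeup_db` of gain is then applied across the board,
    so soft sounds end up louder. Parameter values are illustrative,
    not clinical fitting prescriptions.
    """
    if sample == 0:
        return 0.0
    level_db = 20 * math.log10(abs(sample))        # instantaneous level
    over_db = max(level_db - threshold_db, 0.0)    # dB above threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db
    return sample * 10 ** (gain_db / 20)

soft = 0.01   # a quiet sound, about -40 dBFS: below threshold, boosted
loud = 0.9    # a loud sound, about -1 dBFS: above threshold, attenuated
print(compress(soft), compress(loud))
```

With these settings the 90:1 level gap between the two inputs shrinks to roughly 10:1 at the output - exactly the loss of contrast that makes fixed speech-oriented compression a poor fit for music.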

As users demand more satisfying music listening experiences, a growing number of manufacturers are working towards solutions. There are now hearing aids available on the market that boast greater frequency and dynamic input ranges. Others, like Signia - a rebranded heir to the hearing instrument line produced by Weber’s former employer Siemens - offer special microphone modes for musical uses in select products.

Directional microphones within the devices are activated in various positions relative to the user, depending on the listening environment. For example, one setting allows a user listening to a live performance to focus on the stage at the front and not the crowd behind them. If disturbing noises are in front of the listener in this situation, however, directional systems alone cannot attenuate them.

Neural Networks, Hearing Aids and Berlin

The sudden mass abundance of cloud-based storage in the last decade has driven a boom in artificial intelligence, led by a subset of machine learning called deep learning - of which neural networks form the backbone. Berlin has become a major frontier for AI research, with 65 professors and over 200 companies working in this domain according to information made publicly available by the City.

Neural networks ‘make sense of huge data sets and teach computers to do more-or-less what humans would do’, write researchers Lars Bramlow and Douglas L. Beck for the Hearing Review. This has exciting applications for hearing aids, including for music listening environments. Algorithms operating through these devices can now learn, not only from their own troubleshooting history, but by accessing and using data in real time from millions of other devices.

Because there are almost always multiple sound sources of interest present in music, hearing devices could present them all to the listener so that they can be separated and attended to selectively. These choices could be guided via a smartphone app, or could be driven by the intent of the user via a “brain-computer interface” guided by EEG signals from the ears, or other indicators such as visual attention.

Berlin is in a uniquely pivotal position to drive such advances. An international destination for skilled migrants working in AI and medical technology sectors, the city is also home to a large community of musicians - many of whom are hearing impaired - whose valuable experiences can inform research. In this way, the city is well placed to play its part in ensuring that the future sounds less dull.
