The idea of this work is fine-tuning and personalizing the compression ratios of hearing aid dynamic range compression. To reduce the number of user feedback queries and thus enable a practical implementation of personalized compression, a reward function is first learned to model a user's hearing preferences in an asynchronous manner. This is achieved by carrying out A/B comparisons between two differently compressed versions of the same audio. An agent is then trained to maximize this reward. The following figure shows a block diagram of this approach:
Figure 1: Block diagram of Human-in-the-Loop Deep Reinforcement Learning.
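As an illustration of the reward-modeling step described above, the sketch below fits a reward function from A/B comparisons using a Bradley-Terry style pairwise preference model. The linear reward parameterization, the feature vectors, and the function name `train_preference_reward` are assumptions for this example, not the exact model used in the paper:

```python
import numpy as np

def train_preference_reward(features_a, features_b, prefs, lr=0.1, epochs=500):
    """Fit a linear reward r(x) = w . x from pairwise A/B preferences.

    prefs[i] = 1 if the user preferred clip A over clip B in comparison i,
    else 0. Uses the Bradley-Terry model: P(A > B) = sigmoid(r(A) - r(B)).
    """
    w = np.zeros(features_a.shape[1])
    for _ in range(epochs):
        diff = (features_a - features_b) @ w           # r(A) - r(B)
        p = 1.0 / (1.0 + np.exp(-diff))                # P(A preferred)
        grad = (features_a - features_b).T @ (p - prefs) / len(prefs)
        w -= lr * grad                                 # gradient step on the NLL
    return w

# Toy example: a simulated user who prefers clips with a larger first feature.
rng = np.random.default_rng(0)
fa = rng.normal(size=(200, 3))
fb = rng.normal(size=(200, 3))
prefs = (fa[:, 0] > fb[:, 0]).astype(float)
w = train_preference_reward(fa, fb, prefs)
```

Once such a reward model is fit, the agent can be trained against it offline, which is what allows the user feedback to be collected asynchronously.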
The figures below show how data passes through the different blocks of the personalization framework in both training and testing modes. Please refer to [1] for more details.
Figure 2: Developed personalized compression DRL framework for (a) training mode and (b) operation mode.
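For readers unfamiliar with the compression block being personalized, the following is a minimal sketch of static dynamic range compression, where the compression ratio is the parameter the agent adjusts. The specific threshold, ratio values, and function name `compress` are illustrative assumptions, not the hearing aid's actual compressor:

```python
import numpy as np

def compress(x, threshold_db=-20.0, ratio=3.0):
    """Static dynamic range compression of a mono signal in [-1, 1].

    Sample levels above `threshold_db` are attenuated so that each dB
    above the threshold contributes only 1/ratio dB to the output.
    `ratio` is the knob a personalization agent would tune.
    """
    eps = 1e-12
    level_db = 20.0 * np.log10(np.abs(x) + eps)        # per-sample level in dB
    over = np.maximum(level_db - threshold_db, 0.0)    # dB above threshold
    gain_db = -over * (1.0 - 1.0 / ratio)              # gain reduction above threshold
    return x * 10.0 ** (gain_db / 20.0)

# A loud sample is attenuated; a quiet one passes through unchanged.
out = compress(np.array([0.9, 0.01]))
```

A higher ratio flattens loud passages more aggressively, which is exactly the trade-off that differs between users and motivates the personalization framework above.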
Link | Description |
---|---|
https://ieeexplore.ieee.org/document/9247199 | [1] N. Alamdari, E. Lobarinas, N. Kehtarnavaz, "Personalization of Hearing Aid Compression by Human-in-the-Loop Deep Reinforcement Learning," IEEE Access, vol. 8, pp. 203503-203515, 2020. |
A User's Guide describing how to run the code is provided.