Cog AV Hearing

Welcome to the Cog AV Hearing project page. The project commenced on 1 October 2015 and is currently expected to run until October 2018. Several project meetings have been held and progress continues. Various sections of the project website, including datasets, demos, reports and papers, and project and related activities and events of interest, will be made publicly available in due course. Please check back for regular updates. For any queries, please e-mail the project's Lead Principal Investigator (PI), Professor Amir Hussain, firstname.lastname@example.org (http://cs.stir.ac.uk/~ahu/)
Project Aims

This ambitious project addresses the EPSRC research challenge "Speech-in-noise performance in hearing aid devices" and the long-standing challenge of developing disruptive assistive listening technology that can improve the quality of life of the 10 million people in the UK with some form of hearing loss. We aim to develop devices that mimic the unique human ability to focus hearing on a single talker, effectively ignoring background distractor sounds regardless of their number and nature.
This research is a first attempt at developing a cognitively inspired, adaptive and context-aware audio-visual (AV) processing approach that combines audio and visual cues (e.g. from lip movement) to enhance speech intelligibility. A preliminary multi-modal speech enhancement framework pioneered by Prof Hussain's Lab at Stirling will be significantly extended to incorporate models of auditory and AV scene analysis developed by Dr Barker's group at Sheffield. In addition, novel computational models and theories of human vision developed by Prof Watt at Stirling will be deployed to enable real-time tracking of facial features. Intelligent multi-modality selection mechanisms will be developed, and planned collaborations with Phonak and MRC IHR will facilitate delivery of a clinically tested software prototype.
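At its simplest, the multi-modal idea above can be sketched as early fusion of frame-aligned audio and visual features, followed by a learned time-frequency mask that suppresses background noise. The sketch below is purely illustrative: the feature dimensions, the linear-plus-sigmoid mask estimator, and all variable names are our assumptions for exposition, not the project's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins (not project data): 100 frames of audio
# log-spectral features (64 bins) and visual lip features (e.g. 20
# landmark-derived coefficients) sampled at the same frame rate.
audio_feats = rng.standard_normal((100, 64))
visual_feats = rng.standard_normal((100, 20))

def fuse_av(audio, visual):
    """Early fusion: frame-wise concatenation of the two feature streams."""
    assert audio.shape[0] == visual.shape[0], "streams must be frame-aligned"
    return np.concatenate([audio, visual], axis=1)

def estimate_mask(av, weights, bias):
    """Map fused features to a (0, 1) spectral mask via a linear layer
    and a sigmoid. In a real system the weights would be learned from
    paired noisy/clean speech; here they are random placeholders."""
    return 1.0 / (1.0 + np.exp(-(av @ weights + bias)))

av = fuse_av(audio_feats, visual_feats)      # shape (100, 84)
W = rng.standard_normal((84, 64)) * 0.1      # hypothetical learned weights
b = np.zeros(64)
mask = estimate_mask(av, W, b)               # one gain per time-frequency cell
enhanced = np.exp(audio_feats) * mask        # gate the (linear) spectrogram
```

The visual stream is useful precisely because lip movements are unaffected by acoustic noise, so the mask estimator can still localise the target talker's speech energy when the audio alone is ambiguous.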