Cog Av Hearing - Research Outputs
Online Prototype Demo
We have completed and uploaded our multimodal hearing-aid demo; the link will be available soon.
Presented the AV-COGHEAR demo/poster "Towards context-aware, cognitively-inspired multimodal hearing-aids and assistive technology" at the Faculty Research Afternoon, Stirling, 18th April.
- Deep Neural Network-based Enhanced Visually-Derived Wiener Filtering For Speech Enhancement
- Novel Deep Learning based Lip-Reading Regression Model for Speech Enhancement
- Deep Convolutional Neural Network based Lip-Reading For Speech Enhancement in Cognitively-Inspired Multi-Modal Hearing-Aid
- Exploiting Audio Visual Information in Hearing-Aids: A Critical Review including Cognitive, Neurobiological and Behavioural Perspectives, and Future Directions
- Convolutional Long Short Term Memory based Lip-Reading For Speech Enhancement in Cognitively-Inspired Multi-Modal Hearing-Aid
- Evaluating hearing-aid algorithms in an audiovisual setting
Visual Barcode Features
We are currently developing and applying cognitively inspired visual features, with some preliminary demo videos available. Please see the Visual Features page for more information.
Project Datasets
We will develop and use a number of datasets for development and evaluation during the course of this project. Find out more at our datasets page.
Presentations and Posters
A low-resolution version of the poster is available for download.
Poster Download Link