From Hearing to Seeing Speech: AR + AI Revolutionize Phonetics Education

By Diana Abbasi

This study introduces a Mobile Augmented Reality (MAR) system that blends AI-driven speech recognition with 3D animated models to teach phonetics interactively. Unlike traditional approaches and existing AR apps that display only letters and vocabulary, this system visualizes speech organ movements and provides real-time feedback on pronunciation, making phonetic learning immersive, visual, and effective.

By allowing learners to see how sounds are formed rather than relying solely on hearing, it makes learning more effective even in noisy environments, helping users build clearer, more accurate speech production skills.
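To make the idea concrete, here is a minimal, purely illustrative Python sketch (not taken from the paper) of how such a feedback loop might be structured: phonemes recognized from the learner's speech are compared against a target sequence, and each mismatch is mapped to an articulator hint that a 3D model could animate. The phoneme recognizer is assumed to exist elsewhere; the `ARTICULATOR_HINTS` table and `compare_pronunciation` function are hypothetical placeholders.

```python
# Conceptual sketch only: a simplified pronunciation-feedback loop.
# The actual MAR system's recognition model and 3D animation pipeline
# are not shown; everything here is a hypothetical stand-in.

from dataclasses import dataclass

# Hypothetical mapping from target phonemes to the articulator cue a
# 3D model could visualize, illustrating the "seeing speech" idea.
ARTICULATOR_HINTS = {
    "th": "tongue tip between the teeth",
    "f":  "lower lip against the upper teeth",
    "r":  "tongue curled back, not touching the palate",
}

@dataclass
class PhonemeFeedback:
    expected: str
    heard: str
    hint: str

def compare_pronunciation(target: list, recognized: list) -> list:
    """Compare target and recognized phonemes position by position and
    return a hint for each mismatch. (A real system would use a proper
    alignment method rather than a positional comparison.)"""
    feedback = []
    for expected, heard in zip(target, recognized):
        if expected != heard:
            hint = ARTICULATOR_HINTS.get(expected, "see the 3D model for this sound")
            feedback.append(PhonemeFeedback(expected, heard, hint))
    return feedback

if __name__ == "__main__":
    # Example: the learner says "free" when the target word was "three".
    target = ["th", "r", "ee"]
    recognized = ["f", "r", "ee"]
    for item in compare_pronunciation(target, recognized):
        print(f"Expected /{item.expected}/ but heard /{item.heard}/: {item.hint}")
```

In a full system, the mismatch hints would drive the AR model's articulator animations rather than printed text, and the comparison step would use a proper phoneme alignment instead of this positional stub.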

This Open Access paper is available to read on IEEE Access: https://ieeexplore.ieee.org/document/10540119


