Audio-Driven 3D Talking Face for Realistic Holographic Mixed-Reality Telepresence
Abstract
Machines’ ability to understand human speech from visual input is crucial for efficient communication, yet disentangling the semantics of speech from facial appearance remains a challenge. This article presents a taxonomy of 3D talking human face methods, categorizing them as GAN-based, NeRF-based, or DLNN-based. The evolution of mixed-reality telepresence now focuses on developing talking 3D faces that synthesize natural human faces in response to text or audio input. Audio-video datasets support training algorithms across different languages and enable speech recognition. Handling noise in audio data is vital for robust performance, using techniques such as integrating DeepSpeech features or augmenting training data with noise. Latency optimization enhances the user experience, and careful technique selection reduces latency. Quantitative and qualitative evaluation methods measure audio-visual synchronization, face quality, and comparative performance. Talking 3D faces hold promise for advancing mixed-reality communication, requiring careful consideration of audio-video datasets, noise reduction, latency, and evaluation techniques.
Authors
- Nicole Christoff
- Krasimir Tonchev
- Nikolay Neshov
- Agata Manolova
- Vladimir Poulkov
Venue
2023 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom)
Links
https://ieeexplore.ieee.org/document/10299781