Abstract

Speechreading is the task of inferring phonetic information from visually observed articulatory facial movements, and is a notoriously difficult task for humans to perform. In this paper we present an end-to-end model based on a convolutional neural network (CNN) for generating an intelligible and natural-sounding acoustic speech signal from silent video frames of a speaking person.
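For readers who want a concrete picture of what such a model can look like, below is a minimal, illustrative PyTorch sketch of a CNN that maps a short window of silent video frames to a vector of acoustic features. Everything here (layer sizes, the 9-frame input window, the 128-dimensional feature output, the name Vid2SpeechCNN) is an assumption made for illustration, not the exact architecture described in the paper.

```python
# Illustrative sketch only: a small CNN regressing acoustic features
# from a stack of grayscale video frames. Hyperparameters are assumptions.
import torch
import torch.nn as nn

class Vid2SpeechCNN(nn.Module):
    def __init__(self, n_frames=9, n_features=128):
        super().__init__()
        # Treat a short temporal window of grayscale frames as input channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(n_frames, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regress a vector of acoustic features (e.g. one spectrogram frame)
        # corresponding to the center frame of the video window.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, n_features),
        )

    def forward(self, frames):          # frames: (batch, n_frames, H, W)
        return self.head(self.encoder(frames))

model = Vid2SpeechCNN()
dummy = torch.randn(4, 9, 128, 128)    # batch of 4 windows of 9 frames
print(model(dummy).shape)              # torch.Size([4, 128])
```

Predicted feature vectors for consecutive windows would then be concatenated and converted back to a waveform by a vocoder or similar post-processing step.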

We train our model on speakers from the GRID and TCD-TIMIT datasets, and evaluate the quality and intelligibility of the reconstructed speech using common objective measures. We show that speech predictions from the proposed model attain scores indicating significantly improved quality over existing models. In addition, we show promising results towards reconstructing speech from an unconstrained dictionary.
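As a hedged illustration of how such objective measures can be computed, the snippet below scores a reconstructed waveform against its ground-truth reference using STOI (via the pystoi package) and PESQ (via the pesq package). The file names are placeholders, and the choice of these two metrics is an assumption about what "common objective measures" covers; the paper's exact metric set may differ.

```python
# Hedged example: scoring reconstructed speech with two common objective
# measures. File names below are placeholders, not files from the paper.
import soundfile as sf
from pystoi import stoi
from pesq import pesq

ref, fs = sf.read("ground_truth.wav")   # clean reference speech
est, _ = sf.read("reconstructed.wav")   # model output at the same sample rate

n = min(len(ref), len(est))             # trim to a common length
ref, est = ref[:n], est[:n]

print("STOI:", stoi(ref, est, fs, extended=False))  # intelligibility, in [0, 1]
print("PESQ:", pesq(fs, ref, est, "wb"))            # quality; 'wb' needs fs=16000
```

Higher is better for both measures, which is why improved scores are read as improved quality and intelligibility of the reconstructed speech.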

ICCVW 2017 - Improved Speech Reconstruction from Silent Video

Supplementary video

The following video contains a few examples of silent video frames given to our model as input, along with the acoustic signal generated by the system. The first two examples are of Speaker 3 (S3) and Speaker 4 (S4) from the GRID corpus. Separate models were trained on 80% of the data from each speaker, and the training set contained all words present in the testing set.

The third example is of Lipspeaker 3 from the TCD-TIMIT dataset. In this experiment, the training and testing sets contained many different words, such that the predicted speech presented here shows significant progress towards learning to reconstruct words from an unconstrained dictionary.

BibTeX

@article{ephrat2017improved,
  title={Improved Speech Reconstruction from Silent Video},
  author={Ephrat, Ariel and Halperin, Tavi and Peleg, Shmuel},
  journal={ICCV 2017 Workshop on Computer Vision for Audio-Visual Media},
  year={2017}
}

Code

Code for this work is being prepared for release. In the meantime, code for our ICASSP 2017 paper can be found here.

Enjoy!