Google's DeepMind has partnered with University of Oxford researchers to create a new AI that can read lips, called Watch, Listen, Attend and Spell (WLAS).

The researchers released a scientific paper suggesting the newly developed AI could correctly interpret more words than a trained professional lip reader.

When tested on the same 200 randomly selected clips, a professional lip reader correctly identified words only 12.4% of the time, while WLAS achieved an accuracy rate of 46.8%.
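To make the comparison concrete, a figure like "46.8% of words correct" can be computed by matching a transcription against the reference, word by word. The sketch below is a deliberately simplified illustration (benchmarks of this kind typically use edit-distance-based word error rate, not positional matching); the function name and example sentences are hypothetical.

```python
def word_accuracy(reference: str, hypothesis: str) -> float:
    """Fraction of reference words matched at the same position.

    Simplified illustration only: real lip-reading benchmarks usually
    score with edit-distance-based word error rate instead.
    """
    ref_words = reference.split()
    hyp_words = hypothesis.split()
    correct = sum(1 for r, h in zip(ref_words, hyp_words) if r == h)
    return correct / len(ref_words)

# Hypothetical example: 4 of 5 words guessed correctly -> 0.8
print(word_accuracy("we will be taking questions",
                    "we will be asking questions"))
```

On this toy pair the score is 0.8; a professional lip reader on the BBC clips, by the article's numbers, would land near 0.124 on the same scale.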

The paper reads: "The WLAS model trained on the LRS dataset surpasses the performance of all previous work on standard lip reading benchmark datasets, often by a significant margin. This lip reading performance beats a professional lip reader on videos from BBC television, and we also demonstrate that visual information helps to improve speech recognition performance even when the audio is available."

The system was trained on a dataset of 118,000 sentences (a vocabulary of 17,500 words) drawn from 5,000 hours of video footage from the BBC.

The BBC videos were prepared using machine learning algorithms, and the AI was also taught to realign video and audio when the two streams were out of sync.
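The article does not describe how the realignment works, but one common technique for syncing two streams is to slide one against the other and keep the offset where they correlate best. The sketch below is an assumption for illustration, not DeepMind's method; the function name and the use of per-frame "energy" signals are hypothetical.

```python
import numpy as np

def best_offset(audio_energy: np.ndarray, video_energy: np.ndarray,
                max_shift: int = 10) -> int:
    """Return the frame shift that best aligns video_energy to audio_energy.

    Hedged sketch: tries every shift in [-max_shift, max_shift] and picks
    the one maximizing the dot product of the overlapping segments.
    """
    best_shift, best_score = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        a = audio_energy[max(0, shift): len(audio_energy) + min(0, shift)]
        v = video_energy[max(0, -shift): len(video_energy) + min(0, -shift)]
        n = min(len(a), len(v))
        score = float(np.dot(a[:n], v[:n]))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

# Toy example: the video's activity peak trails the audio's by 2 frames,
# so the recovered offset is -2.
audio = np.zeros(20); audio[5] = 1.0
video = np.zeros(20); video[7] = 1.0
print(best_offset(audio, video))
```

In practice a system would correlate learned audio and visual features rather than raw energies, but the sliding-window idea is the same.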

Earlier this month, the University of Oxford published a similar research paper testing a lip reading program called LipNet. LipNet achieved 93.4% lip reading accuracy, compared to 52.3% scored by a human expert on the same material.