When Nvidia popped the bonnet on its Co-Pilot "backseat driver" AI at this year's Consumer Electronics Show, most onlookers were struck by its ability to lip-read while tracking CES-going "motorists'" actions within the "car".

[...] An Nvidia spokesperson has since confirmed in an email to The Register that the lip-reading component was based on a research paper [PDF] written by academics from the University of Oxford, Google DeepMind and the Canadian Institute for Advanced Research.

"We are really happy to see LipNet in such an application and [it] is the proof that our novel architecture is scalable to real-world problems," the research team added in an email to El Reg.

[...] The paper initially drew criticism: although the neural network, LipNet, boasted an impressive accuracy rate of 93.4 per cent, it had only been tested on a limited dataset of words, not coherent sentences.

A second paper, unofficially published on arXiv, showed LipNet's capabilities had improved. It could now decipher complete sentences after being trained on several hours of footage of BBC News presenters' speech movements.