In what is somehow the cutest science story of the new year so far, scientists at the University of Washington have announced a new artificial intelligence system for decoding mouse squeaks.

Dubbed DeepSqueak, the software program can analyze rodent vocalizations and then pattern-match the audio to behaviors observed in laboratory settings. As such, the software can be used to partially decode the language of mice and other rodents. Researchers hope the technology will prove useful in a broad range of medical and psychological studies.

Published this week in the journal Neuropsychopharmacology, the study is based around a novel use of sonogram technology, which converts an audio signal into an image showing how the frequencies of a sound change over time.

The DeepSqueak program turns recordings of mouse chatter into visual output, which is then analyzed using advanced machine learning algorithms. In fact, the A.I. algorithms are in the same family as those used by self-driving cars to “see” their environment. The technology represents the first use of deep learning neural networks in rodent vocalization research, said study co-author Russell Marx in a statement.
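The study's actual pipeline isn't reproduced here, but the audio-to-image step it describes can be sketched with a standard spectrogram. The snippet below is a minimal illustration, assuming SciPy is available and using a synthetic frequency-sweeping tone as a stand-in for a real ultrasonic recording:

```python
import numpy as np
from scipy.signal import spectrogram

# Illustrative sketch only -- not DeepSqueak's actual code.
# Synthesize a stand-in "ultrasonic call": a one-second rising tone
# sampled at 250 kHz (mouse calls often sit in the 30-110 kHz range,
# well above the ~20 kHz ceiling of human hearing).
fs = 250_000
t = np.linspace(0, 1.0, fs, endpoint=False)
audio = np.sin(2 * np.pi * (60_000 + 20_000 * t) * t)

# Convert the 1-D audio signal into a 2-D time-frequency image --
# the kind of visual input a vision-style neural network can ingest.
freqs, times, power = spectrogram(audio, fs=fs, nperseg=1024)

# Rows are frequency bins, columns are time frames.
print(power.shape)
```

The resulting `power` array is exactly the sort of image that deep learning models built for visual tasks, like those in self-driving cars, are designed to process.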


One critical advantage to the DeepSqueak system is that it can “hear” vocalizations that are otherwise inaudible to human ears.

“As it turns out, rats and mice have this rich vocal communication, but it's way above our hearing range, so it's been really hard to detect and analyze these calls,” Marx says in a project demo video. “Our software allows us to visualize all those calls, look at their shape and structure, and categorize them.”