A team of researchers from Apple and Carnegie Mellon University’s Human-Computer Interaction Institute has presented a system that lets embedded AIs learn by listening to noises in their environment, without the need for up-front training data or a heavy supervision burden on the user. The overarching goal is for smart devices to build up contextual and situational awareness more easily, increasing their utility.

The system, which they’ve called Listen Learner, relies on acoustic activity recognition to enable a smart device, such as a microphone-equipped speaker, to interpret events taking place in its environment via a process of self-supervised learning, with manual labelling handled through one-shot user interactions, such as the speaker asking a person ‘what was that sound?’ after it has heard the noise enough times to classify it into a cluster.

A general pre-trained model can also be looped in to enable the system to make an initial guess on what an acoustic cluster might signify. So the user interaction could be less open-ended, with the system able to pose a question such as ‘was that a faucet?’ — requiring only a yes/no response from the human in the room.

Refinement questions could also be deployed to help the system figure out what the researchers dub “edge cases”, i.e. where sounds have been closely clustered yet might still signify a distinct event — say a door being closed vs a cupboard being closed. Over time, the system might be able to make an educated either/or guess and then present that to the user to confirm.

They’ve put together the video below, demoing the concept in a kitchen environment.

In their paper presenting the research they point out that while smart devices are becoming more prevalent in homes and offices they tend to lack “contextual sensing capabilities” — with only “minimal understanding of what is happening around them”, which in turn limits “their potential to enable truly assistive computational experiences”.

And while acoustic activity recognition is not itself new, the researchers wanted to see if they could improve on existing deployments which either require a lot of manual user training to yield high accuracy; or use pre-trained general classifiers to work ‘out of the box’ but — since they lack data for a user’s specific environment — are prone to low accuracy.

Listen Learner is thus intended as a middle ground to increase utility (accuracy) without placing a high burden on the human to structure the data. The end-to-end system automatically generates acoustic event classifiers over time, with the team building a proof-of-concept prototype device to act like a smart speaker and pipe up to ask for human input.

“The algorithm learns an ensemble model by iteratively clustering unknown samples, and then training classifiers on the resulting cluster assignments,” they explain in the paper. “This allows for a ‘one-shot’ interaction with the user to label portions of the ensemble model when they are activated.”
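The loop they describe, clustering unknown samples and only asking the user once a cluster has formed, might be sketched roughly as follows. This is a toy nearest-centroid version with illustrative distance thresholds and 2‑D stand-in “embeddings”, not the paper’s actual algorithm:

```python
import numpy as np

def cluster_unknowns(embeddings, threshold=1.0):
    """Greedy clustering: assign each embedding to the nearest existing
    centroid if within `threshold`, otherwise start a new cluster."""
    centroids, assignments = [], []
    for e in embeddings:
        if centroids:
            dists = [np.linalg.norm(e - c) for c in centroids]
            best = int(np.argmin(dists))
            if dists[best] < threshold:
                assignments.append(best)
                # fold the new member into a running centroid estimate
                centroids[best] = (centroids[best] + e) / 2
                continue
        centroids.append(e.copy())
        assignments.append(len(centroids) - 1)
    return np.array(centroids), assignments

def classify(embedding, centroids, labels, threshold=1.0):
    """Nearest-centroid classifier: return the user-given label for the
    closest cluster, or None to trigger a one-shot question to the user."""
    dists = np.linalg.norm(centroids - embedding, axis=1)
    best = int(np.argmin(dists))
    if dists[best] < threshold:
        return labels.get(best)  # None until the user has labelled this cluster
    return None

# Toy 2-D "embeddings": two well-separated recurring sound events
rng = np.random.default_rng(0)
faucet = rng.normal([0, 0], 0.05, size=(5, 2))
door = rng.normal([3, 3], 0.05, size=(5, 2))
centroids, assignments = cluster_unknowns(np.vstack([faucet, door]))

labels = {assignments[0]: "faucet"}  # the user answered one question
print(classify(np.array([0.01, 0.02]), centroids, labels))  # "faucet"
print(classify(np.array([3.0, 3.0]), centroids, labels))    # None -> ask user
```

The key property mirrored here is that the user is only queried once per cluster; every later sound that lands in a labelled cluster is recognized silently.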

Audio events are segmented using an adaptive threshold that triggers when the microphone input level is 1.5 standard deviations higher than the mean of the past minute.

“We employ hysteresis techniques (i.e., for debouncing) to further smooth our thresholding scheme,” they add, further noting that: “While many environments have persistent and characteristic background sounds (e.g., HVAC), we ignore them (along with silence) for computational efficiency. Note that incoming samples were discarded if they were too similar to ambient noise, but silence within a segmented window is not removed.”
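A toy version of that thresholding scheme might look like the following. The sliding-window length and the lower release threshold used for hysteresis are assumptions for illustration; the paper only specifies the 1.5-standard-deviation onset rule over the past minute:

```python
import numpy as np

def segment_events(levels, window=60, k_on=1.5, k_off=0.5):
    """Return (start, end) index pairs of detected acoustic events.

    Onset fires when the level exceeds mean + k_on * std of the past
    `window` samples; the event stays open until the level drops below a
    lower mean + k_off * std threshold (hysteresis, for debouncing)."""
    events, start = [], None
    for i in range(window, len(levels)):
        past = levels[i - window:i]
        mu, sigma = past.mean(), past.std()
        if start is None and levels[i] > mu + k_on * sigma:
            start = i                      # rising edge: event onset
        elif start is not None and levels[i] < mu + k_off * sigma:
            events.append((start, i))      # falling edge: event offset
            start = None
    return events

# Synthetic input level: flat background with one loud burst
levels = np.ones(200)
levels[120:140] += 5.0
print(segment_events(levels))  # [(120, 140)]
```

Because the release threshold sits well below the onset threshold, small dips in level during an event don’t split it into multiple fragments.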

The CNN (convolutional neural network) audio model they’re using was initially trained on the YouTube-8M dataset — augmented with a library of professional sound effects, per the paper.

“The choice of using deep neural network embeddings, which can be seen as learned low-dimensional representations of input data, is consistent with the manifold assumption (i.e., that high-dimensional data roughly lie on a low-dimensional manifold). By performing clustering and classification on this low-dimensional learned representation, our system is able to more easily discover and recognize novel sound classes,” they add.