When I heard a demo of Apple’s new HomePod smart speaker last month, I was told that the speaker sends audio signals out into the room, then uses software to tailor its output to the environment. But the details of how that actually works were a little thin.

An Apple patent published today sheds more light on the fancy software algorithms Apple engineers created to help the HomePod shape its sound to the room.

The patent doesn’t make clear whether this is the technology used in the HomePod, but given Apple’s boasts, there’s a good chance it is. The challenge the patent tackles is making the HomePod’s audio sound good no matter where the device sits in a room. If the HomePod is placed in the corner of a room, for instance, the close presence of the two converging walls can make the audio sound bassy and muddy, according to the patent.

When the HomePod’s speaker starts emitting sound, an external microphone on the device measures the acoustic pressure of the sound waves returning after bouncing off the walls, ceiling, floor, and objects in the room. From those measurements, the device works out the acoustic response of the room. So if the HomePod is in a corner, the microphone will detect the close presence of the two walls from the strength of the sound waves bouncing off them and returning.
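As a rough illustration of that measurement step, here’s a toy Python sketch, my own and not from the patent, that estimates the distance to a nearby wall from the delay of its echo using a naive cross-correlation between the emitted and recorded signals. The sample rate, signals, and single-reflection assumption are all invented for the example:

```python
# Toy sketch (not from Apple's patent): estimate the distance to a wall
# from the delay of its echo, as an external microphone might.
# Assumes one dominant reflection; sample rate and signals are invented.

SAMPLE_RATE = 48_000        # samples per second (assumption)
SPEED_OF_SOUND = 343.0      # metres per second at room temperature

def echo_delay_samples(emitted, recorded):
    """Find the lag (in samples) where the recorded signal best matches
    the emitted one, via a naive cross-correlation."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(1, len(recorded) - len(emitted) + 1):
        score = sum(e * recorded[lag + i] for i, e in enumerate(emitted))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

def wall_distance_m(lag_samples):
    """Sound travels to the wall and back, so halve the round trip."""
    round_trip_s = lag_samples / SAMPLE_RATE
    return round_trip_s * SPEED_OF_SOUND / 2

# Synthetic example: a short click whose attenuated echo arrives
# 140 samples later, i.e. from a wall about half a metre away.
click = [1.0, 0.6, -0.4, 0.1]
recording = [0.0] * 140 + [0.5 * s for s in click] + [0.0] * 50
lag = echo_delay_samples(click, recording)
print(lag, round(wall_distance_m(lag), 2))  # prints: 140 0.5
```

A real device would measure a full impulse response covering many overlapping reflections, but the underlying idea of timing and weighing the returning sound waves is the same.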

That microphone then shares what it has learned with a microchip inside the speaker. That chip also collects information from an internal microphone listening only to the speaker output. Now that it knows both what the speaker is outputting and how that output is being received out in the room, it can, through some fairly intense algorithms, instruct the speaker’s digital signal processor to tweak the equalization of the music to fit the room.
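To make that comparison step concrete, here’s a simplified Python sketch, again mine rather than Apple’s algorithm, that compares the energy of the internal reference signal and the external room measurement in a few frequency bands and derives the dB correction gains a DSP could apply. The band edges and test signals are invented for illustration:

```python
# Toy sketch (not Apple's algorithm): compare what the speaker put out
# (internal mic) with what was measured in the room (external mic), band
# by band, and derive correction gains in dB for the DSP to apply.

import math

def band_energy(samples, sample_rate, f_lo, f_hi):
    """Naive DFT energy of `samples` between f_lo and f_hi (Hz)."""
    n = len(samples)
    energy = 0.0
    for k in range(1, n // 2):
        freq = k * sample_rate / n
        if f_lo <= freq < f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            energy += re * re + im * im
    return energy

def correction_gains_db(reference, measured, sample_rate, bands):
    """For each (f_lo, f_hi) band, return the dB boost or cut that would
    make the measured (room) energy match the reference energy."""
    gains = []
    for f_lo, f_hi in bands:
        ref = band_energy(reference, sample_rate, f_lo, f_hi)
        room = band_energy(measured, sample_rate, f_lo, f_hi)
        gains.append(10 * math.log10(ref / room))
    return gains

# Example: corner placement doubles the bass amplitude but leaves the
# treble alone, so the bass band needs roughly a -6 dB cut.
sr, n = 8000, 800
reference = [math.sin(2 * math.pi * 100 * i / sr)
             + math.sin(2 * math.pi * 1000 * i / sr) for i in range(n)]
measured = [2 * math.sin(2 * math.pi * 100 * i / sr)
            + math.sin(2 * math.pi * 1000 * i / sr) for i in range(n)]
gains = correction_gains_db(reference, measured, sr, [(50, 200), (500, 2000)])
print(gains)  # roughly -6 dB for the bass band, ~0 dB for the treble band
```

A shipping product would use far finer frequency resolution and a proper FFT, but the principle of correcting the room’s measured deviation from the known reference output is what the patent describes.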

The patent says the same method can be used to balance the output of two or more speakers playing in the same room.