Disassembly Report

Time to disassemble: 59 minutes, 30 seconds

Number of parts: 50

Model:

Made in: China

Notes: How do you make an affordable device that can help with anything you ask it? It requires two components: a way to communicate, and a way to think. Either one could be expensive or overly complicated. The Echo represents an elegant solution: The small, cheap stuff that never needs to change—microphones that allow it to hear, speakers that allow it to talk—fits in a small package that can sit on a shelf. And the stuff that's hard, that must evolve, and might otherwise require a computer the size of a car—intelligence—resides on that giant, remote computer we call the Cloud.

Todd McLellan

Listening

Echo is always listening (well, unless you press the microphone-off button to make it stop). It has seven microphones: six spaced evenly around the top's circumference, under the microphone grille, and one more in the center. Once it hears you utter its wake word (which for most people is Alexa), it isolates audio from your direction using a process called beamforming, which analyzes and manipulates the sound picked up by multiple microphones to focus on your voice. (The technique is similar to what noise-canceling headphones do.)
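The simplest version of this idea is a delay-and-sum beamformer: if you know how much later your voice arrives at each microphone, you can shift each channel back into alignment and average them, so the voice adds up coherently while noise from other directions partially cancels. Here is a minimal sketch in Python; the microphone delays and signal are invented for illustration and have nothing to do with the Echo's actual array geometry or firmware:

```python
import numpy as np

def delay_and_sum(signals, delays_samples):
    """Align each microphone channel by removing its arrival delay, then average.

    signals: 2-D array of shape (n_mics, n_samples)
    delays_samples: per-channel delay, in whole samples
    """
    n_mics, n_samples = signals.shape
    out = np.zeros(n_samples)
    for sig, d in zip(signals, delays_samples):
        out += np.roll(sig, -d)  # shift the channel back into alignment
    return out / n_mics

# Toy scene: a 1 kHz "voice" reaching three mics at different times,
# plus independent noise on each channel.
fs = 16_000
t = np.arange(fs) / fs
voice = np.sin(2 * np.pi * 1000 * t)
delays = [0, 3, 6]  # hypothetical travel-time differences, in samples
rng = np.random.default_rng(0)
mics = np.stack([np.roll(voice, d) + 0.5 * rng.standard_normal(fs)
                 for d in delays])

beamformed = delay_and_sum(mics, delays)
```

Averaging N aligned channels with independent noise cuts the noise power by roughly a factor of N; a real beamformer also estimates the delays on the fly from the array geometry so it can steer toward wherever you happen to be standing.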


Thinking

Suppose you say, "Alexa, play Kendrick Lamar." Echo—alerted by its wake word—flashes its LEDs to let you know it heard you and captures an analog recording of what you've said. After running through an analog-to-digital converter on the microphone input PCB, the audio, now a digital file, gets sent to the Cloud via the Wi-Fi radio. That's where serious processing power turns your voice into text, figures out what it means, and decides how to handle your request. In fact, the Echo does most of its computing in the Cloud—not just speech recognition, but also whatever else it takes to help you out, like accessing Spotify. The one notable exception is recognizing the wake word itself. Echo has to be able to instantly understand when you're talking to it, so speech-recognition software on the main PCB listens specifically for the wake word.
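That analog-to-digital step is just sampling and quantization: the converter measures the microphone voltage thousands of times per second and rounds each measurement to a fixed-width integer. A sketch of 16-bit PCM quantization; the 16 kHz sample rate and 16-bit depth here are common choices for speech audio, not Amazon's published spec:

```python
import numpy as np

def quantize_to_pcm16(analog, full_scale=1.0):
    """Map a continuous waveform in [-full_scale, full_scale] to 16-bit PCM,
    the kind of digital stream a device could ship upstream over Wi-Fi."""
    clipped = np.clip(analog / full_scale, -1.0, 1.0)
    return np.round(clipped * 32767).astype(np.int16)

fs = 16_000                      # assumed speech sample rate
t = np.arange(fs // 100) / fs    # 10 ms of audio
wave = 0.8 * np.sin(2 * np.pi * 440 * t)  # stand-in for a voice signal
pcm = quantize_to_pcm16(wave)
```

Once the waveform is an array of integers like `pcm`, it can be framed into packets and sent to the Cloud, where the heavyweight speech recognition runs.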

Speaking


While Echo is accessing Spotify, the Cloud beams down instructions to tell you what it's doing. This requires more Cloud-computing work, using text-to-speech software to build and then vocalize a message ("Playing songs by Kendrick Lamar on Spotify"). Finally, once it has connected to Spotify, found the music, and gotten a song ready, Echo's audio amplifier PCB fires up the speakers, which take up most of the space inside the device. Halfway down the cylinder, a two-and-a-half-inch low-frequency speaker, or woofer, sits above a two-inch high-frequency speaker, or tweeter. The speakers point downward, so a pair of baffles at the bottom called deflectors—one tuned to the frequencies emitted by each speaker—redirect the sound outward, in all directions, through the speaker grille. And because Echo has no subwoofer, it has a plastic tube above the woofer called a bass reflex port. The port takes some of the low-end sound waves that project out of the back of the woofer and reshapes them so they reinforce the sound that comes out the front. The level of all this noise is controlled by the rotating volume ring that rides on volume gearing inside the top of the device. Kendrick Lamar should be played loud.
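A bass reflex port works as a Helmholtz resonator: the plug of air in the tube acts like a mass bouncing on the springy air trapped in the enclosure, and near the resonant frequency the port's output reinforces the woofer's. That tuning frequency follows from the port's area and length and the box volume. The dimensions below are hypothetical round numbers for a small speaker, not measurements of the Echo:

```python
import math

def helmholtz_port_frequency(port_area_m2, port_length_m, box_volume_m3, c=343.0):
    """Tuning frequency of a bass-reflex (Helmholtz) port:

        f = (c / 2*pi) * sqrt(A / (V * L_eff))

    where L_eff adds an end correction of roughly 0.85 * radius
    at each open end of the tube, and c is the speed of sound.
    """
    r = math.sqrt(port_area_m2 / math.pi)
    l_eff = port_length_m + 2 * 0.85 * r
    return (c / (2 * math.pi)) * math.sqrt(port_area_m2 / (box_volume_m3 * l_eff))

# Hypothetical dimensions: ~3 cm^2 port, 8 cm long, 1.5-liter enclosure.
f_b = helmholtz_port_frequency(port_area_m2=3.0e-4,
                               port_length_m=0.08,
                               box_volume_m3=1.5e-3)
```

With these made-up numbers the port tunes to somewhere around 80 Hz, which is the general neighborhood where a small woofer needs help; the designer picks the tube's length and diameter to put the boost where the driver rolls off.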

This story appears in the March 2017 Popular Mechanics.
