Eddie's Lounge room sonar project.

This project is dormant while I play with other stuff. This page will be a bit basic for some, but hopefully most people wanting to play with sonar will get something from it. Almost every PC has a sound card and many also have a microphone, but I've never heard of anyone using them to build a sonar system. Most people associate sonar with ultrasound, even though the "Red October" ping was clearly audible. Echo-locating bats mostly use ultrasound, as does most marine sonar and most sonar range finders. There are also sonars which use lower frequencies. Lower frequencies penetrate deeper and are largely used for "looking" through the ocean floor to depths of tens of meters or more. The PC program I am writing uses a swept-frequency chirp and some very basic signal processing to detect echoes off complex targets. I think the performance I've achieved from a $30 (second-hand) sound card and a $13 mic is amazing. This photo shows my test setup. The multimedia speaker is clearly visible to the right of the mouse. Less obvious is the mic, which is near the front of the box, held in some yellow-tack. The white broomstick lying on top of the monitor is one of my test targets.

The Chirp.

Simple sonars use a single-frequency pulse, but we can do better than that. The optimal chirp will depend on the application, but for the "in house" stuff I've been playing with, this one has been my favourite: it sweeps from 5 to 20 kHz and is about 4.5 milliseconds long. I use a correlation procedure to find the echoes among the jumble of sounds recorded by the mic. The correlator works by multiplying the recorded sound with the original chirp (and summing) at each possible time offset. Ideally we want a sharp spike whenever our chirp echo is found. Repetitive waveforms such as single-frequency sine waves are really bad for this; a chirp is fairly good.
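Here's a minimal sketch of the two pieces described above: generating a 5 to 20 kHz, 4.5 ms linear chirp, and sliding it along a recording to find echoes. The 44.1 kHz sample rate and the numpy implementation are my assumptions; the article doesn't specify either.

```python
import numpy as np

FS = 44100            # sample rate (Hz) -- assumed, not stated in the article
F0, F1 = 5000, 20000  # sweep from 5 kHz to 20 kHz
DUR = 0.0045          # chirp length: 4.5 ms

def make_chirp(fs=FS, f0=F0, f1=F1, dur=DUR):
    """Linear frequency sweep: phase is the integral of the instantaneous frequency."""
    t = np.arange(int(fs * dur)) / fs
    phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * dur))
    return np.sin(phase)

def correlate_with_chirp(recording, chirp):
    """Slide the chirp along the recording; echoes show up as sharp spikes."""
    return np.correlate(recording, chirp, mode="valid")

chirp = make_chirp()
# A fake "recording": silence with one copy of the chirp buried at sample 1000.
rec = np.zeros(4000)
rec[1000:1000 + len(chirp)] += chirp
out = correlate_with_chirp(rec, chirp)
print(int(np.argmax(out)))  # the correlation peak lands at sample 1000
```

In a real run `rec` would come from the sound card, and the peak positions give echo delays (and hence distances, at roughly 343 m/s in air).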

Auto-correlation.

To see what a perfect echo would look like after correlation, we correlate the chirp with itself. This image uses "screen" co-ordinates, so down is positive. I'll fix it one day. You can see a sharp spike but also some other lumps and bumps.
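The auto-correlation can be computed the same way, by correlating the chirp with itself. This sketch (same assumed 44.1 kHz sample rate as before) shows the sharp central spike and measures the biggest of the "lumps and bumps" (sidelobes) away from it:

```python
import numpy as np

fs = 44100                      # assumed sample rate
dur = 0.0045                    # 4.5 ms chirp
t = np.arange(int(fs * dur)) / fs
chirp = np.sin(2 * np.pi * (5000 * t + (20000 - 5000) * t**2 / (2 * dur)))

# Correlate the chirp with itself; mode="full" keeps negative lags too.
ac = np.correlate(chirp, chirp, mode="full")
ac /= ac.max()                  # normalise so the main (zero-lag) spike is 1.0

centre = len(ac) // 2           # zero lag sits in the middle of the "full" output
print(ac[centre])               # the sharp spike: 1.0
# The biggest sidelobe outside the immediate neighbourhood of the spike:
sidelobe = np.abs(np.concatenate([ac[:centre - 5], ac[centre + 6:]])).max()
print(sidelobe < 1.0)           # well below the main spike, but not zero
```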

This is a chirp and its echo off my ceiling. The top trace shows the waveform recorded by the PC mic. It is plotted with two scales because the dynamic range is too high to display both the large and the small signals with one trace. On the left you can see the direct wave, straight from speaker to mic. Just after this ends, the reflection arrives. For close targets the direct and reflected chirps overlap. This isn't a problem in the current setup, but it is a limitation if the same transducer is used as both speaker and mic. It is somewhat difficult to interpret the raw signal by eye. For such a simple target you can see what's happening, but any subtle echo would be lost.

The lower trace in the image above shows the signal after it's been processed by the correlation routine. The direct wave looks a bit funny because the mic is beside the speaker, not in front of it. The echo shows a nice spike plus some secondary peaks. We expect some secondary peaks because the auto-correlation has them, but these ones are larger than they should be, which suggests that at least some of them are not real.



The next stage of processing is about finding a "trigger" to give a consistent reference when we process a series of chirps (such as in the sounder image below). My current method is to look for a correlation value that is (for example) greater than 1/5 of the largest value in the entire data set, and then search (for example) the next 100 samples for the largest peak in that region. This is what the "trig gate" setting is about. I also do a crude gain adjustment to boost the distant echoes and reduce the close ones. This is just a linear gain ramp at the moment. It probably should be a square-law function, but there are also reasons for keeping it linear which I may get into later. By the way, these images are of different chirps: same type but different instances.
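The trigger search and the linear range gain might look something like the sketch below. The function names, the 1/5 threshold and 100-sample window defaults are illustrative stand-ins for the "trig gate" settings; the article leaves the exact implementation open.

```python
import numpy as np

def find_trigger(corr, gate_fraction=0.2, gate_samples=100):
    """Find a consistent trigger point in the correlator output.

    Find the first sample exceeding gate_fraction (e.g. 1/5) of the global
    maximum, then search the next gate_samples for the largest peak in that
    window. gate_fraction/gate_samples play the role of the "trig gate".
    """
    threshold = gate_fraction * np.abs(corr).max()
    above = np.nonzero(np.abs(corr) > threshold)[0]
    if len(above) == 0:
        return None
    start = above[0]
    window = np.abs(corr[start:start + gate_samples])
    return start + int(np.argmax(window))

def range_gain(corr, trigger):
    """Crude linear gain ramp: boost distant (late) echoes, leave close ones."""
    gain = np.arange(len(corr), dtype=float) - trigger
    gain = np.clip(gain, 1, None)       # no attenuation before the trigger
    return corr * gain

# Toy correlator output: a big direct-wave spike and a faint distant echo.
corr = np.zeros(500)
corr[50] = 1.0      # direct wave
corr[400] = 0.3     # faint echo from a distant target
trig = find_trigger(corr)
print(trig)                         # triggers on the direct wave at sample 50
boosted = range_gain(corr, trig)
print(boosted[400] > boosted[50])   # True: the distant echo now dominates
```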

Knowing what the auto-correlation function looks like lets me do some more (crude but effective) processing. I simply look for the largest peak, subtract the auto-correlation function from the data in that region, and replace it with a single spike, then repeat the process until they're all done (it's faster than you might expect). This is what the "sharpen" feature does.
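This peak-subtract-replace loop can be sketched as below (it is essentially a CLEAN-style deconvolution). The function name, the fixed iteration count, and the tiny hand-made auto-correlation function are my own illustrative choices:

```python
import numpy as np

def sharpen(corr, acf, n_iter=20):
    """Sketch of the "sharpen" feature.

    Repeatedly find the largest remaining peak, subtract a scaled copy of
    the chirp's auto-correlation function centred there, and record a
    single clean spike at that position instead.
    """
    residual = corr.astype(float).copy()
    spikes = np.zeros_like(residual)
    half = len(acf) // 2
    peak_val = acf[half]                  # the ACF's zero-lag value, for scaling
    for _ in range(n_iter):
        i = int(np.argmax(np.abs(residual)))
        amp = residual[i] / peak_val
        # Subtract the ACF centred on the peak, clipped at the array edges.
        lo, hi = max(0, i - half), min(len(residual), i + half + 1)
        residual[lo:hi] -= amp * acf[(lo - i) + half:(hi - i) + half]
        spikes[i] += amp
    return spikes

# Toy data: a small ACF with sidelobes, and two echoes built from it.
acf = np.array([0.1, -0.3, 1.0, -0.3, 0.1])
corr = np.zeros(30)
corr[8:13] += 0.8 * acf       # echo centred at sample 10
corr[18:23] += 0.5 * acf      # echo centred at sample 20
spikes = sharpen(corr, acf, n_iter=2)
print(round(spikes[10], 2), round(spikes[20], 2))  # 0.8 0.5 -- two clean spikes
```

Each echo's lumps and bumps are removed along with its main peak, leaving one spike per echo.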

Here is the proof of the pudding. This is a "depth sounder" type screen. The time axis runs from left to right and the distance axis runs from top to bottom. The maximum range here is a meter or so, but the system does work over much longer distances. On the left you can see where I turned off the "sharpen" function so you can see the difference. The wiggly line is the echo off a broomstick (the same one that is in the top photo). I moved this up and down and it has left a very bright trace and some dim ghosts.

The next fun thing to do will be to try some synthetic aperture processing. The basic gist is to take lots of recordings from different positions and try to work out what the targets look like. To give some feel for what the raw data will look like, I set up two targets (call them broomsticks) and slowly moved the mic/speaker combo to take 150 or so recordings. I was only pushing the combo by hand, so the speed varied a bit. You can see the classic parabolic arcs, which are trivial to decode by eye: the point of each parabola is the location of a target. However, with some smart software you could possibly see the shape of the targets and pick them out amongst clutter (useful in my house).
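To see why a fixed target traces a curved arc as the mic/speaker combo moves, it's enough to compute the round-trip range from each recording position to the target. The geometry below (target 2 m off a 2 m track, 9 positions) is entirely made up for illustration:

```python
import numpy as np

# Hypothetical geometry: the mic/speaker combo moves along x (metres);
# a point target sits 1.0 m along the track and 2.0 m off to the side.
target_x, target_y = 1.0, 2.0
positions = np.linspace(0.0, 2.0, 9)    # 9 recording positions along the track

# Round-trip range from each position to the target and back.  Plotting
# this against position gives the curved arc seen in the sounder image.
ranges = 2 * np.hypot(positions - target_x, target_y)

# The point (apex) of the arc is where the combo passes closest to the target.
apex = positions[np.argmin(ranges)]
print(apex)             # 1.0 -- directly abeam of the target
print(ranges.min())     # 4.0 -- twice the 2 m standoff distance
```

Range is largest at the ends of the track and smallest directly abeam of the target, which is exactly the arc shape in the raw data; synthetic aperture processing essentially sums the energy back along these arcs to focus each target into a point.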