Introduction

I have a high-end baby monitor that I used with baby #1. It worked well, but I was often left wondering:

How long has the baby been babbling/crying/asleep?

How many times has the baby woken up in the past X hours?

This information is helpful for sleep training, where you want to give the baby a chance to resettle/self-soothe before going into the room.

In this post I will present an open-source “sleep monitor” using a Raspberry Pi, a USB microphone and some Python. The Raspberry Pi is perfect for this project as it is much cheaper than a baby monitor, and uses very little power.

NOTE: As this is a sleep monitor, not a baby monitor, the device does not stream video or audio. There are a few projects out there that use an RPi as a baby monitor, but I haven’t tried them.

LittleSleeper Demo

LittleSleeper runs a web server on the Raspberry Pi. The output includes the current state of the baby, a plot of volume levels for the past hour, and a log of events for the past 12 hours:

Building LittleSleeper

Here is what you need:

A Raspberry Pi with internet access. I used a Model B, but any model will do. For the OS I used a fresh install of NOOBS version 1.4.0. Once again, any OS should be fine.

A USB microphone. I had a USB webcam (Logitech C270 HD) sitting around from a previous project. It plays nicely with the RPi, and the mic has high sensitivity.

Optional: Powered USB hub

After you plug it all together, run the following to install the software:

```
# make sure everything is up to date
sudo apt-get update
sudo apt-get upgrade

# get pip (for installing python libraries)
curl --silent --show-error --retry 5 https://bootstrap.pypa.io/get-pip.py | sudo python2.7

# install python libraries
sudo apt-get install python-numpy
sudo apt-get install python-scipy
sudo pip install tornado

# install pyaudio
sudo apt-get install libportaudio0 libportaudio2 libportaudiocpp0 portaudio19-dev
git clone http://people.csail.mit.edu/hubert/git/pyaudio.git
cd pyaudio
sudo python setup.py install
cd ..

# install the LittleSleeper code
git clone https://github.com/NeilYager/LittleSleeper.git
```

Reboot your RPi. To start LittleSleeper run the following commands:

```
cd LittleSleeper
nohup python audio_server.py &
nohup python web_server.py &
```

NOTE: nohup starts a process in such a way that it will not be killed when you end the current session. For debugging it is better to open two SSH terminals, and run python audio_server.py in one and python web_server.py in the other.

Point a browser (on any device connected to the local network) to:

http://raspberrypi:8090

and that’s all there is to it!

NOTE: The software assumes that the hostname “raspberrypi” can be resolved. Some home routers automatically act as DNS servers for connected hosts, so this may work out of the box. Otherwise you will need to add an entry mapping “raspberrypi” to the IP address of the RPi in your hosts file. Alternatively, hardcode the IP address in the index.html file.
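For example, on a Linux or macOS client the hosts-file entry looks like this (192.168.1.50 is a placeholder; substitute your Pi’s actual IP address):

```
# map the hostname "raspberrypi" to the Pi's IP address
echo "192.168.1.50  raspberrypi" | sudo tee -a /etc/hosts
```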

How the software works

Reading data from the microphone is an I/O operation that blocks the execution of the current process. This is a problem for a web application, as we need to be able to handle requests at any time. Therefore, we will have several processes running:

Microphone process: grabs a chunk of data from the microphone, does some simple processing, stores the result, and repeats

Audio server process: handles requests from the web server for the latest audio data

Web server process: requests data from the audio server and pushes the results to all browsers

This is how the various processes communicate with each other:
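The round trip between the web server and the audio server can be sketched with a minimal, self-contained example. Here a background thread and an OS-chosen port (port 0) stand in for the separate processes and the fixed port 6000 used by the real code:

```python
from multiprocessing.connection import Listener, Client
import threading

# port 0 lets the OS pick a free port for this demo; the real code
# uses the fixed address ('localhost', 6000)
listener = Listener(('localhost', 0))

def audio_server():
    conn = listener.accept()       # block until the web server connects
    params = conn.recv()           # receive the processing parameters
    conn.send({'echo': params})    # reply with (dummy) results
    conn.close()

server = threading.Thread(target=audio_server)
server.start()

# web server side: connect, send parameters, receive results
conn = Client(listener.address)
conn.send({'noise_threshold': 0.25})
reply = conn.recv()
conn.close()
server.join()
listener.close()
```

The same connect / send / recv / close pattern appears in process_requests (server side) and broadcast_mic_data (client side) below.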

All of the code is available at: https://github.com/NeilYager/LittleSleeper. In the rest of this post I will go through the source in detail.

audio_server.py

audio_server.py implements the microphone process and the audio server. It starts by importing libraries and defining some constants:

```python
import pyaudio
import numpy as np
import time
import multiprocessing as mp
from multiprocessing.connection import Listener
import ctypes
from scipy import ndimage, interpolate
from datetime import datetime

CHUNK_SIZE = 8192
AUDIO_FORMAT = pyaudio.paInt16
SAMPLE_RATE = 16000
BUFFER_HOURS = 12
AUDIO_SERVER_ADDRESS = ('localhost', 6000)
```

The microphone process and audio server communicate using shared memory. The following code initializes the shared arrays, creates a “Lock” for synchronization, and kicks off the processes:

```python
def init_server():
    # figure out how big the buffer needs to be to contain BUFFER_HOURS of audio
    buffer_len = int(BUFFER_HOURS * 60 * 60 * (SAMPLE_RATE / float(CHUNK_SIZE)))

    # create shared memory
    lock = mp.Lock()
    shared_audio = mp.Array(ctypes.c_short, buffer_len, lock=False)
    shared_time = mp.Array(ctypes.c_double, buffer_len, lock=False)
    shared_pos = mp.Value('i', 0, lock=False)

    # start 2 processes:
    # 1. a process to continuously monitor the audio feed
    # 2. a process to handle requests for the latest audio data
    p1 = mp.Process(target=process_audio, args=(shared_audio, shared_time, shared_pos, lock))
    p2 = mp.Process(target=process_requests, args=(shared_audio, shared_time, shared_pos, lock))
    p1.start()
    p2.start()
```

Microphone process

The microphone process grabs a chunk of data from the microphone, stores the maximum volume during that interval, and repeats:

```python
def process_audio(shared_audio, shared_time, shared_pos, lock):
    # open default audio input stream
    p = pyaudio.PyAudio()
    stream = p.open(format=AUDIO_FORMAT, channels=1, rate=SAMPLE_RATE,
                    input=True, frames_per_buffer=CHUNK_SIZE)

    while True:
        # grab audio and timestamp
        audio = np.fromstring(stream.read(CHUNK_SIZE), np.int16)
        current_time = time.time()

        # acquire lock
        lock.acquire()

        # record current time
        shared_time[shared_pos.value] = current_time

        # record the maximum volume in this time slice
        shared_audio[shared_pos.value] = np.abs(audio).max()

        # increment counter, wrapping around the circular buffer
        shared_pos.value = (shared_pos.value + 1) % len(shared_time)

        # release lock
        lock.release()
```

Audio server

The audio server acts as a middleman between the microphone process and the web server. It communicates with the web server over sockets, using multiprocessing.connection.Listener. This is a clean mechanism that works well on a Raspberry Pi. The first step is to create a new Listener and wait for a connection.

When a new request is received, acquire the Lock and create a working copy of the shared data. The next step is to process the audio data buffer. I’m not going to include all of the code below as it is about 100 lines. Here is the high-level outline of the main steps:

Normalize the volume level to the range [0, 1] (using a user-supplied “upper sound limit” parameter)

Apply smoothing using a Gaussian filter

Re-sample the previous hour of data so there is one value per second. This depends on the microphone’s sampling rate and the chunk size when reading the data.

Classify the entire time series into time blocks of two types: “noise” (volume above a user-defined threshold) and “silent” (volume below the threshold)
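These steps can be sketched as follows. This is a simplified, self-contained version, not the actual source: the constants mirror the ones defined earlier, the function name is mine, and the real code additionally unrolls the circular buffer and builds the event blocks:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

CHUNK_SIZE = 8192
SAMPLE_RATE = 16000

def summarize_audio(volumes, upper_limit=25000.0, noise_threshold=0.25):
    # 1. normalize the volume level to [0, 1], clipping at the upper limit
    signal = np.clip(volumes / upper_limit, 0.0, 1.0)

    # 2. smooth with a Gaussian filter to suppress brief spikes
    signal = gaussian_filter1d(signal, sigma=2)

    # 3. re-sample to one value per second: each input value covers
    #    CHUNK_SIZE / SAMPLE_RATE seconds of audio
    seconds_per_value = CHUNK_SIZE / float(SAMPLE_RATE)
    t_old = np.arange(len(signal)) * seconds_per_value
    t_new = np.arange(0.0, t_old[-1], 1.0)
    resampled = np.interp(t_new, t_old, signal)

    # 4. classify each second as "noise" (True) or "silent" (False)
    is_noise = resampled > noise_threshold
    return resampled, is_noise
```

The full implementation then merges consecutive noisy seconds into blocks, ignoring quiet gaps shorter than MIN_QUIET_TIME and noise bursts shorter than MIN_NOISE_TIME.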

The last step is to create a dictionary with all of the results and send it to the web server.

```python
def process_requests(shared_audio, shared_time, shared_pos, lock):
    listener = Listener(AUDIO_SERVER_ADDRESS)
    while True:
        conn = listener.accept()

        # get some parameters from the client
        parameters = conn.recv()

        # acquire lock
        lock.acquire()

        # convert to numpy arrays and get a copy of the data
        time_stamps = np.frombuffer(shared_time, np.float64).copy()
        audio_signal = np.frombuffer(shared_audio, np.int16).astype(np.float32)
        current_pos = shared_pos.value

        # release lock
        lock.release()

        # process audio data
        # code not included - see full source for details

        # return results to web server
        results = {'audio_plot': audio_plot,
                   'crying_blocks': crying_blocks,
                   'time_crying': time_crying,
                   'time_quiet': time_quiet}
        conn.send(results)
        conn.close()
```

web_server.py

I used Tornado as the web framework because it is written in Python and supports WebSockets. My first thought was to have the clients (i.e. browsers viewing the site) make an AJAX call every second to get the latest information. However, if there are many clients this could bog down the server with lots of (computationally expensive) requests for the same information. With WebSockets, the web server can push new data to all clients as soon as it is available.

The file starts by importing some libraries and defining some constants. These constants may need to be adjusted depending on the environment where the sleep monitor is placed.

```python
import os
from datetime import datetime
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.websocket
import tornado.gen
from multiprocessing.connection import Client

AUDIO_SERVER_ADDRESS = ('localhost', 6000)
HTTP_PORT = 8090

# The highest (practical) volume for the microphone, which is used
# to normalize the signal. This depends on: microphone sensitivity,
# distance to crib, amount of smoothing, etc.
UPPER_LIMIT = 25000

# After the signal has been normalized to the range [0, 1], volumes
# higher than this will be classified as noise. Vary based on:
# background noise, how loud the baby is, etc.
NOISE_THRESHOLD = 0.25

# seconds of quiet before transitioning from "noise" to "quiet"
MIN_QUIET_TIME = 30

# seconds of noise before transitioning from "quiet" to "noise"
MIN_NOISE_TIME = 5


class IndexHandler(tornado.web.RequestHandler):
    def get(self):
        self.render('index.html')
```

Now we set up Tornado to handle WebSocket connections:

```python
clients = []

class WebSocketHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print "New connection"
        clients.append(self)

    def on_close(self):
        print "Connection closed"
        clients.remove(self)
```

The next function requests the latest data from the audio server (over a local TCP socket), and pushes the results to all browsers viewing the page (using WebSockets):

```python
def broadcast_mic_data():
    # get the latest data from the audio server
    parameters = {"upper_limit": UPPER_LIMIT,
                  "noise_threshold": NOISE_THRESHOLD,
                  "min_quiet_time": MIN_QUIET_TIME,
                  "min_noise_time": MIN_NOISE_TIME}
    conn = Client(AUDIO_SERVER_ADDRESS)
    conn.send(parameters)
    results = conn.recv()
    conn.close()

    # send results to all clients
    now = datetime.now()
    results['date_current'] = '{dt:%A} {dt:%B} {dt.day}, {dt.year}'.format(dt=now)
    results['time_current'] = now.strftime("%I:%M:%S %p").lstrip('0')
    results['audio_plot'] = results['audio_plot'].tolist()
    for c in clients:
        c.write_message(results)
```

The main function starts the web application. The only trick is to use PeriodicCallback to call broadcast_mic_data every second.

```python
def main():
    settings = {
        "static_path": os.path.join(os.path.dirname(__file__), "static"),
    }
    app = tornado.web.Application(
        handlers=[
            (r"/", IndexHandler),
            (r"/ws", WebSocketHandler),
        ],
        **settings
    )
    http_server = tornado.httpserver.HTTPServer(app)
    http_server.listen(HTTP_PORT)
    print "Listening on port:", HTTP_PORT

    main_loop = tornado.ioloop.IOLoop.instance()
    scheduler = tornado.ioloop.PeriodicCallback(broadcast_mic_data, 1000, io_loop=main_loop)
    scheduler.start()
    main_loop.start()
```

index.html

The HTML file is mostly self-explanatory. Apart from appearance, its main function is to receive a message from the web application and update the display accordingly:

```javascript
var socket = new WebSocket("ws://raspberrypi:8090/ws");
socket.onmessage = function (message) {
    var data = JSON.parse(message.data);

    // update the text display
    $("#time_quiet").text(data.time_quiet);
    $("#time_crying").text(data.time_crying);

    // update the history table
    var table = "<tr><th>Baby noise start</th><th>Duration</th></tr>";
    $.each(data.crying_blocks, function (index, crying_block) {
        table += "<tr><td>" + crying_block.start_str + "</td><td>" +
                 crying_block.duration + "</td></tr>";
    });
    $("#history_table").html(table);

    // update the plot of the volume levels for the past hour
    var plot_data = data.audio_plot;
    var vals = [];
    for (var i = 0; i < plot_data.length; i++) {
        vals.push([i, plot_data[i]]);
    }
    plot.setData([vals]);
    plot.draw();
};
```

Future directions

Here are some things I would like to add:

An audio alert if there has been noise for X minutes

Store the event log in a database, and create plots of the baby’s sleeping patterns over time

Encode a sleep training routine, e.g. LittleSleeper could automatically suggest a suitable time to go in and soothe the baby.

That’s it for now. Check out the full code on GitHub. If you have any questions or comments, I can be contacted at neil _at_ aicbt.com.