Abstract

We present motion magnification, a technique that acts like a microscope for visual motion. It can amplify subtle motions in a video sequence, allowing for visualization of deformations that would otherwise be invisible. To achieve motion magnification, we need to accurately measure visual motions and group the pixels to be modified. After an initial image registration step, we measure motion by a robust analysis of feature point trajectories, and segment pixels based on similarity of position, color, and motion. A novel measure of motion similarity groups even very small motions according to correlation over time, which often relates to physical cause. An outlier mask marks observations not explained by our layered motion model, and those pixels are simply reproduced on the output from the original registered observations. The motion of any selected layer may be magnified by a user-specified amount; texture synthesis fills in holes revealed by the amplified motions. The resulting motion-magnified images can reveal or emphasize small motions in the original sequence, as we demonstrate with deformations in load-bearing structures, subtle motions or balancing corrections of people, and rigid structures bending under hand pressure.
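The key idea of grouping "according to correlation over time" can be illustrated with a small sketch. The function below is a hedged, simplified stand-in for the paper's similarity measure, not its exact formulation: it correlates the zero-mean displacement signals of two tracked feature trajectories, so even very small motions that move together score near 1.

```python
import numpy as np

def motion_similarity(traj_a, traj_b):
    """Correlation-based similarity between two feature-point trajectories.

    traj_a, traj_b: arrays of shape (T, 2) holding (x, y) positions over
    T frames. Removing the mean position leaves only the motion signal,
    so the score depends on how the points move, not where they sit.
    (Illustrative stand-in for the paper's measure.)
    """
    a = traj_a - traj_a.mean(axis=0)   # zero-mean displacement signal
    b = traj_b - traj_b.mean(axis=0)
    num = np.sum(a * b)
    den = np.sqrt(np.sum(a * a) * np.sum(b * b))
    return num / den if den > 0 else 0.0
```

Because the measure is normalized, a trajectory with one tenth the amplitude of another but the same temporal pattern still scores 1, which is exactly the property needed to group very small motions with the larger motions that share their physical cause.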

System Overview

To find small motions in a video and magnify them, we model the appearance of the input video as translations of the pixel intensities observed in a reference frame. Naively, this sounds like one might (a) compute the translation of each pixel from one frame to the next, and (b) re-render the video with small motions amplified. Unfortunately, such an approach would lead to artificial transitions between amplified and unamplified pixels within a single structure. Most of the steps of motion magnification therefore relate to reliably estimating motions, and to clustering pixels whose motions should be magnified as a group. Below we motivate and summarize each step of the motion magnification processing. The processing steps are illustrated with the swing set images in the figure below. The details can be found in our publication [pdf download].
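The translation model lends itself to a compact sketch of the amplification step. The helper below is an assumption-laden simplification of the per-layer rendering described above: it scales a tracked trajectory's displacement from its reference-frame position by a user-specified factor `alpha` (whether the paper applies `alpha` or `1 + alpha` to the displacement is a detail we gloss over here).

```python
import numpy as np

def magnify_trajectory(traj, alpha, ref_index=0):
    """Scale a feature trajectory's motion about a reference frame.

    traj: (T, 2) array of (x, y) positions across T frames.
    alpha: user-specified magnification factor applied to the
    displacement from the reference-frame position.
    (Hedged stand-in for the per-layer amplification, not the
    paper's exact rendering procedure.)
    """
    ref = traj[ref_index]                 # position in the reference frame
    return ref + alpha * (traj - ref)     # amplified displacement
```

Applying this to every trajectory in a layer, rather than to isolated pixels, is what avoids the artificial transitions mentioned above: all pixels grouped into the layer move coherently under the same magnified motion.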

Overview of the system on the swing set example: (a) registered input frame; (b) clustered trajectories of tracked features; (c) layers of related motion and appearance; (d) motion magnified, showing holes; (e) after texture in-painting to fill holes; (f) after user's modification to the segmentation map in (c).
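Panels (d) and (e) of the figure correspond to the warp-then-fill stage: amplified motion pushes pixels to new locations, leaving holes where no source pixel lands, which texture synthesis then fills. The sketch below illustrates only the hole-revealing forward warp on a grayscale frame; the layer compositing and the actual in-painting are omitted, and the dense per-pixel `flow` field is a hypothetical input standing in for the motion interpolated from feature trajectories.

```python
import numpy as np

def render_magnified(frame, flow, alpha):
    """Forward-warp a grayscale frame by an amplified motion field.

    frame: (H, W) array of intensities.
    flow:  (H, W, 2) per-pixel (dy, dx) motion of the selected layer.
    Returns the warped frame and a boolean mask of holes, i.e. the
    destinations no source pixel mapped to, which in-painting would fill.
    (Illustrative sketch only, not the paper's layered renderer.)
    """
    H, W = frame.shape
    out = np.zeros_like(frame)
    holes = np.ones((H, W), dtype=bool)   # everything is a hole until written
    for y in range(H):
        for x in range(W):
            dy, dx = alpha * flow[y, x, 0], alpha * flow[y, x, 1]
            ny, nx = int(round(y + dy)), int(round(x + dx))
            if 0 <= ny < H and 0 <= nx < W:
                out[ny, nx] = frame[y, x]
                holes[ny, nx] = False
    return out, holes
```

Even this toy version shows why magnification reveals unseen content: the larger `alpha` is, the farther pixels travel from their original positions, and the larger the vacated regions that must be synthesized.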

Applications

We foresee broad application of this algorithm in fields related to visualization, such as education, physical diagnosis, pre-measurement planning for precise physical measurements, and surveillance.

Downloads