To provide effective support for the interpretation and analysis of football games, we need to observe games systematically and comprehensively, abstract from those observations, and build informative models. Coaches and scouts build such models constantly, but these models lack parameters and are narrowed down to only a few, because that's the way human cognition works.

We investigate a new generation of sport game models that:

– are based on players’ positions, motion trajectories, and ball actions as their primitive building blocks;

– represent the interaction between ball actions, game situations, and the effects of ball actions and thereby allow for more comprehensive assessment of the games;

– use concepts such as scoring opportunity, being under pressure, and passing opportunity to classify situations and interpret game events;

– can be acquired automatically by a camera-based observation system.

The advantage of such a system is its ability to model even cognitive abstractions on top of an automatically gathered pool of position data. Although it will never come close to human cognitive abilities, which draw on a much wider base of experience, automated models can support and accelerate human analysis of complex activities in sports games.

So how do we track soccer players and the ball from live broadcast footage?

To simplify the task, let's take a single camera with panning, zoom, and tilt, consider only the penalty area, and ignore scene changes and lens distortion.

With these assumptions, the problem reduces to determining the transformation matrix that maps world coordinates to image coordinates. This transformation captures the camera's tilt, zoom, and panning.

The transformation has 8 free parameters in total, so we need 4 point correspondences between image space and world space; these are the key points. The penalty-box lines are the easiest features to detect, so we use them.
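As an illustration, the 8 unknowns can be recovered from the 4 correspondences by solving an 8×8 linear system. The sketch below uses NumPy; the pixel coordinates are made-up examples, not measurements from real footage:

```python
import numpy as np

def homography_from_4_points(world_pts, image_pts):
    """Solve the 8 unknowns of a planar homography H (h33 fixed to 1)
    from 4 world->image point correspondences via a linear system."""
    A, b = [], []
    for (x, y), (u, v) in zip(world_pts, image_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def project(H, pt):
    """Map a world point into image coordinates (homogeneous divide)."""
    u, v, w = H @ np.array([pt[0], pt[1], 1.0])
    return u / w, v / w

# Penalty-box corners in metres (16.5 m deep, 40.3 m wide) mapped to
# illustrative pixel positions.
world = [(0, 0), (16.5, 0), (16.5, 40.3), (0, 40.3)]
image = [(120, 400), (520, 390), (600, 80), (90, 95)]
H = homography_from_4_points(world, image)
```

In practice the same result can be obtained with OpenCV's `cv2.getPerspectiveTransform`, which takes the same 4 source and destination points.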

Implementation

• Step 1: Isolate field and lines (colour thresholding, morphology)

• Step 2: Apply mask over original image

• Step 3: Convert image to binary

• Step 4: Detect straight lines (Hough transformation)

Hough Transform

• Step 5: Refine lines (least square best fit)

• Step 6: Prune excess lines (distance and angle criteria)

• Step 7: Classify lines as vertical and horizontal (angle criteria)

• Step 8: Order and label sets of lines (distance criteria)

• Step 9: Determine intersection points in the line pairs

• Step 10: Determine the corresponding intersection points in world space

• Step 11: Determine the transformation matrix for each corresponding set (solving a linear system)

• Step 12: Use best correspondence to map world space onto image space
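To make Step 4 concrete, here is a minimal NumPy implementation of the Hough voting scheme, run on a synthetic binary image; a real pipeline would instead call `cv2.HoughLinesP` on the output of Steps 1–3:

```python
import numpy as np

def hough_lines(binary, n_theta=180):
    """Minimal Hough transform: each foreground pixel votes for every
    line (rho, theta) passing through it; accumulator peaks correspond
    to detected lines."""
    h, w = binary.shape
    diag = int(np.ceil(np.hypot(h, w)))              # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(binary)
    for ti, theta in enumerate(thetas):
        # rho = x*cos(theta) + y*sin(theta), shifted so indices are >= 0
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        acc[:, ti] += np.bincount(rhos, minlength=2 * diag)[: 2 * diag]
    return acc, thetas, diag

# Synthetic binary image containing a single vertical line at x = 20
img = np.zeros((50, 50), dtype=bool)
img[:, 20] = True
acc, thetas, diag = hough_lines(img)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
rho, theta = rho_idx - diag, thetas[theta_idx]   # expect rho=20, theta=0
```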

There are two ways to track the players and the ball. The first is to eliminate the ground from the image using a ground-detection algorithm. Algorithmically, the ground is the area of the image where green dominates. Therefore, the ground is defined to be the set of pixels whose green channel dominates both the red and blue channels.
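A minimal NumPy sketch of this green-dominance rule; the margin of 20 intensity levels is an assumption that would need tuning per broadcast:

```python
import numpy as np

def ground_mask(frame_rgb, margin=20):
    """Mark pixels where green exceeds both red and blue by at least
    `margin` intensity levels as ground (margin is an assumed tunable)."""
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)
    return (g - r > margin) & (g - b > margin)

# Tiny synthetic frame: one grass-coloured pixel, one white-shirt pixel
frame = np.array([[[30, 160, 40], [220, 220, 220]]], dtype=np.uint8)
mask = ground_mask(frame)
```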

Eliminating the ground

To eliminate the ground, we'll use the Sobel gradient method to extract the players, the ball, and other features. The Sobel operator detects intensity gradients in the image; only the regions whose gradient magnitude lies within a certain range of the maximum are kept.

Sobel Algorithm Output
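A hand-rolled NumPy version of the Sobel step on a synthetic frame (production code would typically call `cv2.Sobel`; the 0.5 threshold fraction is an illustrative assumption):

```python
import numpy as np

def sobel_magnitude(gray):
    """Gradient magnitude via 3x3 Sobel kernels (interior pixels only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = gray[i : i + h - 2, j : j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# Synthetic frame: dark "pitch" with one bright "player" blob
img = np.zeros((20, 20))
img[8:12, 8:12] = 255.0
mag = sobel_magnitude(img)
edges = mag > 0.5 * mag.max()   # keep only strong gradients
```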

Combining the images

Now let’s eliminate the straight lines on the field using Repetitive Morphological Closing.
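A small NumPy sketch of morphological closing (dilation followed by erosion), repeated so that thin, line-shaped gaps in the foreground mask are filled; the 3×3 structuring element and iteration count are illustrative assumptions:

```python
import numpy as np

def dilate(img):
    """3x3 binary dilation: OR over each pixel's neighbourhood."""
    p = np.pad(img, 1)                        # pad with background (False)
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out |= p[dy : dy + img.shape[0], dx : dx + img.shape[1]]
    return out

def erode(img):
    """3x3 binary erosion: AND over each pixel's neighbourhood."""
    p = np.pad(img, 1, constant_values=True)  # treat outside as foreground
    out = np.ones_like(img)
    for dy in range(3):
        for dx in range(3):
            out &= p[dy : dy + img.shape[0], dx : dx + img.shape[1]]
    return out

def close_repeatedly(img, iterations=2):
    """Repetitive closing: dilation then erosion, applied several times."""
    for _ in range(iterations):
        img = erode(dilate(img))
    return img

# Foreground mask split by a 1-pixel-wide "field line" gap
mask = np.zeros((9, 9), dtype=bool)
mask[2:7, 2:7] = True
mask[:, 4] = False                  # the line cuts the blob in two
closed = close_repeatedly(mask)     # closing heals the gap
```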

The second method for tracking the ball and the players involves the following steps:

• Perform Frame by Frame Query;

• Calculate the weighted sum of two images to account for changes in the background;

• Compute the difference between the weighted average and every frame queried in the video;

• Convert the derived image to grayscale;

• Threshold the grayscale image to form a binary image;

• Perform morphological closing to remove noise;

• Detect Contours of the players and the ball on the pitch;

• Apply optical flow to track the path of the players and the ball in each frame

Binary Image

Detected Players and Ball
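The averaging, differencing, and thresholding steps above can be sketched in NumPy as follows (OpenCV users would reach for `cv2.accumulateWeighted`, `cv2.absdiff`, `cv2.threshold`, and `cv2.findContours`; the blend factor and threshold are illustrative, and the contour/optical-flow steps are omitted):

```python
import numpy as np

def detect_moving(frames, alpha=0.1, thresh=30):
    """Running-average background subtraction: keep a weighted average
    of past frames, difference each new frame against it, and threshold
    the difference into a binary foreground mask."""
    avg = frames[0].astype(float)
    masks = []
    for frame in frames[1:]:
        diff = np.abs(frame.astype(float) - avg)
        masks.append(diff > thresh)
        # Blend the current frame into the background model
        avg = (1 - alpha) * avg + alpha * frame
    return masks

# Synthetic grayscale sequence: static pitch plus a bright blob moving right
frames = []
for x in (5, 10, 15, 20):
    f = np.full((30, 40), 50, dtype=np.uint8)
    f[10:14, x : x + 4] = 255       # the moving "player"
    frames.append(f)
masks = detect_moving(frames)
```

Note that the moving blob leaves a short-lived "ghost" at its previous position until the weighted average catches up, which is why the morphological-closing step above is needed before contour detection.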

The most popular and widely acknowledged sports tracking solution today is TRACAB, a system developed by ChyronHego engineering teams in the U.S., Sweden, and the Czech Republic. TRACAB uses two arrays of three stereoscopic cameras set 30–50 feet apart, usually on the third-base side of the stadium. The TRACAB optical tracking technology combines radar-based ball tracking with optically measured player positions to create a comprehensive real-time data feed with a minuscule delay of up to 3 frames. ChyronHego's Virtual Placement technology uses a proprietary calibration technique to overlay ball traces and other visualizations onto the feed from a standard broadcast camera, adding even greater excitement to the viewer experience. The data gathered with TRACAB is so comprehensive that it is also used by many clubs and federations for match analysis, player development, and performance training.

To learn more about the implementation of modern technologies in football, visit betonchart.com.

References:

• Farin, Dirk, Susanne Krabbe, Peter H. N. de With, and Wolfgang Effelsberg. "Robust Camera Calibration for Sport Videos Using Court Models." University of Mannheim / LogicaCMG / Eindhoven University of Technology.

• https://ias.cs.tum.edu/_media/spezial/bib/beetz09ijcss.pdf

• Jog, Aditi, and Shirish Halbe. “Multiple Objects Tracking Using CAMShift Algorithm and Implementation of Trip Wire.” International Journal of Image, Graphics and Signal Processing (IJIGSP) 5.6 (2013): 43.

• Ali, MM Naushad, M. Abdullah-Al-Wadud, and Seok-Lyong Lee. “An Efficient Algorithm for Detection of Soccer Ball and Players.” Proceedings of Conference on Signal and Image Processing (SIP). 2012.