Neural Sports


Analyst Series: Tracking

May 13, 2020 · 9 min read

It’s Monday night and you’ve just switched over to watch Gary Neville and Jamie Carragher lay into some poor defender who had a shocker at the weekend. Or maybe you don’t have Sky Sports, so it’s Saturday night and you’ve tuned in to the nation’s beloved Match Of The Day to hear Alan Shearer deliver one of his iconic one-liners: ‘Football’s not just about scoring goals. It’s about winning’…yeah, I don’t really get him either most of the time. Anyway! Welcome to a new series on Neural Sports!

One of the best parts of either MNF or MOTD is, and always will be, when the pundits start using the interactive screen to highlight key parts of the game. Some really do struggle with it; however, most are able to navigate it in an intriguing way. This new analyst series will take you through how to produce and program some of the key interactivity that takes place during these reviews on the ‘tactic tables’. If any of you still have no clue what I’m talking about, here’s a great example of Carra and Neville going over Man City’s tactics with Guardiola.

During this 5-part series we will be focusing on the following!

  1. Highlighting and tracking a chosen player’s movement.
  2. Being able to draw certain shapes to indicate what run or pass a player should have made.
  3. Adding a shadow effect that focuses on a particular player.
  4. Measuring the speed of a player across a pitch.
  5. Identifying a player from a selection.

Tracking an Individual


One of the well-known uses during analysis is highlighting a particular player and then either ranting about their lack of awareness or praising their movement. To demonstrate this, let’s choose an iconic moment from one of the greatest summers ever, when football nearly came home: the 2018 World Cup. In particular, a player who had an amazing tournament at only 19 years old: Kylian Mbappe.

Mbappe

What a player. The moment I’m referring to is when he absolutely rinsed the Argentinian defence and Marcos Rojo’s only hope was to take him out and give away a penalty. During this moment his speed maxed out at 39 km/h - insane.

Mbappe Run

So, how can we track Mbappe during this clip? How can we highlight him in some way so that pundits can praise him and say they knew about his potential years ago? With the beautiful combination of Computer Vision and Python.

Prerequisites


A couple of prerequisites before we delve into the code and this awesome series.

  • OpenCV 3.2+ installed. Fear not if you do not have it installed yet; follow this tutorial on pyimagesearch.
  • Python…obviously.

Tracking and how it works


Tracking is being able to locate a particular object and follow it over time. How tracking differs from standard object detection is that detection only has to find the object in a single frame, whereas tracking follows it across multiple frames.

First, I’ll explain tracking in its most basic form and afterwards we’ll start to think of ways we can incorporate Computer Vision techniques into each step.

  1. Assume we have a method of estimating the location and number of objects in the very first frame of a video, the frame before you even press play.
  2. Then, using the previous position (and maybe the speed of the object), we estimate a search area over which the object is likely to be in the very next frame. The search area is the set of positions we predict the object could have moved to.
  3. Finally, compare possible locations of each object across neighbouring frames. This could be done by checking whether pixels in one frame are similar to those in the next. A minimal sketch of this loop is shown below.
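
To make those three steps concrete, here is a minimal sketch of that generic loop in Python. The functions detect_objects, predict_search_area and match_in_area are hypothetical placeholders for whichever detection and association methods you plug in; they are not from any library.

def track(frames, detect_objects, predict_search_area, match_in_area):
    # step 1: estimate the number and location of objects in the first frame
    tracks = detect_objects(frames[0])

    for frame in frames[1:]:
        updated_tracks = []
        for obj in tracks:
            # step 2: use the previous position (and speed, if known)
            # to predict a search area for this frame
            search_area = predict_search_area(obj)

            # step 3: compare candidate locations inside the search area
            # (e.g. by pixel similarity) and keep the best match
            updated_tracks.append(match_in_area(frame, obj, search_area))
        tracks = updated_tracks
    return tracks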

Now that we’ve defined the basics, what Computer Vision techniques can be included in these steps? Here is one of many ways.

  1. We talked about identifying an object in the first frame. After converting the image to a binary one (black and white) we could perform foreground detection using a Gaussian distribution. The algorithm tries to identify the background by calculating the probability of a colour pixel being present in a particular place for a long time; the higher that probability, the more likely the pixel belongs to the background. Once we have split the foreground from the background we could apply connected component analysis to identify connected white pixels in the image. If there is a particular object we want to identify we could train a model to detect that object in the first frame. After identifying connected pixels we can draw a rectangular region around them.
  2. Sweet, so we’ve now identified our pixels in the first frame. We now need some association method so we can associate the pixels in the first frame with those in the second frame. During the first frame we could also build statistical models of the object we are tracking, such as its size, velocity, colour and position. From these models we could use a technique called Kalman filtering to make an educated guess about the object’s whereabouts. A Kalman filter takes the combinations of models over a ‘search area’ and filters down to the combination with the highest probability. It can also take external factors, such as wind or the type of ground, into account through an uncertainty model. Once we have a good idea of where we think our object is, we update our models and iterate over the next frames. A rough sketch of these ideas follows this list.
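
To make step 1 (and the Kalman filter mentioned in step 2) a little more tangible, here is a rough sketch of how those pieces fit together using OpenCV’s MOG2 background subtractor, connected component analysis and cv2.KalmanFilter. The thresholds are arbitrary, the Kalman filter is only set up rather than fully wired in, and, as noted below, this is not the approach we use in our implementation.

import cv2
import numpy as np

video = cv2.VideoCapture('mbappe.mov')

# Gaussian-mixture background model: pixels that keep the same colour in the
# same place for a long time are treated as background, the rest as foreground
subtractor = cv2.createBackgroundSubtractorMOG2()

# simple constant-velocity Kalman filter: state (x, y, dx, dy), measurement (x, y)
kalman = cv2.KalmanFilter(4, 2)
kalman.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
kalman.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)

while True:
    grabbed, frame = video.read()
    if not grabbed:
        break

    # foreground mask, then a hard threshold to get a binary (black and white) image
    mask = subtractor.apply(frame)
    _, binary = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)

    # connected component analysis: groups of connected white pixels become blobs
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    for label in range(1, num_labels):  # label 0 is the background
        x, y, w, h, area = [int(v) for v in stats[label]]
        if area > 200:  # ignore tiny blobs (arbitrary threshold)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    # predict() gives a guess at the next position; a full tracker would then
    # call kalman.correct() with the best-matching measurement and iterate
    prediction = kalman.predict()

    cv2.imshow("Foreground blobs", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video.release()
cv2.destroyAllWindows()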

Now, that was a high-level overview, but don’t worry: in our implementation further down we will not be using the technique above but rather out-of-the-box solutions provided by the Computer Vision library OpenCV.

Problems in tracking


Objects may be too close to one another. For example, when players are loading the box for a corner there are so many bodies in a small area that it may be difficult to track an individual.

Occlusion. This is where we begin tracking an object but it then becomes blocked or some sort of interference comes into play. For example, we’re tracking a ball in a baseball game and the batter smashes one for a home run. When that ball is out of the stadium we struggle to track it, because our cameras are inside the stadium.

Objects are too fast. Luckily, in our case Mbappe isn’t as quick as a fighter jet, but if an object is moving too quickly, like an F1 car on a straight, there may be some lag in calculating where it is in subsequent frames.

Implementation


Now that you have some awareness of how tracking works, let’s delve right into the implementation. To demonstrate it I’ll be focusing on one tracker from OpenCV specifically, mainly because I have had the best results with it: ‘CSRT’. This particular tracker works by training a filter and then using that filter to search the area around the last known position of the object. There are many more trackers in the OpenCV library, so give them a go yourself afterwards!
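
If you fancy trying a few of the others, they are created in exactly the same way as CSRT. A quick sketch is below; note that, depending on your OpenCV version, these constructors may live under cv2.legacy or require the opencv-contrib-python package, so treat the exact names as a guide.

import cv2

# CSRT: the tracker used in this post (slower but tends to be more accurate)
csrt = cv2.TrackerCSRT_create()

# KCF: a faster correlation-filter tracker, usually less accurate than CSRT
kcf = cv2.TrackerKCF_create()

# MIL: an older learning-based tracker, included for comparison
mil = cv2.TrackerMIL_create()

# All of them share the same two-call interface used further down:
#   tracker.init(frame, bounding_box)       start tracking the selected region
#   success, box = tracker.update(frame)    locate that region in a new frame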

Where our implementation differs from the explanation earlier is that we manually select the ‘region of interest’ (the object we want to track) in the first few frames, so no detection is needed.

Onto the code!

from imutils.video import VideoStream, FPS
import imutils
import time
import cv2

Key packages that need to be pip installed before beginning! For most setups, installing opencv-contrib-python and imutils should cover it; time comes with the Python standard library.

tracker = cv2.TrackerCSRT_create()
initBB = None
vs = cv2.VideoCapture('mbappe.mov')
fps = None

On line 1 we initialise the tracker we want to use, and on line 3 we load the video we want to process.

while True:
    # current frame
    frame = vs.read()
    frame = frame[1]
    if frame is None:
        break

    # resize
    frame = imutils.resize(frame, width=500)

    # handle if region of interest has been selected
    if initBB is not None:
        (success, box) = tracker.update(frame)
        if success:
            (x, y, w, h) = [int(v) for v in box]
            cv2.rectangle(frame, (x, y), (x + w, y + h),
                          (0, 255, 0), 2)

    # show video
    cv2.imshow("Mbappe Run", frame)
    key = cv2.waitKey(1) & 0xFF

    # control selection of region of interest
    if key == ord("s"):
        initBB = cv2.selectROI("Mbappe Run", frame, fromCenter=False,
                               showCrosshair=True)
        tracker.init(frame, initBB)
        fps = FPS().start()
    elif key == ord('q'):
        break

vs.release()
cv2.destroyAllWindows()

In lines 1-6 we begin our while loop and grab the current frame of the video. The while loop continues until either an exit key has been pressed or there are no frames left in the video, in which case we break.

In line 9 we resize our frame to a width of 500 pixels so we can process it faster.

In lines 12-17 we check whether we are already tracking an object. If we are, we pass the current frame to the tracker’s update method to locate the object’s new position. If the update reports success, we draw the rectangle again around the object’s new position.

In lines 20-28 we display the frame and, if the ‘s’ key is pressed, we initiate selection of the region of interest. This allows the user to draw a bounding box around what we wish to track and then press ENTER or SPACE to confirm. Our tracker is then initialised with that bounding box.

Finally, in lines 29-33 we deal with quitting out of the tracker: pressing ‘q’ breaks out of the loop, and we then release the video and close any open windows.
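
One small aside: the FPS counter from imutils is started when you select a region but never reported in the listing above. Its usage pattern is simply start, update once per processed frame, then stop and read. A tiny self-contained illustration (with sleeps standing in for the real per-frame work):

from imutils.video import FPS
import time

fps = FPS().start()
for _ in range(100):    # stands in for the frames processed in the loop above
    time.sleep(0.01)    # stands in for tracker.update() and the drawing calls
    fps.update()        # count one processed frame
fps.stop()
print("approx. FPS: {:.2f}".format(fps.fps()))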

Voila! We’ve developed a beautiful tracker that Neville and Carra can use to their hearts’ content!

Mbappe Run ROI

Summary


Awesome! We have now built one of the many functionalities that Carra and Neville use on MNF. Pretty cool, eh? Make sure you subscribe below to receive updates from Neural Sports and so you don’t miss out on part 2, where we will explore drawing shapes on the screen so Carra, a centre back, can show midfielders like Pogba where they should have played an ‘easy’ through ball.


References

Fifa TV


Developed by Sean O'Connor, a sports and AI enthusiast.