Motion Detection & Tracking

Flux Tensor develops state-of-the-art detection and tracking algorithms (pFlux) that help surveillance operations and analysts focus only on the objects and events of interest while deemphasizing irrelevant objects and movement. These algorithms enable solutions to perform effectively regardless of image/video background noise and distractions stemming from environmental conditions such as poor weather, limited visibility, occlusions, glare, reflections, and camera motion. Huge amounts of surveillance footage are collected across the globe every day. Often, due to human-resource or computing limitations, this data goes unanalyzed until much later, usually well after an event of interest has occurred. Even when the data is expected to be analyzed, the sheer volume collected makes it difficult to analyze a significant portion of the repository, resulting in many missed events of interest.


Background Subtraction

Automated change detection enables users to monitor multiple scenes or videos simultaneously, keying on significant events and generating actionable intelligence in a timely fashion. Examples of the types of object behavior that Flux Tensor customers and partners are focused on include:

  • Are any objects in motion in the current camera view?
  • Did the object(s) stop or park in a certain location, and for how long?
  • Once change is identified, what object identification or classification should be applied (e.g., human, car, box, bag)?

In ground-based security monitoring in particular, there is often a shortage of security operators to monitor, assess, and tag all the available video in real time. As a result, immediately actionable content and data are often missed entirely or dismissed as non-critical.


Flux Tensor’s object detection and tracking algorithms can be used to cue an operator’s attention to a particular object or event by ignoring background noise and distractions. This is achieved through a proprietary pixel-change methodology that requires no machine learning or machine training of any kind. In the video above, this mathematical algorithm is applied to the single raw video shown at the left, which is full of weather effects and other stationary or moving objects that are not critical to the analysis. The resulting pFlux output is shown in the center: background objects are eliminated, the snowfall is ignored, and attention is drawn only to the objects of interest (“humans in motion”). At the right, parameter settings have been modified to restore the background objects while maintaining the focus on humans in motion by automatically applying semantic segmentation to those objects exhibiting the defined behavior. In the video below, a similar analysis tracks “vehicles in motion” in the background while ignoring irrelevant motion and objects in the foreground.
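The proprietary pFlux algorithm is not public, but the general idea of flagging motion from pixel change alone, with no trained model, can be illustrated with a minimal sketch. The code below averages the frame-to-frame intensity change over a short temporal window, which suppresses one-frame flicker (sensor noise, falling snow) while keeping persistently changing pixels. All names and thresholds here are illustrative assumptions, not the pFlux implementation.

```python
import numpy as np

def motion_mask(frames, threshold=10.0, window=3):
    """Flag pixels whose intensity changes persistently over the last
    `window` frame pairs. A generic temporal-gradient sketch, not the
    proprietary pFlux algorithm."""
    stack = np.asarray(frames, dtype=np.float64)   # shape (T, H, W)
    dt = np.abs(np.diff(stack, axis=0))            # per-pixel change, (T-1, H, W)
    avg = dt[-window:].mean(axis=0)                # average recent change
    return avg > threshold

# Synthetic demo: a bright 8x8 block sliding right over a noisy background,
# standing in for a "human in motion" against snow-like sensor noise.
rng = np.random.default_rng(0)
frames = []
for t in range(6):
    f = rng.normal(100, 2, size=(64, 64))          # near-static noisy background
    f[20:28, 10 + 5 * t : 18 + 5 * t] = 200        # moving object
    frames.append(f)

mask = motion_mask(frames)                         # True only near the moving block
```

Averaging over a window rather than thresholding a single frame difference is what lets this kind of detector ignore snowfall: each snowflake changes a given pixel for roughly one frame, so its averaged change stays below the threshold.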



Persistent Change Detection

A critically important capability included in the pFlux solution is the ability to identify objects, by specific classifications, that are exhibiting persistent change. The videos below demonstrate how this is used in real-world surveillance scenarios and how modified parameters affect the ability to draw attention to, or ignore, certain objects or events.


The top view in the video below is an accelerated raw camera feed from a major international airport. The bottom view has the pFlux solution applied to that same feed with differing application parameters to focus on specific object classes and object behavior. To start, the object classification was set to “bags” exhibiting persistent change, enabling rapid and automated detection of bags or bag-like objects that have been left behind or remain idle after being in motion. At the midpoint of the video, the application parameters were modified to include human object classes. An environmental mask was also applied to the area and pixels representing the bag-check counter. Applying an environmental mask ensures that bags and humans intentionally stopping as part of the bag-check process are ignored, since persistent change is the expected behavior at that location.
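The scenario above combines two ideas: flag a foreground object that has stopped moving for some dwell time, and suppress flags inside a masked region where stopping is expected. A simplified sketch of that logic, with illustrative names and thresholds that are assumptions rather than pFlux internals:

```python
import numpy as np

def persistent_change(frames, background, ignore_mask=None,
                      diff_thresh=30.0, motion_thresh=10.0, dwell=5):
    """Flag pixels that differ from the background model but have stopped
    moving for `dwell` consecutive frames. `ignore_mask` plays the role of
    an environmental mask: True pixels (e.g. a bag-check counter) are
    never flagged. A simplified sketch, not the pFlux implementation."""
    bg = np.asarray(background, dtype=np.float64)
    counter = np.zeros_like(bg)
    prev = bg
    for f in frames:
        f = np.asarray(f, dtype=np.float64)
        foreground = np.abs(f - bg) > diff_thresh    # differs from empty scene
        still = np.abs(f - prev) < motion_thresh     # not currently moving
        counter = np.where(foreground & still, counter + 1, 0)
        prev = f
    flagged = counter >= dwell
    if ignore_mask is not None:
        flagged &= ~ignore_mask
    return flagged

# Synthetic demo: two "bags" appear at frame 3 and stay put; one sits in
# the masked bag-check area and is therefore ignored.
H = W = 32
bg = np.full((H, W), 100.0)
ignore = np.zeros((H, W), dtype=bool)
ignore[0:8, 0:8] = True                              # bag-check counter area
frames = []
for t in range(10):
    f = bg.copy()
    if t >= 3:
        f[20:24, 20:24] = 200.0                      # bag left in the open
        f[2:6, 2:6] = 200.0                          # bag at the check-in counter
    frames.append(f)

flagged = persistent_change(frames, bg, ignore_mask=ignore)
```

The per-pixel dwell counter resets whenever the pixel moves again or returns to the background value, so an object that is picked back up stops being flagged.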

It should be noted that the output of the pFlux analysis contains temporal and visual metadata but nothing that would compromise compliance with privacy regulations, such as those governing Personally Identifiable Information (PII), the General Data Protection Regulation (GDPR), or the California Consumer Privacy Act (CCPA). The image below highlights the pFlux output as a color-coded representation of the relevant metadata, showing how objects in motion (blue pixels) are identified and compared to stationary objects (red pixels).
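A color-coded rendering of this kind can be produced from the motion metadata alone, which is also why no PII survives: the output is built from boolean masks, not from the scene imagery itself. A minimal sketch, assuming the blue/red convention described above (function and mask names are illustrative):

```python
import numpy as np

def render_metadata(moving, stationary):
    """Render motion metadata as a color-coded RGB image: moving pixels
    blue, stationary foreground pixels red, everything else black.
    Only boolean masks go in -- no scene imagery, hence no PII."""
    h, w = moving.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    out[moving] = (0, 0, 255)                 # blue: objects in motion
    out[stationary & ~moving] = (255, 0, 0)   # red: stationary objects
    return out

# Tiny demo: one moving pixel, one stationary pixel, rest background.
moving = np.zeros((4, 4), dtype=bool)
moving[0, 0] = True
stationary = np.zeros((4, 4), dtype=bool)
stationary[1, 1] = True
img = render_metadata(moving, stationary)
```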