It has been almost a decade since video analytics (VA) software for the IP camera market appeared, and given the noise surrounding this technology, it is reasonable to question the value of many of the offerings, to debate what is fashion rather than function, and to ask whether the solutions are ultimately focused on augmenting security and achieving business goals.
Marc van Jaarsveldt, consultant at The Surveillance Factory, says that the initial objective of video analytic tools in camera surveillance is to analyse video streams from cameras and recognise or detect certain scenarios and then generate an output or alarm.
“The earliest video analytics software for the IP camera market appeared around 2007 and offered some very basic functionality, mostly allowing for License Plate Recognition (LPR) and cross-line (or tripwire) monitoring. Fast forward to 2016 and we question to what extent these tools have advanced, and whether they genuinely provide an improvement in security and an operational cost saving.”
Van Jaarsveldt says that there is an overabundance of additional offerings in the market, providing everything from facial recognition and shape and object detection to agitation and aggression detection. “While it is great to see such innovation, it is important that these technologies are functional and meet a real business requirement.”
He says that there is no doubt that modern video content analysis tools bring major benefits to customers. The simplest and oldest VA utilities, LPR and cross-line detection, are now available as powerful and reliable detection tools that can be deployed alongside an IP camera system, improving security and enhancing performance.
“These tools still need to be used with discretion and the acknowledgment that they are never 100% accurate. End-users need to understand that analytics are not magic and although good results can be expected, they are not the answer to all security issues,” advises van Jaarsveldt.
Beyond intrusion detection and LPR lies a range of analytics that can, for example, detect left objects, count people, analyse loitering, detect fire and even analyse behaviour to provide crowd management data. These solutions are understandably expensive and can be hard to justify in terms of return on investment.
Van Jaarsveldt offers these tips to assist when developing a surveillance solution:
* Camera type and position – always a critical aspect of any camera system, but for cameras running analytics it is even more crucial that the camera position and the type of camera used capture a scene that allows the analytic software to produce the required results. Poor camera positioning can compromise the analytics’ capabilities.
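One concrete way to sanity-check camera placement for analytics is a pixels-on-target calculation: many detection and recognition tools need a minimum pixel density on the subject. The article does not specify any figures, so the function below is only an illustrative sketch; the camera parameters and the function name are assumptions, not a vendor formula.

```python
import math

def pixels_per_metre(sensor_px_width, hfov_deg, distance_m):
    """Approximate horizontal pixel density on a target at a given distance.

    sensor_px_width: horizontal resolution of the camera in pixels.
    hfov_deg: horizontal field of view in degrees.
    distance_m: distance from camera to target in metres.
    """
    # Width of the scene covered by the camera at that distance.
    scene_width_m = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    return sensor_px_width / scene_width_m

# A hypothetical 1920-px-wide camera with a 90-degree field of view
# yields roughly 96 px/m at 10 m; whether that is enough depends on
# the analytic (recognition tasks typically need far more than detection).
print(round(pixels_per_metre(1920, 90, 10), 1))
```

If the computed density falls below what the chosen analytic needs, either the camera must move closer, use a narrower lens, or a higher-resolution model is required.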
* Controlling the environment – more often than not, a very noisy environment can render the analytics unusable. Noise here refers to issues such as light, foliage (trees and bushes), shadows and other unwanted changes to the video signal, generated by spurious motion, that can trigger the analytic.
* Tuning, false alarms and the cost of managing the system – all software analytic systems generate false alarms, and they are largely unavoidable. A tripwire can be ‘trained’ to ignore small objects (e.g. birds) that cross the line, but at some point an object will fool the detection tool. Turning the sensitivity down can render the analytic tool incapable of detecting a real threat, so the solution is to ‘tune’ the system to the point where false alarms are minimised to an acceptable level but the software can still detect a genuine threat. It is key to note that tuning can also be required when seasonal changes affect ambient light, wind or rain, because these environmental variables affect the scene and the video analytic’s ability to trigger.
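The tuning trade-off above can be sketched with a toy tripwire that ignores objects below a configurable size threshold. Everything here is hypothetical, the function, the threshold name and the sample events; real VA tools expose similar sensitivity settings through their own configuration, not this API.

```python
def crossings(events, min_area_px):
    """Return the labels of events treated as genuine line crossings.

    events: list of (label, area_px) tuples for objects crossing the line.
    min_area_px: the tuning knob -- raise it to suppress small false
    triggers (birds, debris), lower it to avoid missing small real threats.
    """
    return [label for label, area in events if area >= min_area_px]

# A hypothetical day of tripwire events: two birds, a person, a vehicle.
events = [("bird", 40), ("person", 900), ("bird", 55), ("vehicle", 2500)]

print(crossings(events, min_area_px=100))   # birds filtered, threats kept
print(crossings(events, min_area_px=1000))  # over-tuned: misses the person
```

The second call illustrates the failure mode the article warns about: set the sensitivity too low and the false alarms disappear, but so does the genuine intruder.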
* The laboratory-style demonstration – only too often we have seen an analytic tested in a very clean, well-lit environment where the detection process is unhindered by noise, shadows, changes in light or spurious motion, and the solution offered delivers excellent results with no false alarms. Once installed in a live environment, the tool fails to work as advertised. The Surveillance Factory advises that users test the analytics in the field or in an environment that replicates what will be happening at the customer’s site. Facial recognition tools are a good example of a VA tool that works well when the light, camera position and subject’s face position are well controlled, but is notoriously unreliable in the average office environment, where the light is poorer and the subject’s face position and camera placement are not ideal.
* System maintenance: software/firmware and cleaning camera lenses – VA vendors are constantly tweaking and fixing the way their software works, eliminating bugs in code and making improvements to functionality. Upgrading firmware (or software, if the analytic is server-based) is an obvious task that should form part of a maintenance schedule. Less obvious is the issue of dirty camera lenses, which can result in poor focus and cause the VA to work less efficiently. We recently reviewed a cross-line detection tool using a video stream from a camera with dust on its lens. The camera’s auto-back focus could not produce a crisp image and the analytic was generating errors. Once the lens was wiped, the analytic worked properly.
* Edge or server-based – edge-based systems have the advantage that they process the video stream on the camera, so they create no load for the recording server. The downside of edge VA is that it sometimes requires expensive camera hardware, and the choice of analytics is often restricted. The best result in this situation is obtained by using a server-based system while ensuring that the server specification can cope with the additional load the analytic will create. It is entirely possible to saturate a well-performing recording server by installing a VA tool on it and processing multiple video analytic streams. In such cases, using a separate server for the analytic streams is recommended.
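The server-sizing point above can be made concrete with a rough capacity check. The per-stream CPU figures below are purely illustrative assumptions, not vendor benchmarks; the idea is simply that analysed streams cost far more than recorded ones, so a server that records comfortably can still saturate once VA is added.

```python
def server_load(recording_streams, analytic_streams,
                rec_cost=2.0, va_cost=8.0, capacity=100.0):
    """Return (total_load_pct, overloaded) for a recording server.

    rec_cost / va_cost: assumed CPU % per recorded / analysed stream
    (hypothetical figures; measure on real hardware before sizing).
    capacity: the CPU budget, here 100% of one box.
    """
    load = recording_streams * rec_cost + analytic_streams * va_cost
    return load, load > capacity

# 32 recorded streams alone fit comfortably on this hypothetical box...
print(server_load(32, 0))   # (64.0, False)
# ...but running VA on just 8 of them pushes it past capacity,
# which is exactly when a separate analytics server is recommended.
print(server_load(32, 8))   # (128.0, True)
```

Under these assumed costs, moving the eight analytic streams to their own server keeps both machines within budget.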