22 Jan 2024

Misconceptions about AI and analytics in video security are common due to the rapid evolution of the technology and varying levels of understanding. Artificial Intelligence (AI) and analytics are increasingly used as interchangeable terms when discussing video security cameras.

AI vs. analytics

While there is some overlap, it is important to articulate the differences between them when speaking to stakeholders or customers.

As the myriad types of AI gain more prominence in the global dialogue and privacy concerns are increasingly raised, security professionals need to take extra care to educate executives and management in any organisation so that they can, in turn, represent the company's use of AI tools accurately.

AI and analytics in video security

The intent of this article is not to do an academic deep-dive on either AI or analytics, but instead to position each discipline as it relates to modern AI-based video security. The goal is to educate operations and management about the use of AI in video surveillance to support informed decision-making across the organisation.

It’s worth mentioning that while AI is often used as a catch-all term, security cameras use subsets of AI, namely machine learning and deep learning, to recognise and classify objects. 

Misconceptions

Here are four common misconceptions about AI and analytics: 

Misconception #1:

AI and analytics are the same 

People often confuse AI with analytics, but they are distinct. In the video industry, AI is used to enhance analytics and analysis capabilities. The technique involves using machine and deep learning algorithms to recognise and classify known objects such as a person or vehicle.

AI can further detect unique attributes of objects such as the colour of clothing, or additional objects that are carried or worn such as backpacks or glasses.
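
To make this concrete, here is a minimal sketch (in Python) of the kind of classification metadata an AI-based camera might produce for each detection. The field names and values are illustrative choices for this example, not any vendor's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """One object the camera's AI has recognised and classified."""
    object_class: str                       # e.g. "person" or "vehicle"
    confidence: float                       # classification confidence, 0.0-1.0
    attributes: dict = field(default_factory=dict)  # descriptive, non-identifying traits
    bbox: tuple = (0, 0, 0, 0)              # bounding box in pixels: x, y, width, height

# The metadata is descriptive and anonymous: colours and carried items, not identities
example = Detection(
    object_class="person",
    confidence=0.93,
    attributes={"upper_colour": "blue", "backpack": True, "glasses": False},
    bbox=(412, 180, 96, 260),
)
```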

  • Object detection 

Analytics, on the other hand, refers to the process of analysing what the detected object is doing. The processes overlap somewhat when AI is also used to enhance the analysis of complex behaviours. Is a vehicle travelling left or right in the camera’s field of view, possibly going the wrong way down a one-way street? Did it enter a restricted zone? Should a car arrive in this area at 3 a.m.? These are basic binary (yes/no) analytic tasks.
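
As a rough illustration, the sketch below shows how such checks reduce to simple yes/no logic over metadata the AI has already produced. The zone coordinates, quiet hours, and field names are invented for the example and are not drawn from any particular product.

```python
from datetime import datetime

# A hypothetical detection the camera's AI has already produced
detection = {
    "class": "vehicle",
    "bbox": (120, 340, 180, 90),        # x, y, width, height in pixels
    "prev_bbox": (160, 340, 180, 90),   # position one frame earlier
    "timestamp": datetime(2024, 1, 22, 3, 0),
}

RESTRICTED_ZONE = (0, 300, 400, 200)    # illustrative no-go area: x, y, width, height

def centre(bbox):
    x, y, w, h = bbox
    return (x + w / 2, y + h / 2)

def in_zone(bbox, zone):
    """Yes/no: is the object's centre inside the zone?"""
    cx, cy = centre(bbox)
    zx, zy, zw, zh = zone
    return zx <= cx <= zx + zw and zy <= cy <= zy + zh

def wrong_way(prev_bbox, bbox):
    """Yes/no: is the vehicle moving right-to-left on a left-to-right street?"""
    return centre(bbox)[0] < centre(prev_bbox)[0]

def after_hours(timestamp, start=22, end=5):
    """Yes/no: did this happen during the site's quiet hours?"""
    return timestamp.hour >= start or timestamp.hour < end

alerts = [name for name, hit in [
    ("restricted zone", in_zone(detection["bbox"], RESTRICTED_ZONE)),
    ("wrong way", wrong_way(detection["prev_bbox"], detection["bbox"])),
    ("after hours", after_hours(detection["timestamp"])),
] if hit]
print(alerts)  # ['restricted zone', 'wrong way', 'after hours']
```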

Ascertaining whether two people are fighting or if someone is shoplifting is a more nuanced analysis that requires a sophisticated AI algorithm capable of considering multiple data points before alerting staff to suspicious behaviour.

  • Addressing false positives with AI-assisted analytics

Before AI-based object detection, analytics were prone to false positives any time the lighting changed (a passing cloud could change the pixels). Working together, AI and analytics have largely solved the issue of false positives for the objects they recognise. On a properly installed camera, AI-assisted analytics can issue proactive alerts or search through hours of footage for humans or vehicles with specific attributes incredibly quickly.
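
A hedged sketch of what such an attribute search might look like under the hood: instead of replaying video, the system filters stored detection metadata. The records and query below are hypothetical.

```python
def search_footage(detections, object_class, **required_attributes):
    """Filter stored detection metadata instead of re-watching hours of video."""
    matches = []
    for det in detections:
        if det["class"] != object_class:
            continue
        attrs = det.get("attributes", {})
        if all(attrs.get(key) == value for key, value in required_attributes.items()):
            matches.append(det)
    return matches

# Hypothetical metadata recorded for one afternoon of footage
detections = [
    {"class": "person", "attributes": {"upper_colour": "red", "backpack": True}, "time": "14:02:11"},
    {"class": "person", "attributes": {"upper_colour": "blue", "backpack": False}, "time": "14:05:43"},
    {"class": "vehicle", "attributes": {"colour": "white"}, "time": "14:07:19"},
]

# "Find every person in red carrying a backpack" becomes a metadata query, not a manual review
hits = search_footage(detections, "person", upper_colour="red", backpack=True)
print(hits)  # the 14:02:11 detection
```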

AI adds classification and behaviour information to raise the overall accuracy of analytics and analysis. AI can also be used to enhance image quality in cameras, among many other applications. 

Misconception #2: 

AI can operate autonomously and replace security personnel 

While AI can enhance surveillance and response, the nuanced understanding and decision-making capabilities of humans are still crucial in most scenarios. AI coupled with analytics can help operators monitor a growing number of cameras for anomalies and events that may warrant attention, but the decision on how to act still rests firmly with the operator.

This increasingly ‘intelligent’ AI assistant helps security teams focus on what matters in a complex world, but it’s a team effort. And while AI’s capability to operate autonomously will surely improve, it’s hard to imagine a world in which it would be wise to let it make important decisions without human oversight. 

Misconception #3:

AI-based security cameras are invading privacy

Safeguarding personally identifiable information (PII) is a critical responsibility of any organisation. Because of the prevalence of data breaches, everyone is keenly aware of the risks of PII ending up in the wrong hands. For video security systems, it’s essential to realise that the descriptive metadata an AI-based camera captures is composed of anonymous data about the humans it detects. 

Attributes such as the colour of clothing and whether a person is carrying a backpack or wearing a hat are certainly not sufficient to identify anyone personally. And while there may be grey areas with some AI systems that attempt to classify gender and age, they are still not identifying a specific person.

  • Facial recognition

Most importantly, AI does not equal facial recognition. Facial recognition is a specific, focused function, and while it might be improved with some AI-based techniques, it has evolved separately from AI.

Facial recognition has privacy implications, while most AI implementations do not. 

  • AI-based detection

Additionally, AI-based detection of humans and vehicles typically happens “on the edge”, processed within the camera itself, while facial recognition almost always requires a separate video management system (VMS)/server application and database to function.

So, AI-based cameras don’t, by themselves, do facial recognition. 

Misconception #4:

AI can learn anything on its own 

While AI can significantly enhance video surveillance capabilities, machine and deep learning algorithms are not infallible. They require human oversight to manage false positives and interpret complex situations. AI-based algorithms are only as good as the training they’ve received.

For example, human and vehicle detection algorithms have been carefully trained by R&D departments in laboratory settings with hundreds of thousands, if not millions, of representative images. This is why they can be very accurate when installed correctly.

  • AI ‘on-site learning’ cameras

For AI-based systems to truly learn to recognise something, they must also be informed when they get something wrong. This type of training usually occurs under the supervision of experienced AI developers. Recently, new AI ‘on-site learning’ cameras have become available.

These can be trained by operators to recognise unique, customisable objects such as forklifts, shopping carts, airplanes, logos on vehicles, or any object an organisation might want to track or count. These systems must also be trained to be accurate.
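
Vendors do not publish their training pipelines, but the general idea behind such on-site learning is transfer learning: reuse a backbone pre-trained on a large generic dataset and teach only a small classification head the site’s custom classes, using the operator’s labels and corrections as supervision. A minimal, illustrative PyTorch sketch, with class names and the training step assumed for the example:

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical site-specific classes an operator might want the camera to learn
CUSTOM_CLASSES = ["forklift", "shopping_cart", "other"]

# Start from a backbone pre-trained on a large generic dataset...
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False          # keep the pre-trained features frozen

# ...and train only a small classification head on operator-labelled images
model.fc = nn.Linear(model.fc.in_features, len(CUSTOM_CLASSES))
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(images, labels):
    """One supervised update: the operator's labels and corrections teach the model."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```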

  • Intelligent training application

A typical on-site training challenge would be to capture an object in every lighting condition. To overcome this, forward-thinking vendors include an intelligent training application that automatically generates additional images across a range of luminance values, saving operators substantial time and effort while increasing accuracy.
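
How such a training application works internally is proprietary, but the underlying idea can be sketched as simple brightness augmentation: generate darker and brighter variants of each operator-supplied image so that one example covers a range of lighting conditions. The sketch below uses Pillow and a hypothetical image file.

```python
from PIL import Image, ImageEnhance

def luminance_variants(image_path, factors=(0.4, 0.7, 1.0, 1.3, 1.6)):
    """Generate darker and brighter copies of one training image.

    A factor below 1.0 darkens the image, above 1.0 brightens it, so a single
    operator-supplied example stands in for several lighting conditions.
    """
    original = Image.open(image_path).convert("RGB")
    enhancer = ImageEnhance.Brightness(original)
    return [enhancer.enhance(factor) for factor in factors]

# One labelled image of a forklift (hypothetical file) becomes several training samples
samples = luminance_variants("forklift_example.jpg")
```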

AI on-site learning is also a great way to increase overall accuracy in any setting prone to false positives. 

Conclusion 

Understanding these misconceptions is crucial for making informed decisions when implementing AI and analytics in video security systems and gaining consensus from stakeholders.

As AI becomes ubiquitous across industries, the term risks losing its true meaning, particularly because we are only scratching the surface of machine and deep learning applications. 

  • AI’s current capabilities and limitations

It’s also important to recognise that general (or strong) AI, the ability of a machine to do any intellectual task a human can do, does not yet exist. However, the marketing ship has long since sailed, so the best we can do is stay informed about AI’s current capabilities and limitations.

Most importantly, we have to help educate all of our constituents that AI and analytics are not the same. AI does not equal facial recognition. Nor does it replace the need for human oversight.

  • AI-based technology

AI is a long way from non-structured, comprehensive learning and decision-making in a way humans would describe as ‘intelligent.’

With those caveats in place, the current AI-based technology functions as a fantastic assistant for security teams, helping them to better protect people and property.