23 Oct 2018

Editor Introduction

Artificial intelligence (AI) is a current buzzword in the physical security market – and the subject of considerable hype. However, AI sometimes gets negative press, too, including dire warnings of its potential and eventual impact from some of our most prominent technology thinkers. We decided to take the issue to our Expert Panel Roundtable with this week’s question: What are the negative impacts and/or new challenges of AI for physical security?


Sean Lawlor Genetec, Inc.

The biggest negative impact of using artificial intelligence in physical security lies largely in semantics. Artificial intelligence is a loaded term, and use of the term itself is a significant challenge for the industry. True AI, in which machines learn tasks they have not been taught, is something we are years away from creating, and loose use of the term can lead to confusion. With improvements in computational efficiency, as well as novel applications of tried and tested machine learning algorithms, we will see great jumps in operational efficiency. Work that normally took security operators days of manual effort can often be automated through machine learning techniques, allowing them to focus on tasks that add greater value to the organisation. Additionally, machine learning can surface patterns in the data that the operator may not have considered but that are more relevant to the task at hand.

Generally, I don’t think there is much of a negative impact on physical security from AI – it’s more the negativity surrounding it in some people’s minds! In some quarters AI is still seen as a threat to jobs, with a fear that people will be replaced by machines. In fact, AI can’t replace people on the ground dealing with a situation. What it does do, however, is save humans from having to process the more mundane tasks, such as checking surveillance feeds, which it is better suited to doing anyway. As well as needing humans to deal with incidents, there will also be a need for skilled professionals to sort through all the data mined from AI-controlled systems. People will still need to investigate this data, finding relationships between incidents and acting on the insights to ensure these are either prevented or properly investigated and mitigated in the future.

Jim Katsifolis The Ava Group

Recent advances in big data, cloud storage and computing power have allowed AI, and more specifically machine learning and deep learning, to begin a disruptive technological transformation of the physical security industry. The effectiveness of these learning algorithms relies heavily on large volumes of labelled data to train them correctly. One of the key challenges often overlooked with AI learning techniques is data quality. Today’s security systems generate enormous amounts of data, and while some limited AI training can be achieved at the commissioning stage, the vast majority of the data generated remains unlabelled, because the operator is typically only interested in alerts or a small fraction of it. Incorrectly labelled data, such as false positives or missed intrusions, will also degrade the accuracy of AI. So, while AI will continue to transform physical security systems, its full potential will not be reached unless the challenge of data quality is overcome.
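As a rough illustration of the data-quality point, the following minimal sketch (assuming the scikit-learn and NumPy libraries; the dataset and noise levels are synthetic and purely hypothetical) shows how mislabelled training events, such as false positives recorded as genuine intrusions, can erode the accuracy of a simple alert classifier.

```python
# Illustrative only: simulate how label noise in training data can hurt a detector.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic "alert" data standing in for labelled security events.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise in (0.0, 0.1, 0.3):           # fraction of mislabelled training events
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise
    y_noisy[flip] = 1 - y_noisy[flip]   # flip labels to mimic bad annotations
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"label noise {noise:.0%}: test accuracy {acc:.3f}")
```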

Hank Monaco Johnson Controls, Inc.

With the introduction of AI-supported video analytics and the amount of additional data being collected, the value of video footage has increased exponentially. Advanced video analytics are able to detect and distinguish between objects, analyse behaviours and attributes, and classify data for different business and safety applications. In conjunction with machine learning, video analytics become smarter over time and can be trained to recognise patterns and detect abnormal behaviours. However, AI is just another tool to help security personnel and decision makers make better informed decisions; it is not a replacement for human judgement in critical business decisions. As with all technology, AI can bolster operational efficiency and security, but only when paired with the insights and expertise of seasoned security professionals.
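As an illustrative aside, the sketch below shows what basic object detection in a video feed can look like, using OpenCV’s classical HOG person detector; the file name is hypothetical, and modern analytics would typically rely on deep-learning models rather than this simple approach.

```python
# Minimal sketch of person detection in video frames using OpenCV's HOG detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("surveillance_clip.mp4")  # hypothetical file name
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people in the frame; the boxes could feed counting or loitering rules.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```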

While AI-based analytics hold promise for being much more accurate than existing analytics, as well as providing newer capabilities, they certainly come with some challenges. One difficulty is the availability of datasets for "training," which are needed because the computer first needs to "learn." The time the computer takes to learn can also be a challenge for users with critical security needs. Another challenge is that "deep learning" algorithms take a lot of computing power, which means that users may have to significantly increase their infrastructure budgets. The need for large datasets and significant computing power also means that the cloud may be necessary, which could be a challenge for some users. Lastly, as in many other industries, there is an unfortunate fear that AI-based machines (e.g. mobile robots) will result in reduced employment.

Jessica Burton Seagate Technology

While artificial intelligence has proven itself to be incredibly helpful for video surveillance, it has also challenged how we approach data storage. There has been rapid adoption of facial recognition technologies, accident detection systems, and event-based trigger solutions, all of which are made possible by ultra-high-definition cameras and AI software. The HD/4K video quality needed for AI places a larger workload on the storage devices holding the surveillance data. With AI solutions, data is also stored for longer periods of time, allowing deep learning algorithms to better understand patterns, which again adds to the stress on storage drives. Video surveillance storage is seeing a 50 percent compound annual growth rate from 2014 to 2019. Meeting these increased demands has been challenging for our industry, but improvements in hard drive technology – such as enhanced firmware, higher workload ratings, and random read/write optimisation – make it possible to keep up.
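As a back-of-the-envelope check of what that growth rate implies, a 50 percent compound annual growth rate sustained over the five years from 2014 to 2019 multiplies storage demand roughly 7.6 times; the short snippet below works through the arithmetic with a purely hypothetical 2014 baseline.

```python
# Illustrative arithmetic only: effect of a 50% CAGR over five years.
base_tb = 100          # hypothetical 2014 storage footprint, in terabytes
cagr = 0.50            # 50 percent compound annual growth rate
years = 5              # 2014 -> 2019
print(base_tb * (1 + cagr) ** years)   # ~759 TB, roughly a 7.6x increase
```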

AI, including machine learning, has been around for decades, but recent advances in deep neural network technology and the increasing availability of cheap compute resources enable AI to make a significant impact in the procedure- and policy-intensive physical security arena (i.e. intrusion detection, access control, video surveillance, fire detection and suppression). While various AI algorithms can address the same application, success lies in how well AI systems are defined, including their operating boundaries, and perhaps more importantly, in the “training” of the AI system. The AI system must be taught which events are valid triggers and which policies or specific actions to execute. An effective system will have been trained using an extensive set of “trigger” events, along with exceptions (false positives that should be rejected) and a very explicit set of actions to take in response to various valid events. Otherwise, the AI system may execute inappropriate actions or provide insights of little value.
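To make the trigger/exception/action idea concrete, here is a minimal, hypothetical sketch of such a policy mapping; the event types, confidence threshold and actions are illustrative only and do not represent any particular vendor’s system.

```python
# Hypothetical sketch of a trigger/exception/action policy for security events.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str          # e.g. "door_forced", "motion_after_hours"
    zone: str
    confidence: float

# Valid triggers and the explicit actions they should execute.
ACTIONS = {
    "door_forced": "dispatch_guard",
    "motion_after_hours": "notify_operator",
}

def is_exception(event: Event) -> bool:
    # Known false positives learned during training, e.g. low-confidence
    # after-hours motion in a zone prone to camera noise.
    return event.kind == "motion_after_hours" and event.confidence < 0.6

def respond(event: Event) -> str:
    if event.kind not in ACTIONS or is_exception(event):
        return "ignore"
    return ACTIONS[event.kind]

print(respond(Event("motion_after_hours", "lobby", 0.4)))  # ignore
print(respond(Event("door_forced", "loading_dock", 0.9)))  # dispatch_guard
```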


Editor Summary

Our Expert Panel Roundtable seems to agree that the impact of AI will be overwhelmingly positive. There may be challenges to manage, but our panel agrees that fear of AI is more a matter of semantics than reality. It’s all about mindset, and their positive expectations suggest that AI is good news indeed.
