8 Jan 2025

Editor Introduction

2025 is likely to see further advancements in artificial intelligence, with potential impacts on various aspects of society, including the security industry. The new year will also require security professionals to adapt to changing market conditions and develop contingency plans for unforeseen events. Industry changes will accelerate and challenge everyone in the security market to keep up. Making specific predictions for the new year can be a struggle, but we asked our Expert Panel Roundtable: What will be the big news for security in the year ahead? 

 


Rachel Akbar Gallagher Security

In 2025, the security industry will see a shift toward partnership and usability. As organisations increasingly view security as a hygiene factor, the focus is shifting from individual features to selecting long-term partners capable of delivering operational benefits beyond traditional protection. Ease of use will dominate conversations, driven by the need to integrate advanced systems seamlessly into non-technical workflows. Complex solutions with steep learning curves risk obsolescence as businesses prioritise interfaces that empower broader teams. The growing influence of IT is another key trend. Cloud adoption, cybersecurity concerns, and regulatory compliance are giving IT departments a central role in system selection, requiring manufacturers to align offerings with their priorities. Finally, trust and education are paramount. Misconceptions about technology and inadequate training hinder effective implementation. End users increasingly value reliable support and collaboration, signalling that strong partnerships will be as critical as cutting-edge technologies in shaping the future of security.

Cynthia Marble Ontic Technologies

Workplace violence prevention legislation is going to expand in 2025. In the past two years, California and New York have implemented new regulations on workplace violence prevention, while OSHA is expected to introduce potential new requirements for healthcare workers by this year’s end. More states will follow suit next year, with an emphasis on planning, process, record-keeping, and continuous improvement.

Greg Colaluca Intellicene

To meet the pace of growth in the evolving security landscape, security operations require tools powered by intelligence. Intelligence-driven solutions transform raw data into actionable insights, enabling security teams to proactively identify, mitigate, and prevent potential risks. This can bolster security defences, streamline operations, and unlock valuable insights. Intelligence-driven solutions are driving market growth at a robust rate and will likely continue to do so throughout 2025. These advancements are reshaping the core of security operations. 2025 will be less about installing new technologies that collect more data and more about finding ways to use the data we have to enhance operations. 

Andy Cease Entrust Inc.

After observing the success of the European Union’s digital wallet program, U.S. consumers will look to find tools that put them back in control of their identities in 2025. Whether it be through banking or travel, trusted identities are central to facilitating digital journeys that are not only seamless but also secure. Enabled by biometric-based identity verification, digital identity is poised for rapid and long-term growth, ushering in a shift beyond digital transformation to a digital-first society in 2025. Secure, trusted identities have the power to merge both our digital and physical worlds. By leveraging advanced technologies such as phishing-resistant, password-less multi-factor authentication (MFA) and AI-driven identity verification, organisations can significantly enhance their security posture and operational efficiency.

It probably won’t come as a surprise to anyone if I say AI will be embedded in many more business activities. But that does mean security teams will need to do more to adapt to AI. The focus should be on protecting data as it is fed into an AI service and when it comes back in response. Implementing a zero-trust architecture is crucial. Ensure every access request is verified and monitored, regardless of its origin. Security teams should enforce strict data-sharing policies to limit exposure and use data-anonymisation techniques to protect sensitive data. Evaluating third-party AI providers’ security practices will also become increasingly important to ensure they meet industry standards and comply with privacy regulations. Additionally, organisations will need to be more proactive in educating employees on responsible AI use and establishing clear guidelines so they can benefit from generative AI while maintaining a strong security posture. 
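As a hedged illustration of the anonymisation step mentioned above, the short Python sketch below masks two common identifier types before a prompt leaves the organisation. The anonymise helper and its regular expressions are illustrative placeholders, not a prescribed implementation; a real deployment would rely on a vetted PII-detection library with far broader coverage.

import re

# Illustrative patterns for two common identifier types; a production system
# would use a vetted PII-detection library with far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def anonymise(text: str) -> str:
    """Replace recognisable identifiers with placeholder tokens before the
    text leaves the organisation's boundary for an external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or +1 555 010 9999 about badge 4471."
print(anonymise(prompt))  # the email address and phone number are masked before any API call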

Woodie Andrawos National Monitoring Center

As we move into 2025, AI’s role in security monitoring is gaining momentum, setting the stage for more precise and insightful threat detection. AI-driven video analytics are advancing, allowing monitoring systems to interpret site activities with enhanced accuracy. This improvement helps reduce unnecessary alerts so response teams can focus on high-priority incidents that demand immediate attention. The growing ability of AI to detect context within video footage enables a more adaptive approach, where monitoring systems can distinguish between routine activity and potential risks in real time. As companies seek new ways to elevate customer support, ensuring assistance is available beyond regular hours becomes a priority, especially with the increase in incidents occurring outside typical business times. This shift marks a new era in monitoring, where AI empowers security providers to meet evolving challenges with both efficiency and resilience.

Stephen G. Sardi Anviz Global Inc.

As biometric security (fingerprints, retinal scans, palm veins, etc.) becomes more commonplace, we may see increased biohacking attacks that manipulate or falsify biometric data. This could raise privacy and identity theft issues, so corresponding laws and technologies must be developed to ensure that biometric data is not misused. 2025 may see stronger global data protection regulations, perhaps a United Nations-backed global digital security and privacy framework to complement existing laws such as the General Data Protection Regulation (GDPR). Governments may push for stricter rules on data collection and data breach disclosure. As privacy laws such as the GDPR and the California Consumer Privacy Act (CCPA) become more stringent, and potential global digital privacy regulations emerge, security firms will need to help their clients comply with complex data protection requirements. They must stay abreast of changing laws and create flexible, adaptable solutions that meet differing legal requirements across regions. They will also need to ensure that their storage, encryption, and access control comply with regulatory and privacy standards while focusing on protecting sensitive biometric data.
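On the storage point, a minimal sketch of encrypting a biometric template at rest is shown below. It assumes the third-party Python cryptography package and its Fernet symmetric scheme as one possible choice; neither is specified by the panelist, and in practice the key would live in a hardware security module or managed key service with audited access control.

from cryptography.fernet import Fernet  # third-party package: cryptography

# Assumption for the sketch: the key is generated inline. In production it
# would come from an HSM or managed key service, never sit beside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

# The biometric template is persisted only as ciphertext; the raw bytes are
# discarded once the encrypted copy is written.
template = b"\x12\x9a\x04..."  # placeholder for a fingerprint or palm-vein template
ciphertext = cipher.encrypt(template)

# Decryption is confined to an authorised matching service.
assert cipher.decrypt(ciphertext) == template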

I'm sure many of these answers will harp on the role that AI will play in security, as they have over the last couple of years. The difference this time will be a shift toward investing in AI because it solves real-world problems, not investing in AI for AI's sake. AI that solves the everyday headaches security teams deal with, such as false alarms, slow response times, and systems that don't play nice together, is what organisations are looking for to level up their teams. In other words, practical AI in action: AI that's truly making people safer, saving companies money, and simply working.

In 2024, we saw a significant increase in data breach-related securities class action filings, reflecting a trend known as “event-driven litigation.” This involves significant events, like data breaches, triggering a drop in a company’s share price and forming the basis of class actions. 2024 saw some of the largest settlements in this area, totalling $560 million. I expect this trend to continue, and it will remain an incentive for companies to invest in data protection controls, such as AI data protection tools (which 48% of technologists say will be one of the top use cases for AI in 2025, according to the recent "The Impact of Technology in 2025 and Beyond: an IEEE Global Study"), and in the ability to show that those controls were operating at the time of a breach. Additionally, I expect that we’ll see a consolidation of businesses willing or able to deliver services in the European Union, driven by the updated EU Product Liability Directive (PLD). The updated PLD, adopted by the European Council, extends liability to digital products, including software and AI, and takes effect in mid-2026. It treats software as a product subject to the same liability rules as physical goods, holding producers responsible for damages caused by defective software, AI systems, and IoT devices, regardless of fault.

Rebecca Herold Privacy & Security Brainiacs

In 2025, information and digital security/safety activities are going to be filled with the adoption of tools that can deliver great benefit; however, if their results are not accurate, or if they are misused, they can also create cyber risks. While the benefits of using AI to improve security are exciting, security professionals and business pioneers need to understand that there are thousands of AI tools, some of which are good and many of which are buggy and inaccurate. When implementing new AI tools, security professionals should always test any AI security tool that claims to provide benefits: confirm that it produces accurate results, that it will not degrade the performance of the associated networks and business facilities, that it does not put people, networks, data, or physical environments at risk by exposing or inappropriately sharing data, and that it does not violate the organisation’s legal requirements for data use.
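As one way to put that testing advice into practice, the Python sketch below scores a candidate AI alert classifier against alerts whose true disposition is already known. The classify callable, the sample alerts, and the metric names are hypothetical stand-ins for whatever tool is under evaluation, not part of the panelist's recommendation.

from typing import Callable, Iterable, Tuple

def evaluate(classify: Callable[[dict], bool],
             labelled_alerts: Iterable[Tuple[dict, bool]]) -> dict:
    """Compare a tool's verdicts with known outcomes and report basic accuracy metrics."""
    tp = fp = tn = fn = 0
    for alert, is_real_threat in labelled_alerts:
        predicted = classify(alert)
        if predicted and is_real_threat:
            tp += 1
        elif predicted and not is_real_threat:
            fp += 1
        elif not predicted and is_real_threat:
            fn += 1
        else:
            tn += 1
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "false_alarm_rate": fp / (fp + tn) if fp + tn else 0.0,
    }

# Toy usage: a stand-in classifier that flags any alert mentioning "forced door".
sample = [({"text": "forced door at dock 3"}, True),
          ({"text": "cat near fence line"}, False),
          ({"text": "forced door alarm test"}, False)]
print(evaluate(lambda alert: "forced door" in alert["text"], sample))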

As artificial intelligence (AI) becomes more embedded in daily life, organisations are increasingly tasked with safeguarding their video data against manipulation. While security teams often leverage AI to enhance threat detection, access control, and alarm systems, few prioritise defences against AI-driven threats. But the fact is, the proliferation of deepfakes and AI-generated content introduces significant challenges for video-reliant security operations, particularly in investigations and legal proceedings, as manipulated footage undermines its reliability. To maintain the integrity of mission-critical video content, security teams must prioritise ensuring the authenticity of their digital assets. A proactive approach involves mapping the digital content with identifying data and securely storing it in systems like blockchain. This method creates an authenticated baseline, enabling security personnel to detect any alterations made after the content's original creation by comparing it to the verified record.
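A minimal Python sketch of that baseline-and-compare approach follows, assuming SHA-256 digests and a stand-in file. In a real deployment the digest record would be appended to tamper-evident storage such as a blockchain or a write-once ledger; the file name and field names here are purely illustrative.

import hashlib, json, time

def fingerprint(path: str) -> dict:
    """Digest the footage plus identifying data at capture time; this record
    becomes the authenticated baseline."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)
    return {"file": path, "sha256": sha256.hexdigest(), "recorded_at": time.time()}

def verify(path: str, baseline: dict) -> bool:
    """Re-hash the footage and compare it to the stored record; any
    post-capture alteration changes the digest and is flagged."""
    return fingerprint(path)["sha256"] == baseline["sha256"]

# Demonstration with a stand-in file; the baseline would normally be written
# to the tamper-evident store at the moment of recording.
with open("clip.bin", "wb") as f:
    f.write(b"simulated video bytes")
baseline = fingerprint("clip.bin")
print(json.dumps(baseline), verify("clip.bin", baseline))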

 

Kurt Takahashi Netwatch USA

The future of security is focused on detecting and mitigating threats before they materialise. Advancements in AI-powered video monitoring systems are enabling deeper analysis of behavioural patterns and environmental cues that might otherwise go unnoticed. With sophisticated analytics distinguishing routine activities from events requiring attention, false alarms are reduced, allowing security operators to respond more effectively. These AI-driven systems enhance human oversight by managing vast amounts of data and identifying trends in real time. This evolution is shaping a security model that blends human expertise with intelligent, automated monitoring across multiple sites, setting a new standard for security excellence as we move into 2025.


Editor Summary

Among the news our expert panelists expect in 2025 is a shift toward partnership and usability. There will also be increased challenges such as preventing workplace violence. In 2025, we can expect security systems to be driven increasingly by intelligence (especially AI). Other trends include a shift toward digital wallets, and data protection challenges related to biometrics and other technologies. Specifically in the area of AI, we can expect trends to favour more practical uses rather than “AI for AI’s sake.”
