One of the reasons for the significant growth in recent years in the deployment of enterprise-level access control systems is that advances in technology have enabled these systems to deliver substantial benefits over and above simply controlling access to a building.
As a result, in addition to security managers, who are responsible for the security of people, property and assets, a number of other stakeholders are likely to be involved in the decision to justify and fund the installation of an access control system. These could include, but are not limited to, health & safety, operations and human resources management.
Compliance
A key word likely to be continuously on the lips of these stakeholders is ‘compliance’, as failure to comply with government regulations or local laws can have serious consequences for organisations that have a duty of care to the general public. An inspector’s visit to a food processing plant, for example, could prove costly and may even result in temporary closure unless it can be verified that everyone working at the plant has undertaken appropriate training and holds a valid hygiene certificate. The same smart access control cards that allow staff through the entrance of a building can, through integration of the various systems, also be used to produce a report of everyone whose hygiene certificate is due for renewal.
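In practice, a report like this is often little more than a query against the integrated access control database. The sketch below assumes a hypothetical cardholders table with holder_name, card_id and certificate_expiry columns; any real system's schema and reporting tools will differ.

```python
import sqlite3
from datetime import date, timedelta

# Hypothetical schema: the access control database also holds training records.
conn = sqlite3.connect("access_control.db")
cutoff = (date.today() + timedelta(days=30)).isoformat()

# List everyone whose hygiene certificate expires within the next 30 days.
rows = conn.execute(
    """SELECT holder_name, card_id, certificate_expiry
       FROM cardholders
       WHERE certificate_expiry <= ?
       ORDER BY certificate_expiry""",
    (cutoff,),
).fetchall()

for name, card, expiry in rows:
    print(f"{name} (card {card}): hygiene certificate expires {expiry}")
```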
The weak link
In this and many other scenarios, the hardware and software elements of an access control system need to work effectively 24/7/365. The weakest link is most likely to be the server on which the various software applications run. Unfortunately, even a well-designed and well-maintained system remains vulnerable to downtime, as no server manufacturer can give a 100% guarantee that a component will never fail. It is also important to remember the potential impact of a cyber attack on the associated software applications.
Knowing your options
Data backups and restores: Having backup, data-replication and failover procedures in place is the most basic approach to server availability. It helps speed the restoration of an application and preserve data following a server failure. However, if backups only occur daily, a significant amount of data may be lost. At best, this approach delivers approximately 99% availability. That sounds good, but it equates to an average of around 87.6 hours of downtime per year, or more than 90 minutes of unplanned downtime per week.
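The downtime figures quoted here, and in the sections that follow, fall out of a simple calculation: a year contains 8,760 hours, so each availability level leaves a fixed budget of downtime. A few lines of Python make the arithmetic explicit:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

for availability in (99.0, 99.9, 99.99, 99.999):
    downtime_hours = HOURS_PER_YEAR * (1 - availability / 100)
    print(f"{availability}% uptime -> {downtime_hours:.2f} hours/year "
          f"({downtime_hours * 60 / 52:.0f} minutes/week)")
```

Running this confirms that 99% availability allows 87.6 hours of downtime per year, while 99.999% allows just over five minutes.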
High availability (HA): HA encompasses both hardware-based and software-based approaches to reducing downtime. HA clusters combine two or more servers running an identical configuration, with software keeping application data synchronised across all of them. When one server fails, another in the cluster takes over, ideally with little or no disruption. However, HA clusters can be complex to deploy and manage, and the software must be licensed on every server in the cluster, increasing costs.
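At its core, a cluster decides which node is active by watching heartbeats. The following is a minimal sketch of that idea rather than any particular vendor's clustering product; the Node class, the five-second timeout and the election logic are all illustrative assumptions.

```python
import time

class Node:
    """One server in the cluster, reporting periodic heartbeats."""
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()

    def is_healthy(self, timeout=5.0):
        return time.monotonic() - self.last_heartbeat < timeout

def elect_active(cluster):
    """Return the first healthy node; the rest remain on standby."""
    for node in cluster:
        if node.is_healthy():
            return node
    raise RuntimeError("no healthy node in cluster")

cluster = [Node("server-1"), Node("server-2")]
active = elect_active(cluster)       # normally server-1
cluster[0].last_heartbeat -= 10      # simulate missed heartbeats
active = elect_active(cluster)       # failover: server-2 takes over
print(f"active node: {active.name}")
```

Note that the election only runs once heartbeats have already been missed, which is exactly the reactive behaviour the next section contrasts with predictive monitoring.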
HA software
HA software, on the other hand, is designed to detect evolving problems and prevent downtime proactively. It uses predictive analytics to identify, report and handle faults automatically before they cause an outage. This continuous monitoring is an advantage over the cluster approach, which only responds after a failure has occurred. Moreover, as a software-based solution, it runs on low-cost commodity hardware.
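One simple form of such predictive analytics is trend analysis on hardware health metrics. The sketch below is a hypothetical example rather than how any specific HA product works: it fits a linear trend to recent sensor readings and raises an alert if the metric is projected to cross a safe ceiling within a given horizon.

```python
from statistics import mean

def trending_towards_failure(samples, ceiling, horizon=24):
    """Flag a metric whose linear trend will cross its safe ceiling
    within `horizon` future samples (e.g. hours)."""
    xs = range(len(samples))
    x_bar, y_bar = mean(xs), mean(samples)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples)) \
            / sum((x - x_bar) ** 2 for x in xs)
    projected = samples[-1] + slope * horizon
    return projected >= ceiling

# Hypothetical drive-temperature readings in degrees Celsius, one per hour.
disk_temps = [41, 42, 44, 45, 47, 49]
if trending_towards_failure(disk_temps, ceiling=60):
    print("alert: drive projected to overheat; migrate the workload now")
```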
HA generally provides from 99.9% to 99.99% uptime. On average, this means between roughly 53 minutes and 8.8 hours of downtime per year, which is still a marked improvement over basic backup strategies.
Maximum uptime
Continuous availability solutions are able to deliver 99.999% uptime; this is the equivalent of just over five minutes of downtime per year.
Supported by specialist continuous availability software, two servers are linked and continuously synchronised via a virtualisation platform that pairs protected virtual machines to create a single operating environment. If one physical machine fails, the application or software platform continues to run on the other without interruption. In-progress alarms and access control events, as well as data in memory and cache, are preserved.
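Conceptually, the paired machines behave like two copies of the same state machine fed an identical event stream, so the survivor already holds all in-memory state the moment its partner fails. The toy sketch below illustrates that principle only; real continuous availability platforms perform this synchronisation at the hypervisor level, not in application code.

```python
class Replica:
    """One of a pair of synchronised state machines."""
    def __init__(self, name):
        self.name = name
        self.state = {}   # e.g. in-progress alarms and door events
        self.alive = True

    def apply(self, event):
        badge, door = event
        self.state.setdefault(badge, []).append(door)

def publish(pair, event):
    """Lockstep delivery: every event reaches every live replica."""
    for replica in pair:
        if replica.alive:
            replica.apply(event)

pair = [Replica("vm-a"), Replica("vm-b")]
publish(pair, ("badge-1001", "main-entrance"))
pair[0].alive = False                       # simulate hardware failure
publish(pair, ("badge-1002", "lab-door"))
print(pair[1].state)  # vm-b already holds every event: no reboot, no replay
```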
Continuous availability means that no single point of failure can stop a security software platform from running. Unlike high availability, backup and clustering solutions, there is no failover or reboot required, and downtime is therefore kept to an absolute minimum. In a business environment where non-compliance can have serious consequences, adding a continuous availability solution to support an existing or new access control system would seem to be one of the easiest decisions to make.