25 Apr 2022

Most individuals have strong opinions about which is more essential to our civilisation: personal privacy or security. After more than a decade of technology enabling the harvesting of ever-greater amounts of Personally Identifiable Information (PII) and other data, the pendulum is now swinging towards a society that places more emphasis on individual rights and privacy.

This shift is manifesting in a variety of ways, quickly changing the technology, security and cyber security landscapes, and requiring organisations to take a hard look at their own data and privacy practices.

Growth of digitisation

While finding a balance point between the two that everyone can agree on is an ongoing struggle, there are now technology solutions that can help. As society, organisations, devices and individuals continue to search for this balance, ongoing discussion remains important, because the digitisation and connectivity of our environments continue to grow.

In recent years, a number of policies have been put in place to formally define the balance between privacy and security. The European Union (EU) tackled this issue with the General Data Protection Regulation (GDPR), which came into force in 2018 and is currently seen as the toughest privacy and security law in the world. Other, more specific rules based on industries or geographical areas, such as HIPAA in healthcare and the California Consumer Privacy Act, also help determine an individual’s right to privacy.

Regulations have to keep up with technological advancements

Every new regulation, policy and legal protection helps draw a line in the sand regarding privacy and security, addressing the growing societal concern for both collective security and personal privacy. However, regulations are usually unable to keep up with technological advancements.

With new policies being enacted at the same time as new solutions and threats come to market, organisations can easily fall behind. This opens the door for security and safety teams to take data privacy into their own hands, reassuring their customers that their solutions and policies are both privacy-centric and security-minded.

Where privacy and security co-exist

As our presence in physical and cyber environments expands and merges - along with the inter-connectivity between devices - so does the attack surface available to those wishing to do harm. There have been several developments on both fronts to address these threats with built-in protections for privacy and safety in place. The question becomes: As we integrate solutions that help us address security, are we sacrificing privacy and vice versa?

For example, physical access control systems are deployed to protect an organisation’s facilities, people and assets, ensuring only those with the right credentials have permission to enter. This helps to ensure the security of the organisation – but at the same time, the access control systems are gathering reams of data, including who entered what area, when, and with whom.
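The audit-trail side effect described above can be sketched in a few lines: every entry decision, granted or denied, leaves a data record behind. The badge IDs, zones and function names below are illustrative assumptions, not taken from any real access control product.

```python
from datetime import datetime, timezone

# Hypothetical credential table: badge ID -> zones it may open.
PERMISSIONS = {"badge-1001": {"lobby", "lab"}, "badge-2002": {"lobby"}}
audit_log = []  # the very data trail the article describes

def request_entry(badge_id: str, zone: str) -> bool:
    """Grant or deny entry, recording who tried to enter where and when."""
    granted = zone in PERMISSIONS.get(badge_id, set())
    audit_log.append({
        "badge": badge_id,
        "zone": zone,
        "granted": granted,
        "time": datetime.now(timezone.utc).isoformat(),
    })
    return granted

print(request_entry("badge-2002", "lab"))  # → False: denied, but still logged
print(len(audit_log))                      # → 1
```

Note that the log grows regardless of outcome: the security function and the data-gathering are inseparable, which is exactly the tension the article describes.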

Business intelligence

Integrators and manufacturers are now actively marketing the data benefits of their solutions to their customers. These benefits typically revolve around business intelligence, as organisations can use the data gathered by security products to provide valuable insights, in order to guide business decisions and ensure compliance with applicable regulations.

The same security vs. privacy conundrum is present in video surveillance. Once only able to provide real-time monitoring, video surveillance has expanded to include video analytics, which use machine learning and AI (artificial intelligence) to tell us what happened in the past, why it happened, and how to prevent it from happening in the future.

From a security perspective, video analytics removes the guesswork from leveraging business data and creating actionable results, but it also introduces personal privacy concerns related to the generation of PII.

PII gathered from video surveillance

PII gathered from video surveillance, such as faces and biometric records, is of concern to organisations considering deploying the technology, and often rightfully so. Some organisations are choosing to forego surveillance systems altogether, in order to better protect the rights of their employees.

To address this, security professionals are starting to bring facial recognition and video analytics out of the shadows, explaining such innovations in the public sphere in ways that are easily understood.

Framework for security data governance

Responsible providers must begin outlining a framework for security data governance on the end-user side, in order to ensure that users’ privacy is protected from unlimited data gathering. Operating with transparency, security teams need to define how data is gathered, where it is stored, and what security measures are in place to protect it.
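What such a governance paradigm might record can be sketched as a simple data structure: one entry per data type, stating purpose, storage location, retention and safeguards. All field names and values below are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class DataGovernanceRecord:
    """One entry in a hypothetical security-data governance register."""
    data_type: str          # e.g. "access-control event log"
    purpose: str            # why the data is gathered
    storage_location: str   # where the data is held
    retention_days: int     # how long before it must be deleted
    safeguards: list = field(default_factory=list)  # protections applied

    def is_due_for_deletion(self, age_days: int) -> bool:
        # Transparency also means enforcing the stated retention period.
        return age_days > self.retention_days

record = DataGovernanceRecord(
    data_type="access-control event log",
    purpose="facility security auditing",
    storage_location="on-premises encrypted database",
    retention_days=90,
    safeguards=["encryption at rest", "role-based access"],
)
print(record.is_due_for_deletion(120))  # → True: 120 days exceeds retention
```

Even a minimal register like this makes the "what, where and how" of data handling auditable, which is the transparency the paragraph calls for.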

At the same time, security is a greater concern now than ever before. With modern threats, including workplace violence, COVID-19 concerns, and a growing number of cyber-attacks, today’s workplaces have never been riskier. Organisations are constantly looking to deploy solutions that can both prevent and deter such risks.

Privacy issues continue to grow

While physical security systems, such as video surveillance and access control, have become widely accepted solutions for such risks, privacy issues only continue to grow. With an estimated 75.44 billion networked IoT devices projected to be installed by 2025, even more questions will be raised over the rights of individuals to protect their personal privacy.

It is up to security teams and their IT counterparts to find the best ways to optimise physical and cyber security, without breaching these boundaries.

Is AI (artificial intelligence) the answer?

Modern technologies and their champions are succeeding in helping organisations strike their own balance between security and privacy. Often, AI and machine learning solutions can be easily configured to enable or disable features when these are not needed or wanted.

For example, if an organisation is not comfortable with the use of facial recognition in their video surveillance, they can simply turn it off, while still taking advantage of available analytics that work based on body signatures.

Similarly, operations teams can gather anonymised occupancy data from cameras without accessing the security system itself. It is up to security teams, IT departments and security integrators to understand the implications and capabilities of the technology they are installing, in order to determine whether a solution meets their privacy and security specifications.
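One hedged sketch of what "anonymised occupancy data" could mean in practice: per-zone counts are kept while the track identifiers that link detections to individuals are discarded before the data leaves the function. The event format below is an assumption for illustration only.

```python
def occupancy_counts(detections):
    """Aggregate camera detections into per-zone occupancy counts.

    Each detection is a (zone, track_id) pair; only the count of distinct
    tracks per zone is retained, so no identity or biometric data survives
    the aggregation step.
    """
    per_zone = {}
    for zone, track_id in detections:
        per_zone.setdefault(zone, set()).add(track_id)
    # Drop the track IDs: the caller only ever sees anonymous counts.
    return {zone: len(ids) for zone, ids in per_zone.items()}

events = [("lobby", 1), ("lobby", 2), ("cafeteria", 3), ("lobby", 1)]
print(occupancy_counts(events))  # → {'lobby': 2, 'cafeteria': 1}
```

The design choice is that anonymisation happens at the earliest possible point, so an operations dashboard consuming these counts never needs access to the underlying surveillance system.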

Facial recognition leads to privacy concerns

When first introduced to the market, facial recognition was considered an exciting technology with a wide variety of applications. Today, however, it is highly controversial and less well accepted by the general public.

Opposition has grown for reasons including algorithmic bias and the scraping of photos from public sites to create repositories of images for facial matching. This is where artificial intelligence (AI) can make a game-changing difference, with its ability to power advanced video analytics while simultaneously addressing both privacy and security.

AI technology used to solve the problem of bias

AI technology is also being used to tackle the problem of bias, particularly through the development of better training data sets as the basis of facial recognition. Unlike databases of photos skimmed from public sites, these data sets can be artificially created using Generative Adversarial Networks (GANs).

GANs use AI to create computer-generated faces, indistinguishable from real photos, as the basis for training facial recognition systems. In this way, millions of faces can be produced under controlled conditions, with a wide variety of skin tones, eye shapes and facial structures. These artificially created images are, in turn, used to train the facial recognition model, reducing bias and broadening the pool of faces it can recognise.
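The adversarial idea behind GANs can be illustrated at toy scale without any face data at all: a generator and a discriminator are trained against each other until the generator's output matches the real distribution. The one-dimensional sketch below is a minimal illustration of that training loop, not a face-generation system; all parameters are illustrative.

```python
import numpy as np

# Toy adversarial training on 1-D data: a one-parameter generator learns to
# shift noise so its output matches the "real" distribution N(3, 1), while a
# logistic discriminator learns to tell real samples from generated ones.
rng = np.random.default_rng(0)
REAL_MEAN = 3.0
b = 0.0                  # generator parameter: g(z) = z + b
w, c = 0.0, 0.0          # discriminator: d(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(2000):
    real = rng.normal(REAL_MEAN, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + b

    # Discriminator step: minimise binary cross-entropy, real vs fake.
    s_r, s_f = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - s_r) * real + s_f * fake)
    grad_c = np.mean(-(1 - s_r) + s_f)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step (non-saturating loss): push d(fake) towards "real".
    s_f = sigmoid(w * fake + c)
    grad_b = np.mean(-(1 - s_f) * w)
    b -= lr * grad_b

print(round(b, 1))  # b should have moved from 0 towards REAL_MEAN
```

Real face-generating GANs replace these scalar parameters with deep convolutional networks, but the alternating minimise/maximise structure is the same.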

AI can help protect personal information on the cyber-side

AI can further help protect personal information on the cyber side. For example, facial recognition combined with two-factor authentication is commonly used to unlock a personal mobile phone.

This technology is similarly being deployed in remote work, where individuals working from home must first verify themselves using facial recognition, in order to access their organisation’s sensitive files.

AI used in reporting

AI is also used in reporting, making it possible for organisations and individuals to know immediately if an unauthorised user has attempted to use their identity, even recognising when a social security number is being entered online in an unsecured environment.
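The article does not describe the detection mechanism, but a minimal sketch of recognising an SSN-like string before it leaves an unsecured form might look like the following. The pattern and function name are illustrative; a production detector would also validate number ranges and surrounding context, and real systems may use learned models rather than a fixed pattern.

```python
import re

# Illustrative pattern for US Social Security Number formatting (ddd-dd-dddd).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_ssn(text: str) -> bool:
    """Flag text that appears to contain an SSN before it is sent unencrypted."""
    return bool(SSN_PATTERN.search(text))

print(contains_ssn("form field: 123-45-6789"))  # → True
print(contains_ssn("order id: 123456789"))      # → False
```

A monitoring tool could run such a check on outbound form data and warn the user before anything is transmitted over an insecure connection.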

While both humans and machines (AI) can help find the line between privacy and security, it remains up to each individual to determine how, when, where and to what extent the technologies they employ address their personal assessment of privacy vs. security.

Smart solutions providers, in turn, recognise this by creating systems and tools that allow for a sliding scale of privacy and security, wherein the individual can determine which of their concerns is addressed more directly.