18 Apr 2023

The recent criminal activity being undertaken via the chatbot is disappointing, but it ultimately is no surprise. When new technology is developed, there’s always a chance it can be used for harm.

ChatGPT

Few could have foreseen the rapid rise in popularity that the AI chatbot ChatGPT would achieve following its launch in November 2022. Having amassed over one million users within its first week, ChatGPT has propelled itself to the centre of AI and wider tech conversations through its powerful, human-like responses, generated at the click of a button.

Recently, with the release of GPT-4, the latest and more powerful version of the model, the chatbot continues to raise questions about businesses’ cybersecurity credentials.

Risk mitigation guidance 

Despite its many uses, recent reports suggest that cybercriminals have moved swiftly to weaponise the chatbot. In response to this emerging threat, business leaders from SoDA (Software Development Association of Poland) offer guidance on how businesses can mitigate the risk posed by GPT-4 and where the responsibility lies in this debate.

Dr. Jerzy Biernacki, Head of Operations at Miquido comments, “GPT-4 has developed into a disruptor within artificial intelligence (AI) and demonstrates the powerful capabilities AI has to offer. The recent criminal activity being undertaken via GPT-4 is disappointing, but it ultimately is no surprise. When new technology is developed, there’s always a chance it can be used for harm."

OpenAI and robust security measures

Biernacki continues, “In terms of where the responsibility lies, we should expect greater strides to be taken by OpenAI following its partnership with industry heavyweight Microsoft. Implementing robust security measures to prevent unauthorised access or misuse of the service will likely be top of the agenda for OpenAI in the coming months.”

“Ultimately, though, it will be end users who shoulder the most responsibility for GPT-4. OpenAI has launched a very powerful tool, and suitable education should be on hand. Educating users on the security implications surrounding GPT-4 and the mechanics behind the chatbot will be pivotal moving forward and could potentially reduce the volume of criminal activity.”

The risks of AI-generated code

Lukasz Brandt, Senior Security Analyst at DAC.digital, said, “From a business perspective, the risk of falling victim to AI-generated code could affect an organisation on multiple fronts. The introduction of bugs or other weaknesses could disrupt a business through data breaches and network disturbances, both of which could subsequently lead to financial loss and reputational damage further down the line.”

Skills and knowledge

The third-party expertise can help to reduce the risk of data breaches and other security incidents"

Brandt continues, “Software developers and other industry bodies are equipped with the necessary skills and knowledge to help guide businesses through this confusing landscape.”

"Businesses should consider bringing in the expertise of developers to help provide guidance and best practices for reviewing, testing, and deploying code in a production environment. This third-party expertise can help to reduce the risk of data breaches and other security incidents.”
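To make that review step concrete, here is a minimal, hypothetical sketch in Python of the kind of automated pre-review pass a development partner might set up for AI-generated code: it flags a handful of high-risk calls for a human reviewer. The `RISKY_CALLS` list and `flag_risky_calls` helper are illustrative names rather than a prescribed tool, and a real pipeline would pair this with proper static analysis and testing.

```python
import ast

# Illustrative, not exhaustive: a few calls worth a second look when they
# appear in machine-generated code.
RISKY_CALLS = {"eval", "exec", "system"}

def flag_risky_calls(source: str) -> list[tuple[int, str]]:
    """Return (line number, call name) pairs for risky calls in a snippet."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            # Handles both bare names (eval) and attributes (os.system).
            name = getattr(node.func, "id", getattr(node.func, "attr", None))
            if name in RISKY_CALLS:
                findings.append((node.lineno, name))
    return findings

snippet = "import os\nos.system('rm -rf /tmp/cache')\n"
print(flag_risky_calls(snippet))  # [(2, 'system')]
```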

Authentication 

Brandt outlines the role user authentication could play in mitigating criminal activity on GPT-4: “Hackers are smart but pragmatic.”

“The thinking behind user authentication is that anything that can slow down an attack, make it harder to automate, and increase the necessary expenditure should be considered. However, security cannot be too burdensome for legitimate users, a balancing act for OpenAI to consider from both a user experience and a security viewpoint.”
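As a rough illustration of that balancing act (and not a description of anything OpenAI has actually deployed), the sketch below applies exponential backoff to failed sign-in attempts: each failure doubles the wait before the next attempt is accepted, which slows automated guessing considerably while barely inconveniencing a legitimate user who mistypes once. All names here are hypothetical.

```python
import time
from collections import defaultdict

_failures: dict[str, int] = defaultdict(int)           # failed attempts per account
_next_allowed: dict[str, float] = defaultdict(float)   # earliest permitted next attempt

def attempt_allowed(user: str) -> bool:
    """True once the account's backoff window has elapsed."""
    return time.monotonic() >= _next_allowed[user]

def record_failure(user: str, base_delay: float = 1.0, cap: float = 300.0) -> None:
    """Double the wait after each failure, capped at five minutes."""
    _failures[user] += 1
    delay = min(base_delay * 2 ** (_failures[user] - 1), cap)
    _next_allowed[user] = time.monotonic() + delay

def record_success(user: str) -> None:
    """Reset the throttle once the user proves legitimate."""
    _failures.pop(user, None)
    _next_allowed.pop(user, None)
```

A per-account delay like this is usually combined with IP-level throttling or a CAPTCHA, since attackers can spread their guesses across many accounts.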

Self-sovereign identity

“A solution is to use two-factor authentication based on mobile applications that generate time-based one-time passwords (TOTPs), which expire after a short interval. Alternatively, another promising technology is self-sovereign identity based on blockchain, which, by its nature, cannot be tampered with or infiltrated by criminals.”

“The tools are out there and at the disposal of OpenAI to deploy to mitigate the growing threat posed by criminals looking to infiltrate the hottest new feature on the internet right now.”
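For readers unfamiliar with the first approach Brandt describes, here is a minimal sketch of how a time-based one-time password is derived and checked, in the style of RFC 6238 and using only Python’s standard library. The `totp` and `verify` names are illustrative, not an OpenAI API, and the secret shown is a well-known demo value.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive an RFC 6238-style one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval        # 30-second time step
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    """Constant-time comparison so the check itself leaks no timing signal."""
    return hmac.compare_digest(totp(secret_b32), submitted)

print(totp("JBSWY3DPEHPK3PXP"))  # six digits, valid for roughly 30 seconds
```

Because each code expires with its 30-second window, a stolen password alone is not enough to automate access, which is exactly the kind of friction Brandt describes.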

Keeping criminal activity to a minimum

Biernacki concludes, “Although the early indications surrounding the criminal activity being conducted across AI-powered chatbots are concerning, businesses should not panic." 

“What we can see is that both the expertise from the development industry and security measures are available to help ensure criminal activity is kept to a minimum.”

Best practices 

“Having the necessary user authentication processes in place will help OpenAI monitor the legitimacy of their users and the activities they are undertaking on the platform.”

“Furthermore, businesses that may feel vulnerable to this emerging threat can seek the expertise of professional developers to provide security advice, guidance, and best practice on identifying weaponised code developed via AI’s latest and most disruptive tool yet.”