Cyber security company warns against cyber risks of ChatGPT
01.08.2024
ECONOMY
TECHNOLOGY

The adoption of artificial intelligence in business life, and of generative AI applications such as ChatGPT in particular, has brought cyber security risks with it. ChatGPT, the application most widely used without the knowledge of cyber security teams, has made businesses of every size a target for cyber attackers, especially through third-party plugins.

TURKEY - Artificial intelligence, one of the most talked-about agenda items since the end of 2023, and generative AI solutions such as ChatGPT have come into active use in every business function and at every stage of business processes. According to a report published by Microsoft and LinkedIn, three out of four white-collar employees worldwide say they actively use ChatGPT. Although their contribution to efficiency and productivity cannot be denied, tools such as ChatGPT have become the new nightmare of cyber security teams. Onur Oktay, Senior Cyber Security Specialist at Privia Security, a local cyber security company, warned about the cyber risks posed by ChatGPT.

"Shadow IT" crown of ChatGPT

According to a study published in April 2024, ChatGPT is one of the leading software-as-a-service solutions used within companies without the knowledge of information security and IT teams. Noting that this situation is known in the literature as "shadow IT", Privia Security Senior Cyber Security Specialist Onur Oktay said: "Every cloud-based or on-premises application used in a corporate company must be overseen by the IT or cyber security teams. It is the duty of IT and cyber security teams to establish governance principles on how and under what conditions such software may be used, and on which unusual situations the teams should be informed about. In other words, any software used within the company without the knowledge of these teams brings risks.

"Generative AI solutions such as ChatGPT, which improve by training on large data sets and deliver better results, pose a great risk for companies in this respect. When employees feed real data, business data and trade secrets into products such as ChatGPT to obtain efficient outputs without regard for security criteria, the risk of those secrets being exposed, or of organised cyber attacks being mounted against the company, increases. Moreover, taking personal information outside the company can cause businesses to violate personal data protection laws such as KVKK and GDPR and face regulatory sanctions."

Beware of third-party applications

Reminding that ChatGPT also gives individual developers the opportunity to build GPTs that serve new and specific purposes, Onur Oktay said: "ChatGPT also offers access to plug-ins and third-party software developed for different purposes. Scientific studies have shown that plug-ins put into service by third-party developers can further increase the risks. Third parties may, one way or another, capture sensitive data belonging to businesses or users. For a cyber attacker, obtaining even a single piece of personal, sensitive information changes the entire attack plan and increases the chances of success. Moreover, these plug-ins are installed after requesting the user's approval, and that approval can be abused for malicious purposes. The moment malware is installed on a computer, the attacker has infiltrated the corporate network. At that point it becomes almost inevitable for risks to turn into real damage."

Phishing scams are also among the risks

Emphasising that the risks of using tools such as ChatGPT without information security awareness are not limited to third-party applications, Oktay said: "It should also be kept in mind that ChatGPT can give cyber attackers the ability to code very sophisticated attack vectors or to produce material for phishing scams and social engineering. Although the platform has developed policies in this regard, ChatGPT can still be misused. A phishing email crafted using any piece of personal information poses serious risks for a company, ranging from the leak of critical data to company networks being locked down and ransom being demanded. The impact of ChatGPT's speed on the 1,265% increase in phishing attack emails from the last quarter of 2022 to the first quarter of 2024 is undeniable."

9 out of 10 developers do not trust code written by ChatGPT

Stating that software developers may also use ChatGPT, or generative AI tools built for developers, to fix or review their code, Onur Oktay said: "On the other hand, in a study published at the beginning of the year, almost 9 out of 10 developers said they were concerned about the security implications of using AI coding tools. This is the right approach, because a hidden malicious section may be impossible to discover without reading the code line by line. Any commercial or personal information handled on an Internet-connected computer can help cyber attackers reach their targets."

"Privacy and security should be followed, cyber security culture should be created"

Emphasising that businesses that want to benefit from the efficiency ChatGPT offers should urgently develop strategies and management standards addressing their companies' artificial intelligence policies, Oktay drew attention to the importance of getting expert support with the following statements:

"First of all, IT teams of all scales need to have information about which software-as-a-service solutions and which applications are used throughout the company. On the other hand, all movements in this software should be regularly monitored and changes in privacy/security policies should be taken into consideration. The task of creating a culture in this regard falls to the leaders. The first step to minimise risks and therefore costs is to provide employees with regular information security trainings to give them an insight into these issues and the prominent trends in the cybercrime ecosystem.

"As one of the companies leading the cyber security sector since 2010, we provide cyber security consultancy solutions and corporate information security training to businesses and support them in building this culture. As Privia Security, we are one of the rare companies able to deliver offensive, defensive and forensic analysis services, comprehensive cyber security consultancy of the highest importance for large institutions and networks, and training that raises institutions' cyber security awareness. One of Privia Security's most important strengths is its unique products developed in-house. Alongside its consultancy services, Privia Security conducts R&D with its own resources, and the products that result from this R&D are used by important companies in both the public and private sectors, making it one of the rare companies to have achieved this in the field. Our PriviaHub product, developed especially for national security forces and for large private-sector institutions, is closely followed and used by friendly countries under the NATO umbrella, and is preferred by the SOC teams of various companies that are leaders in their fields.

"Our aim is to become an expert business partner trusted by CTOs and CIOs in the corporate segment, as well as by everyone who works in this field."

Contact: Tülay Genç | [email protected] | +31 30 799 6022
