ChatGPT is the latest tool of cybercriminals

GENERATIVE AI, in its most accessible form as ChatGPT, is now being used by cybercriminals in social engineering schemes for identity theft and financial fraud. This is according to Vicky Ray, Director at Unit 42 Cyber Consulting & Threat Intelligence, Asia Pacific & Japan, at Palo Alto Networks.

In an exclusive interview with Malaya Business Insight, Ray said that “with the rise of ChatGPT, criminals are quick to exploit the generative AI.” Ray works with Unit 42, Palo Alto Networks’ team of threat researchers and security consultants, which helps companies develop intelligence-driven, response-ready organizations.


Between November 2022 and early April 2023, Unit 42 noticed a 910 percent increase in monthly registrations for domains related to ChatGPT. In the same time frame, it observed a 17,818 percent increase in related squatting domains in its DNS security logs. Up to 118 daily detections of ChatGPT-related malicious URLs were captured from traffic seen in its Advanced URL Filtering system.
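To make the squatting pattern concrete, the check below is a minimal Python sketch of the kind of screening a DNS-security pipeline might run over newly registered domains. The keyword list, similarity threshold, and sample domains are assumptions for illustration only, not Unit 42's actual detection logic.

```python
# Illustrative sketch: flag newly registered domains that imitate
# ChatGPT/OpenAI branding. Keywords, threshold, and sample domains
# are assumptions for this example, not real detection logic.
from difflib import SequenceMatcher

BRAND_KEYWORDS = ("chatgpt", "openai")

def looks_like_squatting(domain: str) -> bool:
    """Return True if a registered domain appears to squat on the brand."""
    d = domain.lower()
    if d == "openai.com" or d.endswith(".openai.com"):
        return False  # the legitimate domain itself
    label = d.split(".")[0]
    # Direct keyword embedding, e.g. "chatgpt-free-download.xyz"
    if any(kw in label.replace("-", "") for kw in BRAND_KEYWORDS):
        return True
    # Near-miss spellings, e.g. "chatgtp.io"
    return any(SequenceMatcher(None, label, kw).ratio() > 0.8
               for kw in BRAND_KEYWORDS)

# Example: scan a (hypothetical) feed of newly registered domains
for d in ("chatgpt-premium.app", "chatgtp.io", "example.org"):
    print(d, "->", "suspicious" if looks_like_squatting(d) else "ok")
```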

“These multiple phishing URLs are attempting to impersonate official OpenAI sites. Typically, scammers create a fake website that closely mimics the appearance of the ChatGPT official website, then trick users into downloading malware or sharing sensitive information such as credit card details and email addresses,” Ray explains.

Malaya Business Insight (MBI): What are the key trends or observations you’ve noticed regarding the rise of ChatGPT-themed scams in recent times? For example, does it lead to more social engineering scams, or scams that use programs or apps?

Vicky Ray (VR): With the rise of ChatGPT, criminals are quick to exploit the generative AI for social engineering schemes for identity theft and financial fraud. Our Unit 42 researchers have seen an increase in the number of newly registered and squatting domains related to ChatGPT.

Between November 2022 and early April 2023, we noticed a 910% increase in monthly registrations for domains related to ChatGPT. In the same time frame, we observed 17,818% growth in related squatting domains in DNS security logs. We also saw up to 118 daily detections of ChatGPT-related malicious URLs captured from the traffic seen in our Advanced URL Filtering system.

These multiple phishing URLs are attempting to impersonate official OpenAI sites. Typically, scammers create a fake website that closely mimics the appearance of the ChatGPT official website, then trick users into downloading malware or sharing sensitive information such as credit card details and email addresses.
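As an illustration of why these lookalikes work, the short sketch below checks the one signal worth trusting: the registered hostname, not the branding on the page or the brand name appearing somewhere in the URL. It assumes openai.com and its subdomains are the only official hosts, which is an assumption for this example rather than an authoritative list of OpenAI properties.

```python
# Illustrative sketch: trust a ChatGPT-branded link only if its hostname
# is actually an OpenAI domain. The allowlist is an assumption for this
# example, not an authoritative list of OpenAI properties.
from urllib.parse import urlparse

def is_official_openai_url(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Exact domain or subdomain only; "openai" appearing elsewhere
    # in the URL does not count.
    return host == "openai.com" or host.endswith(".openai.com")

print(is_official_openai_url("https://chat.openai.com/"))           # True
print(is_official_openai_url("https://openai.com.login-check.tk"))  # False: lookalike host
print(is_official_openai_url("https://evil.example/openai.com"))    # False: brand in path only
```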

MBI: Does generative AI, ChatGPT in particular, contribute to the increased occurrence of scams?

VR: Phishing is the attack most often cited in connection with the rise of generative AI such as ChatGPT. A demonstration of hacking humans with AI-as-a-service at the Black Hat and Defcon security conferences last year showed that AI could create spear phishing emails and devilishly effective phishing messages better than people can. This could spell further difficulty for countries like the Philippines that are particularly susceptible to phishing attempts.

There is also a security concern around ChatGPT’s data collection. Companies like Amazon and Walmart have warned employees to take care when using generative AI services and to avoid sharing crucial information or trade secrets via ChatGPT. If attackers get their hands on this data, they could feed it into ChatGPT to produce convincing phishing content for social engineering attacks, or write malicious code with various obfuscations embedded. There is also the potential security risk of data leakage, since people might submit sensitive data to the model and this information might be retrieved later.
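To illustrate the leakage half of this risk, here is a deliberately simple sketch of a client-side filter that redacts obviously sensitive strings before a prompt leaves the organization. The patterns are toy assumptions, far weaker than a real data loss prevention engine.

```python
# Illustrative sketch: redact likely-sensitive substrings before a prompt
# is submitted to a generative AI service. Patterns are simplified toy
# examples, not production DLP rules.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key":     re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),  # hypothetical key format
}

def redact_before_submit(prompt: str) -> str:
    """Replace likely-sensitive substrings with labeled placeholders."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

print(redact_before_submit(
    "Bill card 4111 1111 1111 1111 and reply to ceo@corp.example"))
# -> "Bill card [REDACTED:credit_card] and reply to [REDACTED:email]"
```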

We also noticed some scammers exploiting the growing popularity of OpenAI for crypto fraud. One example is a scammer abusing the OpenAI logo and Elon Musk’s name to attract victims to a fraudulent crypto giveaway event, asking users to provide their data to participate.

In the long run, attackers will adopt ChatGPT and other forms of generative AI more prominently, orchestrating attacks at minimal cost as the price of the technology continues to decline. The accessibility of many free and open-source tools online is also expected to enable repeated, successful attacks against poorly defended networks.

MBI: What are the specific characteristics or techniques scammers employ when utilizing ChatGPT to deceive or manipulate users? For example: phishing, vishing, deepfakes? Could you provide examples of real-world incidents where ChatGPT has been exploited for scamming purposes?

VR: Our Unit 42 researchers have observed several ways cybercriminals take advantage of ChatGPT to attack both organizations and individuals today:

- Multiple phishing URLs attempt to impersonate official OpenAI sites. Typically, scammers create a fake website that closely mimics the appearance of the official ChatGPT website, then trick users into downloading malware or sharing sensitive information.
- Although OpenAI gives users a free version of ChatGPT, scammers lead victims to fraudulent websites, claiming they must pay for these services.
- Scammers also exploit the growing popularity of OpenAI for crypto fraud.
- Many copycat AI chatbot applications have appeared on the market. These can increase security risks, such as the theft of sensitive or confidential data.

MBI: How do these scams impact individuals, organizations, or society as a whole? Do you have cost figures, such as man-hours lost, reputation issues, system shutdowns, financial losses, etc.?

VR: Our 2022 Unit 42 Incident Response Report found that ransom demands have been as high as $30 million, and actual payouts have been as high as $8 million. Finance and real estate were among the industries that received the highest average ransom demands, at nearly $8 million and $5.2 million, respectively. Ransomware and business email compromise (BEC), which often begin with phishing attacks, were the top incident types the incident response team handled, accounting for approximately 70% of incident response cases.

The 2023 State of Cloud-Native Security Report also found that organizations increased cloud usage by more than 25% from the year prior. The ever-growing reliance on the cloud is both necessary to stay current in the modern digital environment and a risk to enterprises due to the expanding attack surface. With the average total cost of a data breach at $4.35 million and 45% of breaches happening in the cloud, organizations need to stay on top of securing cloud-based applications and the sensitive data that flows through them.


MBI: Palo Alto Networks is well known for security information and event management, as well as next-generation firewalls and threat intelligence. In fact, it is very strong in TI. Taking this into consideration, what is in Palo Alto’s toolbox now that can be immediately implemented to combat or mitigate ChatGPT-themed scams?

VR: Palo Alto Networks Next-Generation Firewall and Prisma Access customers with Advanced URL Filtering, DNS Security, and WildFire subscriptions receive protection against ChatGPT-related scams. These solutions work together to use artificial intelligence and machine learning for real-time detection and prevention of phishing and other existing attacks and prediction of new and advanced attacks.

WildFire is also the industry’s largest, most integrated cloud malware protection engine. It uses patented machine learning models to detect and prevent unknown malware 60X faster than the industry average.
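WildFire's internals are proprietary, so purely as a flavor of what ML-based URL detection involves, here is a toy sketch: extract simple lexical features from a URL and combine them into a suspicion score with hand-set weights. Real systems learn such weights from large labeled corpora; every feature and weight below is an invented assumption.

```python
# Toy sketch of lexical URL scoring. Features and weights are invented
# for illustration; production systems learn them from large labeled
# datasets and use far richer signals.
import math
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    host = (urlparse(url).hostname or "").lower()
    return {
        "length":     len(url),                      # long URLs skew phishy
        "subdomains": host.count("."),               # deep nesting is a flag
        "has_brand":  int("openai" in host or "chatgpt" in host),
        "has_hyphen": int("-" in host),
        "odd_tld":    int(host.rsplit(".", 1)[-1] in {"tk", "xyz", "top"}),
    }

WEIGHTS = {"length": 0.01, "subdomains": 0.4, "has_brand": 1.2,
           "has_hyphen": 0.5, "odd_tld": 1.5}
BIAS = -3.0

def suspicion(url: str) -> float:
    """Combine features into a score in [0, 1] via a logistic squash."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in url_features(url).items())
    return 1 / (1 + math.exp(-z))

print(f"{suspicion('https://chat.openai.com/'):.2f}")            # lower score
print(f"{suspicion('https://login-chatgpt-free.xyz/update'):.2f}")  # higher score
```

In practice, known-good domains like chat.openai.com would be allowlisted before any scoring is applied.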

MBI: What challenges or complexities do you face in identifying and preventing these types of scams?

VR: Cyberattacks today are very fluid because of the wide range of factors perpetrators consider, such as emerging technologies like ChatGPT and generative AI, workplace trends, and even evolving laws and regulations. We must stay vigilant and abreast of these trends to defend our customers from evolving threats.

As attacks become more sophisticated with emerging technologies like ChatGPT, our Unit 42 (a group of world-renowned threat researchers, incident responders, and security consultants) becomes more crucial in assessing risks, exploring solutions for threat-informed approaches, and responding to incidents in record time.

MBI: Are there any particular industries or sectors that are more susceptible to ChatGPT-themed scams? If so, why?

VR: Based on our case studies, security concerns around ChatGPT primarily revolve around privacy and data leakage. Regardless of the industry, any employee can put an organization or an individual at risk when sensitive data is shared with the OpenAI website. These personal details are reported to have been shared with third parties, including vendors and service providers, other businesses, affiliates, legal entities, and AI trainers who review conversations. Once sensitive data is shared with the chatbot, users can no longer control the information.

Furthermore, all users become vulnerable when ChatGPT itself is attacked. In early May 2023, ChatGPT was reported to have suffered a data breach caused by a vulnerability in the Redis open-source library. OpenAI shared that some users could see another active user’s first and last name, email address, payment address, the last four digits (only) of their credit card number, and credit card expiration date.

MBI: Aside from using software or service-backed solutions, how can individuals and organizations protect themselves from falling victim to ChatGPT-related scams?

VR: ChatGPT users should be mindful to access only the official OpenAI website. The rising number of copycat chatbots carrying phishing links heightens the security risks, making it crucial for users to distinguish the official website from fake domains.

Different markets and companies have also taken various measures to address security concerns around ChatGPT. In March, Italy temporarily banned ChatGPT amid concerns that the artificial intelligence tool violated the country’s policies on data collection. In early May, Samsung also banned ChatGPT after employees inadvertently revealed sensitive information to the chatbot.

MBI: One of the key elements of cybersecurity is data protection, which is also at risk with ChatGPT-propelled scams (mostly social engineering) that compromise personal data. What can Palo Alto do about this?

VR: Many organizations may be surprised to learn that their employees already use AI-based tools to streamline their daily workflows, potentially putting sensitive company data at risk. Software developers can upload proprietary code to help find and fix bugs, while corporate communications teams can ask for help in crafting sensitive press releases.

To safeguard against the growing risk of sensitive data leaking to AI apps and APIs, we recently announced a new set of capabilities to secure ChatGPT and other AI apps as part of our Next-Generation cloud access security broker (CASB) solution. Its key capabilities include the following:

- Comprehensive app usage visibility for complete monitoring of all SaaS usage activity, including employee use of new and emerging generative AI apps that can put data at risk.
- Granular SaaS application controls that safely enable employee access to business-critical applications while limiting or blocking access to high-risk apps, including generative AI apps, that have no legitimate business purpose (a simplified sketch of this kind of policy check follows this list).
- Advanced data security that provides ML-based data classification and data loss prevention to detect and stop company secrets, personally identifiable information (PII), and other sensitive data from being leaked to generative AI apps by well-intentioned employees.
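As a rough illustration of the granular-control idea referenced in the list above, the sketch below applies an allow-or-block decision per destination app. The app catalogue, categories, and policy values are invented for this example and do not reflect the actual product logic.

```python
# Illustrative sketch of a CASB-style egress policy: classify the
# destination app of each outbound request and allow or block it.
# Catalogue entries and policy are invented for this example.
APP_CATALOGUE = {
    "chat.openai.com":  {"category": "generative-ai", "sanctioned": True},
    "chatgpt-free.xyz": {"category": "generative-ai", "sanctioned": False},
    "github.com":       {"category": "dev-tools",     "sanctioned": True},
}

def policy_decision(host: str) -> str:
    app = APP_CATALOGUE.get(host)
    if app is None:
        return "block"  # unknown app: deny by default
    if app["category"] == "generative-ai" and not app["sanctioned"]:
        return "block"  # copycat or otherwise high-risk AI app
    return "allow"      # sanctioned, business-critical app

for host in APP_CATALOGUE:
    print(host, "->", policy_decision(host))
```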

As new generative AI apps arise on an ongoing basis, we will continue to expand our app catalogue to provide comprehensive data protection.
