
Is AI secure? A look at how to use ChatGPT safely

ChatGPT is undoubtedly a significant milestone in the development of chatbots and artificial intelligence in general. However, every advance carries risks. Is ChatGPT secure? In this article, we consider the different potential security risks and how to use ChatGPT safely.


By January 2023, just two months after launch, ChatGPT became the fastest-growing consumer application in history (source) when it reached 100 million monthly active users. While this popularity is understandable, it’s just good business sense to consider issues of security before jumping on the bandwagon and incorporating ChatGPT into your latest digital product.

As a tool, AI can be used to gain access to sensitive information or to distribute malicious software (source), not to mention a never-ending list of internet scams and frauds. This article will take a look at the potential risks involved in using ChatGPT from three perspectives:

Potential data leaks

Impact of hacking

Internet scams and frauds

You might also be interested in the article:

ChatGPT implementation: key takeaways from our internal projects

Potential data leaks

In March 2023, the BBC reported a glitch in ChatGPT that allowed some users to see the titles of other users’ conversations (source). Not too sensitive, perhaps, but it raised an obvious question: if ChatGPT can leak other people’s conversations, what else can be leaked? Perhaps the credit card data of those who purchased the Plus version? Or company login names and email addresses?

For developers using ChatGPT in the development of digital products, whether for code generation or as a built-in chatbot, this leak raises wider concerns for product owners and clients, including reputational damage, financial loss, and issues of liability and responsibility.

To prevent such scenarios, it’s necessary to invest in employee awareness and education around ChatGPT risks, combined with specific usage policies. Measures you can take include:

  • ensure employees are aware of the risks related to using personal or sensitive data,
  • ensure the development team has its own clear policy regulating the usage of AI tools, especially if it works with user data,
  • create an approved set of AI tools and usage scenarios for guidance (this works far better than discouraging use – after all, AI is the future).

Advice from OpenAI

It’s worth noting that OpenAI, the company behind ChatGPT, warns against sharing sensitive information when using the chatbot.

In their FAQ section, they clearly state:

“We are not able to delete specific prompts from your history. Please don’t share any sensitive information in your conversations.” (source)

Examples of such sensitive information include credit card data, email addresses, phone numbers, physical addresses, and company or product names (especially for products still in development).

Any ChatGPT or AI policy should warn against sharing these types of information in AI conversations, especially confidential third-party data, such as client and business partner details, and information relating to current or upcoming products.
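One practical way to back up such a policy is an automated pre-processing step that masks obvious sensitive patterns before a prompt ever leaves your systems. The snippet below is a minimal, illustrative sketch in Python (the function name and patterns are our own assumptions, not part of any official OpenAI tooling); a production-ready filter would need much broader PII detection.

```python
import re

# Hypothetical pre-prompt filter: masks obvious sensitive patterns
# (email addresses, card-like digit sequences, phone numbers) before a
# prompt is sent to an external AI service. Illustrative only - a real
# policy would need broader, e.g. NER-based, PII detection.
REDACTION_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED_CARD]"),
    (re.compile(r"\+?\d[\d ()-]{7,}\d"), "[REDACTED_PHONE]"),
]

def redact_prompt(prompt: str) -> str:
    """Return the prompt with known sensitive patterns masked."""
    for pattern, placeholder in REDACTION_RULES:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

if __name__ == "__main__":
    raw = "Contact jane.doe@example.com or +48 123 456 789 about card 4111 1111 1111 1111."
    print(redact_prompt(raw))
    # -> Contact [REDACTED_EMAIL] or [REDACTED_PHONE] about card [REDACTED_CARD].
```

A filter like this won’t catch everything, but it turns a “don’t paste sensitive data” policy from purely advisory into something partially enforceable.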

You might also be interested in the article:

We care about your product's security

Impact of hacking

ChatGPT comes with usage restrictions built in, especially around providing information about, or assistance with, illegal activity.

However, hackers are already working their way around ChatGPT’s restrictions and security measures. In February 2023, the cybersecurity firm Check Point discovered that ChatGPT had been used to improve the code of an infostealer malware program (source). In the same month, other reports noted that hackers had infiltrated the ChatGPT API and altered its code to generate malicious content, effectively creating a ‘dark’ version of ChatGPT producing restriction-free output (source).

The risks of a fake ChatGPT were spotted a month earlier. In January, a “ChatGPT app” was advertised in the App Store and Google Play (source), offering an ad-free weekly subscription for $7.99. Despite falsely claiming association with OpenAI, and operating suspiciously, it still passed the App Store and Google Play approval processes (though it was removed before reaching 100,000 downloads).

(The official ChatGPT app for iOS was released in May 2023; at the time of writing, there is no launch date for an Android version.)

With hackers adapting ChatGPT to create their own versions and fake APIs, users and businesses face a new type of scam in which people interact with AI believing it to be human (source), or are exposed to phishing attempts and disguised links to malware. While precautions can easily be taken to avoid unknowingly using a ‘dark’ ChatGPT, these various reports also underline the importance of not sharing sensitive information or data with the official version.

Any guidance or policy around AI and ChatGPT use should also highlight safety when it comes to mobile app stores, including verifying an app’s status prior to downloading. The earlier recommendation to develop an approved toolkit is one way to ensure only vetted (and safe) tech is in use.

You might also be interested in the article:

How to avoid security issues in your app - our best practices

How to use ChatGPT safely

When using AI tools, protecting your information and your business online means using those tools safely and sensibly. In a business setting, safe AI usage requires consistency, and you can lay the foundation for that with a clear policy on how to use conversational AI tools such as ChatGPT. Other measures you can take to protect business information include:

  • Ensure widespread policy awareness (wider than just development teams: all employees need to understand the risks of a ‘misguided’ download).
  • Run an audit of your company’s cybersecurity solutions, such as firewalls and antivirus software. If any leaks or potential risks are spotted, fix them as quickly as possible.
  • For deeper employee understanding and appreciation of the risks involved, specialist workshops run by experts in the field can effectively reinforce the message around security.

Use AI and stay secure

While it’s apparent that ChatGPT can be dangerous in the wrong hands, it is safe when used responsibly.

The central advice is to follow the guidance in OpenAI’s privacy policy: never share personal or financial information with ChatGPT. General questions are obviously the safest, but the key is to avoid prompts that could reveal, or lead to the discovery of, sensitive information about you or your business (for example, by sharing too many details about a specific product).

Secondly, always verify the information provided by ChatGPT. Bear in mind that it is just a language model and that, before launch, it was trained on data only up to 2021 – in other words, its knowledge is not necessarily up to date.

Finally, be sure that you’re using the genuine OpenAI product and not a cheap or dangerous imitation. (The safest way to use ChatGPT is via the chatbot’s official link: https://chat.openai.com/).