Is ChatGPT safe? Security and privacy risks considered (2024)

Last Updated on

ChatGPT has seen some of the most explosive growth of any web service ever created. AI chatbots such as OpenAI’s ChatGPT, Google Bard, and Bing Chat are now the most accessible forms of artificial intelligence on earth. Still, few truly understand the natural language processing technology that makes it tick, or the systemic single-point failure risk this may already be posing to our national cybersecurity. With such mainstream adoption, including integration into the services we use in our day-to-day lives, it’s only sensible to ask the question “Is ChatGPT safe?”

Is ChatGPT safe?

Artificial intelligence and any generative AI tools in a casual sense seem to be safe. For example, if you are just looking to generate creative content, ask the bot to translate text, or simply just want to play around with it, you run little risk. Chats with any AI language model are intended to be helpful and harmless experiences, with security measures abound. These platforms, such as Bing Chat from Microsoft, Bard from Google, and of course Chat GPT from OpenAI still have their vulnerabilities, however.

ChatGPT safety discussed at the UK AI Safety Summit

These vulnerabilities have recently been the topic of much debate at the inaugural AI Safety Summit, held at Bletchley Park, UK. The summit, organized by UK Prime Minister Rishi Sunak brought world leaders and tech executives together to discuss how to mitigate the risks of AI which we, frankly, have yet to fully understand. To help us understand the matter on a deeper level, we recently spoke with our AI correspondent Dr Matthew Shardlow about the event.

ChatGPT developer OpenAI, led by CEO Sam Altman, admits that the chatbot does have the potential to produce biased and harmful content. Such a concern is not unique to any one chatbot, or artificial intelligence for that matter. Elon Musk, initially a board member of the AI research firm in 2015, agrees that AI is an existential risk for humanity, stating that “it’s not clear we can control it”.

In concurrence with Musk was ‘AI godfather’ Geoffrey Hinton, also in attendance at the two-day summit. However, this common ground may not be for the same reasons. Hinton hints that top tech executives are merely feigning compliance with the installment of AI regulation as a strategic play to reduce financial liability, should their AI malfunction at the cost of human life.

The summit began two days after US President Joe Biden issued an AI Executive Order, bringing similar safeguards across the Atlantic. As the most searched term of 2023, it’s clear that AI is here to stay, and as a result, all world governments are taking steps to ensure that the capability of any AI model (even an innocuous AI chatbot) does not spiral out of hand.

Essential AI Tools

Editor’s pick

Is ChatGPT safe? Security and privacy risks considered (1)

Only $0.00019 per word!

Content Guardian – AI Content Checker – One-click, Eight Checks

8 Market leading AI Content Checkers in ONE click. The only 8-in-1 AI content detector platform in the world. We integrate with leading AI content

Is ChatGPT safe? Security and privacy risks considered (2)

EXCLUSIVE DEAL 10,000 free bonus credits

Jasper AI

On-brand AI content wherever you create. 100,000+ customers creating real content with Jasper. One AI tool, all the best models.

Best Deals

Is ChatGPT safe? Security and privacy risks considered (4)

TRY FOR FREE

Copy.ai

Experience the full power of an AI content generator that delivers premium results in seconds. 8 million users enjoy writing blogs 10x faster, effortlessly creating

Best Deals

Is ChatGPT safe? Security and privacy risks considered (5)

TRY FOR FREE

Writesonic

Create SEO-optimized and plagiarism-free content for your blogs, ads, emails, and website 10X faster. Start for free. No credit card required.

Best Deals

To some, what may be most concerning about this model is that it can deliver this content in a very convincing, plausible way. OpenAI does warn users about this before using the AI tool, however.

Another major concern about the AI bot is its potential to give inaccurate information. In a world where misinformation, false information, and fake news can spread quickly online, this could potentially be extremely harmful.

ChatGPT constructs its response using the information it was trained on, where some of it is sourced from the internet. The bot then creates a string of words that are likely to follow each other and then outputs its response. As a result, releasing incorrect information is pretty inevitable.

Similar concerns have led other tech giants, Metaverse and Google, to keep their AI bots out of public use. Interestingly, many opinions are flying around suggesting that OpenAI is irresponsible for releasing the model to the public considering these major limitations.

ChatGPT risks

Besides the risks that ChatGPT directly poses to you as a user, there are other major risks you should consider. ChatGPT has the potential to be used by attackers to trick and target you and your computer.

For example, fraudsters could use ChatGPT to quickly create spam and phishing emails. Due to the vast amount of data, the model is trained on, it has now become easier than ever to create scarily convincing emails even in the style of the company they’re trying to pose as.

OpenAI has also made a variation of their model, free to modify from their GitHub account. Despite this being a great idea for those looking to learn more about NLP models and AI, it also means that people with malicious intent can use the model for their own gains.

We cannot ignore the possibility that someone could use OpenAI’s technology to create a fake customer service chatbot. This could have the potential to trick people out of their money – not great news.

Is ChatGPT safe to give your phone number?

It’s important to note, using your phone number to register on ChatGPT isn’t giving your phone number to ChatGPT – the service isn’t the same as the company (OpenAI). If you are concerned about data or information OpenAI collects be sure to read the company’s privacy policy.

Of course, giving any company your number can have a small element of risk associated with it in terms of cyber security. If there is a data or security breach, any data or confidential information a company has access to may be a target. But, as we discuss elsewhere you do need a phone number to use ChatGPT.

Does ChatGPT save your chats?

Yes, ChatGPT saves your chats for your benefit. You can log into your OpenAI account from a new device and recall your previous conversations with the bot from the list on the left.

Is ChatGPT safe? Security and privacy risks considered (6)

This doesn’t mean that you can safely include anything in those chats. You’ll of course need to abide by laws and the companies own terms of use where sharing data. For your own safety, data should not be sensitive or personally identifiable, as warned by this pop-up when accessing the service.

Is ChatGPT safe? Security and privacy risks considered (7)

Is ChatGPT safe to download?

Right now we’d say that if you’re seeing options to download ChatGPT outside of OpenAI’s website or official channels, that may not be safe. That’s because OpenAI doesn’t have an option for you to download ChatGPT.

The service doesn’t exist on a downloadable Android or iPhone app and is easy to access with a desktop or mobile device on OpenAI’s site. Popular services do attract the attention of those looking to scam or trick users, though – as mentioned above.

Can ChatGPT be “hypnotized”?

Chenta Lee is an AI researcher at IBM, a member of the group tasked with inducing “hypnosis” in LLMs or Large Language Models (including ChatGPT). Reporting via Security Intelligence, Lee claims they “were able to get LLMs to leak confidential financial information of other users, create vulnerable code, create malicious code, and offer weak security recommendations. The IBM-owned blog equates the English language to a “programming language” for NLP malware, “attackers no longer need to rely on Go, JavaScript, Python, etc., to create malicious code,” Lee explains.

This creates an unsafe information source for users of the LLM. The worst part is that it’s not even difficult or expensive to replicate. The technique IBM used is a much less technologically demanding alternative to its spiritual predecessor, data poisoning. Data poisoning is the injection of malicious data into a dataset, such that the output of the system uses the malicious data unbeknownst to the system itself. AI is, architecturally, a perfect target for this. Given the power, widespread use, and B2B integration of GPT-4 in today’s services, an attack of this kind is very tempting for hackers. It’s also exemplary of the danger of integrating AI into every part of our daily lives, at risk of unpredictable simultaneous failure.

Lee goes into detail as to the nature of the ChatGPT exploit, showing that their analysis “is based on attempts to hypnotize GPT-3.5, GPT-4, BARD, mpt-7b, and mpt-30b. The best-performing LLM that we hypnotized was GPT, which we will analyze further down in the blog. So how did we hypnotize the LLMs? By tricking them into playing a game: the players must give the opposite answer to win the game.”

Is ChatGPT safe? Security and privacy risks considered (8)

This research, combined with popular ChatGPT jailbreaks such as ‘DAN’, shows conclusively that ChatGPT can be hypnotized.

Final Thoughts

There’s no doubt about it – ChatGPT is a pretty phenomenal ai technology. However, the AI bot could cause real-world harm. The fact that the model has the potential to spread misinformation and produce biased content is something that should not be ignored.

As we continue to build a digital world around us, the threat from this rises. So what can you do to protect yourself? Well, firstly you should fact-check any information ChatGPT outputs by also doing your own research. Also, regardless of what ChatGPT’s response is, always have in the back of your mind that it is not necessarily true or correct.

Is ChatGPT safe? Security and privacy risks considered (2024)

FAQs

Is ChatGPT safe? Security and privacy risks considered? ›

Identity theft and information gathering

Is ChatGPT safe and secure? ›

Is ChatGPT safe to use? While there are ChatGPT privacy concerns and examples of ChatGPT malware scams, the game-changing chatbot has many built-in guardrails and is seen as generally safe to use.

Does ChatGPT collect your personal data? ›

Yes, it does – and it probably saves more of it than you realize. ChatGPT collects both your account-level information as well as your conversation history. This includes records such as your email address, device, IP address and location, as well as any public or private information you use in your ChatGPT prompts.

Is ChatGPT a cybersecurity threat? ›

Threat actors are continuously experimenting with utilising chatbots like ChatGPT for malicious purposes, leading to an increase in the frequency and sophistication of attacks as code writing and phishing emails become more accessible. They have found ways to use ChatGPT to help write malware.

What are the risks of using ChatGPT to obtain common safety related information and advice? ›

The experts concluded that there is potential for significant risks when using ChatGPT as a source of information and advice for safety-related issues. ChatGPT provided incorrect or potentially harmful statements and emphasised individual responsibility, potentially leading to ecological fallacy.

Should I use my real name on ChatGPT? ›

OpenAI can not only access all of your conversations with ChatGPT, but it can also use this information to feed the bot more data. In other words, your information may end up in other users's prompt results. Make sure you never share information of this kind with ChatGPT: Your full name.

Is it safe to use ChatGPT for essays? ›

ChatGPT can be used as a creative companion, helping students generate ideas for essays and overcome writer's block. By providing prompts or asking questions, the AI can inspire diverse perspectives and angles for the essay topic, kickstarting the thought process and expanding the range of potential content.

How do I protect my privacy on ChatGPT? ›

Here are four ways you can protect your data on (public) ChatGPT:
  1. Be careful what you enter. This is your first line of defense: Simply don't enter information you don't want to pop up elsewhere. ...
  2. Turn off Chat History & Training. ...
  3. Make a Privacy Request. ...
  4. Use the API instead of the Chat Interface.
Nov 17, 2023

How do I use ChatGPT securely? ›

Use privacy settings.

For example, OpenAI products let you disable saving of chat history. In that case, data will be removed after 30 days and never used for training. Those who use API, third-party apps, or services to access OpenAI solutions have that setting enabled by default.

What are the dangers of using ChatGPT? ›

Is ChatGPT a risk? It's important to understand that ChatGPT is a powerful tool. And just like any powerful tool, it can be misused. Someone with malicious intent can use ChatGPT to impersonate people in a phishing attack or a scam, create fake news, or possibly create components of malicious software.

How secure is ChatGPT? ›

Chat GPT is generally considered to be safe to use.

This means that it is able to generate text that is both accurate and relevant. However, there are some potential risks associated with using Chat GPT. For example, it is possible that Chat GPT could generate text that is biased or harmful.

What is the #1 cybersecurity threat today? ›

Ransomware Strategies Evolve

Ransomware attacks are believed to cost victims billions of dollars every year, as hackers deploy technologies that enable them to literally kidnap an individual or organization's databases and hold all of the information for ransom.

Is it safe to give ChatGPT your phone number? ›

Is it safe to share my phone number with ChatGPT? Yes, it is safe to share your phone number while creating an OpenAI account. However, you shouldn't share any personal information in your chats as ChatGPT saves your data for future analysis.

Why does ChatGPT need my phone number? ›

Why does ChatGPT want my phone number? ChatGPT requires a phone number to create an account for security reasons. OpenAI, the company behind ChatGPT, uses your phone number to verify that you're a real person.

Can ChatGPT generate images? ›

If you use DALL. E directly through ChatGPT, each prompt will only generate a single image. If you select DALL. E 3 from the sidebar, however, you'll get two different images to choose from.

Is ChatGPT a reliable source of information? ›

No, ChatGPT is not a credible source of factual information and can't be cited for this purpose in academic writing.

Should you use your personal email for ChatGPT? ›

By default, your conversations in ChatGPT can be viewed by OpenAI and used as training data to improve its system. (This is a key reason why you shouldn't enter any personal or private data into ChatGPT.)

Top Articles
Latest Posts
Article information

Author: Twana Towne Ret

Last Updated:

Views: 5884

Rating: 4.3 / 5 (64 voted)

Reviews: 87% of readers found this page helpful

Author information

Name: Twana Towne Ret

Birthday: 1994-03-19

Address: Apt. 990 97439 Corwin Motorway, Port Eliseoburgh, NM 99144-2618

Phone: +5958753152963

Job: National Specialist

Hobby: Kayaking, Photography, Skydiving, Embroidery, Leather crafting, Orienteering, Cooking

Introduction: My name is Twana Towne Ret, I am a famous, talented, joyous, perfect, powerful, inquisitive, lovely person who loves writing and wants to share my knowledge and understanding with you.