
The big data security risks with ChatGPT

Tobias Fellas


July 23, 2024


We're not thrilled about ChatGPT, and the security risks it poses to financial services firms are huge.

Kane Nawrocki from Real Innovation Group is a cyber security specialist who provides IT security to financial services firms.


I sat down with him to discuss how dangerous ChatGPT is when used incorrectly, and how these tools can access SharePoint and Dropbox links when user privacy controls aren't put in place.


Here are my 3 takeaways from our discussion.


  1. There is the illusion of privacy - The privacy disclaimers that ChatGPT or any other AI chatbot or AI-enhanced software offers are not an adequate safeguard for your data. Put simply, if you upload a fact find, SoA or any document containing personal information into these tools, that data can end up scraped, traded on the dark web and easily sold off.
  2. Evaluate all AI software you use carefully - For any AI software in your tech stack (financial advisers, be aware of AI SoA generators and the like), ask how the vendor protects your data and what safeguards stop it from crawling your data. If you are producing an SoA through AI software, it has access to your client's data. Ask your IT team and your cyber insurer what they require as well. This is a very murky area.
  3. Disable ChatGPT in your organisation - The simplest way to protect your business is to just disable it. Disabling it in your Microsoft 365 environment for all business devices, or blocking it through your antivirus or web-filtering software, is one great step; one simple device-level approach is sketched below. The risks are high, and it's incredibly easy to accidentally share confidential information simply by not knowing how these tools use your data.
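As a rough illustration of what "disabling it" can look like on a single machine, here is a minimal sketch that blocks some common ChatGPT domains via the local hosts file. The domain list and file paths are assumptions, not a definitive list, and most organisations would prefer centrally managed controls (DNS filtering, firewall rules, or Microsoft 365 and endpoint policies) over per-device edits. Check with your IT team before using anything like this; it must be run with administrator privileges.

```python
# Minimal sketch: block ChatGPT domains at the device level by pointing them
# at 0.0.0.0 in the local hosts file. Domain list is illustrative only;
# centrally managed controls are usually the better option.
import platform
from pathlib import Path

# Domains commonly associated with ChatGPT (assumed; adjust as needed).
BLOCKED_DOMAINS = [
    "chat.openai.com",
    "chatgpt.com",
]


def hosts_path() -> Path:
    """Return the OS-specific location of the hosts file."""
    if platform.system() == "Windows":
        return Path(r"C:\Windows\System32\drivers\etc\hosts")
    return Path("/etc/hosts")


def block_domains() -> None:
    """Append blocking entries for any domain not already in the hosts file."""
    path = hosts_path()
    existing = path.read_text()
    new_entries = [
        f"0.0.0.0 {domain}"
        for domain in BLOCKED_DOMAINS
        if f"0.0.0.0 {domain}" not in existing
    ]
    if new_entries:
        # Group our entries under a comment so they can be found and removed later.
        with path.open("a") as f:
            f.write("\n# Blocked generative AI endpoints\n")
            f.write("\n".join(new_entries) + "\n")


if __name__ == "__main__":
    block_domains()
```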


Microsoft Copilot, when limited to accessing only your local device or your organisation's environment (i.e. it cannot reach the wider web), seems like the next best AI alternative where proper data security measures are in place.



Global regulation has not caught up yet, so we have to be forward-thinking about what these risks are.


A few things that Felcorp will be initiating in the coming months:


  1. We will be setting up a generative AI policy that explains our use of AI software and the protections we will put in place; it will be an addendum to our recently updated Data Security policy.
  2. We will be disabling ChatGPT and other generative AI software.
  3. We will be conducting updated cyber awareness training with all staff and focusing more on business email compromise scenarios.
