ChatGPT comes with compliance caveats, experts warn

Written by Aaron Nicodemus, Compliance Week, on Saturday, March 4, 2023

There are downsides to every new technology, and artificial intelligence and machine learning are no exception. At Compliance Week’s virtual Cyber Risk & Data Privacy Summit, experts discussed why it is important for compliance professionals to understand the risks of such tools.

Artificial intelligence (AI) and machine learning (ML) tools can have many benefits, but with them come plenty of risks that have emerged as this new frontier of technology begins to take shape.

AI/ML tools can search through millions of pieces of data to find new and previously unrecognised patterns. ML tools, in particular, can perform manual processes at blazing speed. Newly public AI-powered chatbots, like OpenAI’s ChatGPT, can write coherent blog posts, marketing copy, or new lines of computer code in seconds with a simple prompt.

But there are downsides to every new technology, and AI and ML are no exception. Understanding the unique risks posed by AI/ML tools – and planning for how to mitigate them – is (or soon will be) part of the job description for any serious compliance officer, as experts discussed in a session at Compliance Week’s virtual Cyber Risk & Data Privacy Summit.

A panel of three cybersecurity professionals said that if new technology is implemented, it must be accompanied by proper guardrails to allow for safe use. Potential pitfalls discussed included the following.

  • AI chatbots draw their data from the Internet, which is peppered with factual errors and intentional misinformation. Unfair or biased information on the web might not be properly filtered out by the chatbot’s algorithm. In addition, the data that chatbots draw from is roughly 18 months out of date. Chatbot answers could be compromised for any (or all) of these reasons.
  • What sources are chatbots pulling information from? Do their answers violate intellectual property rights or use personally identifiable information (PII) in their responses? How are you to know? ChatGPT ‘doesn’t cite anything – it’s pulling information from lots of different places,’ said Baker McKenzie Partner Rachel Ehlers. ‘You have to check those answers.’
  • Chatbots are data sponges. All inquiries they receive are incorporated into the data set they draw from for future responses. If an employee has entered a query about a firm’s cyber defenses into a chatbot, that same information could be used by a bad actor accessing ChatGPT to develop a successful hack, said James Goepel, general counsel and director of education and content at a cybersecurity firm.
  • Are your firm’s employees posing work-related questions to ChatGPT? Doing so could cause unforeseen complications for your organisation. A cautionary tale about an employee’s use of ChatGPT at Amazon was recently disclosed in a report by Business Insider, with an engineer potentially sharing confidential corporate information with the chatbot. Employees entering queries into chatbots could be compromising company secrets, intellectual property, or competitive advantages. JPMorgan Chase and Verizon, among others, have recently banned employees from using ChatGPT, according to the Wall Street Journal.

  • As is the case with most cutting-edge technology, regulation is well behind the curve. There are no comprehensive laws or regulations governing the use of AI and ML, Ehlers said, only pieces contained in data privacy laws in the European Union and California. That leaves firms without concrete guidance on how to proceed.

Despite the lack of regulatory clarity, there are best practices for using AI/ML tools. The experts discussed what policies and procedures a company should adopt when considering the use of such tools.

  • Start with good data hygiene, said David Kessler, vice president and associate general counsel, IT and cybersecurity at defense contractor BAE Systems. Accurate and secure data helps ensure the results of any AI project are valid; if your data contains unfair or biased information at the outset, it will taint your results.

  • Only use the data types needed for the purposes of creating or training your AI tool, Kessler said.
  • Be transparent. State your firm’s purpose for having the data, as well as how it’s been collected and stored.

  • Update privacy terms for the data being used, particularly if it contains PII. ‘Be very clear what you’re going to do with the data,’ Kessler said. ‘Generic, broadly worded statements on the use of data probably aren’t going to cut it.’

  • Launch your AI projects using privacy by design. ‘Build privacy protections up front and into the architecture so that down the road you don’t have a problem,’ Kessler said.

  • AI/ML tools ‘represent an enterprise risk for the company,’ Ehlers said. They should be used carefully and thoughtfully, with strong policies and procedures in place and a strong governance structure overseeing their use. The C-suite should be notified about the project and kept informed of its progress.

  • Some companies are forming external advisory boards on the use of AI/ML tools, which can help create ethical policies governing AI, Kessler said. ‘Just because we can do something doesn’t mean we should do something,’ he said.

With any emerging technology, the panel concluded, proper due diligence is necessary before implementation to mitigate the risks it could pose.
