Risks and Issues in Using the AI Chatbot ChatGPT

Chat Generative Pre-trained Transformer (ChatGPT), a model developed by OpenAI, has recently taken the world by storm. Driven by Artificial Intelligence (AI) technology, ChatGPT was launched in November 2022 and has very quickly gained popularity globally. Much of the exponential user growth may be attributable to ChatGPT’s seemingly prodigious capacity to produce nearly instantaneous responses on a wide array of issues and topics (for example, drafting referral letters, drafting legal documents and answering technical or legal queries).

While ChatGPT is a remarkable innovation and has brought AI to the forefront of technology, users should be mindful of the risks involved. It is worth noting that there are other chatbots known to be in development, suggesting ChatGPT merely represents the first mainstream example of what may soon be a crowded and competitive market.

In this article, we briefly discuss and highlight the key general risks and issues in using AI chatbots, with ChatGPT as the primary example.

Personal Data Risks

AI chatbots require massive amounts of data to function and improve, and this may include the personal data of individuals. If personal data was collected and used by OpenAI to train ChatGPT without obtaining the necessary consent of the relevant individuals, such collection and use may violate personal data protection laws. While certain personal data may well be published on publicly available platforms/websites, this does not mean that the relevant individuals have consented to the use of their personal data for ChatGPT's purposes.

Furthermore, when inputting queries into ChatGPT, there is a risk that personal data of the user or of a third party (if personal data relating to the third party is keyed in by the user) may be captured and processed by ChatGPT. In such instances, the processing of any personal data will be subject to the privacy policy published for ChatGPT (which, in particular, reserves the right to disclose the personal data of a user to other users), and under ChatGPT's terms, the responsibility for procuring the necessary consent for the processing of personal data in using ChatGPT's services rests with the user. Accordingly, a user should exercise caution when feeding data to ChatGPT and avoid keying in personal data relating to other individuals unless the necessary consent has been procured.

Inaccuracy and Misleading Information

ChatGPT is subject to certain accuracy and bias risks. Like any other AI chatbot, ChatGPT can only generate text based on the data on which it was trained. This means that if the datasets ChatGPT is trained on contain errors, inaccuracies or biases, these will be reflected in ChatGPT's responses; and if those datasets do not contain sufficient resources on a topic, ChatGPT may produce a lucid and comprehensible answer that is nonetheless incorrect or incomplete. This risk of inaccuracy raises concerns about ChatGPT's potential to create misleading content, which may have serious consequences, such as damaging reputations or spreading misinformation.

While ChatGPT may seem to be a powerful tool, users should not overlook the risks of inaccuracy inherent in the use of AI chatbots. It remains imperative that users scrutinise ChatGPT’s output to satisfy themselves that the information contained in such output is correct and current and, where necessary, verify the accuracy of such output with the appropriate and competent personnel/professionals.

Intellectual Property and Infringement 

One of the main issues in using ChatGPT is its potential to infringe intellectual property rights. ChatGPT is trained on a vast amount of data, which may include works protected by intellectual property laws (for example, literary works, which are protected as copyrighted works under Malaysian law). If the datasets ChatGPT is trained on contain copyrighted works, the output generated by ChatGPT is likely to involve the reproduction of, or may be similar to, such copyrighted works, thus giving rise to a risk that the use of the output without permission could constitute copyright infringement. The infringement risk may extend to the user, and not just to ChatGPT.

While the legal position in respect of the output of AI chatbots under intellectual property laws remains unclear and debatable (e.g. is the output protected by intellectual property laws? If so, who owns it?), users should bear the above in mind when using AI chatbot-generated output, especially in a commercial context. Users should scrutinise ChatGPT's output to satisfy themselves that the information contained in such output does not reproduce third-party materials in a way which may infringe the intellectual property rights of such third party.


Confidentiality Risks

Another risk in using ChatGPT is the unauthorised disclosure of confidential information. When a user keys in queries or asks ChatGPT to perform a task, the user may be feeding ChatGPT commercially sensitive or confidential information. Once such information becomes part of ChatGPT's database, it may be used in generating output for other similar or related queries or requests. Accordingly, a user should exercise caution when feeding data to ChatGPT, to avoid disclosing information which should otherwise be kept confidential.


Plagiarism

Plagiarism is considered a grave issue, especially in education and academia. ChatGPT is a convenient tool for users to obtain quick answers to their queries; for example, students may use ChatGPT to complete an assignment, or journalists may use it to write an article. Plagiarism raises ethical concerns in addition to the other risks discussed in this article, and it also discourages, and therefore impedes, personal improvement and development.


Disclaimers and Limitation of Liability

When using ChatGPT, users should be mindful of the disclaimers and the limitation of liability under ChatGPT's terms of use. In particular: (a) ChatGPT is provided on an "as is" basis without any warranties (to the maximum extent permitted by law), which means that the accuracy and legality of the output are not warranted; and (b) total aggregate liability under the terms of use is capped at 100 dollars or the amount paid for the ChatGPT service that gave rise to the claim during the 12 months before the liability arose.

Practical Steps when using ChatGPT

ChatGPT may be a powerful tool for many, but users should not overlook the potential risks. We recommend that users take the following practical steps when using ChatGPT (bearing in mind the risks outlined above):

  1. scrutinise the output (i.e. verify the accuracy and relevance of the output); 
  2. exercise caution in relation to the information being fed to ChatGPT (having regard to personal data risks and the risk of exposure of confidential information); 
  3. do not rely on the output or reproduce the output (especially in a commercial context) without verification; 
  4. be careful not to infringe the intellectual property rights of others in using the output; and
  5. understand the limitations and disclaimers in relation to the use of ChatGPT.

Key Contributor:

Jed Tan Yeong Tat 
Partner – Technology, Multimedia & Communications
Direct line : +603-2632 9918
Email : jedtan@rdl.com.my
