Following the news of new ChatGPT functionality – custom GPTs can now be brought into conversations with the original ChatGPT – Kaspersky experts have stressed the importance of exercising caution when sharing sensitive data with AI chatbots.

“As GPTs can use external resources and tools to provide advanced functionality, OpenAI has implemented a mechanism that allows users to review and approve a GPT’s actions to prevent potential exfiltration of dialogue data,” says Vladislav Tushkanov, research development group manager at Kaspersky’s Machine Learning Technology Research Team.

“Therefore, when a custom GPT wants to send data to a third-party service, the user is prompted to allow or deny the request, and they can inspect the data about to be sent by expanding a drop-down in the interface. The same mechanism applies to the new ‘@mention’ functionality.

“However, this requires awareness and a degree of caution on the part of the user, as they need to check and understand each request, which may affect the user experience,” Tushkanov adds. “In addition, there are other ways in which user data may potentially leak from a chatbot service: through errors or vulnerabilities in the service itself; if the data is memorised during further training of the model; or if another person gains access to the user’s account.

“In general, it is best to be careful not to share personal data or confidential information with any chatbot service on the Internet,” Tushkanov advises.