With the slew of generative AI platforms emerging, starting with ChatGPT, Check Point Research (CPR) recently conducted a security analysis of one of the latest – Google Bard – and performed a comparative analysis against ChatGPT.
The security experts found several security limitations that allow cybercriminals to abuse the platform for malicious purposes. After several rounds of analysis, CPR was able to generate:
• Phishing emails (the platform demonstrated little to no restriction on the creation of phishing emails)
• A malware keylogger (a surveillance tool used to monitor and record each keystroke on a specific computer)
• Basic ransomware code
The company says it will continue to monitor these worrying trends and developments in this area and report as needed.
“It seems that, at the outset, Google Bard has not learnt the lessons from ChatGPT’s initial roll-out on implementing better anti-abuse measures against cyber and phishing abuse of the platform,” says Sergey Shykevich, threat intelligence group manager at CPR. “It must be noted that over the last six months, ChatGPT has made various improvements that make life much more difficult for cybercriminals trying to abuse the platform.
“However, Google Bard is still at a very immature stage from this perspective, though it is hoped that the platform will eventually embrace the required limitations and security boundaries as the tool develops further,” Shykevich adds.