The business world has also been caught up in Artificial Intelligence (AI) fever, with discussions taking place in boardrooms and C-suites about how to leverage AI tools.
By Kate Mollett, senior director of Commvault Africa
Employees in many companies are downloading AI apps to explore their potential for enhancing mundane tasks, while in-house developers are actively seeking new data libraries to build around.
Secure Adoption in the Race for AI Integration
Responding to the market frenzy, business software vendors, from major players to niche providers, are racing to introduce AI-based tools and features. According to IDC, enterprise spending on AI is projected to increase by 27% this year, reaching $154-billion. With prudent planning and a focus on cybersecurity, organisations can confidently embrace the transformative power of AI.
However, organisations need to proceed with caution. The rush to adopt AI technology may lead companies to overlook critical security measures, leaving them vulnerable to devastating hacks. Many of the new AI tools are built on open-source infrastructure or data repositories, which call for a different defensive strategy from the one used for the proprietary tools of the past.
It is crucial for CIOs, CISOs, and other tech leaders within organisations to establish a process that allows security professionals to validate the libraries or platforms on which AI programs are based.
Safeguarding Against Open-Source Vulnerabilities
Open source, although powerful, comes with risks. South Africa, like many other regions, has seen bad actors target open platforms. The SolarWinds hack, in which thousands of data networks were compromised, illustrates the damage an IT supply chain breach can cause. As enterprises adopt open AI platforms at scale, the potential for catastrophic supply chain breaches grows.
Fortunately, there are steps that security leaders can take to continuously evaluate open-source tools for vulnerabilities. Businesses must conduct thorough research on potential vendors providing IT services to the enterprise.
Additionally, security teams should collaborate closely with development teams to assess the security protocols employed to safeguard open-source libraries. Once the in-house IT team confirms the security of repositories, they can establish access guidelines that enable employees to download preferred apps or utilise specific libraries to power machine learning algorithms. However, caution should still be exercised.
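As a purely illustrative sketch of what such a repository check might look like, the snippet below queries the public OSV.dev vulnerability database for a list of requested open-source packages before they are added to an internal allow-list. The package names, versions and approval labels here are hypothetical examples, not any vendor's tooling.

```python
# Hypothetical sketch: check requested open-source packages against the
# public OSV.dev vulnerability database before approving them for use.
import requests

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

# Example packages an employee has asked to use (illustrative only).
requested_packages = [
    {"ecosystem": "PyPI", "name": "numpy", "version": "1.24.0"},
    {"ecosystem": "PyPI", "name": "requests", "version": "2.31.0"},
]

def known_vulnerabilities(package: dict) -> list:
    """Return the OSV advisories recorded for one package version."""
    payload = {
        "version": package["version"],
        "package": {"name": package["name"], "ecosystem": package["ecosystem"]},
    }
    response = requests.post(OSV_QUERY_URL, json=payload, timeout=10)
    response.raise_for_status()
    # OSV returns an empty object when no advisories are known.
    return response.json().get("vulns", [])

for pkg in requested_packages:
    vulns = known_vulnerabilities(pkg)
    if vulns:
        ids = ", ".join(v["id"] for v in vulns)
        print(f"BLOCK  {pkg['name']} {pkg['version']}: {ids}")
    else:
        print(f"ALLOW  {pkg['name']} {pkg['version']}: no known advisories")
```

A check along these lines would typically run during the approval workflow and again on a schedule, since new advisories against already-approved libraries are published continuously.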
Evaluating and Monitoring AI Software
Both employees and security professionals need to weigh the value that software brings against the threats it could introduce. Vendor scorecards can assist in assessing those risks: benchmarking IT providers against one another helps enterprises make informed decisions about which vendors to engage. Questions about development methodologies, code analysis, dynamic scanning capabilities, vulnerability remediation processes, and the potential impact of supply chain hacks should be documented.
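To make the scorecard idea concrete, here is a minimal sketch of how documented answers could be turned into a repeatable, comparable score. The criteria, weights and vendor ratings are assumptions chosen for illustration, not a prescribed framework.

```python
# Hypothetical vendor scorecard: weight the documented questions and
# compare providers on a single, auditable score.
from dataclasses import dataclass

# Criteria drawn from the questions above; weights are illustrative.
WEIGHTS = {
    "development_methodology": 0.20,
    "static_code_analysis": 0.20,
    "dynamic_scanning": 0.20,
    "vulnerability_remediation": 0.25,
    "supply_chain_impact_awareness": 0.15,
}

@dataclass
class Vendor:
    name: str
    scores: dict  # each criterion rated 0-5 by the security team

    def weighted_score(self) -> float:
        return sum(WEIGHTS[c] * self.scores.get(c, 0) for c in WEIGHTS)

vendors = [
    Vendor("Vendor A", {"development_methodology": 4, "static_code_analysis": 5,
                        "dynamic_scanning": 3, "vulnerability_remediation": 4,
                        "supply_chain_impact_awareness": 2}),
    Vendor("Vendor B", {"development_methodology": 3, "static_code_analysis": 4,
                        "dynamic_scanning": 4, "vulnerability_remediation": 5,
                        "supply_chain_impact_awareness": 4}),
]

# Rank vendors so the benchmark is explicit and easy to review.
for v in sorted(vendors, key=lambda v: v.weighted_score(), reverse=True):
    print(f"{v.name}: {v.weighted_score():.2f} / 5.00")
```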
Once a vendor is deemed trustworthy, the responsibility does not end there. As more open-source tools are deployed, security teams must continuously monitor applications for unknown code or security breaches. AI can aid in this process by automating daily monitoring tasks, allowing analysts to focus on protecting next-generation AI software.
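One simple form of that ongoing monitoring is comparing deployed application files against a recorded baseline so that unknown code stands out. The sketch below assumes a hypothetical deployment directory and baseline file purely for illustration.

```python
# Hypothetical sketch: flag files in a deployed application directory that
# do not match a previously recorded SHA-256 baseline (i.e. unknown code).
import hashlib
import json
from pathlib import Path

APP_DIR = Path("/opt/ml-app")          # assumed deployment directory
BASELINE_FILE = Path("baseline.json")  # assumed baseline: {"relative/path": "sha256"}

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def current_hashes(root: Path) -> dict:
    """Hash every regular file under the application directory."""
    return {str(p.relative_to(root)): sha256_of(p)
            for p in root.rglob("*") if p.is_file()}

baseline = json.loads(BASELINE_FILE.read_text())
current = current_hashes(APP_DIR)

new_files = set(current) - set(baseline)
changed = {p for p in current.keys() & baseline.keys() if current[p] != baseline[p]}
missing = set(baseline) - set(current)

for label, paths in (("NEW", new_files), ("CHANGED", changed), ("MISSING", missing)):
    for p in sorted(paths):
        print(f"{label}: {p}")  # candidates for analyst review
```

Alerts from a routine job like this are exactly the kind of repetitive output that automation, including AI-assisted triage, can filter, leaving analysts free to investigate the findings that matter.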
To stay ahead in the rapidly evolving AI landscape, businesses must prioritise robust security measures, including thorough vendor vetting and continuous monitoring of AI applications. By striking a balance between innovation and risk management, organisations can harness the power of AI while safeguarding their valuable data and systems.
Embracing AI with caution and a proactive security mindset will enable businesses to navigate the AI gold rush with confidence and resilience.