Zscaler report highlights Generative AI security risks
Cloud security frontrunner Zscaler has announced the publication of its commissioned report, "All Eyes on Securing GenAI". The report presents in-depth results from an extensive survey of 900 global IT decision-makers. The findings indicate that while 92% of organisations in India consider generative AI (GenAI) tools such as ChatGPT a potential security risk, a vast majority, 95%, are already using these tools in some capacity within their enterprises.
Despite the burgeoning use of GenAI across India's digital industry, several challenges remain. The survey reveals that all respondents consider the lack of resources to monitor usage a significant issue, and 75% acknowledge a shortage of skills or talent as a reason they are not yet using GenAI tools like ChatGPT effectively.
In the words of Sudip Banerjee, CTO, APJ, Zscaler, "Generative AI has become a technological revolution with unlimited possibilities. Our survey underscores the dynamism of GenAI adoption, highlighting the need to sharpen focus on both Zero trust principles and skill development to unlock the full potential of GenAI technology."
What makes this more alarming is that 22% of respondents aren't monitoring usage at all, and 36% have yet to implement any additional GenAI-related security measures, though many have them in their sights. As Sanjay Kalra, VP Product Management at Zscaler, emphasised, "However, with the current ambiguity surrounding their security measures, a mere 30% of organisations in India perceive their adoption as an opportunity rather than a threat. This not only jeopardises their business and customer data integrity, but also squanders their tremendous potential."
Interestingly, and contrary to popular belief, the impetus to adopt GenAI isn't coming from the people one might expect. The survey's results suggest that it is IT itself that can reclaim control: only 3% of respondents in India said employee demand drove the adoption and use of GenAI, whereas 71% said usage is being spearheaded directly by IT teams.
As Kalra commented, "The fact that IT teams are at the helm should offer a sense of reassurance to business leaders. It's essential to recognise that the window for achieving secure governance is rapidly diminishing."
Given that 75% of the survey's Indian respondents anticipate a significant surge in interest in GenAI tools before year's end, organisations need to move swiftly to bridge the gap between use and security. Recommended steps for ensuring in-house GenAI use is properly secured include: implementing a thorough zero trust architecture; conducting comprehensive security risk assessments for new AI applications; setting up an all-encompassing logging system for tracking AI prompts and responses; and enabling zero trust-powered data loss prevention (DLP) measures for all AI activity to guard against data exfiltration.
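To make the last two recommendations concrete, the sketch below shows one minimal way an organisation might log every GenAI prompt and response and apply a basic DLP check before a request leaves the network. It is an illustrative assumption only: the gateway function, the regex patterns, and the call_model hook are hypothetical and are not part of Zscaler's products or the survey.

```python
import logging
import re
from datetime import datetime, timezone

# Illustrative sensitive-data patterns; a real DLP policy would be far broader.
DLP_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

# Central audit log for all AI prompts and responses.
logging.basicConfig(filename="genai_audit.log", level=logging.INFO)


def dlp_scan(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in DLP_PATTERNS.items() if pattern.search(text)]


def gateway(user: str, prompt: str, call_model) -> str:
    """Log the prompt, block it if DLP patterns match, otherwise forward it
    to the GenAI backend (call_model is any callable returning a string)."""
    timestamp = datetime.now(timezone.utc).isoformat()
    findings = dlp_scan(prompt)
    if findings:
        logging.warning("%s user=%s BLOCKED findings=%s", timestamp, user, findings)
        return "Request blocked: prompt appears to contain sensitive data."

    logging.info("%s user=%s PROMPT %s", timestamp, user, prompt)
    response = call_model(prompt)
    logging.info("%s user=%s RESPONSE %s", timestamp, user, response)
    return response


if __name__ == "__main__":
    # Stand-in for a real GenAI API call.
    def echo_model(prompt: str) -> str:
        return f"(model reply to: {prompt})"

    print(gateway("alice", "Summarise our Q3 security roadmap", echo_model))
    print(gateway("bob", "My card is 4111 1111 1111 1111", echo_model))
```

In practice such checks would sit in a dedicated secure web gateway or proxy rather than application code, but the sketch illustrates the principle behind the logging and DLP recommendations: inspect and record every AI interaction before data can leave the organisation.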