SecurityBrief India - Technology news for CISOs & cybersecurity decision-makers

Study reveals over half of employees risk data in AI tools

Yesterday

A recent study has shown that more than half of employees have admitted to entering high-risk information into generative AI tools.

Jared Siddle, Vice President of Risk & Compliance at Protecht, a company providing risk management software and solutions, urged business leaders to institute an AI use policy to mitigate the risk of data breaches. He highlighted the necessity of setting clear guidelines within organisations, noting the security challenges posed by AI tools.

Siddle stressed that confidential business data should not be entered into AI tools unless sanctioned by the organisation's risk management team. "If you wouldn't post it publicly, don't put it into an AI tool," he warned, explaining that AI tools may retain the data entered into them and can themselves be compromised. "Enterprise versions of tools often offer stronger privacy protections, but inputting confidential data into an AI tool is like whispering secrets in a crowded room: you can't be sure who's listening. If an AI platform is compromised or misused, that data could become an easy target for cybercriminals," he said.

Highlighting the risks identified by TELUS Digital's study, which found that 57% of enterprise employees enter high-risk information into generative AI tools, Siddle advised businesses to implement AI policies urgently. "AI risk isn't theoretical, it's real - and if you don't have an AI policy (or if your formal policy is to ban all AI usage, which amounts to the same thing), then your employees will almost certainly be using it in an undocumented, unregulatable way," he explained. Siddle outlined four critical steps for businesses: setting AI policies, using enterprise AI solutions, educating employees, and monitoring AI usage.
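The "monitoring AI usage" step can be made concrete with lightweight tooling. The sketch below, a minimal illustration rather than anything described by Protecht or TELUS Digital, shows how an organisation might screen prompts for obviously high-risk content before they reach an external AI tool; the pattern names and regular expressions are assumptions for demonstration, and a real deployment would use a proper data-loss-prevention engine tuned to the organisation's data classification scheme.

```python
import re

# Illustrative patterns only -- real DLP rules would be far more thorough.
HIGH_RISK_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any high-risk patterns found in a prompt.

    An empty list means nothing obviously sensitive was detected;
    a non-empty list could block the request or alert a reviewer.
    """
    return [name for name, pattern in HIGH_RISK_PATTERNS.items()
            if pattern.search(prompt)]
```

A screener like this does not replace a policy; it gives the policy a checkpoint, so that undocumented usage becomes visible rather than invisible.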

Emphasising the necessity of AI security training for all employees, Siddle stated, "AI security training isn't optional, it's essential. AI is becoming a daily tool for many employees, but without proper guidance, a quick query can turn into a costly data breach. Businesses already train employees on cybersecurity, phishing, and data protection, so AI needs to be part of the same playbook." He described the importance of understanding what information not to input into AI tools, recognising the potential for AI-generated content to be misleading, and ensuring the use of enterprise-approved AI solutions.

Addressing concerns about internally developed AI tools, Siddle cautioned that "internal AI doesn't mean immune AI." He elaborated on potential risks, including weak access controls, insecure APIs, and inadequate monitoring, which could render such systems vulnerable to cyber threats. "Even closed systems can be compromised," he noted, emphasising the need for stringent security controls.

Highlighting the evolving tactics of cybercriminals, Siddle identified AI-powered phishing, automated hacking, deepfake scams, and AI model manipulation as emerging threats. "AI isn't just a tool for businesses, it's a weapon for cybercriminals," he commented, illustrating the sophisticated methods now being employed by attackers.

To bolster the security of AI tools, Siddle recommended measures such as encrypting data, enforcing strict access controls, securing APIs, continuous monitoring for threats, and adherence to ethical AI principles. These steps, he suggested, would help safeguard against data breaches and cyber-attacks.
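Two of those measures, strict access controls and continuous monitoring, can be combined at the point where prompts leave the organisation. The sketch below is an assumed illustration (the role names, the `audited_ai_call` wrapper, and the `send` callable are all hypothetical, not part of any product mentioned in the article): it rejects users outside approved roles and writes an audit record that hashes the prompt, so the audit trail itself does not become a second copy of sensitive input.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit")

# Assumed role names, purely for illustration.
APPROVED_ROLES = {"analyst", "engineer"}

def audited_ai_call(user: str, role: str, prompt: str, send) -> str:
    """Enforce a simple role check and log a privacy-preserving audit
    record before forwarding a prompt to an AI backend.

    `send` is any callable that takes the prompt and returns a reply,
    standing in for whatever enterprise AI client is actually in use.
    """
    if role not in APPROVED_ROLES:
        raise PermissionError(f"role {role!r} is not approved for AI tool access")
    record = {
        "user": user,
        "time": datetime.now(timezone.utc).isoformat(),
        # Hash rather than log the prompt, so monitoring does not
        # duplicate the confidential data it is meant to protect.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    log.info(json.dumps(record))
    return send(prompt)
```

Routing all AI traffic through one such chokepoint is what makes the remaining measures, such as encryption in transit and anomaly detection on the audit log, practical to enforce.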
