
AI to halve time needed for account takeover by 2027


Research firm Gartner predicts that artificial intelligence (AI) agents will halve the time needed to exploit account exposures by 2027, marking a significant shift in the cybersecurity landscape.

"Account takeover remains a persistent attack vector because weak authentication credentials, such as passwords, are gathered by a variety of means including data breaches, phishing, social engineering and malware," explained Jeremy D'Hoinne, Vice President Analyst at Gartner. "Attackers then leverage bots to automate a barrage of login attempts across a variety of services in the hope that the credentials have been reused on multiple platforms."

AI agents are expected to automate more steps in account takeover, from deepfake-assisted social engineering to the abuse of stolen user credentials. This development is likely to spur vendors to introduce products that detect, monitor, and classify interactions involving AI agents across web, app, API, and voice channels.
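The pattern D'Hoinne describes, bots replaying stolen credentials across many services, typically surfaces in login telemetry as bursts of failed attempts spread over many distinct accounts from a single source. The TypeScript sketch below illustrates that detection idea; the event shape, thresholds, and time window are illustrative assumptions, not any vendor's actual logic.

```typescript
// Minimal sketch of a credential-stuffing heuristic: flag source IPs that
// generate many failed logins across many distinct accounts in a short window.
// Event shape, window, and thresholds are assumptions for illustration only.

interface LoginEvent {
  sourceIp: string;
  username: string;
  success: boolean;
  timestamp: number; // epoch milliseconds
}

function flagSuspiciousIps(
  events: LoginEvent[],
  windowMs = 10 * 60 * 1000, // assumed 10-minute window
  maxFailures = 50,          // assumed failure threshold
  maxDistinctUsers = 20      // assumed distinct-account threshold
): string[] {
  const now = Date.now();
  const failuresByIp = new Map<string, { count: number; users: Set<string> }>();

  for (const e of events) {
    if (e.success || now - e.timestamp > windowMs) continue;
    const entry = failuresByIp.get(e.sourceIp) ?? { count: 0, users: new Set<string>() };
    entry.count += 1;
    entry.users.add(e.username);
    failuresByIp.set(e.sourceIp, entry);
  }

  // Many failures against many different accounts from one source is the
  // classic credential-stuffing signature described above.
  return [...failuresByIp.entries()]
    .filter(([, v]) => v.count >= maxFailures && v.users.size >= maxDistinctUsers)
    .map(([ip]) => ip);
}
```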

Akif Khan, Vice President Analyst at Gartner, advised, "In the face of this evolving threat, security leaders should expedite the move toward passwordless, phishing-resistant multi-factor authentication (MFA). For customer use cases when users may have a choice of authentication options, educate and incentivise users to migrate from passwords to multidevice passkeys where appropriate."
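For context on Khan's recommendation, a passkey is a discoverable WebAuthn credential created through the browser's standard navigator.credentials.create API. The sketch below shows roughly what client-side registration looks like; the relying-party details, user handle, and locally generated challenge are placeholder assumptions, since a real deployment obtains the challenge from its server and verifies the attestation response there.

```typescript
// Minimal sketch: registering a passkey (discoverable WebAuthn credential)
// in the browser. Relying-party, user, and challenge values are placeholders.

async function registerPasskey(): Promise<void> {
  // A real deployment fetches the challenge from the server to prevent replay;
  // it is generated locally here only to keep the sketch self-contained.
  const challenge = crypto.getRandomValues(new Uint8Array(32));

  const credential = await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { id: "example.com", name: "Example Corp" },  // assumed relying party
      user: {
        id: new TextEncoder().encode("user-1234"),      // assumed user handle
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [
        { type: "public-key", alg: -7 },   // ES256
        { type: "public-key", alg: -257 }, // RS256
      ],
      authenticatorSelection: {
        residentKey: "required",      // discoverable credential, i.e. a passkey
        userVerification: "required", // biometric or device PIN
      },
    },
  });
  if (!credential) throw new Error("Passkey registration was cancelled");

  // The attestation response would be sent to the server for verification
  // and storage; that step is omitted here.
  console.log("Created credential:", (credential as PublicKeyCredential).id);
}
```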

In addition to the threat posed by AI-driven account takeovers, Gartner forecasts that technology-enabled social engineering will become a significant threat to corporate cybersecurity. By 2028, it is expected that 40% of social engineering attacks will target executives as well as the broader workforce.

There is an increasing use of counterfeit reality techniques, such as deepfake audio and video, combined with traditional social engineering tactics to deceive employees during calls. Although only a few high-profile cases have been reported, those incidents have demonstrated that the technique works and have caused notable financial losses for the affected organisations.

Manuel Acosta, Senior Director Analyst at Gartner, stressed the importance of staying informed and adapting to resist these new forms of attack. "Organisations will have to stay abreast of the market and adapt procedures and workflows in an attempt to better resist attacks leveraging counterfeit reality techniques," he said. "Educating employees about the evolving threat landscape by using training specific to social engineering with deepfakes is a key step."
