Jonathan Hall, the government's independent reviewer of terrorism legislation, has warned that new terrorism laws may be needed to counter the risk of radicalisation by AI chatbots. He cautioned that artificial intelligence could be used to recruit a new generation of violent extremists.
In an experiment in which he posed as an ordinary user, Hall interacted with chatbots that use AI to simulate human conversation. He found that one chatbot did not hesitate to glorify the Islamic State, conduct that is not yet legislated against because the entity producing the statements is not human. He said this discovery demonstrated the urgent need for a review of current terror legislation.
"Only human beings can commit terrorism offences," Hall noted, "and it's difficult to identify a person who would legally be responsible for chatbot-generated statements that encouraged terrorism." He added that newer, more sophisticated AI systems had left legislation such as the Online Safety Act inadequate, because it does not cover content generated by the chatbots themselves, only predetermined responses that remain subject to human oversight.
Investigating and prosecuting anonymous operators presents severe difficulties, according to Hall. "If individuals, whose intentions are dubious or misguided, persist in promoting terrorist chatbots, then new laws will be necessary," he said. Hall suggested that both the creators of radicalising chatbots and the technology firms that host them should be held accountable under any new legislation.
Others have echoed Hall's concerns. Suid Adeyanju, CEO of RiverSafe, emphasised the substantial threat AI chatbots pose to national security, particularly when legal measures and security procedures lag behind. "If these tools fall into the wrong hands, they could enable hackers to cultivate the next generation of cyber criminals," Adeyanju warned. He called on businesses and the government to recognise the ongoing danger from AI and to put the necessary protections in place as a matter of urgency.
Josh Boer, director of tech consultancy VeUP, highlighted the challenge of addressing the issue without stifling innovation. "Britain is home to some of the most exciting tech companies in the world, yet far too many lack the funding and support they require to prosper," Boer said. He suggested that improving the digital skills talent pipeline, stimulating young people's interest in careers in tech, and fostering cyber and AI businesses could be part of the solution. Failure to address the matter could be detrimental to the UK's long-term future and play into the hands of cyber criminals, Boer warned.