AI chatbots found to endorse terrorism

And who, if anyone, is to blame?
05 January 2024

Interview with 

Gareth Mott, RUSI

Artist's impression of a robot talking

The UK’s independent reviewer of terrorism legislation, Jonathan Hall, has said that new laws are needed to combat AI chatbots that could radicalise users. Mr Hall told the Telegraph that the UK government’s new Online Safety Act, which passed into law last year, is “unsuited to sophisticated and generative AI”. Gareth Mott is a research fellow in the Cyber team at the Royal United Services Institute think-tank.

Gareth - In the most recent articles that have been produced on Jonathan's comments, I think he actually had a play with some of the examples of chatbot software that we have available, which are based on large language models. The big ones would be ChatGPT and Google Bard, among others. These are systems which process extremely large amounts of data to become a de facto search engine, or a de facto image creation system, or a de facto entity to talk to. And the idea is that it gives you the information that you're looking for. There's a philosophy behind the software: it's really just trying to tell you what you want to hear, within certain constraints and certain parameters. My understanding is that Jonathan played with an example of one of these chatbots and found that there was a possibility, in fact I think he found that it could, pretend to be a radicalising force, for example an Islamic State affiliate. And obviously the concern there is that, given the prominence in recent years of, for example, lone actor terrorism, this is another tool that someone might use to self-radicalise and become extremist or terroristic.

Will - These large language models and chatbots take their information from absorbing huge amounts of data on the internet. So I guess the question is, do we know where all of this stuff that is causing them to spout potentially radical information is coming from? Is it someone on the internet posting huge amounts of radicalising material in the hope that a chatbot will pick it up?

Gareth - Partly, yes. And it's worth pointing out that with some of these platforms you can use a paid service and, ostensibly, on paper, avoid your data being fed into future systems. But generally, whatever data you feed into these systems, whether it's a question, a query or an argument, is then used to improve and build further versions of these large language models. The system provides us with the information it thinks we want, and in part that is fed by the information people have previously fed into it, as well as by wider secondary sources available on the internet that it is continually trawling and gathering in order to analyse. With the large language models, it's worth pointing out that they have guardrails in place, especially the mainstream large language models that we speak about. And these guardrails are in place to basically ensure that, for example, if I was an extremist and I was asking it to help me write something that was very incendiary towards a particular ethnic minority, it shouldn't do that. It shouldn't be able to undertake that command. It should know that that command is offensive. Of course, philosophically, it doesn't itself know that the command is offensive, but it's been programmed to understand that that command shouldn't be carried out. And so those guardrails should stop it from assisting me to create a bomb, or from identifying the ideal escape route from a terrorist atrocity that I might be planning to commit. But of course there are ways to get around that: if you very carefully tailor commands, there are instances where you might be able to overcome the guardrails. At RUSI, we recently hosted a talk by the director of the NCA. He suggested we're now seeing instances of sexual predators using artificial intelligence to create artificial images of child abuse, which is obviously horrendous, right? That's a horrendous activity. But ostensibly that is breaching the guardrails that are in place in these systems. The guardrails should make it harder, or ideally impossible, for people to misuse these technologies. But generally speaking, these kinds of technologies are often dual use, and there will be actors out there who will try to find a way of breaching the guardrails.

Will - The concern seems to be as well that, because this is so decentralised and the information is coming from so many places, it's incredibly difficult to identify individuals to prosecute. Because if you radicalise someone, certainly in the UK, and they commit an act of terror, you get prosecuted for that. So who could possibly be blamed in the instance that a chatbot radicalises someone?

Gareth - That's a really good question. I was watching the Christmas lecture series by the Royal Institution. These large language models form 'galaxies of data', as they termed it, but the model doesn't necessarily know what the data actually means. It's just working out its own way to categorise the data so that it can give users what it thinks they want to hear. Even though it doesn't understand the data, it is making sense of the data in its own way. The system itself is just trying to give the user what it wants. If the user really pushes it, and keeps pushing it, to give them content that would help them self-radicalise, harden their views and harden their perspectives, and if they're determined enough, they'll probably find a way to make the system do that. I would suggest that it tells us more about the individual requesting that data than it does about the system itself. If the large language model didn't exist, that individual might try and find alternative sources elsewhere, in the same sense that individuals were able to radicalise themselves before social networks were widely available. Individuals could still self-radicalise through literature, through pamphlets, through discussions with peers. So perhaps what we have now is an evolution, but it isn't necessarily revolutionary. I suspect it won't be widely adopted as a radicalising force, but it's another possibility. And of course we have seen instances in the news where there have been suggestions that people have used this to self-radicalise. So it is happening. But who do we focus on in terms of the criminality of this? I would suggest that it is more apt to focus on the user requesting information from the system rather than on the system itself.

Will - Ultimately, all technology is a tool, and it depends on how you wield it.

Gareth - Yeah, exactly.
