
The criminal use of ChatGPT – a cautionary tale

In response to the growing public attention given to ChatGPT, the Europol Innovation Lab organised a number of workshops with subject matter experts from across Europol to explore how criminals can abuse large language models (LLMs) such as ChatGPT, as well as how such models may assist investigators in their daily work.

Their insights are compiled in Europol’s first Tech Watch Flash report. Entitled ‘ChatGPT – the impact of Large Language Models on Law Enforcement’, this document provides an overview of the potential misuse of ChatGPT and offers an outlook on what may still be to come.

The aim of this report is to raise awareness about the potential misuse of LLMs, to open a dialogue with Artificial Intelligence (AI) companies to help them build in better safeguards, and to promote the development of safe and trustworthy AI systems.

A longer and more in-depth version of this report was produced for law enforcement only.

What are large language models? 

ChatGPT is an LLM. A large language model is a type of AI system that can process, manipulate, and generate text.

Training an LLM involves feeding it large amounts of data, such as books, articles and websites, so that it can learn the patterns and connections between words to generate new content.
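To make this training loop concrete, the minimal sketch below (in Python, using the PyTorch library) illustrates next-token prediction, the objective at the heart of LLM training. Everything here is a toy stand-in: real LLMs use transformer architectures, tokenizers, and corpora of billions of words.

    import torch
    import torch.nn as nn

    vocab_size, embed_dim = 1000, 64  # toy sizes, for illustration only

    # A deliberately tiny "language model": embed each token, then score
    # every possible next token. Real LLMs use deep transformer stacks.
    model = nn.Sequential(
        nn.Embedding(vocab_size, embed_dim),
        nn.Linear(embed_dim, vocab_size),
    )
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Training text reduced to a sequence of token ids (a random stand-in here).
    tokens = torch.randint(0, vocab_size, (100,))

    for step in range(10):
        inputs, targets = tokens[:-1], tokens[1:]  # predict each next token
        logits = model(inputs)
        loss = loss_fn(logits, targets)  # penalise wrong next-token guesses
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

In a real system this loop runs over billions of tokens, which is how a model absorbs the patterns and connections between words described above.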

The current publicly accessible model underlying ChatGPT is capable of processing and generating human-like text in response to user prompts. Specifically, the model can answer questions on a variety of topics, translate text, engage in conversational exchanges (‘chatting’), generate new content, and produce functional code.
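As a simple illustration of this prompt-and-response interaction, the sketch below queries a model of the same family programmatically. It assumes OpenAI’s official Python client library and an API key set in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; available models change over time
        messages=[
            {"role": "user", "content": "Translate 'good morning' into Dutch."},
        ],
    )
    print(response.choices[0].message.content)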

The dark side of Large Language Models

As the capabilities of LLMs such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals provides a grim outlook.

The following three crime areas are amongst the many areas of concern identified by Europol’s experts:

  • Fraud and social engineering: ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes. The ability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in criminal actors.
  • Disinformation: ChatGPT excels at producing authentic-sounding text at speed and scale. This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort.
  • Cybercrime: In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages. For a potential criminal with little technical knowledge, this is an invaluable resource for producing malicious code.

As technology progresses, and new models become available, it will become increasingly important for law enforcement to stay at the forefront of these developments to anticipate and prevent abuse.

The report aims to provide an overview of the key results from a series of workshops on the potential misuse of ChatGPT held with subject matter experts at Europol.

The use cases detailed provide a first idea of the vast potential LLMs already have, and give a glimpse of what may still be to come.

ChatGPT can already facilitate a significant number of criminal activities, from helping criminals stay anonymous to supporting specific crimes such as terrorism and child sexual exploitation.
