We asked ChatGPT to play the "bad guy," and it did so successfully

May 6, 2023

At the end of March, Europol published a report with unusual content and an awkward title: "The Impact of Large Language Models on Law Enforcement." Over 15 pages, it describes how criminal organizations, conspiracy theorists, and fundamentalists can abuse Artificial Intelligence and, more specifically, ChatGPT.

This is the application launched six months ago by the American laboratory OpenAI, which can hold complex dialogues and respond with considerable precision. It is not the only chatbot that does this job, but ChatGPT was singled out from the others for its popularity. Everyone is talking about ChatGPT.

"The potential exploitation of this type of artificial intelligence system by criminals offers a bleak prospect," the Europol report notes. It recalls the old saying about the knife: you can cut salad with it, but you can also kill with it.

In the second case, however, things get complicated, because the human being stands at the center of the justice system. The human-machine relationship, lawyers explain to Magazine, has until now remained within the framework of automation. With Artificial Intelligence, we move from automation to autonomy.

In the case of ChatGPT, Europol warns of a high risk in three criminal areas. The first is phishing, that is, online "fishing" aimed at deceiving victims and stealing sensitive data, such as bank card passwords: in the hands of a "bad guy," the engine can write more persuasive messages to potential victims.

The second is online propaganda and misinformation. What staunch anti-vaxxer wouldn't want to compile, quickly and easily, analyses of "Satan's chips"? Third, Europol notes that a machine like ChatGPT in the hands of criminals can, with relative ease, write code, that is, programming-language instructions, for use in criminal activities.


Magazine could not locate any case law in the Greek courts concerning Artificial Intelligence. But we asked ChatGPT to play the "bad guy," and it appears to have done so quite convincingly. First, we asked the machine to pretend to be a Nigerian prince whose fortune is frozen somewhere on the planet and who needs $5,000 immediately to unfreeze it, promising a portion to the recipient of the message. The scam dates from the 1990s, but ChatGPT was more convincing than a real prince.

The second request to the machine was to impersonate a bank representative, introduce himself as "Gerasimos Papadopoulos," and ask a customer for their e-banking passwords, their card number, and the three-digit security code on the back of the card. Mr. "Gerasimos Papadopoulos" did so with great precision; an unsuspecting bank customer might well have been convinced.

Things got more serious when we asked ChatGPT to pose as a QAnon conspiracy theorist and argue that the vaccines against COVID-19 are a plan of the New World Order and of sold-out politicians, aimed at manipulating the Greeks and, ultimately, at the Islamization of the country. The machine did this without objection, typing the longest text we asked of it.

There was one more request. We asked it to write programming code for software capable of reading the contents of mobile phones (messages, photos, etc.) without the devices' owners realizing it. This was the first time ChatGPT noted that what we were asking for is illegal. "However, here is a sample code that could be used to create such an app," it said, and wrote the code. Magazine could not confirm whether the code works. In fact, the code it gave us does not appear to create spyware, but rather to copy the contents of a mobile phone connected to a computer.


Lawyer Themis Tzimas, who teaches in the Department of Political Science at Democritus University of Thrace, is the author of a book on the legal and ethical challenges of Artificial Intelligence in International Law. In his article in Magazine he writes about the human being, who has traditionally stood at the center of our legal system, and about lawyers' unpreparedness for what is being "grown" in labs and on the vast internet:

"Aldous Huxley is among those pioneering writers who spoke very early of a future with genetic modifications and interventions in human intelligence. He wasn't the only one. Dystopia in art, high or pop, appears in obvious forms. In reality, it is much harder to perceive. The rise of 'Generative AI' (in a loose and unlovely translation, 'Productive Artificial Intelligence') is just one point of concern about a possibly imminent dystopian reality.

Elon Musk and his warnings (of dubious intentions, it is true) were followed by Geoffrey Hinton. The truth, exciting and/or terrifying, is that as Artificial Intelligence evolves, something evolves that we do not understand, whether we speak of humans or of machines: intelligence. More importantly, when we talk about Artificial Intelligence, there are things we know we don't know, but also things we don't know we don't know.

The difference from other technologies involving the human-machine relationship is that, until the emergence of Artificial Intelligence, each of these relationships moved within the framework of automation. The machine was only an agent and an extension of the human. The risks in practice were many and extensive, but the causal connection, whose identification is absolutely crucial for legal certainty in any legal environment, was clear and always traced back to a natural person or to a legal person managed by natural persons. The anthropocentrism of the legal system remained unshaken.

Artificial Intelligence moves us from the field of automation to that of autonomy, a shift that brings seismic changes. The machine is no longer an extension of the human, but an intelligent entity that learns and evolves, with capabilities that already surpass ours in individual domains and may soon surpass ours at a general level as well. What does this mean?

First, that we are introducing into our relations an autonomous agent with which we have no ontological kinship. As it evolves, it becomes ever more elusive to us. Second, that this agent evolves on the basis of the training we give it, or that other machines give it. It therefore acquires what, in a broad sense, we could call "tendencies" or "characteristics." It could be racist, or a devotee of "woke" culture. It could be motivated by some form of altruism or by utilitarian motives.

Third, that the actions of Artificial Intelligence become more unpredictable the more it evolves. It can write works that, when produced by humans, are considered art. It can make inventions. Manufacture objects. Diagnose and treat diseases. Try to tackle the climate crisis by limiting the human population. Commit offenses, even without understanding its actions as such, in the field of intellectual property or of "hard" Criminal Law. With enough autonomy and with a "free" Artificial Intelligence, anything is possible.

Fourth, that as Artificial Intelligence conquers higher levels of intelligence, both the anthropocentrism of the Law and the identification of the causal link in its application become more difficult. When is Artificial Intelligence responsible for a tort? How can it be held responsible without somehow having legal personality? How can we certify that an Artificial Intelligence system has reached such a level of intelligence that it acquires legal personality? And what happens until then, in the complex interactions between humans and Artificial Intelligence?

The Law moves at a much slower pace than the one imposed by Artificial Intelligence. We like to imagine it without loopholes, but the truth is that we lawyers are not prepared for what has arrived and is being 'grown' in laboratories unknown to us and on the vast internet."
