The story of a fictional city and the fears of citizens and scientists about Artificial Intelligence

May 17, 2023

Once upon a time there was a city that was growing at an astonishing rate. The craftsmen were building all the time. As the work of their hands brought income and the construction business became extremely profitable, they began to build more and more complex multi-story buildings, using more or less the same technologies. It all happened very fast. When cracks started to appear here and there, some people were a little worried, but no one paid much attention. The craftsmen enthusiastically continued to build taller and taller skyscrapers at a frantic pace – towers that some began calling the “Towers of Babel” – without inventing new construction methods or new building standards. The towers reached such a size that each one could house many thousands of inhabitants, who felt great joy and satisfaction at the privilege of living in such impressive residences and using their infrastructure every day.

At some point, the craftsmen noticed that the cracks were multiplying at an accelerating rate, and then they began to really worry: what was causing the cracks? Was there any chance that the buildings would collapse? Had they exceeded the safe height limits for such structures? The tower owners had their own concerns, different from those of the builders. If the towers collapsed, who would compensate the victims, and how? What regulations and legislation existed for such cases? Were there any regulations at all for such large buildings? Soon the residents themselves, who had initially been excited to live in the iconic skyscrapers, began to worry: were they safe? Were the craftsmen really capable of creating such large and complex structures safely? The city government, for its part, had other, more immediately pressing problems to attend to and was in no hurry to act, despite the fact that the fissures – and the concerns – kept growing and deepening. In other words, no one knew what to do, but many began to fear the worst.

With this story, a product of human creativity and not of a logically precise combination of words and meanings by the large language model ChatGPT, Ioannis Pitas, a professor at the Aristotle University of Thessaloniki (AUTH), describes the situation that has lately been taking shape around Artificial Intelligence (AI): “The above story is a good parable for the current state of affairs when it comes to generative AI and large language models like ChatGPT. Enthusiasm for AI is mixed with technophobia. Technophobia is quite natural for the general public, who love new and exciting things but are often afraid of the unknown. What is new, however, is that a number of leading scientists have become technosceptics, if not technophobes,” says Mr. Pitas, President of the International AI Doctoral Academy (AIDA) and Director of the Artificial Intelligence and Information Analysis (AIIA) Laboratory.

Regarding the latter – the technoscepticism of the scientists themselves – he mentions to APE-MPE two typical examples. The first is the open letter recently signed by some 2,000 scientists and entrepreneurs, including MIT physicist Max Tegmark, computer science professor Stuart Russell and Elon Musk, calling for a pause of at least six months in the development of AI more powerful than GPT-4. The second is the recent statements of leading artificial intelligence scientist Professor Geoffrey Hinton, who contributed significantly to the creation of powerful algorithms through his work at Google. Hinton, 75, recently resigned from the US tech giant so that, as he said, he could speak more openly about the dangers of AI.

Should AI research stop, even temporarily?

“Technophobia is neither justified nor a solution. Of course, everyone has the right to fear and to question the current state of AI: no one knows why large language models work so well, or whether they have any limits. In addition, there remains a serious risk that malicious actors will create ‘artificial intelligence bombs’, especially if states remain bystanders when it comes to regulation. These are reasonable concerns that fuel the fear of the unknown, even among distinguished scientists. After all, they too are human,” says Mr. Pitas.

However, on the question of whether AI research can and should be halted, even temporarily, his opinion is that such a thing is neither possible nor desirable. “AI is humanity’s response to a world of increasing complexity. Since the processes of increasing physical and social complexity are fundamental and seem inexorable, AI and citizen education are our only hope for a smooth transition from today’s Information Society to the Knowledge Society. Otherwise, we may face a catastrophic social collapse. The solution is to deepen our understanding of AI developments, accelerate its rational development and regulate its use in the direction of maximizing its positive consequences while minimizing its already apparent and still hidden negative effects. Artificial intelligence research can and must become different: more open, democratic, scientific and ethical,” he stresses, adding that it is parliaments and elected governments – not companies or individual scientists – who should have the first say on important issues of AI research that may have far-reaching societal implications.

According to Mr. Pitas, the positive impact of AI systems can greatly outweigh their negative aspects if the appropriate regulatory measures are taken. In his opinion, the biggest current threat comes from the fact that such AI systems can remotely deceive an extremely large number of ordinary citizens who have little (or average) education and/or limited capacity to verify information. This situation can be extremely dangerous for democracy and for any form of socio-economic progress, he says.

Another major threat is the use of AI in illegal activities: cheating on university exams is a rather “benign” use compared with the other possibilities AI offers for carrying out criminal activities. Even relatively low-skilled criminals can create sophisticated malware or large-scale fake data with AI tools. “We are already seeing their ingenuity online. Of course, such technology can also be misused by authoritarian or rogue states for other purposes, e.g. the destabilization of democracy. For this reason, international law should require AI systems to be registered in an ‘AI system registry’ and to inform their users that they are talking to an AI system or using its results,” he said, adding that the impact of AI on work and markets will be very positive in the medium and long term. According to the professor, it also remains crucial that the advanced core technologies of AI systems be opened up and the data related to them democratized (at least in part), again in the direction of maximizing benefit and socio-economic progress.

“Furthermore, robust and appropriate financial compensation mechanisms must be provided for the champions of AI technology, to offset any profits lost due to the aforementioned opening up of source code and data and to ensure strong future investment in AI R&D (for example, through patents or compulsory licensing schemes),” he emphasized, and concluded: “The balance of AI research between academia and industry needs to be reconsidered in order to maximize research output. At the same time, educational practices must be re-examined at all levels of education so as to maximize the benefit of AI technologies and to create a new generation of creative and adaptable citizens and scientists (in AI and beyond), while strengthening the appropriate mechanisms for the regulation, supervision and funding of AI development.”

Source: APE – MPE
