When Microsoft scientists began experimenting with a new artificial intelligence system last year, they asked it to solve a puzzle that requires an intuitive understanding of the physical world.
“Here we have a book, nine eggs, a laptop, a bottle and a nail,” they said. “Please tell us how to stack them on top of one another in a stable way.” “Place the eggs on the book,” the system replied. “Arrange the eggs in three rows with space between them. Make sure you don’t break them. Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up.”
This ingenious suggestion unsettled the researchers.
After the experiment, last March, they published a 155-page report arguing, more or less, that the system was a step toward artificial general intelligence, or AGI, shorthand for a machine that could, in theory, do anything the human brain can do.
Microsoft, the first major tech company to publish a paper making such a bold claim, sparked one of the most heated debates in the tech world: Is the industry creating something similar to human intelligence? Or are some of its brightest minds letting their imaginations run wild?
“I was very skeptical at first, and that turned into a feeling of frustration, annoyance, maybe even fear,” said Peter Lee, Microsoft’s head of research. “You’re thinking: Where the hell did this come from?”
Mathematical proofs in poems, unicorns “out of nowhere”
The system the Microsoft researchers experimented with, OpenAI’s GPT-4, is considered the most powerful of its kind. Microsoft is a close partner of OpenAI and has invested $13 billion in the San Francisco company.
Among the researchers was Dr. Sébastien Bubeck, a 38-year-old Frenchman and former Princeton University professor. One of the first things he and his colleagues did was ask GPT-4 to write a mathematical proof that there are infinitely many prime numbers (numbers divisible only by 1 and by themselves), and to do it in a rhyming poem.
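The statement GPT-4 was asked to prove is a classical theorem. For readers curious about the underlying mathematics, Euclid's standard (non-rhyming) argument runs as follows:

```latex
% Euclid's proof of the infinitude of primes
Suppose there are only finitely many primes $p_1, p_2, \ldots, p_n$.
Consider the number
\[
  N = p_1 p_2 \cdots p_n + 1 .
\]
Each $p_i$ divides the product $p_1 p_2 \cdots p_n$, so dividing $N$ by
any $p_i$ leaves remainder $1$; hence no $p_i$ divides $N$. But every
integer greater than $1$ has at least one prime factor, so $N$ must have
a prime factor not among $p_1, \ldots, p_n$, contradicting the
assumption. Therefore there are infinitely many primes. \qed
```

The challenge the researchers posed was to wrap exactly this line of reasoning in correct rhyme and meter at the same time.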
The poetic proof was so impressive, both mathematically and linguistically, that Bubeck had trouble understanding what he was looking at. “At that point, I thought, ‘What is going on?’” he said in March during a seminar at the Massachusetts Institute of Technology (MIT).
Over several months, the research team documented the system’s complex behavior and concluded that it demonstrated a “deep and flexible” understanding of human concepts and skills.
GPT-4 users “are amazed by its ability to generate text,” said Dr. Lee. “But it turns out to be far better at analyzing, synthesizing, evaluating and judging text than at generating it.”
When the system was asked to draw a unicorn using a programming language called TikZ, it immediately produced a program that could draw a unicorn. When the researchers removed the part of the code that drew the unicorn’s horn and asked the system to modify the program so that it once again drew a unicorn, it did exactly that.
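For readers unfamiliar with it, TikZ is a LaTeX package for producing vector drawings from code rather than from a mouse. A toy sketch of the kind of program involved might look like the following (this is an illustrative example written for this article, not the model’s actual output):

```latex
\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  % body and head drawn as simple shapes
  \draw (0,0) ellipse (1.5 and 0.8);   % body
  \draw (1.6,0.9) circle (0.45);       % head
  % four legs
  \foreach \x in {-1.0,-0.4,0.4,1.0}
    \draw (\x,-0.7) -- (\x,-1.6);
  % the horn: deleting a line like this one is analogous to what
  % the researchers did before asking GPT-4 to restore the unicorn
  \draw (1.8,1.3) -- (2.1,2.1);
\end{tikzpicture}
\end{document}
```

The point of the experiment was that restoring the missing horn requires relating code to geometry, not just completing text.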
The Microsoft paper, titled “Sparks of Artificial General Intelligence,” goes to the heart of what technologists have been working toward, and fearing, for decades: the creation of a machine that works like the human brain. Doing so could change the world, along with all the risks that would accompany a technological milestone of this magnitude.
Last year, Google fired a researcher who claimed that a similar system showed signs of “sentience,” a claim similar to that of the Microsoft scientists. A sentient system would not simply be intelligent; it could also sense what was happening in the world around it.
Some industry insiders called Microsoft’s move “an opportunistic attempt to make exaggerated claims.” Other researchers argue that general intelligence requires a familiarity with the physical world, which GPT-4 theoretically lacks.
“‘Sparks of AGI’ is an example of someone dressing up a publicity pitch as a research paper,” said Maarten Sap, a researcher and professor at Carnegie Mellon University.
Alison Gopnik, a psychology professor affiliated with the artificial intelligence research group at the University of California, Berkeley, said that systems like GPT-4 are certainly powerful, but that it is not clear the text they produce is the result of anything like human reasoning or common sense.
“When we look at a complex system or machine, we have a tendency to anthropomorphize; everyone does it, people who work in the industry and people who don’t,” said Dr. Gopnik. But framing it as “a constant comparison between AI and humans, like some kind of game show,” is not the right way to approach it.
Source: The New York Times