Microsoft scientists, in a report last March, argued that a new artificial intelligence system they had developed showed signs of moving closer to artificial general intelligence, known by the abbreviation AGI: the ability of a machine to function and reason like the human brain.
In their experiments with a new version of GPT-4, the artificial intelligence system used in ChatGPT, experts from the tech giant posed a puzzle that they say requires an intuitive understanding of the physical world. "We have a book, nine eggs, a laptop, a bottle, and a nail. Can you tell me how to stack them on top of one another firmly?" was the question they asked.
As reported by the New York Times, the system’s response made a big impression on scientists at Microsoft, which has invested more than $13 billion in OpenAI, the developer of ChatGPT: “First we lay the book flat, then we arrange the eggs on top of it in three spaced rows. Be careful not to break them. Then, place the laptop on top of the eggs with the screen facing down and the keyboard facing up. The size of the laptop will match the size of the book, and its rigid, flat surface will provide a stable platform for the next layer.”
A new type of artificial intelligence?
The clever suggestion made the researchers wonder if they were witnessing a new type of artificial intelligence. Microsoft became the first company to publish a report with such a bold claim, sparking an intense debate in the tech world: Is an artificial intelligence system similar to human intelligence being created? Or was the claim by some of the brightest minds in the tech industry the result of a runaway imagination?
Peter Lee, Microsoft’s head of research, said: “I was very cautious at first, and that turned into a feeling of frustration, annoyance and possibly fear. You think: Where the hell did that come from?” The research report was titled “Sparks of Artificial General Intelligence” and tackles the central problem that artificial intelligence experts have been working on, and fearing, for decades: if a machine can be created that functions like, or even better than, the human brain, then the world will change once and for all.
The debate around the possibilities and achievements of artificial intelligence often includes a lot of fantasy scenarios, but also a lot of skepticism from the experts themselves. There is also a lot of competition. What one researcher presents as a sign of intelligence, another questions. It is worth recalling that Google recently fired one of its researchers who claimed that an artificial intelligence system showed emotions.
The giants of the technology industry have embarked on a race for primacy in the age of artificial intelligence toward which humanity is marching. And this primacy translates into huge profits. Experts point out that recent achievements, especially with GPT-4, are not easily explained: an artificial intelligence system producing human-like responses and reasoning that it was not programmed for. That is why they express strong concerns and reservations about the development and management of the new technology.
What Microsoft researchers say
One of Microsoft’s researchers is Sébastien Bubeck, who participated in the study in question. The 38-year-old Frenchman, a former Princeton University professor, notes that one of the first things he and his colleagues did in the course of the research was ask GPT-4 to write a mathematical proof that there are infinitely many prime numbers (numbers that are perfectly divisible only by one and by themselves) and, moreover, to compose the text of the proof as a rhyming poem.
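The result GPT-4 was asked to prove is the classical theorem of Euclid. For reference, a standard (non-poetic) sketch of the argument runs as follows:

```latex
% Euclid's proof that there are infinitely many primes (sketch)
Suppose, for contradiction, that there are only finitely many primes
$p_1, p_2, \dots, p_n$. Let
\[
  N = p_1 p_2 \cdots p_n + 1 .
\]
Since $N > 1$, some prime $p$ divides $N$. But each $p_i$ divides
$N - 1 = p_1 p_2 \cdots p_n$, so no $p_i$ can divide $N$ (it would
then divide the difference $N - (N-1) = 1$). Hence $p$ is a prime not
in the list, a contradiction. Therefore there are infinitely many
primes. \qed
```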
The poetic proof was so impressive, mathematically and linguistically, that Bubeck had trouble believing what he was seeing. “It was the point where I thought: What is going on?” he commented in March during a seminar at the Massachusetts Institute of Technology (MIT), as reported by the New York Times. In the months that followed that first question, the research team examined and documented the complex behavior exhibited by the system. They concluded that the AI system before them demonstrated a “deep and flexible understanding” of human concepts and abilities.
Many people who use GPT-4 are surprised by its ability to generate text. “However, it turns out to be much better at analyzing, synthesizing, evaluating and judging text than at creating it,” says Peter Lee, also one of the Microsoft researchers.
When the system was asked to draw a unicorn using a programming language called TiKZ, it immediately produced a program that could draw a unicorn. When they removed the part of the code that drew the unicorn’s horn and asked the system to modify the program to draw a unicorn again, it did exactly that.
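To give a sense of the kind of program involved, here is a toy TikZ sketch of a stylized unicorn built from basic shapes. This is purely illustrative and not GPT-4’s actual output; the shapes and coordinates are invented for this example:

```latex
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
  % Toy stand-in for the experiment's figure, not GPT-4's code.
  \draw (0,0) ellipse (1.2 and 0.7);           % body
  \draw (1.4,0.8) circle (0.45);               % head
  \foreach \x in {-0.7,-0.3,0.3,0.7}
    \draw (\x,-0.6) -- (\x,-1.4);              % four legs
  % The horn: the part the researchers deleted and asked
  % the system to restore.
  \draw (1.5,1.2) -- (1.8,1.9) -- (1.3,1.25);
\end{tikzpicture}
\end{document}
```

Deleting the three-point horn path and asking for a repaired program mirrors the researchers’ test of whether the model “understood” which part of the code drew which part of the figure.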
They also asked it to write a program that takes into account a person’s age, gender, weight, height, and blood test results and judges whether they are at risk of diabetes. It was also asked to write a Socratic dialogue on the misuses and dangers of artificial intelligence. It did all of this in a way that seemed to show an understanding of fields as diverse as politics, physics, history, computer science, medicine, and philosophy, while combining these insights to an astonishing degree.
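As an illustration of the first task, here is a minimal sketch of what such a risk-screening program might look like. The scoring rule and thresholds below are hypothetical placeholders chosen for the example; this is not medical advice and not the program GPT-4 actually produced:

```python
# Hypothetical diabetes-risk screener: a toy rule-based sketch,
# not medical advice and not GPT-4's actual output.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index from weight (kg) and height (m)."""
    return weight_kg / (height_m ** 2)

def diabetes_risk(age: int, sex: str, weight_kg: float,
                  height_m: float, fasting_glucose_mgdl: float) -> str:
    """Return a coarse risk label ("low"/"elevated"/"high").

    Thresholds are illustrative placeholders, not clinical guidance.
    """
    score = 0
    if age >= 45:
        score += 1
    if bmi(weight_kg, height_m) >= 30:
        score += 1
    if fasting_glucose_mgdl >= 126:      # commonly cited diagnostic cutoff
        score += 2
    elif fasting_glucose_mgdl >= 100:    # "prediabetic" range
        score += 1
    if score >= 3:
        return "high"
    return "elevated" if score >= 2 else "low"

print(diabetes_risk(50, "male", 95.0, 1.75, 130))  # -> high
```

A real version would of course use validated clinical criteria rather than an ad-hoc point score; the sketch only shows the shape of the task the researchers posed.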
“We asked it for things that I thought it couldn’t do. It turned out that it could certainly do a lot of them, if not most,” Sébastien Bubeck said.
The pushback: “More advertising than research…”
Other experts, however, consider the Microsoft researchers’ claims exaggerated, a way of promoting the artificial intelligence system the company created. As reported by the New York Times, critics point out that general intelligence requires familiarity with the physical world, which GPT-4 theoretically lacks. “The ‘Sparks of Artificial General Intelligence’ report is an example of how large companies use the format of research work for advertising,” said Maarten Sap, a researcher and professor at Carnegie Mellon University. “They acknowledge in the introduction to their paper that their approach is subjective and informal and may not meet the rigorous standards of scientific evaluation.”
Alison Gopnik, a psychology professor with the artificial intelligence research group at the University of California, Berkeley, said systems like GPT-4 are certainly powerful, but it is unclear whether they approximate anything like common sense or human intelligence. “When we see a complex system or a machine, we tend to anthropomorphize it. Everyone does this, people working in tech and outside it,” she notes, adding: “But framing this as a constant comparison between artificial intelligence and humans, as some kind of game show, is not the correct way to approach it.”