With the PaLM 2 language model, Google wants to better compete with OpenAI’s GPT-4

May 15, 2023

Among the major announcements at Google I/O this week was PaLM 2: Google’s latest AI language model that will compete with systems like OpenAI’s GPT-4.

“The PaLM 2 language model is stronger in logic and reasoning, thanks to extensive training in logic and reasoning,” Google CEO Sundar Pichai said onstage at the company’s I/O conference. “It is also trained on multilingual texts covering more than 100 languages.”

PaLM 2 is much better at a number of text-based tasks, Google’s senior director of research Slav Petrov told reporters. “It has improved significantly compared to PaLM 1 [which was announced in April 2022],” Petrov said.

As an example of its multilingual capabilities, Petrov showed how PaLM 2 can understand idioms in different languages, giving the example of the German phrase “Ich verstehe nur Bahnhof”, which literally translates to “I only understand the train station” but is better understood as “I don’t understand what you’re saying” or, as the English idiom goes, “it’s all Greek to me”.

In a research paper describing PaLM 2’s capabilities, Google engineers stated that the system’s proficiency in a language is “sufficient to teach this language”, noting that this is partly due to the higher prevalence of non-English text in the training data.

Like other large language models, which require enormous amounts of data, time, and resources to create, PaLM 2 is not so much a single product as a family of products. Its different versions are expected to be deployed in commercial and consumer environments. The system is available in four sizes, named Gecko, Otter, Bison, and Unicorn, from smallest to largest, and can be fine-tuned with domain-specific data to perform certain tasks for enterprise customers.

Think of these customizations as taking a basic truck chassis and adding a new engine or front bumper so it can perform certain tasks or handle certain terrain better. There is already a version of PaLM 2 trained on health data (Med-PaLM 2), which, according to Google, can answer questions similar to those on the US medical licensing exams at an “expert” level. There is also a version trained on cybersecurity data (Sec-PaLM 2), which can “explain the behavior of potential malicious scripts and help identify threats in code,” Petrov said. Both models will be available through Google Cloud, initially for select customers.

As for Google’s own products, PaLM 2 already powers 25 of the company’s features and services, including Bard, the company’s experimental chatbot. Updates arriving in Bard include improved coding capabilities and broader language support. PaLM 2 is also used for AI features in Google Workspace applications such as Docs, Slides, and Sheets.

Notably, Google says that the lightest version of PaLM 2, Gecko, is small enough to run on mobile phones and can process 20 tokens per second, equivalent to roughly 16 to 17 words. Google did not say what hardware was used to test this model, other than that it runs “on the latest smartphones.” Still, the miniaturization of such language models matters: these systems are expensive to run in the cloud, and being able to run them locally would bring other benefits, such as improved privacy. The catch, of course, is that the smaller versions of a language model are inevitably less capable than their bigger siblings.
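As a rough sanity check on that throughput figure, the sketch below converts the quoted token rate into words using an assumed average of about 0.8 words per token for English text; this ratio is a common rule of thumb, not a number Google has published.

```python
# Back-of-the-envelope check of the Gecko throughput figure Google quoted.
# The words-per-token ratio is an assumed rule of thumb for English text.

TOKENS_PER_SECOND = 20   # rate Google cited for Gecko on a recent smartphone
WORDS_PER_TOKEN = 0.8    # assumed average ratio for English text (not from Google)

words_per_second = TOKENS_PER_SECOND * WORDS_PER_TOKEN
print(f"~{words_per_second:.0f} words per second")  # ~16, in line with the 16-17 words mentioned above
```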


With PaLM 2, Google hopes to close the “artificial intelligence gap” between the company and rivals such as Microsoft, which has aggressively pushed AI language tools into its Office software suite. Microsoft now offers AI features that help summarize documents, compose emails, create presentation slides, and more. Google will have to introduce at least comparable features or risk being seen as slow to put its AI research into practice.

While PaLM 2 is certainly a step forward for Google’s work on AI language models, it suffers from issues and challenges common to the technology in general.

For example, some experts are beginning to question the legitimacy of the training data used to build language models. This data usually comes from the internet and often includes pirated copyrighted texts and e-books. The tech companies that build these models refuse to answer questions about where their training data comes from. Google continued this tradition in its description of PaLM 2, noting only that the corpus used to train the system consists of “a diverse set of sources: web docs, books, code, math, and chat data,” without elaborating.

There are also the well-known problems with the output of AI language models, such as “hallucinations”: the tendency of these systems to simply make up information. Speaking to The Verge, Google’s VP of Research Zoubin Ghahramani said that, in this respect, PaLM 2 is an improvement over previous models “in the sense that we’re putting a lot of effort into continually improving the metrics.” At the same time, he pointed out that the industry as a whole “still has a ways to go” in the fight against false information generated by artificial intelligence.
