In a bid to push the boundaries of artificial intelligence (AI) language models, Google has unveiled PaLM 2, the latest addition to its lineup, positioned to compete with OpenAI’s GPT-4. Google CEO Sundar Pichai made the announcement during the company’s I/O conference, highlighting PaLM 2’s improved logic and reasoning capabilities, the result of intensive training in those areas.
Integration Into Google Workspace
PaLM 2 builds upon its predecessor, PaLM (short for Pathways Language Model), which was introduced in April 2022. Powered by Pathways, a machine learning technique developed in-house at Google, PaLM 2 supports over 100 languages and exhibits strong proficiency in reasoning, code generation, and multilingual translation.
Already in action, PaLM 2 powers 25 features and products across Google’s portfolio. One of these is Bard, Google’s experimental chatbot, which gains advanced coding features and broader language support. PaLM 2 has also been integrated into Google Workspace applications such as Docs, Slides, and Sheets, extending their functionality and giving users a more refined experience.
Four Different Sizes
PaLM 2 arrives in four distinct sizes: Gecko, Otter, Bison, and Unicorn, each suited to a different deployment setting, whether consumer-oriented or enterprise-focused. Google has also built domain-specific versions for enterprise customers. Med-PaLM 2, trained on health data, can answer questions akin to those encountered in the US Medical Licensing Examination. Similarly, Sec-PaLM 2, trained on cybersecurity data, can explain the behavior of potentially malicious scripts and assist in detecting threats in code.
Google vs. OpenAI
While Google asserts that PaLM 2 outperforms GPT-4 in mathematical operations, translation tasks, and reasoning abilities, some experts have questioned Google’s benchmarks, arguing that in informal language tests PaLM 2 appears to lag behind both GPT-4 and Bing, and that the claims warrant further scrutiny.
Google has chosen not to disclose the exact number of parameters in PaLM 2. Parameters encode the model’s learned “knowledge” and are often used to gauge the size and complexity of language models. For comparison, GPT-3, released in 2020, comprised 175 billion parameters, while the original PaLM, released in 2022, featured a whopping 540 billion. OpenAI, for its part, has yet to divulge the parameter count for GPT-4, adding an air of mystery to the ongoing competition between these tech giants.
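To give a sense of where figures like 175 billion come from, the sketch below applies a common rule-of-thumb estimate for decoder-only transformer models: roughly 12 × layers × (model width)² weights in the attention and feed-forward blocks. The hyperparameters used are GPT-3’s published values (96 layers, width 12,288); they are an illustration only, since neither PaLM 2’s nor GPT-4’s configuration has been disclosed.

```python
def approx_params(n_layers: int, d_model: int) -> int:
    """Rough parameter count for a decoder-only transformer.

    Per layer: ~4*d^2 for attention (Q, K, V, output projections)
    plus ~8*d^2 for the MLP (4x hidden widening), giving ~12*d^2.
    Embeddings and biases are ignored in this approximation.
    """
    return 12 * n_layers * d_model ** 2

# GPT-3's published configuration (hypothetical stand-in for undisclosed models)
estimate = approx_params(n_layers=96, d_model=12288)
print(f"Estimated parameters: {estimate / 1e9:.1f}B")  # lands close to the published 175B
```

The estimate comes out within about 1% of GPT-3’s reported 175 billion, which is why parameter counts track so closely with a model’s depth and width.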
As PaLM 2 continues to evolve and face rigorous testing, the AI community eagerly awaits a deeper exploration of its capabilities and a potential paradigm shift in the field of natural language processing.