One of the selling points of Google’s flagship generative AI models, Gemini 1.5 Pro and 1.5 Flash, is the amount of data they’re supposed to be able to process and analyze. In press briefings and demonstrations, Google has repeatedly claimed that the models can accomplish previously impossible tasks thanks to their “long context,” such as summarizing documents hundreds of pages long or searching across scenes in film footage.
But new research suggests that the models are not, in fact, very effective in these areas.
Two separate studies examined how well Google’s Gemini models, among others, make sense of enormous amounts of data (think works the length of “War and Peace”). Both found that Gemini 1.5 Pro and 1.5 Flash struggled to answer questions about large datasets correctly; in a series of document-based tests, the models gave the right answer only 40% to 50% of the time.
“While models like Gemini 1.5 Pro can technically handle long contexts, we’ve seen many cases where the models don’t actually ‘understand’ the content,” Marzena Karpinska, a postdoctoral fellow at UMass Amherst and co-author of one of the studies, told TechCrunch.
Gemini’s context window comes up short
A model’s context, or context window, refers to the input data (e.g., text) that the model considers before generating output (e.g., additional text). A simple question — “Who won the 2020 US presidential election?” — can serve as context, just like a movie script, show, or audio clip. And as context windows expand, so does the size of documents inserted into them.
The latest versions of Gemini can take up to 2 million tokens as context. (“Tokens” are subdivided pieces of raw data, such as the syllables “fan,” “tas,” and “tic” in the word “fantastic.”) That’s about 1.4 million words, two hours of video, or 22 hours of audio—the most context of any commercially available model.
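For a concrete sense of what those figures mean in practice, here is a minimal sketch of counting how many tokens a document consumes, assuming Google’s google-generativeai Python SDK; the file name and API key are placeholders.

```python
# Rough sketch: checking how much of Gemini's context window a document would use,
# assuming the google-generativeai Python SDK. Model name, file, and key are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical key
model = genai.GenerativeModel("gemini-1.5-pro")

with open("war_and_peace.txt", encoding="utf-8") as f:
    text = f.read()

count = model.count_tokens(text)
print(f"Document uses {count.total_tokens:,} of the ~2,000,000-token context window")
```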
At a press briefing earlier this year, Google showed off several pre-recorded demos meant to illustrate the potential of Gemini’s long-context capabilities. One had Gemini 1.5 Pro search the transcript of the Apollo 11 moon landing telecast (about 402 pages) for quotes containing jokes, and then find a scene in the broadcast that looked like a pencil sketch.
Oriol Vinyals, vice president of research at Google DeepMind, who led the briefing, described the model as “magical.”
“[1.5 Pro] does these kinds of reasoning tasks on every page, every word,” he said.
Perhaps that was an exaggeration.
In one of the aforementioned studies assessing these abilities, Karpinska, working with researchers at the Allen Institute for AI and Princeton, asked models to evaluate true/false statements about fiction books written in English. The researchers chose recent works so that the models could not “cheat” by relying on prior knowledge, and they peppered the statements with references to specific details and plot points that would be impossible to understand without reading the books in their entirety.
Given a statement like “Using his Apoth skills, Nusis is able to reverse engineer the type of portal opened by the Reagents Key found in Rona’s wooden chest”, Gemini 1.5 Pro and 1.5 Flash – after ingesting the corresponding book – had to say whether the statement was true or false and explain their reasoning.
Tested on a book of about 260,000 words (~520 pages), the researchers found that 1.5 Pro answered the true/false statements correctly 46.7% of the time, while Flash answered correctly only 20% of the time. That means a coin flip would answer questions about the book more accurately than Google’s latest machine learning models. Averaging across all the benchmark results, neither model managed better-than-chance accuracy.
“We noticed that the models have a harder time verifying claims that require considering larger parts of the book, or even the entire book, than those that can be solved by retrieving sentence-level evidence,” Karpinska said. “Qualitatively, we also observed that the models have a harder time verifying claims about implicit information that is clear to a human reader but not explicitly stated in the text.”
The second of the two studies, co-authored by researchers at UC Santa Barbara, tested the ability of Gemini 1.5 Flash (but not 1.5 Pro) to “reason” about videos—that is, to search for and answer questions about their content.
The co-authors created a dataset of images (e.g., a photo of a birthday cake) paired with questions for the model to answer about the objects depicted in them (e.g., “Which cartoon character is on this cake?”). To evaluate the models, they picked one of the images at random and inserted “distractor” images before and after it to create slideshow-style sequences.
Flash did not perform well. In a test where the model had to transcribe six handwritten digits from a “slideshow” of 25 images, Flash got about 50% of the transcriptions right. Accuracy dropped to around 30% with eight digits.
“On real picture-question-answering tasks, this seems to be particularly difficult for all the models we tested,” Michael Saxon, a doctoral student at UC Santa Barbara and one of the study’s co-authors, told TechCrunch. “That little bit of reasoning — recognizing that a number is in a frame and reading it — might be what breaks the model.”
Google promises too much with Gemini
Neither study has been peer-reviewed, nor do they examine versions of Gemini 1.5 Pro and 1.5 Flash with 2 million token contexts. (Both tested the 1 million token context versions.) And Flash isn’t supposed to be as good as Pro in terms of performance; Google is pitching it as a low-cost alternative.
However, both add fuel to the claim that Google has over-promised – and under-delivered – with Gemini from the start. None of the models the researchers tested, including OpenAI’s GPT-4o and Anthropic’s Claude 3.5 Sonnet, performed well. But Google is the only model provider to give the context window top billing in its ads.
“There’s nothing wrong with simply saying, ‘Our model can take X number of tokens’ based on objective technical details,” Saxon said. “But the question is: What can you do useful with that?”
Generative AI, as a whole, is coming under increasing scrutiny as companies (and investors) grow increasingly frustrated with the technology’s limitations.
In two recent surveys from Boston Consulting Group, about half of the respondents, all senior executives, said they did not expect generative AI to bring substantial productivity gains and were concerned about the risk of errors and data compromises arising from generative AI-powered tools. PitchBook recently reported that early-stage generative AI dealmaking has declined for two consecutive quarters, falling 76% from its peak in Q3 2023.
Faced with meeting-summarizing chatbots that conjure up fictional details about people and AI search platforms that amount to plagiarism generators, customers are looking for promising differentiators. Google, which has raced, at times clumsily, to catch up with its generative AI rivals, desperately wanted to make Gemini’s context one of those differentiators.
But the bet was apparently premature.
“We haven’t found a way to really show that ‘reasoning’ or ‘understanding’ over long documents is happening, and basically every group publishing these models is tinkering with their own ad hoc evaluations to make these claims,” Karpinska said. “Without knowing how long contextual processing is implemented — and companies don’t share those details — it’s hard to say how realistic these claims are.”
Google did not respond to a request for comment.
Saxon and Karpinska argue that antidotes to the exaggerated claims around generative AI are better benchmarks, and, along those lines, a greater emphasis on third-party reviews. Saxon notes that one of the most common tests for long context (typically cited by Google in its marketing materials), the “needle in the haystack,” only measures a model’s ability to retrieve specific pieces of information, like names and numbers, from data sets—not to answer complex questions about that information.
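To illustrate why retrieval-style tests set a low bar, here is a minimal sketch of a needle-in-the-haystack-style check; it is not the benchmark Google cites, and it reuses the assumed google-generativeai SDK with placeholder values.

```python
# Illustrative sketch of a "needle in the haystack"-style retrieval check, not the
# benchmark Google cites; assumes the google-generativeai SDK and a placeholder key.
import random
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical key
model = genai.GenerativeModel("gemini-1.5-flash")

needle = "The secret passcode is 7421."
haystack = ["Paragraph of filler text."] * 5000           # long, irrelevant context
haystack.insert(random.randrange(len(haystack)), needle)  # bury the fact at a random depth

prompt = "\n".join(haystack) + "\n\nWhat is the secret passcode?"
response = model.generate_content(prompt)

# Retrieval-style scoring: the test only asks whether the planted fact was surfaced,
# not whether the model can answer complex questions about the surrounding text.
print("retrieved" if "7421" in response.text else "missed")
```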
“All the scientists and most of the engineers who use these models essentially agree that our current benchmark culture is broken,” Saxon said, “so it’s important for the public to understand that these giant reports containing numbers like ‘general intelligence across benchmarks’ should be taken with a considerable grain of salt.”