In the filings, Anthropic states, as reported by the Washington Post: “Project Panama is our effort to destructively scan all the books in the world. We don’t want it to be known that we are working on this.”
It was Meta’s Llama 3.1, and only 42% of the first Harry Potter book.
https://arstechnica.com/features/2025/06/study-metas-llama-3-1-can-recall-42-percent-of-the-first-harry-potter-book/
The claim was “Yet most AI models can recite entire Harry Potter books if prompted the right way, so that’s all bullshit.”
In that study, they did not get a model to produce an entire book with any prompt.
A passage only counted as memorized if the model could reproduce it 50 tokens (so, fewer than 50 words) at a time.
Even then, they didn’t ACTUALLY generate these sequences; the authors themselves admit it would not be feasible to generate some of these 50-token (which is, at most, 50 words, by the way) sequences:
For context: these two sentences are 46 tokens / 210 characters, per https://platform.openai.com/tokenizer.
50 tokens is just about two sentences. This comment is about 42 tokens itself.
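To get a feel for how short 50 tokens is without running the actual tokenizer, a common rule of thumb (an approximation, not the real BPE tokenizer linked above) is roughly 4 characters per token for English text. A minimal sketch:

```python
# Heuristic only: OpenAI-style BPE tokenizers average roughly
# 4 characters per token on ordinary English prose. This is NOT
# the real tokenizer, just a back-of-the-envelope estimate.
def estimate_tokens(text: str) -> int:
    """Estimate token count via the ~4 chars/token rule of thumb."""
    return max(1, round(len(text) / 4))

# A 210-character passage (the length of the two quoted sentences)
# estimates to ~52 tokens, in the same ballpark as the 46 tokens
# the OpenAI tokenizer reports.
sample = "x" * 210  # stand-in for a 210-character passage
print(estimate_tokens(sample))
```

So a “successful extraction” in the study is on the order of two sentences, not a chapter, let alone a book.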