From the tweet (minus pictures):
Language models are bad at basic math.
GPT-4's accuracy on 5-digit multiplication is right around 0%.
Most open models can’t even add. Why is that?
There are a few reasons why numbers are hard. The main one is tokenization. When training a tokenizer from scratch, you take a large corpus of text and learn the byte-pair encoding that best compresses it into a chosen vocabulary size.
This means, however, that numbers will almost certainly not have unique token representations. “21” could be a single token or [“2”, “1”]. “143” could be [“143”], [“14”, “3”], or any other combination.
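You can see this for yourself in a couple of lines of Python (assuming you have the Hugging Face `transformers` package installed; the exact splits depend entirely on which tokenizer you load):

```python
from transformers import AutoTokenizer

# GPT-2 is just a convenient example; any BPE tokenizer shows the same issue.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

for text in ["21", "143", "12345", " 143"]:
    print(f"{text!r} -> {tokenizer.tokenize(text)}")

# The same digits can land in one token or several, and the grouping can
# change as soon as the number appears in a different context (e.g. after a space).
```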
A potential fix here would be to force single-digit tokenization. The state of the art for the last few years is to inject a space between every digit when creating the tokenizer and when running the model. This means 143 would always be tokenized as [“1”, “4”, “3”].
This helps boost performance, but wastes tokens while not fully fixing the problem.
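If you want to play with the idea, the space-injection trick above is just a preprocessing step. A rough sketch (the regex and function name are mine; a real setup would apply the same transform both when training the tokenizer and at inference):

```python
import re

def separate_digits(text: str) -> str:
    """Insert a space between consecutive digits so each digit becomes its own token."""
    return re.sub(r"(?<=\d)(?=\d)", " ", text)

print(separate_digits("adding 143 and 21"))  # -> "adding 1 4 3 and 2 1"
```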
A cool fix might be xVal! This work by The Polymathic AI Collaboration suggests replacing each number with a generic [NUM] token whose embedding is then scaled by the actual value of the number!
If you look at the red lines in the image above, you can get an intuition for how that might work.
It doesn’t capture a huge range or very high fidelity (e.g., distinguishing 7.4449 from 7.4448), but they showcase some pretty convincing results on sequence-prediction problems that are primarily numeric.
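Here is a minimal PyTorch sketch of how I understand the encoding side: every number is replaced by a single [NUM] token, and that token's embedding is multiplied by the (pre-normalized) value. The class and variable names, and the lack of any real normalization, are my simplifications, not the paper's code:

```python
import torch
import torch.nn as nn

class XValEmbedding(nn.Module):
    """Toy xVal-style number encoding: scale the [NUM] embedding by the value."""

    def __init__(self, vocab_size: int, d_model: int, num_token_id: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.num_token_id = num_token_id

    def forward(self, token_ids: torch.Tensor, values: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq) ints; values: (batch, seq) floats holding the
        # number at [NUM] positions (ignored everywhere else).
        x = self.embed(token_ids)  # (batch, seq, d_model)
        scale = torch.where(token_ids == self.num_token_id, values, torch.ones_like(values))
        return x * scale.unsqueeze(-1)  # scale only the [NUM] embeddings

# e.g. "temp is [NUM] degrees" with 21.5 at the [NUM] position (ids are made up):
ids = torch.tensor([[5, 8, 2, 9]])
vals = torch.tensor([[1.0, 1.0, 21.5, 1.0]])
emb = XValEmbedding(vocab_size=16, d_model=32, num_token_id=2)
print(emb(ids, vals).shape)  # torch.Size([1, 4, 32])
```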
For example, they train a sequence model for temperature forecasting conditioned on GPS coordinates.
They found a ~70x improvement over vanilla baselines and a 2x improvement over really strong baselines.
One cool side effect is that deep neural networks might be really good at regression problems using this encoding scheme!
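The decoding side is the mirror image: instead of spelling an answer out digit by digit, a small regression head can read a continuous value straight off the hidden state at a [NUM] position. Again, just a sketch with made-up names:

```python
import torch
import torch.nn as nn

class NumberHead(nn.Module):
    """Toy regression head: map hidden states at [NUM] positions to scalars."""

    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(d_model, 1)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, d_model) -> (batch, seq) predicted values;
        # train with e.g. MSE against the true numbers at [NUM] positions.
        return self.proj(hidden).squeeze(-1)
```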
Or just … put a calculator in the middle of the LLM and let it learn how to use it. Why are you jumping through so many hoops to help a language model learn an incredibly inefficient way to inaccurately do math in its head? The whole point of machines is to do things in ways that are better than what we do in our heads.
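For what it’s worth, the “calculator in the loop” option doesn’t need anything fancy. Here’s a toy sketch of the pattern; the CALC(...) convention and the `model_generate` interface are made up for illustration, not any real tool-calling API:

```python
import re

def calculator(expression: str) -> str:
    """Evaluate a simple 'a op b' integer expression exactly."""
    m = re.fullmatch(r"\s*(-?\d+)\s*([+\-*/])\s*(-?\d+)\s*", expression)
    if not m:
        return "error: unsupported expression"
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    results = {"+": a + b, "-": a - b, "*": a * b, "/": a / b if b else float("nan")}
    return str(results[op])

def answer_with_tool(model_generate, prompt: str) -> str:
    """Let the model emit CALC(<expr>) markers, then splice in exact results."""
    draft = model_generate(prompt)
    return re.sub(r"CALC\(([^)]*)\)", lambda m: calculator(m.group(1)), draft)

# A model trained/prompted to defer arithmetic might produce the draft below:
print(answer_with_tool(lambda p: "12345 * 67890 = CALC(12345 * 67890)", "multiply them"))
```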