System 3 thinking

I’ve been thinking for a while that there’s a piece missing from LLMs. There are hints that this hole might soon be filled, and it could drive the next leg up in AI capabilities.

Many people have observed that LLMs, for all their abilities, seem to lack “spark”. The new reasoning models are remarkably good at a certain kind of knowledge-based problem solving, based on chaining together obscure facts, but they don’t seem to show the novel creative insights that characterize top human solutions. It’s somewhat reminiscent of the Deep Blue era in computer chess: the models approach problems in a grind-it-out kind of way. Humans sometimes do this too, but also have some other mode which the models seem to lack.

Will this just fall out of further scaling? Or do we need some new ideas? While I am very bullish on scaling, I also think ideas are going to matter.

More

AI fiction

Science fiction stories are a lot more important than Serious People will admit. Most of us are at some level aiming towards or away from things we read as teenagers. Here are a few stories that live in my head as we’re watching the birth of AI:

More

Where is the world rich in feedback?

As Ilya said at NeurIPS, we only have one internet. Once the fossil fuel of existing human-generated data has been consumed, further AI progress requires new sources of information. Broadly speaking, this can happen in two ways: search against a verifier, which trades compute for information, or through direct observation and interaction with the world. So if you want to predict medium-term AI progress, ask “Where is the world rich in feedback?”

There are two major dimensions for scaling AI: pre-training and inference-time scaling[1]. The most recent cycle of AI progress has been driven by inference-time scaling (OpenAI o1). These models are trained with reinforcement learning (RL), which requires a reward signal. Reward is much easier to specify in some domains than others, which is why o1 shows huge performance gains in math and code (domains with verifiable right answers) but little to no improvement in more subjective areas such as writing.

Scaling pre-training with synthetic data is almost the same problem: for generated data to be a net win, you need some quality signal to filter it. So essentially all AI progress now turns on the availability of scalable quality/reward signals[2].
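The "search against a verifier" idea can be sketched as rejection sampling: spend compute drawing many candidates, keep only the ones a checker accepts. Everything below is a toy stand-in, not anything from the post itself: `noisy_model` and the arithmetic `verifier` are hypothetical placeholders for an LLM and a domain checker (a unit-test runner, a proof checker, and so on).

```python
import random

def verifier(problem, answer):
    # Hypothetical reward signal: exact arithmetic checking here, standing
    # in for any domain with verifiable right answers (math, code).
    a, b = problem
    return answer == a + b

def noisy_model(problem):
    # Hypothetical stand-in for an LLM sampling candidate answers;
    # it is right only about half the time.
    a, b = problem
    return a + b + random.choice([-1, 0, 0, 1])

def search_against_verifier(problem, samples=32):
    """Trade compute (many samples) for information (a verified answer)."""
    for _ in range(samples):
        candidate = noisy_model(problem)
        if verifier(problem, candidate):
            return candidate  # only verified outputs become training data
    return None  # no verified answer found within the sample budget

random.seed(0)
kept = [search_against_verifier((i, i + 1)) for i in range(100)]
print(sum(x is not None for x in kept), "problems yielded a verified answer")
```

The filtered outputs are exactly the "quality signal" the paragraph above asks for: without the verifier, the model's raw samples would add as much noise as signal.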

A skewed pattern of AI progress is therefore likely to persist in the medium term. The graph below shows some areas with faster and cheaper sources of reward; those seem like good bets for where AI will move fastest[3].

[Figure: Where is the world rich in feedback?]

I’ll dig into a few of these areas, and then speculate about how some problems with less precise reward might be addressed.

More

Investing for the Singularity

The previous post looked at how you might invest for a scenario where AI can do most white-collar work (AGI, roughly speaking), without being broadly superhuman (ASI). However, a short[1] transition from AGI to ASI seems plausible, even likely under certain conditions. My focus on the simpler AGI scenario is partly a case of looking for the keys under the streetlight. So in this post I’d like to think through how to invest for a full ASI scenario.

More

Investing for the AI transition

At this point it seems basically certain that AI will be a major economic transition. The only real question is how far it goes and how fast. In a previous essay I talked through four scenarios for what the coming few years might look like. In this essay I want to think through how to invest against those scenarios.

More

How much LLM training data is there, in the limit?

Recent large language models such as Llama 3 and GPT-4 are trained on gigantic amounts of text, and next-generation models will need roughly 10x more. Will that be possible? To try to answer that, here’s an estimate of all the text that exists in the world.
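To get a sense of scale, here is a back-of-envelope sketch of how token demand compounds under a "10x per generation" assumption. The ~15T-token starting point is my own assumption, in the ballpark of what Llama-3-scale models reportedly used, not a figure from the post.

```python
# Back-of-envelope: training-data demand if each model generation
# needs ~10x the tokens of the last. Starting point of ~15T tokens
# is an assumed, approximate figure for current frontier models.
tokens = 15e12
for gen in range(1, 4):
    tokens *= 10
    print(f"generation +{gen}: {tokens:.1e} tokens")
```

Three generations out, demand is in the quadrillions of tokens, which is why the question of how much text exists in total starts to bind.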

More

Revised Chinchilla scaling laws – LLM compute and token requirements

There’s a nice blog post from last year called Go smol or go home. If you’re training a large language model (LLM), you need to choose a balance between training compute, model size, and training tokens. The Chinchilla scaling laws tell you how these trade off against each other, and the post was a helpful guide to the implications.

A new paper shows that the original Chinchilla scaling laws (from Hoffmann et al.) have a mistake in the key parameters. So below I’ve recalculated some scaling curves based on the corrected formulas.
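For a feel of the trade-off, here is a minimal sketch of the compute-optimal split using the standard C ≈ 6·N·D approximation together with the well-known Chinchilla rule of thumb of roughly 20 training tokens per parameter. The corrected fits shift the exact constants, so treat this as illustrative only, not the recalculation from the post.

```python
def chinchilla_allocation(compute_flops, tokens_per_param=20.0):
    """Split a FLOP budget C between parameters N and tokens D.

    Uses the standard approximation C ~= 6*N*D plus the Chinchilla
    rule of thumb of ~20 tokens per parameter. Solving
    6 * N * (20 * N) = C gives N = sqrt(C / 120).
    """
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Chinchilla's own budget, ~5.88e23 FLOPs, recovers roughly its
# published recipe of 70B parameters trained on 1.4T tokens.
n, d = chinchilla_allocation(5.88e23)
print(f"{n:.2e} params, {d:.2e} tokens")
```

Plugging in a larger budget shows why bigger training runs are so token-hungry: at a fixed tokens-per-parameter ratio, data requirements grow with the square root of compute.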

More

On the ordering of miracles

AI is starting to arrive. Those close to the action have known this for a while, but almost everyone has been surprised by the precise order in which things occurred. We now have remarkably capable AI artists and AI writers. They arrived out of a blue sky, displaying a flexibility and finesse that was firmly in the realm of science fiction even five years ago. Other grand challenge problems like protein folding and Go also fell at a speed that took experts by surprise. Meanwhile, seemingly simpler mechanical tasks like driving a car remain out of reach of our best systems, despite 15+ years of focused, well-funded efforts by top-quality teams.

What gives? Why does AI seem to race ahead on some problems, while remaining stuck on others? And can we update our understanding, so that we’re less surprised in future?

More