Where is the world rich in feedback?

As Ilya said at NeurIPS, we only have one internet. Once the fossil fuel of existing human-generated data has been consumed, further AI progress requires new sources of information. Broadly speaking, this can happen in two ways: search against a verifier, which trades compute for information, or direct observation of and interaction with the world. So if you want to predict medium-term AI progress, ask “Where is the world rich in feedback?”

There are two major dimensions for scaling AI: pre-training and inference-time scaling. The most recent cycle of AI progress has been driven by inference-time scaling (OpenAI's o1). These models are trained with reinforcement learning (RL), which requires a reward signal. Reward is much easier to specify in some domains than others, which is why o1 shows huge performance gains in math and code (domains with verifiable right answers) but little to no improvement in more subjective areas such as writing.

Scaling pre-training with synthetic data is almost the same problem: for generated data to be a net win, you need some quality signal to filter it. So essentially all AI progress now turns on the availability of scalable quality/reward signals.
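To make the “quality signal” point concrete, here is a minimal sketch of search against a verifier. The generator and verifier below are toy stand-ins of my own (a real setup might sample model outputs and run unit tests or a proof checker); the point is just that whatever survives the check can serve either as an RL reward or as filtered synthetic training data.

```python
# Sketch: trade compute for information by generating many candidates
# and keeping only the ones a programmatic verifier accepts.
from typing import Callable, Iterable, List


def filter_with_verifier(
    candidates: Iterable[str],
    verifier: Callable[[str], bool],
) -> List[str]:
    """Keep only candidates the verifier accepts.

    Survivors can be used as RL reward (accept = reward 1)
    or as filtered synthetic data for further pre-training.
    """
    return [c for c in candidates if verifier(c)]


if __name__ == "__main__":
    # Toy verifier: does a generated "solution" evaluate to the right answer?
    problems = {"2 + 2": 4, "3 * 7": 21}

    def verifier(candidate: str) -> bool:
        expr, _, claimed = candidate.partition(" = ")
        try:
            return expr in problems and int(claimed) == problems[expr]
        except ValueError:
            return False

    # Pretend these were sampled from a model, several attempts per problem.
    samples = ["2 + 2 = 4", "2 + 2 = 5", "3 * 7 = 21", "3 * 7 = 24"]
    print(filter_with_verifier(samples, verifier))  # ['2 + 2 = 4', '3 * 7 = 21']
```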

A skewed pattern of AI progress is therefore likely to persist in the medium term. The graph below shows some areas that have faster and cheaper sources of reward. Those seem like good bets for where AI will move fastest.

I’ll dig into a few of these areas, and then speculate about how some problems with less precise reward might be addressed.

More

Investing for the Singularity

The previous post looked at how you might invest for a scenario where AI can do most white-collar work (AGI, roughly speaking), without being broadly superhuman (ASI). However, a short transition from AGI to ASI seems plausible, even likely under certain conditions. My focus on the simpler AGI scenario is partly a case of looking for the keys under the streetlight. So in this post I’d like to think through how to invest for a full ASI scenario.

More

Investing for the AI transition

At this point it seems basically certain that AI will be a major economic transition. The only real question is how far it goes and how fast. In a previous essay I talked through four scenarios for what the coming few years might look like. In this essay I want to think through how to invest against those scenarios.

More

How much LLM training data is there, in the limit?

Recent large language models such as Llama 3 and GPT-4 are trained on gigantic amounts of text. Next-generation models will need roughly 10x more. Will that be possible? To try to answer that, here’s an estimate of all the text that exists in the world.

More

Revised Chinchilla scaling laws – LLM compute and token requirements

There’s a nice blog post from last year called Go smol or go home. If you’re training a large language model (LLM), you need to choose a balance between training compute, model size, and training tokens. The Chinchilla scaling laws tell you how these trade off against each other, and the post was a helpful guide to the implications.

A new paper shows that the original Chinchilla scaling laws (from Hoffmann et al.) have a mistake in the key parameters. So below I’ve recalculated some scaling curves based on the corrected formulas.
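For context, both the original and corrected analyses fit the same parametric loss; only the fitted constants change. As a sketch (the constants E, A, B, α, β below are placeholders for the fitted values, not the corrected numbers themselves):

```latex
% Chinchilla parametric loss in model size N and training tokens D;
% E, A, B, \alpha, \beta are fitted constants.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

% With the usual approximation C \approx 6ND, minimising L at fixed compute C gives
N_{\mathrm{opt}}(C) \propto C^{\frac{\beta}{\alpha+\beta}}, \qquad
D_{\mathrm{opt}}(C) \propto C^{\frac{\alpha}{\alpha+\beta}}
```

Plugging the corrected constants into these expressions is what changes the recommended compute/parameter/token balance relative to Hoffmann et al.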

More

On the ordering of miracles

AI is starting to arrive. Those close to the action have known this for a while, but almost everyone has been surprised by the precise order in which things occurred. We now have remarkably capable AI artists and AI writers. They arrived out of a clear blue sky, displaying a flexibility and finesse that was firmly in the realm of science fiction even five years ago. Other grand challenge problems like protein folding and Go also fell at a speed that took experts by surprise. Meanwhile, seemingly simpler mechanical tasks like driving a car remain out of reach of our best systems, despite 15+ years of focused, well-funded efforts by top-quality teams.

What gives? Why does AI seem to race ahead on some problems, while remaining stuck on others? And can we update our understanding, so that we’re less surprised in future?

More

How to win a fight with God

The breeze whispers of a transformation, a time of great trial and tribulation for mankind. We face an adversary whose complexity is almost unimaginable. Its vast computing power makes the human brain look like a toy. Worse still, it wields against us a powerful nanotechnology, crafting autonomous machines out of thin air. Already, more than one percent of the Earth is under its sway. I’m talking of course about the mighty Amazon rainforest.

But humans long ago learned to live with our ancient enemies, the plants and the animals. Perhaps we can draw some lessons on how to navigate our new friends, the AIs, who may be arriving any day now.

The rainforest is not actively trying to kill us, at least not most of the time. It is locked into a fierce struggle with itself, deploying its vast resources in internal competition. As a side effect, it produces large amounts of oxygen, food and other ecosystem services that are of great benefit to humanity. But the rainforest doesn’t like us, doesn’t care about us, mostly doesn’t even notice us. It exists in a private hell of hyper-competition, honed to a sharp point by the lathe of selection turning for a billion years.

So here is one model for AI safety. Don’t hope for direct control. Don’t dream of singletons. Instead, design a game that locks the AIs in a competition we don’t care about, orthogonal to the human world, perhaps with a few ecosystem services thrown our way as a side-effect. We will only collect scraps from the table of the Gods. But while the Gods are busy with their own games, we can get on with ours.

I think of the Irish proverb: What’s as big as half the Moon? Answer: The other half. Good advice for fighting with God.


Written with assistance from ChatGPT :-)

Google+ Archive

Google is shutting down Google+ in the next few days. I’m archiving my G+ stream here for posterity.

Over 7 years I posted exactly 1,001 times to Google+. I know it was never an especially popular social network, but somewhat to my surprise I found that I enjoyed it a lot. There was a strong set of people on G+ interested in deep learning, robotics and related topics, at least for a period of several years. The unlimited post length meant you could have meaningful conversations in a way you couldn’t on Twitter. For me, Google+ was a thoughtful place, with high quality people, interesting content and meaningful discussion. The fact that it was a small, ignored community mostly interested in technical topics provided the conditions for that.

I have now reluctantly moved to Twitter, and I also have this blog for occasional long-form content. Twitter is (alas) not much of a substitute for G+. No matter how carefully I curate who I follow, my Twitter stream is invariably full of political anger and culture wars. I am as susceptible to this as anyone else, and a Twitter session is a pretty sure way to make me feel angry and unhappy. I very much wish there were a way to turn down the emotion in my Twitter feed. Unfortunately, I have yet to find that setting. Nevertheless, I do still get some useful technical and professional news from Twitter, so I will likely stick with it and accept the unhappiness tax it imposes.

So RIP G+, you were not much loved by most, but I will miss you.

Pointy

Two and a half years ago I left Google and set out to build a new kind of search engine. This may sound a little crazy, but all the best things are like that :-)

We’ve been avoiding the tech press and trying to build things quietly, but this week we launched our user-facing app. I’m really proud of what the team has built, so it’s exciting to finally be able to say a bit more about it.

The problem we’ve been working on is finding specific items locally. For example, a light bulb just broke and it’s a strange fitting – where’s the nearest place you can get a new one? Or you’re halfway through a recipe and realise you’re missing an ingredient – where do you get it?

More