Dwarkesh - Amodei interview notes
The recent Dwarkesh Patel interview with Dario Amodei is the best AI conversation I’ve heard in a long while.
Dwarkesh presses firmly on a variety of intelligently chosen hot topics, and in the details you can see cracks in some of the narratives pushed by generalist media, business leaders, and “AI LinkedIn/Twitter”. Worth your time.
I expect to write more on this later; for now, here are some raw notes with my reactions [in brackets]:
- Amodei: scaling laws are still in full force, in both pretraining and Reinforcement Learning; RL is the more attractive direction right now.
- Training on verifiable tasks through RL: Dwarkesh presses that this isn’t how human intelligence generalizes.
- Amodei agrees AI lacks some human elements, but believes it will generalize anyway.
- “Country of geniuses in a datacenter”: 2-3 years away, whether by current means or by new techniques.
- Dwarkesh presses: where are the effects on the economy? Brings up diffusion, i.e., the natural lag due to bureaucracy, reskilling, culture, etc.; points out it’s sometimes an excuse.
- Amodei counters that Anthropic’s growth shows capabilities are spreading into the economy, but admits 10x revenue growth will be hard to sustain even this year.
- Amodei admits he’s unsure AI will work for non-verifiable domains like writing a novel or planning a Mars expedition [isn’t this crucial for AGI?!].
- Claims “SWE jobs will disappear” is coming true; nearly all Anthropic code is AI-generated [the latter tracks with my experience: ~99% of my code is AI-generated, and many devs claim the same].
- Splits the prediction into milestones: (1) AI writes code; (2) AI does the whole SWE job. Still thinks (2) is on track within a couple of years [I’m doubtful: requirements and architecture aren’t obviously verifiable; potentially a non-sequitur].
- First AI does 90% of SWE work, then soon 100% [the last 10% may be the hardest, and not coming any time soon].
- Dwarkesh: coding may be “special” because the context lives in the code itself. Amodei dismisses this: context will be acquired somehow in other domains too.
- Dwarkesh prefers to hire human video editors because they can learn context and acquire taste in 6 months; AIs can’t yet. Amodei retorts that the “country of geniuses” will conquer all.
- Anthropic is researching Continual Learning; someone will crack it soon, but it may not be necessary for AGI [wishy-washy].
- Anthropic currently sees a 20% speed-up in engineering (up from 5% a year ago) [far from the hype by the Exalted; also matches my experience]. Amodei says this will continue to scale [maybe, maybe not: I can see some potential improvements on end-to-end tasks; others remain hard].
- Economics: interesting framing on training vs. inference, and why revenue lags. Differentiation, moats, and protecting margins [still fuzzy on many questions].
- Politics/ethics: Amodei worries about power concentrating; supports export controls to keep democracies ahead; says he thinks deeply about preventing oppressive uses [China already has an AI oppression apparatus…].