On current AI and non-verifiable tasks

Dario Amodei recently admitted in his interview with Dwarkesh Patel that he’s unsure whether current AI approaches will ever achieve good results on non-verifiable tasks, such as “writing a novel or planning an expedition to Mars”.

(Verifiable tasks are those, like coding or mathematics, where an input, such as a piece of code or a mathematical proof, maps to a concrete result that can then be checked against an expected output.)
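To make the coding case concrete, here is a minimal sketch of what verification looks like in practice: a candidate solution is run against known input/output pairs, and passing every pair counts as success. The task, function names, and test cases below are illustrative, not drawn from the interview.

```python
# Minimal sketch of a "verifiable" coding task: a candidate solution
# can be mechanically checked against expected outputs.
# All names and cases here are hypothetical illustrations.

def candidate_sort(xs):
    """A model-generated solution to the task 'sort a list of numbers'."""
    return sorted(xs)

def verify(solution, test_cases):
    """Return True iff the solution matches the expected output on every case."""
    return all(solution(inp) == expected for inp, expected in test_cases)

cases = [([3, 1, 2], [1, 2, 3]), ([], []), ([5, 5], [5, 5])]
print(verify(candidate_sort, cases))  # a correct solution passes every case
```

Writing a novel has no analogue of `cases`: there is no expected output to compare against, which is precisely what makes the task non-verifiable.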

Amodei holds that the now proverbial “country of geniuses in a datacenter” will nevertheless be online within two or three years, implying AI that generalizes to an immense variety of tasks across the economy and the sciences.

If Amodei’s fears nonetheless come true and the current direction of AI does not generalize to non-verifiable tasks, then logically the only way we could get a “country of geniuses” would be if a great variety of tasks in the economy and the sciences prove to be verifiable.

It’s hard to imagine the latter being true: one can conceive of many tasks that will prove very hard to verify. In a (perhaps ample) subset of these, AI will be able to intervene regardless, since accuracy, taste, trust, or instinct (e.g., operating with imperfect context) may not be critical. In many other non-verifiable tasks, where the bar need not be as high as writing a novel or planning an expedition to Mars, the results may nevertheless turn out anywhere between disappointing and unacceptable.