- Argument: For most interesting tasks known to computer science, gaining a linear return in performance requires an exponentially greater investment of computing power. Most search spaces are exponentially vast, and the low-hanging fruit is exhausted quickly. Therefore, an AI investing an amount of cognitive work w to improve its own performance will see returns that go as log(w); if those returns are themselves reinvested, the sequence log(w), log(w + log(w)), log(w + log(w) + log(w + log(w))), ... will converge very quickly.
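The reinvestment sequence above can be sketched numerically. In this sketch, each round the AI adds its latest return log(total) back into its total cognitive work; the starting value w = 1000 is an arbitrary illustrative choice, not from the source.

```python
import math

def reinvested_returns(w, steps):
    """Iterate the diminishing-returns model: each round yields a
    return of log(total work so far), which is reinvested into the
    total before the next round."""
    total = w
    returns = []
    for _ in range(steps):
        r = math.log(total)   # this round's return on cognitive work
        returns.append(r)
        total += r            # reinvest the return
    return returns

rs = reinvested_returns(w=1000.0, steps=5)
# rs[0] = log(1000); successive returns barely grow, illustrating
# the rapid convergence claimed by the argument.
```

Because each reinvested return is only logarithmic in the accumulated total, successive terms differ by less than one percent here, which is the convergence behavior the argument asserts.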
- Counterargument: The history of hominid evolution to date shows that it has not required exponentially greater amounts of evolutionary optimization to produce substantial real-world gains in cognitive performance—it did not take ten times the evolutionary interval to go from Homo erectus to Homo sapiens as from Australopithecus to Homo erectus. All the compound returns on discoveries such as the invention of agriculture, or the invention of science, or the invention of computers, have accrued without any ability of humans to reinvest technological dividends to increase their brain sizes, speed up their neurons, or improve the low-level algorithms used by their neural circuitry. Since an AI can reinvest the fruits of its intelligence in larger brains, faster processing speeds, and improved low-level algorithms, we should expect an AI's growth curves to lie sharply above human growth curves.