There is a new paper by Daron Acemoglu, and he is skeptical about AI's overall economic effects. Here is part of the abstract:
Using existing estimates of AI exposure and task-level productivity improvements, these macroeconomic effects appear nontrivial but modest: no more than a 0.71% increase in total factor productivity over 10 years. The paper then argues that even these estimates may be inflated, because the early evidence comes from easy-to-learn tasks, whereas some of the future effects will come from hard-to-learn tasks, where many context-dependent factors affect decision-making and there are no objective outcome measures from which to learn successful performance. Consequently, projected TFP gains over the next 10 years are even more modest and are expected to be less than 0.55%.
Note that he is not suggesting that TFP (total factor productivity, a measure of innovation) will rise by 0.71 percentage points a year (a plausible estimate, in my view); he is saying it will rise by 0.71% over a period of ten years, or roughly 0.07% per year. Here is his explanation of the method:
I show that when the microeconomic effects of AI are driven by cost savings (equivalently, productivity improvements) at the task level – due either to automation or task complementarities – its macroeconomic consequences will be given by a version of Hulten’s theorem: aggregate GDP and productivity gains can be estimated based on the fraction of tasks affected and average savings at the task level. This equation disciplines all the effects of AI on GDP and productivity. Despite its simplicity, the application of this equation is far from trivial, as there is great uncertainty about which tasks will be automated or complemented, and what the cost savings will be.
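The Hulten-style aggregation described in that passage is a simple multiplication, which a short sketch makes concrete. The input numbers below are illustrative placeholders, not the paper's actual figures:

```python
# Back-of-envelope sketch of the Hulten-style aggregation described above:
# aggregate TFP gain ≈ fraction of tasks affected × average task-level cost saving.
# The inputs here are hypothetical, chosen only to land near the 0.71% figure.

def hulten_tfp_gain(task_share: float, avg_cost_saving: float) -> float:
    """Aggregate TFP gain from the affected-task share and average savings."""
    return task_share * avg_cost_saving

# Suppose, hypothetically, 5% of tasks see an average 14% cost saving:
gain = hulten_tfp_gain(0.05, 0.14)
print(f"Aggregate TFP gain over the horizon: {gain:.4%}")  # → 0.7000%

# Spread over ten years via geometric annualization:
annual = (1 + gain) ** (1 / 10) - 1
print(f"Approximate annual TFP gain: {annual:.4%}")
```

The second step is the arithmetic behind the "0.07% per year" point: a 0.7% cumulative gain annualizes to roughly 0.07% per year.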
For the most part I think this paper is wrong, and wrong for economic reasons. It is not that I think the estimate is off; I think the method is deeply misleading.
As with international trade, a large part of the benefits of AI will come from eliminating the least productive firms from the distribution. This factor is never taken into account.
And as with international trade, much of the benefit of AI will come from “new goods.” Since the prices of these new goods were previously infinite (noting that the degree of substitutability matters here), those gains can be much larger than what we reap from incremental productivity improvements. The very popular Character.ai is already one of those new goods, not to mention that I and many others enjoy playing with LLMs nearly every day.
By the way, the basic model of the paper – see pages 6 and 7 – posits only a single good for the economy. Mention of the contrary case appears on page 11, and again starting on page 19, where most of the attention is devoted to bad new products, such as more effective manipulation of consumers. Note that the paper contains no empirical argument for why most new AI products might be bad for social welfare.
Pages 34 and 35 focus on the possibility of a collective action problem for the use of AI, similar to what has been suggested for social media. That discussion seems far removed from current AI practice and from most of the speculation by AI experts. Do I use Midjourney only because all my friends do, and wish the whole thing did not exist? Or do I simply find it great fun, as many people do when they create their own songs with AI? I am doubtful about placing so much emphasis on Prisoner’s Dilemma-type effects, but Acemoglu returns to this point with considerable force in the conclusion.
Towards the end he writes:
Productivity improvements from new tasks are not incorporated into my estimates. This is for three reasons. First, and most parochially, these are much harder to measure and are not included in the types of exposure considered in Eloundou et al. (2023) and Svanberg et al. (2024). Second, and more importantly, I believe it is right not to include them among the likely macroeconomic effects, as these are not the areas currently receiving attention from the industry, as also argued in Acemoglu (2021), Acemoglu and Restrepo (2020b) and Acemoglu and Johnson (2023). The priority areas for the tech industry appear instead to center on automation and online monetization, for example through digital search or digital ads on social media. Third, and relatedly, more beneficial outcomes may require new institutions, policies and regulations, as also suggested by Acemoglu and Johnson (2023) and Acemoglu et al. (2023).
While many points in that paragraph strike me as flatly wrong (such as the claim about industry focus), what he cannot bring himself to say is that the gains from these new tasks will be minimal. Because they won’t be. But whether you agree or not, what is going on in the paper is that the estimated AI gains are so small because it is assumed that AI will not do new things. I just don’t see why such an exercise is worth doing.
A more general question is whether this model could ever predict large TFP changes from any technology. I am fairly sure the answer is “no,” not by a long shot.
More generally, I found this sentence (p. 4) very strange: “…my framework also clarifies that what is relevant for consumer welfare is TFP, rather than GDP, since the additional investment comes out of consumption.” I would say that what is relevant for consumer welfare is the sum of consumer and producer surplus, of which TFP is not a sufficient statistic. Perhaps this “redefining all of welfare economics in a single sentence” stems from how many other gains from trade he has stripped out of the system? Footnote 6 is also strange, and also wrong: “For example, if AI models continue to increase their energy requirements, this would contribute to measured GDP, but would not constitute a welfare-enhancing change.” Even for dirty energy this could be a mistake, all the more so for green energy. If an innovation induces the market to invest more in a service, the costs of that additional investment do not simply cancel out the gains. And if Acemoglu wants to argue that this strange welfare economics holds within his model, that is a good argument against his model, not a good argument that such gains would not matter in the real world, which is what the paper is about.
Acemoglu explicitly excludes gains from better science, because they might not be realized within ten years. On this point he is a prisoner of his own assumptions. If a great deal of progress arrives in, say, years 10-15, I would simply say the paper is misleading, even if its words are defensible in a purely literal sense.
That said, how much does the “no new science” stipulation actually exclude? In terms of the model, how does “new science” differ from “TFP”? I am not sure, and we are not given clear guidance. Is better software engineering “new science”? Maybe so? Won’t we get a lot of that within ten years? Don’t we have some of it already?
In sum, I don’t think this paper establishes the “small gains” conclusion that it promotes in its abstract.
It is entirely fair to point out that the optimists have not yet demonstrated big gains, but in this paper the deck is entirely, and unfairly, stacked in the opposite direction.
The post “The Simple Macroeconomics of AI” appeared first on Marginal REVOLUTION.