The conversation around AI has settled into a predictable cycle: the announcement of a reality-altering feature from a new model, followed by a scientific study reminding us that AI is neither truly intelligent nor capable of reasoning, and may, in fact, be making us dumber. I should be upfront: I think AI models are great. I use them as much as I can, I try to learn with them, and I believe they will fundamentally transform how we work. In this essay, I’ll explain why.
The BIS has a Bulletin out on the usefulness of AI and large language models. They’re not terribly impressed.
When posed with a logical puzzle that demands reasoning about the knowledge of others and about counterfactuals, large language models (LLMs) display a distinctive and revealing pattern of failure.
The LLM performs flawlessly when presented with the original wording of the puzzle as it appears on the internet, but performs poorly when incidental details are changed, suggesting it lacks a true understanding of the underlying logic.
Our findings do not detract from the considerable progress in central bank applications of machine learning to data management, macro analysis and regulation/supervision. They do, however, suggest that caution should be exercised in deploying LLMs in contexts that demand rigorous reasoning in economic analysis.
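The failure pattern the Bulletin describes can be framed as a simple perturbation test: pose the canonical puzzle, then pose the same puzzle with incidental surface details swapped, and compare answers. A minimal sketch of that idea, where `memorizing_model` is a hypothetical stand-in for an LLM that has memorized the canonical wording but cannot generalize (the puzzle text, names, and all functions here are illustrative, not from the BIS study):

```python
# Sketch of a perturbation test for "memorized vs. reasoned" answers.
# `memorizing_model` is a hypothetical stand-in for an LLM that recalls
# the canonical puzzle verbatim but fails when surface details change.

CANONICAL = "Alice and Bob each see the other's hat but not their own..."
ANSWER = "Alice knows her hat is red."

def memorizing_model(prompt: str) -> str:
    # Returns the memorized answer only on an exact match with the
    # training wording -- mimicking recall without understanding.
    return ANSWER if prompt == CANONICAL else "I am not sure."

def perturb(text: str) -> str:
    # Swap incidental details (names) that do not affect the logic.
    return text.replace("Alice", "Priya").replace("Bob", "Chen")

def run_test(model) -> dict:
    return {
        "canonical_correct": model(CANONICAL) == ANSWER,
        "perturbed_correct": model(perturb(CANONICAL)) == perturb(ANSWER),
    }

print(run_test(memorizing_model))
# -> {'canonical_correct': True, 'perturbed_correct': False}
```

A model that truly reasoned about the puzzle would pass both checks; a memorizer passes only the first, which is exactly the gap the Bulletin's authors flag as a reason for caution in economic analysis.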