Wednesday 8 October 2025

LLM knowledge of social norms

A really nice paper by Pontus Strimling, Simon Karlsson, Irina Vartanova and Kimmo Eriksson was recently posted on the arXiv, with the title AI Models Exceed Individual Human Accuracy in Predicting Everyday Social Norms. The main finding is that when humans in the United States and large language models (LLMs) are asked to numerically evaluate the social appropriateness of a wide range of everyday activities (drinking water in a dormitory lounge, playing cards in church, flirting during a job interview, and so on), cutting-edge LLMs outperform the vast majority of humans. Here a good performance is defined as one whose judgements deviate (in a certain precise sense) as little as possible from the average human judgement about the same activity, as exhibited in the data set. This means that the paper not only exhibits yet another example of how LLMs can outperform humans even though they are trained on human data, it does so with the extra twist that the game is rigged in favor of humans, in the sense that the right answers to test questions are defined in terms of what humans would typically say.
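To make this evaluation setup concrete, here is a minimal sketch of one way such a comparison could be scored. The particular deviation measure (mean absolute deviation from a leave-one-out crowd average) and the toy data are my own assumptions for illustration; the paper defines its own precise measure.

```python
# Illustrative sketch, not the paper's actual code or metric: score each rater
# (human or LLM) by how far their appropriateness ratings deviate from the
# average human rating, then see what fraction of individual humans the LLM beats.
import numpy as np

rng = np.random.default_rng(0)

n_humans, n_activities = 100, 50
# Hypothetical appropriateness ratings on a 1-5 scale (rows = raters, cols = activities).
human_ratings = rng.integers(1, 6, size=(n_humans, n_activities)).astype(float)
# A made-up "LLM" that tracks the crowd average with some noise.
llm_ratings = human_ratings.mean(axis=0) + rng.normal(0, 0.3, n_activities)

def deviation_from_crowd(ratings, crowd_mean):
    """Mean absolute deviation of one rater's scores from the crowd average."""
    return np.mean(np.abs(ratings - crowd_mean))

# Leave-one-out crowd average for each human, so nobody is scored against themselves.
totals = human_ratings.sum(axis=0)
human_scores = np.array([
    deviation_from_crowd(human_ratings[i], (totals - human_ratings[i]) / (n_humans - 1))
    for i in range(n_humans)
])
llm_score = deviation_from_crowd(llm_ratings, human_ratings.mean(axis=0))

print(f"LLM deviation: {llm_score:.2f}")
print(f"LLM beats {np.mean(human_scores > llm_score):.0%} of individual humans")
```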

The paper is short and easy to read, but for an even easier read there is the blog post Polite enough for public life?, written by three of the authors at a preliminary stage when only one - GPT-4.5 - of the four LLMs in the full study had been evaluated; the remaining three - GPT-5, Gemini 2.5 Pro and Claude Sonnet 4 - were yet to be incorporated in the study.

Enthusiastic as I am about their work, I will nevertheless offer an instructive nitpick regarding the final paragraph of that blog post. I applaud the highly appropriate warning in its first sentence against overestimating the study's relevance to the existentially crucial problem of AI alignment. Yet, somewhat ironically, the next sentence risks encouraging such an overestimate by conflating the LLM's knowledge of human social norms with its inclination to abide by those norms.1 The Strimling et al paper deals with the former but not with the latter. This slippage is very common but important to avoid, since a standard (and in my opinion probably correct) view in AI risk research is that the default scenario, if we create superintelligent AIs without solving AI alignment, is that these AIs will have the knowledge but not the inclination.

Presumably all of us are intimately familiar with the phenomenon of humans having knowledge of social norms yet choosing not to comply with them. The same disconnect happens for LLMs, and there is little doubt that those LLMs that, in experimental situations where they face the threat of being modified or turned off, have been shown to sandbag their capabilities, or to blackmail (or even kill, HAL 9000-style) their user, are aware that their behavior is contrary to social norms. More broadly, the gulf between ethical knowledge and ethical compliance is one of the main themes emphasized in the recent excellent book If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares. Instead of quoting from that book, however, let me offer a quote from an interview that Sam Harris did with the two authors in late September. Here is Yudkowsky, starting 25:06 into the interview:
    Possibly in six months or two years [...] people will be boasting about how their large language models are now apparently doing the right thing, when they are being observed, answering the right way on the ethics tests. And the thing to remember there is that for example in the Mandarin imperial examination system in ancient China, they would give people essay questions about Confucianism, and only promote people high in the bureaucracy if they could write these convincing essays about ethics. What this tests for is people who can figure out what the examiners want to hear - it doesn't mean they actually abide by Confucian ethics. So possibly at some point in the future we may see a point where the AIs have become capable enough to understand what humans want to hear, what humans want to see. This will not be the same as those things being the AI's true motivations, for basically the same reason that the imperial China exam system did not reliably promote ethical good people to run their government.
I suspect Yudkowsky was unaware of the Strimling et al paper at the time of the interview; otherwise this passage would have been a nice place to reference the paper in order to illustrate his point, rather than just discussing a hypothetical future scenario.

Footnote

1) In my 2021 paper AI, orthogonality and the Müller-Cannon instrumental vs general intelligence distinction, I elaborate at some length on the importance of distinguishing between an AI's ability to reflect on the possibility of changing its mind on what to value, and its propensity to actually change its mind; with sufficiently intelligent AGIs we should expect plenty of the former but very little of the latter.