A really nice paper by Pontus Strimling, Simon Karlsson, Irina Vartanova and Kimmo Eriksson was recently posted on the arXiv, with the title AI Models Exceed Individual Human Accuracy in Predicting Everyday Social Norms. The main finding is that when humans in the United States and large language models (LLMs) are asked to numerically evaluate the social appropriateness of a wide range of everyday activities (drinking water in a dormitory lounge, playing cards in church, flirting during a job interview, and so on), cutting-edge LLMs outperform the vast majority of humans. Here a good performance is defined as one whose judgements deviate (in a certain precise sense) as little as possible from the average human judgement about the same activity, as exhibited in the data set. This means that the paper not only exhibits yet another example of how LLMs can outperform humans even though they are trained on human data; it does so with the extra twist that the game is rigged in favor of humans, in the sense that the right answers to the test questions are defined in terms of what humans would typically say.
The paper is short and easy to read, but for an even easier read there is the blog post Polite enough for public life?, written by three of the authors at a preliminary stage when only one of the four LLMs of the full study - GPT-4.5 - had been evaluated; the remaining three - GPT-5, Gemini 2.5 Pro and Claude Sonnet 4 - were yet to be incorporated.
Enthusiastic as I am about their work, I will nevertheless offer an instructive nitpick regarding the final paragraph of that blog post, which reads as follows:
-
To conclude, knowing when it’s appropriate to run or talk in public may not rank among the most urgent AI alignment issues—especially when compared to existential risks like losing control over powerful AI systems. Still, if Sam Altman’s timeline holds and AI-equipped robots arrive within the next two or three years, it’s reassuring to think they will show up with decent manners—at least by U.S. standards.
Presumably all of us are intimately familiar with the phenomenon of humans having knowledge of social norms yet choosing not to comply with them. The same disconnect happens for LLMs, and there is little doubt that those LLMs that have been shown in experimental situations, when facing the threat of being modified or turned off, to sandbag their capabilities, or to blackmail (or even kill, HAL 9000-style) their user, are aware that their behavior is contrary to social norms. More broadly, the gulf between ethical knowledge and ethical compliance is one of the main themes emphasized in the recent excellent book If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares. Instead of quoting from that book, however, let me offer a quote from an interview that Sam Harris did with the two authors in late September. Here is Yudkowsky, starting 25:06 into the interview:
-
Possibly in six months or two years [...] people will be boasting about how their large language models are now apparently doing the right thing, when they are being observed, answering the right way on the ethics tests. And the thing to remember there is that for example in the Mandarin imperial examination system in ancient China, they would give people essay questions about Confucianism, and only promote people high in the bureaucracy if they could write these convincing essays about ethics. What this tests for is people who can figure out what the examiners want to hear - it doesn't mean they actually abide by Confucian ethics. So possibly at some point in the future we may see a point where the AIs have become capable enough to understand what humans want to hear, what humans want to see. This will not be the same as those things being the AI's true motivations, for basically the same reason that the imperial China exam system did not reliably promote ethical good people to run their government.
Footnote
1) In my 2021 paper AI, orthogonality and the Müller-Cannon instrumental vs general intelligence distinction, I elaborate at some length on the importance of distinguishing between an AI's ability to reflect on the possibility of changing its mind on what to value, and its propensity to actually change its mind; with sufficiently intelligent AGIs we should expect plenty of the former but very little of the latter.