AI's psychological inference ability may be comparable to humans'
2024-05-23
A new paper published in the journal Nature Human Behaviour shows that, in tests of the ability to track the mental states of others, known as Theory of Mind, two types of large language models (LLMs) perform as well as humans, or in specific situations even better. Theory of Mind is central to human social interaction, underpinning both communication and empathy. Previous studies have shown that artificial intelligence (AI) systems such as LLMs can solve complex cognitive tasks, such as multiple-choice decision-making. However, it had remained unclear whether LLMs could also match humans on Theory of Mind tasks, long considered a uniquely human ability.

The team, from the University Medical Center Hamburg-Eppendorf in Germany, selected tasks that probe different aspects of Theory of Mind, including identifying false beliefs, understanding indirect speech, and recognizing faux pas. They then compared the performance of 1,907 people on these tasks with that of two popular LLM families: the GPT and LLaMA2 models.

The team found that the GPT models matched or even exceeded the average human level at identifying indirect requests, false beliefs, and misleading information, while LLaMA2 fell short of human performance. At recognizing faux pas, the pattern reversed: LLaMA2 outperformed humans, while GPT performed poorly. The researchers point out that LLaMA2's success stemmed from a lower degree of bias in its answers rather than genuine sensitivity to faux pas, and that GPT's apparent failure reflected a hyper-conservative reluctance to commit to conclusions rather than errors of reasoning.

The research team cautions that LLMs performing on par with humans on Theory of Mind tasks does not mean they possess human-like emotional intelligence, nor that they have mastered Theory of Mind. Still, they note that these results lay an important foundation for future work, and they suggest further study of how LLMs perform at psychological inference and how that performance may shape human cognition in human-computer interaction. (Lai Xin She)
Editor: He Chuanning  Responsible editor: Su Suiyue
Source: Sci-Tech Daily