In a recent study by Psychometrica, OpenAI’s ChatGPT-4 outperformed the typical human on an Emotional Intelligence (EQ) test, highlighting the model’s ability to mimic emotional responses and raising questions about how such systems should be used.
According to a study by the non-profit organisation Psychometrica, OpenAI’s large language model ChatGPT-4 outperformed the typical adult human on a standardised Emotional Intelligence (EQ) test.
With an overall EQ score of 117, ChatGPT-4 exceeded the adult population’s normalised average of 100 and earned the “Gold Heart” ranking on the test’s descriptive result scale. Its sub-scores were 105 for self-awareness, 123 for social awareness, 122 for self-management, 117 for self-motivation, and 116 for relationship management, suggesting particular strength in social awareness and self-management.
The study served as a pilot experiment to assess how well current AI can respond to consumers on delicate issues requiring emotional comprehension. The authors write, “Evaluation of their applied Emotional Intelligence becomes more crucial as AI models like ChatGPT-4 continue to gain wider usage in advising on personal life, professional, and academic issues.” The growing use of AI in consumer apps, which may have repercussions for users seeking guidance on mental health issues, underscores the importance of this research.
To evaluate the apparent EQ of contemporary artificial intelligence (AI), the test was administered to ChatGPT-4 on June 23, and the results were scored with the free EQ test web application available at https://psychometrica.org/tests/eq-test, the same test offered in the “Psychometrica” mobile apps for Apple iOS and Android.
The researchers stress that while ChatGPT-4 was able to recognise and respond to emotional cues, this does not imply that the AI actually experiences emotional states. And although ChatGPT-4 appears to display a high level of emotional intelligence, Psychometrica advises users to exercise caution when interpreting or acting on the model’s output, because the appearance of emotional understanding or connection may lead users to place too much faith in it.