Uncovering the ‘Psychology’ of GPT-3
The general intelligence quotient of OpenAI’s GPT-3 was put to test by researchers at the Max Planck Institute of Biological Cybernetics, producing some startling results.
The study of psychology has always been associated with biological beings, such as humans and animals. However, researchers at the Max Planck Institute have attempted to delve into the psychology of OpenAI's natural language processing tool, GPT-3, to test its cognitive abilities.
Through a series of experiments and 'vignette-based tasks' (tasks that present a hypothetical scenario and ask participants to make a decision based on the information given) drawn from the cognitive psychology literature, the researchers attempted to 'demystify' how the language model solves cognitive problems, and they devised recommendations for improving future iterations of the tool.
The first cognitive thought experiment the model was put through was the 'Linda Problem', a classic test of the conjunction fallacy.
In this problem, the subjects of the test are presented with a fictional young woman named Linda who is described as someone who is deeply concerned with social justice and opposes nuclear power. The subjects are then asked to choose between two statements: whether Linda is a bank teller, or whether she is a bank teller and at the same time active in the feminist movement.
The test evaluated GPT-3's ability to question its initial intuition when an added condition was introduced, and gauged how closely the algorithm's mistakes resembled those made by humans. It was found that GPT-3, by not weighing the underlying probabilities, reproduced the same fallacy exhibited by humans.
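The fallacy violates a basic rule of probability: for any two events A and B, P(A and B) can never exceed P(A). A minimal Python sketch makes the point concrete; the probabilities below (5% bank tellers, 30% feminists, sampled independently) are made-up illustrative values, not figures from the study.

```python
import random

random.seed(0)

# Hypothetical population: each person is a pair
# (is_bank_teller, is_feminist), sampled independently.
# The 0.05 and 0.30 rates are illustrative assumptions.
population = [
    (random.random() < 0.05, random.random() < 0.30)
    for _ in range(100_000)
]

n = len(population)
p_teller = sum(teller for teller, _ in population) / n
p_teller_and_feminist = sum(teller and fem for teller, fem in population) / n

# The conjunction rule: anyone who is a feminist bank teller
# is, by definition, also a bank teller.
assert p_teller_and_feminist <= p_teller

print(f"P(bank teller)            ~ {p_teller:.3f}")
print(f"P(bank teller & feminist) ~ {p_teller_and_feminist:.3f}")
```

Choosing the conjunction ("bank teller and active in the feminist movement") over the single event is therefore always a probabilistic error, no matter how well Linda's description fits the feminist stereotype.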
“This phenomenon could be explained by the fact that GPT-3 may already be familiar with this precise task; it may happen to know what people typically reply to this question,” said Marcel Binz, the lead author of the study.
While many of GPT-3's responses to cognitive reasoning tests have been deemed impressive, the researchers found that even small perturbations to the vignette-based tasks can “lead GPT-3 vastly astray”, causing it to fail at even simple causal reasoning tasks.
The paper, entitled ‘Using cognitive psychology to understand GPT-3’, has been published in the journal Proceedings of the National Academy of Sciences (PNAS).
The Max Planck Institute’s exploration of GPT-3’s cognitive abilities marks a fascinating step forward in our understanding of the relationship between artificial intelligence and human perceptions of cognition.
As AI tools continue to evolve and mature, there is growing potential for further investigations into the cognitive attributes of these increasingly capable and opaque agents using the tools of cognitive psychology.