Intellectual humility and acceptance of AI
We live in a time in which artificial intelligence (AI) is beginning to transform our lives. New research (Li, 2023) shows how intellectual humility – recognizing the fallibility of our beliefs and knowledge – influences our attitudes toward AI, specifically ChatGPT. Four studies with a total of 943 participants investigated this, using both self-report and behavioral outcome measures.
Study 1: Chinese university students
This study (N=243) examined the relationship between intellectual humility and attitudes toward ChatGPT among Chinese university students. A survey showed that students with higher intellectual humility were more accepting of ChatGPT and feared it less.
Study 2: Behavioral measures
A second Chinese sample (N=250) was used to test whether the findings of Study 1 held in a behavioral context. Participants higher in intellectual humility were more likely to choose texts generated by ChatGPT.
Study 3: Experimental design with adult population
This study (N=225) tested whether intellectual humility causally influences attitudes toward ChatGPT. Participants in the high intellectual humility condition reported more positive attitudes toward the use of ChatGPT in advertising.
Study 4: Mediation of openness to experience
The fourth study (N=225) examined which psychological mechanisms mediate the relationship between intellectual humility and ChatGPT acceptance; openness to experience emerged as an important mediator.
Conclusion and future prospects
This research found consistent evidence across all four studies that intellectual humility is associated with more positive attitudes toward ChatGPT. This suggests that understanding and cultivating intellectual humility may contribute to a more successful integration and acceptance of AI in society.
The study emphasizes the value of a balanced approach to AI: intellectual humility does not mean blind enthusiasm, but a willingness to learn while remaining critical. It acknowledges the complexity and risks of AI and strives for an approach that weighs benefits against risks.