Perceiving Intentions in Artificial Agents
This PhD project addresses the critical question of how humans perceive intentionality in algorithmic and AI agents, contributing both theoretically and practically to the fields of social cognition and human-AI interaction. People increasingly interact and collaborate with AI agents in work and home settings. Understanding what an AI agent wants, or aims to attain, is crucial for successful interactions. Although scientists have made "reading of intentions" easier by making AI agents human-like (e.g., humanoid robots), many interactions take place with non-embodied algorithms, of which only the behaviour can be observed (e.g., decision support systems). The current project investigates how intentions are inferred from the mere behaviour of AI agents. Using basic principles of intention perception, it aims to understand when AI agents are perceived as intentional, and how this affects human-AI interactions.
Involved researchers
- I am a PhD Candidate, interested in studying the psychological aspects of human-AI interactions. I am currently researching intentionality perceptions in AI agents
- I am an associate professor, working on goal-directed behaviour, habits, and the role of consciousness. I am involved in various projects exploring these issues in relation to AI and fear-learning. I am co-directing the GoalLab with Baptist Liefooghe
- I am an associate professor with a special interest in social and cognitive aspects of Human-AI interactions. I am co-directing the GoalLab with Ruud Custers
- I am a professor of Psychology, interested in human habits, goals, and autonomy. I am involved in several projects studying basic and applied questions, such as intentionality, human-AI interactions, and health
Funding
- Incentive grant
- Seed grant Focus area Human-centered AI