Computational framework of human values
2024-Present
This interdisciplinary research investigates how human values can be effectively represented and implemented in agents.
Value-aligned decisions and explanations
2022-Present
This research focuses on aligning agent decisions and explanations with human values.
Social signals and human factors in multiagent systems
2020-Present
This research explores how human factors and social signals—such as subtle hints, verbal messages, sanctions, or other real-world methods humans use to convey their attitudes—influence and shape agent behaviors within multiagent systems.
Social context-aware and normative agents
2020-Present
This research aims to design AI agents that can recognize and adapt to social norms and make decisions that align with ethical values within multiagent systems.
Specifically, agents must understand expected behaviors across different contexts and adapt to emerging or changing social norms.
Dialogue response generation
2020
This project aimed to improve dialogue response generation in interactive settings by building new models or fine-tuning pre-trained language models.
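As a rough illustration of the fine-tuning approach, the sketch below adapts a pre-trained causal language model to (context, response) pairs using the Hugging Face Transformers Trainer. The gpt2 checkpoint, toy data, and hyperparameters are placeholders for illustration, not the project's actual setup.

```python
# Minimal sketch: fine-tuning a pre-trained language model on dialogue
# (context, response) pairs. Model, data, and hyperparameters are illustrative.
import torch
from torch.utils.data import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

class DialogueDataset(Dataset):
    """Concatenates each dialogue context with its response into one training sequence."""
    def __init__(self, pairs, tokenizer, max_len=128):
        self.examples = []
        for context, response in pairs:
            text = context + tokenizer.eos_token + response + tokenizer.eos_token
            enc = tokenizer(text, truncation=True, max_length=max_len,
                            padding="max_length", return_tensors="pt")
            input_ids = enc["input_ids"].squeeze(0)
            attention_mask = enc["attention_mask"].squeeze(0)
            labels = input_ids.clone()
            labels[attention_mask == 0] = -100  # ignore padding in the loss
            self.examples.append({"input_ids": input_ids,
                                  "attention_mask": attention_mask,
                                  "labels": labels})
    def __len__(self):
        return len(self.examples)
    def __getitem__(self, idx):
        return self.examples[idx]

# Toy data purely for illustration.
pairs = [("How are you?", "I'm doing well, thanks for asking."),
         ("What's the weather like?", "It looks sunny outside.")]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dialogue-ft", num_train_epochs=1,
                           per_device_train_batch_size=2, logging_steps=1),
    train_dataset=DialogueDataset(pairs, tokenizer),
)
trainer.train()
```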
Social dilemmas in multiagent games
2017-2020
This research studied social dilemmas in multiagent games using deep reinforcement learning.
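To illustrate the kind of social dilemma involved, the sketch below runs two independent Q-learning agents in an iterated Prisoner's Dilemma. This tabular simplification stands in for the deep RL agents and richer games of the original work; the payoff matrix and hyperparameters are illustrative.

```python
# Minimal sketch: two independent Q-learning agents in an iterated
# Prisoner's Dilemma, a canonical matrix-game social dilemma.
import random

COOPERATE, DEFECT = 0, 1
# Payoffs (row player, column player) for each joint action.
PAYOFFS = {
    (COOPERATE, COOPERATE): (3, 3),
    (COOPERATE, DEFECT): (0, 5),
    (DEFECT, COOPERATE): (5, 0),
    (DEFECT, DEFECT): (1, 1),
}

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

def new_q_table():
    # State = previous joint action (plus a start state); two actions per state.
    return {state: [0.0, 0.0] for state in list(PAYOFFS) + ["start"]}

def choose(q, state):
    if random.random() < EPSILON:
        return random.choice([COOPERATE, DEFECT])
    return max((COOPERATE, DEFECT), key=lambda a: q[state][a])

q1, q2 = new_q_table(), new_q_table()
state = "start"
for _ in range(50_000):
    a1, a2 = choose(q1, state), choose(q2, state)
    r1, r2 = PAYOFFS[(a1, a2)]
    next_state = (a1, a2)
    # Independent learners: each agent treats the other as part of the environment.
    q1[state][a1] += ALPHA * (r1 + GAMMA * max(q1[next_state]) - q1[state][a1])
    q2[state][a2] += ALPHA * (r2 + GAMMA * max(q2[next_state]) - q2[state][a2])
    state = next_state

print("Greedy actions after mutual cooperation:",
      max((COOPERATE, DEFECT), key=lambda a: q1[(COOPERATE, COOPERATE)][a]),
      max((COOPERATE, DEFECT), key=lambda a: q2[(COOPERATE, COOPERATE)][a]))
```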
Belief convergence among agents
2017
This project investigated coordination among agents with inconsistent beliefs using belief modeling.
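As one simple illustration of belief convergence, the sketch below applies DeGroot-style iterative belief averaging over an assumed trust matrix; this is a stand-in model chosen for illustration, not necessarily the belief model used in the project.

```python
# Minimal sketch: agents with inconsistent initial beliefs converge toward a
# common belief via DeGroot-style averaging over a row-stochastic trust matrix.
import numpy as np

# Entry [i, j] is how much agent i weights agent j's belief (rows sum to 1).
trust = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
])

beliefs = np.array([0.9, 0.2, 0.5])  # Inconsistent initial beliefs about one proposition.

for _ in range(50):
    beliefs = trust @ beliefs  # Each agent re-weights its neighbours' beliefs.

print("Converged beliefs:", np.round(beliefs, 3))  # All agents end up near a common value.
```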