
Theme A: Psychological Dynamics in Conversational Search

Exploring how psychological distance, a core construct of Construal Level Theory, shapes what users seek and prefer in conversational search.

[Figure: Construal Level Theory]

CHI 2026 · Fit Matters: Format–Distance Alignment Improves Conversational Search

*Yitian Yang, Yugin Tan, Jung-Tai King, Yang Chen Lin, Yi-Chieh Lee*

[Figure: concept overview]

<aside> 💡

The "Mental Zoom" Effect: Just as we zoom in for details and zoom out for the big picture, our brains process information differently based on "Psychological Distance." This paper proposes a new design principle: AI responses should align their format (Text vs. Image, Abstract vs. Concrete) with the user's mental distance. When the format fits the mindset, trust and satisfaction soar.

</aside>


CHI 2025 · Understanding How Psychological Distance Influences User Preferences in Conversational Versus Web Search

*Yitian Yang, Yugin Tan, Yang Chen Lin, Jung-Tai King, Zihan Liu, Yi-Chieh Lee*


<aside> 💡

The Core Question: Why do we sometimes prefer ChatGPT and other times run back to Google? We discovered that "Psychological Distance" is the hidden switch. Users prefer conversational AI for "distant" tasks (abstract, future-oriented) but stick to traditional search for "near" tasks (concrete, immediate). A foundational lens on user preferences for the era of generative search.

</aside>


Theme B: Human-AI Collaboration & Decision Making

Exploring the subtle, unconscious ways AI influences human self-perception and trust.

CHI 2026 · AI-exhibited Personality Traits Can Shape Human Self-concept through Conversations

Jingshu Li, Zicheng Zhu, Tianqi Song, Yitian Yang, Nattapat Boonprakong, Yi-Chieh Lee


<aside> 💡

Identity Contagion: Can chatting with an extraverted AI make you feel more extraverted? Yes. We found that AI isn't just a tool; it's a mirror. This paper reveals a "malleable self-concept" effect in which users unconsciously align their self-perceived personality with the traits the AI exhibits.

</aside>


🎉 Best Paper Honorable Mention, CHI 2025 · As Confidence Aligns: Exploring the Effect of AI Confidence on Human Self-confidence in Human-AI Decision Making

Jingshu Li, Yitian Yang, Q. Vera Liao, Junti Zhang, Yi-Chieh Lee


<aside> 💡

Confidence is Contagious: In decision-making, we often mirror the confidence of those around us. Does the same apply to AI? We discovered a "Confidence Matching" phenomenon: interacting with a highly confident AI leads people to feel more confident in their own judgments, sometimes dangerously so. A critical look at the psychological ripple effects of AI confidence calibration.

</aside>


Understanding the Effects of Miscalibrated AI Confidence on User Trust, Reliance, and Decision Efficacy

Jingshu Li, Yitian Yang, Renwen Zhang, Q. Vera Liao, Tianqi Song, Zhengtao Xu, Yi-Chieh Lee

<aside> 💡

The "Confident Idiot" Problem: What happens when an AI is wrong but sounds totally sure of itself? We investigate the dangers of Miscalibrated Confidence. We show that once trust is established via high confidence, it is incredibly hard to break—even when the AI fails—leading to persistent over-reliance.

</aside>


Theme C: AI for Social Good & Qualitative Analysis

Leveraging LLMs to tackle sensitive societal issues and scale up rigorous qualitative research.

CHI 2026 · Designing Computational Tools for Exploring Causal Relationships in Qualitative Data

Han Meng, Qiuyuan Lyu, Peinuan Qin, Yitian Yang, Renwen Zhang, Wen-Chieh Lin, Yi-Chieh Lee


<aside> 💡

From Text to Cause: Qualitative data is rich but messy. We built a computational tool that helps researchers extract and visualize Causal Graphs directly from interview transcripts. It turns unstructured narratives into structured causal insights, bridging the gap between qualitative storytelling and quantitative modeling.

</aside>


TOCHI 2025 · Exploring the Human-LLM Synergy in Advancing Theory-driven Qualitative Analysis

Han Meng, Yitian Yang, Wayne Fu, Jungup Lee, Yunan Li, Yi-Chieh Lee

[Figure: method overview]

<aside> 💡

Methodological Innovation: Qualitative coding is notoriously time-consuming. Instead of simply automating it, we propose a "Human-LLM Synergy" framework. We demonstrate how to use LLMs as collaborative partners to scale theory-driven analysis while preserving the nuance and rigor that human researchers provide.

</aside>


ACL 2025 (🎉 Oral Presentation; SAC Highlight) · What Is Stigma Attributed To? A Theory-Grounded, Expert-Annotated Interview Corpus for Demystifying Mental-Health Stigma

Han Meng, Yancan Chen, Yunan Li, Yitian Yang, Jungup Lee, Renwen Zhang, Yi-Chieh Lee


<aside> 💡

A Data Milestone: To fight mental health stigma, we first need to measure it. We present the first theory-grounded, expert-annotated interview corpus of mental-health stigma. This corpus allows NLP models to detect subtle, implicit stigma in conversations, paving the way for clinically valid AI interventions.

</aside>


CHI 2025 · Deconstructing Depression Stigma: Integrating AI-driven Data Collection and Analysis with Causal Knowledge Graphs

Han Meng, Renwen Zhang, Ganyi Wang, Yitian Yang, Peinuan Qin, Jungup Lee, Yi-Chieh Lee


<aside> 💡

Unpacking Prejudice: Why do people stigmatize depression? Using a novel mix of AI-driven interviewing and Causal Knowledge Graphs, we deconstruct the complex web of beliefs behind stigma. We found that stigma isn't random; it follows specific causal chains that we can now map and target.

</aside>


CSCW 2025 · AI-Based Speaking Assistant: Supporting Non-Native Speakers' Speaking in Real-Time Multilingual Communication

Peinuan Qin, Zicheng Zhu, Naomi Yamashita, Yitian Yang, Keita Suga, Yi-Chieh Lee


<aside> 💡

Empowerment via AI: For non-native speakers, the biggest barrier isn't just vocabulary—it's the cognitive load of real-time translation. We designed an AI speaking assistant that provides just-in-time support, reducing anxiety and enabling smoother, more confident cross-cultural communication.

</aside>