Vertically Integrated Project

Trust, Fairness, and Social Robotics in AI-Driven Decision-Making


Goals

As artificial intelligence (AI) increasingly integrates into decision-making across industries, concerns about trust, fairness, transparency, and ethical considerations continue to shape user adoption and perception. This project explores algorithmic decision-making, social robotics, and synthetic relationships, examining how individuals and AI systems interact in both functional and social contexts.

Impact & Long-Term Vision

This project will provide valuable insights into how AI can be designed and deployed responsibly, balancing efficiency with fairness. It aims to contribute to academic research, industry applications, and policy discussions on AI governance.

Issues Involved or Addressed

  • Understand AI Trust and Fairness: Explore how users perceive fairness in AI decision-making in critical domains such as finance, healthcare, and human resources.
  • Investigate Algorithm Aversion and Appreciation: Identify the conditions under which users reject or embrace algorithmic decisions.
  • Explore Synthetic Relationships and Social Robotics: Analyze AI-to-AI interactions (synthetic relationships) and human engagement with social robots, examining their impact on trust, reliance, and ethical considerations.
  • Develop Trust Recovery Models: Design and test strategies for rebuilding trust in AI when errors occur.
  • Enhance AI Adoption Strategies: Provide insights on how businesses and policymakers can increase AI acceptance while ensuring ethical and transparent practices.
  • Interdisciplinary Research & Application: Bridge the gap between computer science, behavioral psychology, and business analytics to inform AI policy and user experience design.

Methods and Technologies

Participants in this VIP will engage in hands-on, interdisciplinary research, including:

  • Conducting User Studies & Experiments: Designing and running behavioral studies to measure fairness perception and trust in AI and social robots.
  • Analyzing Data & AI Bias: Using statistical and machine learning methods to evaluate patterns in algorithmic decision-making.
  • Developing Fairness Metrics: Creating frameworks to assess AI transparency, fairness, and accountability (see the illustrative sketch following this list).
  • Building AI Trust Recovery Models: Testing interventions to regain user trust after algorithmic errors.
  • Communicating Findings: Publishing research insights in academic venues and engaging with stakeholders in AI ethics and policy.
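
To make the fairness-metric work concrete, below is a minimal, hypothetical sketch of how two widely used group-fairness measures (demographic parity difference and equal opportunity difference) could be computed when auditing a model's decisions. The function names and the synthetic data are illustrative assumptions, not part of the project's actual toolkit.

```python
# Minimal sketch: two common group-fairness metrics computed with plain NumPy.
# All data below is synthetic and purely illustrative.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups (0 and 1)."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true-positive rates between two groups (0 and 1)."""
    tpr = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)   # members of group g with a positive outcome
        tpr.append(y_pred[mask].mean())
    return abs(tpr[0] - tpr[1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)     # protected attribute (two demographic groups)
    y_true = rng.integers(0, 2, size=1000)    # ground-truth outcomes
    y_pred = rng.integers(0, 2, size=1000)    # decisions from a model under audit
    print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
    print("Equal opportunity difference:", equal_opportunity_difference(y_true, y_pred, group))
```

In practice, values near zero indicate that the audited model treats the two groups similarly on that metric; larger gaps would prompt closer review of the decision process.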

Academic Majors / Major Area of Interest

Academic Majors: Artificial Intelligence (AI) and/or Machine Learning (ML)

Team Composition & Interdisciplinary Nature

This VIP will bring together:

  • Undergraduate and Graduate Students from disciplines such as Communication, Information Systems, Business Analytics, Marketing, and Human-Computer Interaction.
  • Postdoctoral Researchers specializing in algorithmic decision-making, fairness, and transparency.
  • Faculty Mentors with expertise in AI ethics, machine learning, psychology, and human-AI communication.

 

Team Meeting Time and Place

Meeting times and locations will be determined at the beginning of each semester based on team members' availability.

  • Full Project Team Meetings: Held at least once per week to discuss progress, challenges, and next steps.
  • Sub-Team Meetings: Conducted as daily stand-ups to ensure continuous collaboration and alignment on tasks.

Related Projects