HCI and Accessibility

Narrative

Our faculty and students collaborate to find impactful solutions to real-world problems that users face as they interact with computing technologies, including in the domains of communication, information access, education, personal well-being, and security. They also investigate how to design computing technologies so that they are accessible to individuals with diverse abilities, and how computing technologies can be leveraged to promote inclusion.

Ph.D. Students

  • Bryan Manrique Amador (advisor: Richard Zanibbi)
  • Sara Andrew (advisor: Garreth Tigwell)
  • Kevin Barkevich (advisor: Reynold Bailey)
  • Cedric Bone (advisor: Cecilia O. Alm)
  • Calua De Lacerda Pataca (advisor: Matt Huenerfauth)
  • Xiwen Dengxiong (advisor: Yunbo Zhang)
  • Abhisek Dey (advisor: Richard Zanibbi)
  • John Dunham (advisor: Konstantinos Papangelis)
  • Paul Chukwuemeka Ezeamii (advisor: Kristen Shinohara)
  • Noella Kolash (advisor: Peter Hauser)
  • Emily Kuang (advisor: Kristen Shinohara)
  • Kevin Laporte (advisor: David Schwartz and Justin Pelletier)
  • Ziming Li (advisor: Roshan Peiris)
  • Xinmiao Lin (advisor: Matthew Wright)
  • Sanzida Mojib Luna (advisor: Konstantinos Papangelis)
  • Nathanael James Mathews (advisor: Matthew Wright)
  • Laleh Iran Nourian (advisor: Garreth Tigwell)
  • Michael Peechatt (advisor: Cecilia O. Alm)
  • Muhammad Raees (advisor: Konstantinos Papangelis)
  • Mohammad Saidur Rahman (advisor: Matthew Wright)
  • Ayush Kumar Shah (advisor: Richard Zanibbi)
  • Shaikh Akib Shahriyar (advisor: Matthew Wright)
  • Saniat Javid Sohrawardi (advisor: Matthew Wright)
  • Akira Takeuchi (advisor: Sungyoung Kim)
  • Murtaza Tamjeed (advisor: Kristen Shinohara)
  • Rajesh Titung (advisor: Cecilia O. Alm)
  • Yijing Wu (advisor: Matthew Wright)
  • Jiangnan Xu (advisor: Konstantinos Papangelis)
  • Zhizhuo Yang (advisor: Reynold Bailey)

Related Courses

HCIN-600
Credits 3
This course provides students with an introduction to the practical application of various research methods used in human-computer interaction. The course provides an overview of the research process and the literature review, and provides experience with qualitative, survey, and experimental research methods. Students will study existing research and will design and conduct their own studies. Students will need to have taken a statistics course before registering for this class.
HCIN-610
Credits 3
Human-computer interaction (HCI) is a field of study concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them. This course surveys the scope of issues and foundations of the HCI field: cognitive psychology, human factors, interaction styles, user analysis, task analysis, interaction design methods and techniques, and evaluation. This course will focus on the users and their tasks.
HCIN-620
Credits 3
Designing meaningful relationships among people and the products they use is both an art and a science. This course will focus on two core design practices: representing and organizing information in a way that facilitates perception and understanding (information architecture), and specifying the appropriate mechanisms for accessing and manipulating task information (interaction design). This course will also explore the various design patterns (design solutions to particular problems) that are appropriate for the HCI professional. Students will need prior knowledge of an interface prototyping tool.
HCIN-630
Credits 3
This project-based course will focus on the formal evaluation of products. Topics include usability test goal setting, recruitment of appropriate users, design of test tasks, design of the test environment, test plan development and implementation, analysis and interpretation of the results, and documentation and presentation of results and recommendations.
HCIN-720
Credits 3
Wearable computers and Internet of Things devices involve both hardware and software. In order to design user experiences for these systems, professionals must understand how they are built. Students will learn how to rapidly prototype and evaluate wearable and IoT devices combining hardware and software. Experience in programming is helpful but not a prerequisite.
HCIN-722
Credits 3
Mobile phones are now a major computing platform, and wearable and Internet of Things devices are emerging as major technologies. Each device offers different interaction opportunities and challenges. Students will learn about the research in interaction with these devices and how to design effective interactions for mobile, wearable, and ubiquitous devices.
PSYC-714
Credits 3
In this course students will learn to recognize the integrated (systems) nature of Engineering Psychology and the centrality of human beings in systems design, and to use the topics covered and the available knowledge base to adapt the environment to people. This course will cover several fundamental models of human information processing in the context of human-system interactions. The models may include, but are not limited to, Signal Detection Theory, Information Theory, theories of attention, both normative and naturalistic decision-making models, Control Theory, and Brunswik's Lens Model, as well as models of the human as a physical engine, that is, anthropometry, biomechanics, and work physiology. Most topics include readings in addition to the course text as well as a lab exercise with a detailed lab report.

Research Projects

  • Intelligent Mobility Cane with indoor navigation for people who are blind, visually impaired, or deaf-blind [Oh]: Vision loss knows no boundaries; it can affect anyone, of any age, income level, race, or ethnic background, at any time. Regardless of the level of visual impairment, vision loss can impact a person’s life and their ability to complete everyday tasks. One of the greatest challenges facing a person who is blind or deaf-blind is the ability to navigate safely and independently through the physical world. Traveling with little or no vision can be uncomfortable and frightening, limiting a person’s ability to work, go to school, take care of personal needs, or socialize with others. The goal of the intelligent mobility cane (IMC) project is to bring to the global marketplace an affordable and convenient device that uses advanced technology to increase the independence and safety of people who are blind or deaf-blind.
  • Reading Assistance for DHH Technology Workers [Huenerfauth]: Low English reading literacy among some Deaf and Hard of Hearing (DHH) adults can be a particular barrier to computing professions, where workers must regularly "upskill" to learn about rapidly changing technologies throughout their careers. We investigate the design of a web-browser plug-in that provides on-demand automatic English text simplification for DHH individuals, including simpler synonyms or sign-language videos for complex English words, as well as simpler English paraphrases of sentences or entire documents. http://cair.rit.edu/projects.html#simplify
  • Video-Based Search for ASL Dictionaries [Huenerfauth]: Looking up an unfamiliar word in a dictionary is a common activity in childhood or foreign-language education, yet there is no easy method for doing this in ASL. We are investigating a computer-vision-based sign-lookup interface for online ASL video dictionaries. http://cair.rit.edu/projects.html#dictionary
  • ASL Corpora for Linguistic Research [Huenerfauth]: We collect video and motion-capture recordings of native sign-language users, in support of linguistic research and machine-learning modeling of aspects of ASL. http://cair.rit.edu/projects.html#linguisticcorpora
  • Tools and Techniques to Encourage Inclusion in Design Thinking [Shinohara]: We examine the effectiveness of design techniques and tools for incorporating accessibility into the technology design process by including multiple perspectives and social considerations. This project includes creating and evaluating tools and inclusive strategies that help designers involve and center people with disabilities in the design process.
  • Design of Future Captioning Technology [Huenerfauth]: We investigate the requirements and preferences of Deaf and Hard of Hearing (DHH) users for captioning technology for video programming or for real-time captioning in live meetings, including how the confidence scores for words or phrases identified through automatic speech recognition (ASR) could be revealed to users of such captions. http://cair.rit.edu/projects.html#captionmeetings
  • Effective Methods of Teaching Accessibility [Ludi, Hanson, Huenerfauth, Shinohara]: This project examines the effectiveness of a variety of methods for teaching computing students about concepts related to computer accessibility for people with disabilities. This multi-year project includes longitudinal testing of students two years after instruction to search for lasting impacts. http://cair.rit.edu/projects.html#ethics
  • Investigating Effective Pedagogies for Teaching Accessibility [Shinohara]: We investigate which approaches are most effective for teaching accessibility to computing students in higher education. We also examine which accessibility topics and concepts are covered in computing curricula in order to determine what topics should be addressed and how. This project includes perspectives from students, instructors, and working professionals.
  • Learning ASL through Real-Time Practice [Huenerfauth]: We are investigating how computer-vision technologies can enable students learning American Sign Language (ASL) to practice their signing independently, through a tool that provides feedback automatically based on a video of their signing. http://cair.rit.edu/projects.html#learn
  • Generating ASL Animation from Motion-Capture Data [Huenerfauth]: We investigate techniques for making use of motion-capture data collected from native American Sign Language (ASL) signers to produce linguistically accurate animations of ASL. We produce machine-learning models of various phenomena, e.g. speed and timing during signing, and we use these models to partially automate the generation of animations, to reduce the cost in providing ASL content for Deaf users. http://cair.rit.edu/projects.html#mocap
  • Tools for Blind Programmers [Huenerfauth]: This project investigates the difficulties that blind computer programmers face when navigating through software code. By examining the tools these programmers currently use to move through code and the workarounds many of them rely on to make technologies work for them, we look for ways to improve this experience with new technologies. http://cair.rit.edu/projects.html#blindprogrammers
  • Word Importance in Captions for Deaf Users [Huenerfauth]: Automatic Speech Recognition (ASR) technology is still imperfect in many settings. To evaluate the usefulness of ASR-based captions for Deaf users, simply counting the number of errors is insufficient, since some words contribute more to the meaning of the text than others. We study methods for automatically predicting the importance of individual words in a text, and we are using these models to develop alternative evaluation metrics for analyzing ASR accuracy, to predict how useful ASR-based captions would be for users; a minimal illustrative sketch of such a weighted metric appears after this project list. http://cair.rit.edu/projects.html#importance
  • Creating Linguistic Stimuli for ASL Research [Huenerfauth]: Animated virtual humans can produce a wide variety of subtle performances of American Sign Language, including minor variations in handshape, location, orientation, or movement. This technology can produce stimuli for display in experimental studies with ASL signers, to study ASL linguistics. http://cair.rit.edu/projects.html#aslstimuli
  • Methodologies for DHH User Research [Huenerfauth]: We have conducted a variety of methodological research on the most effective ways to structure empirical evaluation studies of technology with Deaf and Hard of Hearing (DHH) users. This research has included the creation of standard stimuli and question items for studies with ASL animation technology, analysis of the relationship between user demographics and responses to question items, the use of eye-tracking in studies with DHH users, and the creation of American Sign Language versions of standard usability evaluation instruments. http://cair.rit.edu/projects.html#aslmethods
  • Eye-Tracking to Predict User Performance [Huenerfauth]: Computer users may benefit from user interfaces that can predict whether the user is struggling with a task based on an analysis of the user's eye-movement behaviors. We are investigating which eye-movement behaviors suggest that a user is struggling. This work also considers whether eye-tracking can serve as a complementary or alternative method for evaluating animations of sign language, by examining the eye movements of native signers who view these animations to detect when they may be more difficult to understand. http://cair.rit.edu/projects.html#eyepredict
  • Detecting DeepFakes [Wright, Hickerson, Ptucha]: DeepFake videos like a fake presidential address could be used to undermine democracy and legitimate news. We are working with journalists to develop and deploy an AI-based tool to robustly detect DeepFakes and prevent their spread.
  • Deception and Privacy Issues in Social Media [Wright]: Spam and clickbait in social media waste users’ time and can lead them to malicious sites. Third-party applications and logins via social media can gain surprising levels of access into our private lives. We study how users encounter these issues and how to better protect them through effective and helpful warnings.
  • Projects in the REU in Computational Sensing for Human-centered AI [Alm, Bailey, Geigel, Huenerfauth, Ptucha, Shinohara]: The REU Site in Computational Sensing for Human-centered Artificial Intelligence recognizes that as the boundaries between HCI and AI blur, and AI grows increasingly agile and pervasive, the next generation of computational scientists must be capable of responsibly and effectively leveraging a spectrum of sensing data from data-generating humans. With this focus, the REU Site will expand its trajectory as an attractor for highly diverse students who will gain experience with sensing hardware and software towards transformative advances in intelligent systems focused on human behaviors and cognitive processes. Enabling diverse stakeholders early in their careers to discover how to collect, fuse, make inference with, and visualize multimodal human data can transform how humans and machines engage and collaborate. The research in the REU Site will address two limitations in AI: first, that underserved populations are at risk of being marginalized with the present focus on big data AI and, second, that AI trainees often lack experience with human data collection and critical thinking about human-elicited datasets. The REU Site will stimulate novel, safe ways for systems to team up with people to address society's vexing problems while remaining fair, attuned to bias, and representative of the diverse fabric of the general population. https://www.cs.rit.edu/~reu/projects.html
  • Sensing Multimodal Surprise [Alm, Bailey]: This work centers on new methods for sensing intuitive surprise reactions and for analyzing and making computational inference about surprise for developing more realistic, human-centered systems.
  • MathSeer project [Zanibbi]: A project seeking to make finding mathematical information easier. Our slogan is 'Math search for the masses.' https://www.cs.rit.edu/~dprl/mathseer
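
To make the idea of importance-weighted caption evaluation concrete (see the word-importance project above), the following is a minimal, hypothetical sketch in Python of a word-importance-weighted error rate. The function name, the simplified position-by-position alignment, and the example importance scores are illustrative assumptions for exposition only; they are not the models or metrics developed in that project.

    # Hypothetical sketch: weight each reference word's error by a predicted
    # importance score, rather than counting all errors equally.
    def weighted_caption_error(reference, hypothesis, importance):
        """Compute an importance-weighted error rate for an ASR caption.

        reference:  list of words the speaker actually said
        hypothesis: list of words produced by the ASR system
        importance: dict mapping reference words to scores in [0, 1]
        """
        # A real metric would align the two word sequences with an
        # edit-distance alignment; a position-by-position comparison
        # is used here for brevity.
        errors, total = 0.0, 0.0
        for i, ref_word in enumerate(reference):
            weight = importance.get(ref_word, 0.5)  # assume mid importance if unknown
            total += weight
            hyp_word = hypothesis[i] if i < len(hypothesis) else None
            if hyp_word != ref_word:
                errors += weight
        return errors / total if total else 0.0

    # Example: a single misrecognized but important word ("three" -> "free")
    # dominates the weighted score, even though only 1 of 6 words is wrong.
    ref = "the meeting starts at three pm".split()
    hyp = "the meeting starts at free pm".split()
    scores = {"the": 0.1, "meeting": 0.8, "starts": 0.6, "at": 0.1, "three": 0.9, "pm": 0.7}
    print(round(weighted_caption_error(ref, hyp, scores), 3))  # 0.281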

Research Labs