Researchers make big splash at computing accessibility conference

RIT wins top award for deaf and hard-of-hearing access technologies at ACM Conference

More than a dozen RIT faculty and students presented their work at the ACM Conference on Computers and Accessibility in Baltimore, including Matt Huenerfauth (left), Peregrine Hawthorn, Kesavan Kushalnagar, Larwan Berke, Kristen Shinohara, Abraham Glasser, Lisa Elliot, James Mallory, Michael Stinson, Khaled Albusays, Daniel Ashbrook and Sushant Kafle.

Rochester Institute of Technology was well-represented at the Association for Computing Machinery (ACM) ASSETS conference in early November, with more than a dozen faculty and students presenting projects in the field of computing accessibility.

With the university’s longstanding commitment to education and research in access technologies, RIT has become a leader at the annual International ACM SIGACCESS (Special Interest Group on Accessible Computing) ASSETS Conference on Computers and Accessibility. This year’s event, held Oct. 30–Nov. 1 in Baltimore, saw many researchers showcase their work on technologies for people who are deaf and hard-of-hearing.

Sushant Kafle, a computing and information sciences Ph.D. student, and Matt Huenerfauth, professor in the Department of Information Sciences and Technologies, took home the Best Paper award for their presentation, “Evaluating the Usability of Automatically Generated Captions for People who are Deaf or Hard of Hearing.” A second paper by RIT authors was among the six finalists for the Best Paper award.

“A major goal of our research is to promote greater inclusion of people who are deaf or hard-of-hearing in research studies or evaluations of technology,” said Huenerfauth. “In our ASSETS papers this year, we presented tools that other researchers can use to create or evaluate new communication technologies.”

Huenerfauth and Kafle presented their research on a new metric for automatically evaluating the accuracy of automatic speech recognition software, which is known to make occasional errors.

“We conducted experiments to determine whether our metric matched the judgments of deaf and hard-of-hearing users as to which automatically produced captions were understandable,” said Kafle, who is originally from Nepal. “Disseminating our research results will benefit scientific progress on technologies for deaf and hard-of-hearing individuals and, more generally, advancements in the field of computing accessibility research that ultimately lead to new policies and practices for the betterment of our society.”

Research was presented by faculty and students from RIT’s B. Thomas Golisano College of Computing and Information Sciences (GCCIS) and the National Technical Institute for the Deaf (NTID). Other research presented by RIT authors focused on tools to support communication or collaboration, especially in education settings. For example, NTID faculty shared their work that uses speech recognition technology to support communication during group meetings or in workplace settings.

RIT papers accepted for presentation as talks included:

  • Best Paper Award winner: “Evaluating the Usability of Automatically Generated Captions for People who are Deaf or Hard of Hearing,” by Kafle and Huenerfauth.
  • Best Paper Award nominee: “Design and Psychometric Evaluation of an American Sign Language Translation of the System Usability Scale,” by Huenerfauth, Kasmira Patel ’17 (human-computer interaction) from Hastings-on-Hudson, N.Y., and Larwan Berke, a computing and information sciences Ph.D. student from Fremont, Calif.
  • “Deaf and Hard-of-Hearing Perspectives on Imperfect Automatic Speech Recognition for Captioning One-on-One Meetings,” by Berke, Huenerfauth and Christopher Caulfield ’17 (information sciences and technologies) from Champaign, Ill.
  • “Interviews and Observation of Blind Software Developers at Work to Understand Code Navigation Challenges,” by Huenerfauth; Khaled Albusays, a computing and information sciences Ph.D. student from Saudi Arabia; and former RIT professor Stephanie Ludi.
  • “Personal Perspectives on Using Automatic Speech Recognition to Facilitate Communication between Deaf Students and Hearing Customers,” by James Mallory, NTID professor; Michael Stinson, NTID professor; Lisa Elliot, senior research scientist at NTID; and Donna Easton, research assistant at NTID.
  • “Cyborg Pride: Self-Design in e-NABLE,” by Peregrine Hawthorn, a Monroe Community College student from Burien, Wash., and Daniel Ashbrook, assistant professor in the Department of Information Sciences and Technologies.
  • “Deaf, Hard of Hearing, and Hearing Perspectives on Using Automatic Speech Recognition in Conversation,” by Raja Kushalnagar, instructional faculty at NTID; Abraham Glasser, a third-year computer science major from Rochester, N.Y.; and Kesavan Kushalnagar, a third-year computer science major from Pittsford, N.Y.


RIT papers accepted for presentation as scientific posters included:

  • “CollabAll: Inclusive Discussion Support System for Deaf and Hearing Students,” by Anthony Peruma, a software engineering graduate student from Sri Lanka, and Yasmine El-Glaly, a lecturer in the Department of Software Engineering.
  • “Feasibility of Using Automatic Speech Recognition with Voices of Deaf and Hard-of-Hearing Individuals,” by Glasser, Kesavan Kushalnagar and Raja Kushalnagar.
  • “Teaching Inclusive Thinking in Undergraduate Computing,” by Huenerfauth; Ludi; Nidhi Palan, a human-computer interaction graduate student from India; and Vicki Hanson, Distinguished Professor in GCCIS.
  • “User Experiences When Testing a Messaging App for Communication Between Individuals who are Hearing and Deaf or Hard of Hearing,” by Elliot; Stinson; Easton; and Syed Ahmed, a fifth-year computer engineering student from Bangladesh.
  • “Using Automatic Speech Recognition to Facilitate Communication Between an Individual who is Hearing and One who is Deaf or Hard of Hearing,” by Stinson, Ahmed, Elliot and Easton.


RIT professor Matt Huenerfauth and Ph.D. student Sushant Kafle took home the Best Paper award at the 2017 ACM ASSETS Conference in early November for their work evaluating automatically generated captions for people who are deaf or hard-of-hearing.