Ashique KhudaBukhsh
Assistant Professor
Education
Ph.D., Carnegie Mellon University
Bio
Ashique KhudaBukhsh is an Assistant Professor in the Software Engineering Department and is affiliated with the ESL Global Cybersecurity Institute, where he directs the Social Insight Lab.
Dr. KhudaBukhsh's current research lies at the intersection of natural language processing (NLP) and AI for Social Impact, as applied to: (i) globally important events arising in linguistically diverse regions, requiring methods to tackle practical challenges involving multilingual, noisy, social media texts; (ii) polarization in the context of the current US political crisis; and (iii) auditing generative AI systems and platforms for unintended harms. In addition to having his research accepted at top artificial intelligence conferences and journals, his work has also received widespread international media attention, including coverage from The New York Times, BBC, Wired, Times of India, The Indian Express, The Daily Mail, VentureBeat, and Digital Trends.
Dr. KhudaBukhsh was selected for the New Faculty Highlights program at AAAI 2024 and received the RIT GCCIS Emerging Scholar Award in 2023. He enjoys melding teaching and research; in 2022, he offered a course that studied contemporary US politics through the lens of cutting-edge AI and NLP techniques. The course was offered simultaneously at RIT and CMU (team-taught with Dr. Mark Kamlet, a noted political scientist), and students from both schools conducted collaborative research. KhudaBukhsh received his Ph.D. in artificial intelligence from Carnegie Mellon University (CMU) in 2017 and began his tenure-track position at RIT in 2021.
Currently Teaching
In the News
October 31, 2023
AI Research Team at RIT Publishes Findings on Generative Harmful Content
In a recent preprint paper, faculty and Ph.D. students in RIT's ESL Global Cybersecurity Institute identified issues surrounding generative hate speech in Google's PaLM 2 large language model (LLM), which powers Bard, Google's answer to ChatGPT. Google was informed about the toxic content generated by PaLM 2 and, taking a responsible approach, has since rectified the issues identified by the team in their initial study.