Artificial Intelligence
Narratives
Our faculty and students conduct research on fundamental and applied topics in artificial intelligence:
Our research efforts aim to understand the nature of visual perception so that we can create more visually intelligent machines. This is enabled via fundamental concepts in reasoning, prediction, supervised, semi-supervised, and unsupervised learning, and stochastic optimization techniques. Our research involves (i) fundamental computer vision topics such as video analytics, detecting humans and their poses from images, and object detection and tracking; (ii) computer vision applications such as medical image analysis, understanding social dynamics from videos of human interactions, collision detection in self-driving cars, and other vision-related regression problems in videos; and (iii) the intersection of computer vision and graphics, where we aim to model realistic avatars that interact more naturally with humans. We are constantly pushing the boundaries in applying computer vision techniques to a myriad of problems, such as 3D reconstruction of the heart from MRI images, deception detection from visual cues, understanding group interactions such as those in a volleyball game, and improving STEM classroom learning through video analytics.
Our research spans the spectrum from theory to algorithms to applications. We are interested in developing novel computational methods for large-scale problems where conventional methods are often computationally infeasible. These methods include the development of novel estimators using tools from constrained optimization theory, convex analysis, and Bayesian nonparametrics. We are also interested in statistical problems involving covariance estimation for high-dimensional data, as many classical data analysis methods break down in high dimensions. These methods have numerous applications in areas such as statistical genetics, image processing, computational biology, cognitive science, and natural language processing. On the theory side, we study questions motivated by these applications and develop new, computationally feasible methods that address the problems we have identified.
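The high-dimensional covariance problem mentioned above can be illustrated with a generic shrinkage estimator, which regularizes the (singular or ill-conditioned) sample covariance toward a scaled identity. This is a standard textbook idea, sketched here for illustration; the function name is hypothetical and this is not necessarily the group's own estimator:

```python
import numpy as np

def shrinkage_covariance(X, alpha=0.5):
    """Shrink the sample covariance toward a scaled identity matrix.

    Generic illustration of regularized covariance estimation when the
    dimension p is comparable to or larger than the sample size n, where
    the raw sample covariance is singular or poorly conditioned.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)          # center the data
    S = Xc.T @ Xc / n                # sample covariance (biased version)
    mu = np.trace(S) / p             # average variance, used as the target scale
    return (1 - alpha) * S + alpha * mu * np.eye(p)

# With n=20 samples in p=50 dimensions, the sample covariance has rank at
# most 19; the shrunk estimate is positive definite and invertible.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))
Sigma = shrinkage_covariance(X, alpha=0.3)
```

The shrinkage weight `alpha` trades bias against variance; data-driven choices of this weight are one of the questions such research addresses.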
Faculty and students engaged with computational linguistics—natural language processing and speech processing—aim to advance theoretical and applied research and human language technologies involving text, speech, conversational interaction, and multimodal data.
We focus on problems that deal with multiple intelligent agents interacting with each other and the environment. In particular, we study computational, algorithmic, and axiomatic aspects of (possibly strategic) decision making that influence the behavior of agents, and develop approaches to overcome computational issues that arise in practical applications such as resource allocation, voting and preference aggregation, and, more generally, social choice.
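As a small, self-contained instance of the preference aggregation studied in this area, here is a sketch of the classical Borda voting rule (illustrative only; the function name and ballots are hypothetical, not the group's code):

```python
from collections import defaultdict

def borda_winner(ballots):
    """Aggregate ranked ballots with the Borda rule.

    Each ballot ranks all m candidates; a candidate earns m-1 points for
    first place, m-2 for second, and so on. The candidate with the highest
    total score wins.
    """
    scores = defaultdict(int)
    for ballot in ballots:
        m = len(ballot)
        for rank, candidate in enumerate(ballot):
            scores[candidate] += m - 1 - rank
    # Break ties in favor of the alphabetically first candidate.
    return max(sorted(scores), key=lambda c: scores[c])

# Three voters rank three candidates; scores: a=4, b=3, c=2.
ballots = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
winner = borda_winner(ballots)  # → "a"
```

Axiomatic questions (e.g., when such rules can be strategically manipulated, and how hard that manipulation is computationally) are central to this research area.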
We focus on the development of novel computational problem solving methods based on abstractions of real-world processes. Specifically, the soft computing techniques we develop draw inspiration from how naturally occurring phenomena behave when adapting to various environmental situations. These techniques have applications in a wide range of fields such as physics, biology, and engineering. Our research in this area includes work in concurrent and distributed evolutionary algorithms, neuroevolution (automated design of neural networks), ant colony optimization, swarm intelligence, and neuro-cognitively motivated machine learning, which focuses on the design of learning algorithms and models strongly guided by principles in cognitive science and neuroscience.
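The evolutionary techniques described above share a common mutate-and-select core. The following toy sketch shows that core on a stand-in numeric objective (this is a generic illustration, not any lab's actual algorithm or code):

```python
import random

def evolve(fitness, genome, generations=200, children=20, sigma=0.1):
    """Minimal mutation-and-selection loop (a (1+lambda)-style strategy).

    Each generation, the current best genome spawns mutated children;
    any child with higher fitness replaces the parent.
    """
    best = genome[:]
    best_fit = fitness(best)
    for _ in range(generations):
        for _ in range(children):
            child = [g + random.gauss(0, sigma) for g in best]
            f = fitness(child)
            if f > best_fit:
                best, best_fit = child, f
    return best, best_fit

# Toy objective: maximize -sum((x - 1)^2); the optimum is the all-ones genome.
random.seed(0)
sol, fit = evolve(lambda g: -sum((x - 1) ** 2 for x in g), [0.0] * 5)
```

Neuroevolution systems apply the same loop with genomes that encode neural network topologies and weights, and with far more sophisticated mutation and recombination operators.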
Faculty
- Cissi O. Alm: natural language and speech processing
- Ivona Bezáková: machine learning, learning theory
- Reynold Bailey
- Jessica Bayliss
- Carlos Castellanos
- Travis Desell: nature-inspired and evolutionary computation, computer vision
- Anne Haake
- Jamison Heard
- Edith Hemaspaandra: multiagent systems
- Dukka KC
- Christopher M. Homan: natural language and speech processing, multiagent systems, machine learning
- Matt Huenerfauth: natural language and speech processing
- Ashique KhudaBukhsh
- Daniel Krutz: nature-inspired and evolutionary computation
- Richard Lange
- Yangming Li: machine learning, computer vision
- Rui Li: machine learning
- Dongfang Liu
- Xumin Liu: machine learning, natural language and speech processing
- Eva Navarro: symbolic AI, automated reasoning and verification, spiking neural networks, brain-inspired and neuromorphic computing, computational neuroscience, collective intelligence, machine learning
- Alexander Ororbia: nature-inspired and evolutionary computation, machine learning
- Yin Pan: machine learning
- Nidhi Rastogi
- Leon Reznik
- Justus Robertson
- Andreas Savakis
- Pengcheng Shi: computer vision, machine learning
- Lu Sun
- Zhiqiang Tao
- Linwei Wang: computer vision, machine learning
- Haibo Yang
- Qi Yu: machine learning
- Bo Yuan: machine learning
- Richard Zanibbi: computer vision, machine learning
- Yunbo Zhang
- Weijie Zhao
Ph.D. Students
- Abhinab Acharya (advisor: Xumin Liu)
- Anusha Acharya (advisor: Xumin Liu)
- Md Tanvirul Alam (advisor: Nidhi Rastogi)
- Bryan Manrique Amador (advisor: Richard Zanibbi)
- Pradeep Bajracharya (advisor: Linwei Wang)
- Kevin Barkevich (advisor: Reynold Bailey and Gabriel Diaz)
- Dipkamal Bhusal (advisor: Nidhi Rastogi)
- Cedric Bone (advisor: Cecilia O. Alm)
- Angelina Brilliantova (advisor: Ivona Bezáková)
- Jun Woo Chung (advisor: Weijie Zhao)
- Soumyajit Datta (advisor: Ashique KhudaBukhsh & Naveen Sharma)
- Calua De Lacerda Pataca (advisor: Matt Huenerfauth)
- Xiwen Dengxiong (advisor: Yunbo Zhang)
- Abhisek Dey (advisor: Richard Zanibbi)
- Arka Dutta (advisor: Ashique KhudaBukhsh)
- Sujan Dutta (advisor: Ashique KhudaBukhsh)
- William Gebhardt (advisor: Alexander Ororbia)
- Valentina Genoese-Zerbi (advisor: Justus Robertson)
- Faezeh Habibi (advisor: Alexander Ororbia)
- Devroop Kar (advisor: Daniel Krutz & Travis Desell & Alexander Ororbia)
- Zohair Raza Hassan (advisor: Edith Hemaspaandra and Stanislaw Radziszowski)
- Joshua Karns (advisor: Travis Desell)
- Nilesh Kumar (advisor: Linwei Wang)
- Luke Kurlandski (advisor: Yin Pan & Matt Wright)
- Bipin Lekhak (advisor: Linwei Wang)
- Zhe Li (advisor: Haibo Yang)
- Shichang Lian (advisor: Linwei Wang)
- Dandan Liang (advisor: Haibo Yang & Rui Li)
- Huawei Lin (advisor: Weijie Zhao)
- Yang Liu (advisor: Daniel Krutz)
- Zimeng Lyu (advisor: Travis Desell)
- Casey Meisenzahl (advisor: Linwei Wang)
- Hannah Miller (advisor: Ivona Bezáková & Edith Hemaspaandra)
- Ryan Missel (advisor: Linwei Wang)
- Le Nguyen (advisor: Nidhi Rastogi)
- Viet Dung Nguyen (advisor: Alexander Ororbia and Reynold Bailey)
- Deepak Pandita (advisor: Christopher M. Homan)
- Krishna Prasad Neupane (advisor: Qi Yu)
- Deep Shankar Pandey (advisor: Qi Yu)
- Michael Peechatt (advisor: Cecilia O. Alm & Reynold Bailey)
- Patrick Philippy (advisor: Richard Zanibbi)
- Suresh Pokharel (advisor: Dukka KC)
- Suraj Poudel (advisor: Linwei Wang)
- Pawel Pratyush (advisor: Dukka KC)
- Spandan Pyakurel (advisor: Qi Yu)
- Qiaofan Que (advisor: Qi Yu)
- Sheeraja Rajakrishnan (advisor: Daniel Krutz)
- Paribesh Regmi (advisor: Rui Li)
- Hitesh Sapkota (advisor: Qi Yu)
- Ayush Kumar Shah (advisor: Richard Zanibbi)
- Ruochen Shi (advisor: Rui Li)
- Nibesh Shrestha (advisor: Pengcheng Shi)
- Ruby Shrestha (advisor: Linwei Wang)
- Guohao Sun (advisor: Zhiqiang Tao)
- Allahsera Auguste Tapo (advisor: Christopher M. Homan)
- Jeevan Thapa (advisor: Rui Li)
- Mahendra Singh Thapa (advisor: Rui Li)
- Pujan Thapa (advisor: Travis Desell)
- Rajesh Titung (advisor: Cecilia O. Alm)
- Maryam Toloubidokhti (advisor: Linwei Wang)
- Sumeet Atul Vadhavkar (advisor: Linwei Wang)
- Dingrong Wang (advisor: Qi Yu)
- Jiamian Wang (advisor: Zhiqiang Tao)
- Hong Yang (advisor: Travis Desell and Alexander Ororbia)
- Zhizhuo Yang (advisor: Alexander Ororbia and Reynold Bailey)
- Dayou Yu (advisor: Qi Yu)
- Timothy Zee (advisors: Ifeoma Nwogu and Alex Ororbia)
- Minxun Zheng (advisor: Qi Yu)
- Ziqi Zhou (advisor: Zhiqiang Tao)
- Yuansheng Zhu (advisor: Qi Yu)
Research Projects
Learn more about Artificial Intelligence initiatives at RIT
- Dynamic Resource Allocation [Hosseini]: Through the integration of artificial intelligence (AI), economics, and computation, this project investigates novel solutions for resource allocation in dynamic environments and situations that lack transferable currency. With the advent of online platforms, economic theory emerges as a fundamental approach to promote desirable social properties of efficiency, fairness, and truthfulness in a variety of domains such as shift scheduling, course registration, cloud computing, and crowdsourcing. This project tackles a variety of fundamental problems at the intersection of AI and economics while enriching the algorithmic and societal understanding of resource allocation in dynamic settings. This contrasts with classical mechanisms that either focus solely on economic aspects of resource allocation in static and offline settings or disregard social aspects such as fairness. Specifically, the project studies two interconnected components: (1) sequential allocation under uncertainty, by synthesizing models studied in AI with economic theory to investigate, analyze, and create new mechanisms that are fair and discourage strategic manipulation in environments where agents' preferences are evolving (e.g. nurse scheduling and course allocation); and (2) online mechanisms, by employing insights from algorithm design and AI to study fairness and efficiency of allocation mechanisms when agents arrive and depart over time or the availability of items is uncertain (e.g. food bank organizations and crowdsourcing platforms).
- Evolutionary Exploration of Neural Architectures (EXONA) [Desell, Ororbia]: Determining the optimal structure and hyperparameters of neural networks is a challenging open question in the field of AI. EXONA provides and investigates various evolutionary and bio-inspired approaches to “grow” optimal neural architectures through algorithms such as EXACT (Evolutionary Exploration of Augmenting Convolutional Topologies) and EXAMM (Evolutionary Exploration of Augmenting Memory Models), as well as ant colony optimization and other bio-inspired methods. These algorithms are developed to use large-scale high-performance and distributed computing resources to speed up neural network evolution. https://github.com/travisdesell/exact
- Fair Allocation of Resources [Hosseini]: This project focuses on various models for fair allocation of indivisible resources or goods. Various concepts from game theory and economics are used to model, analyze, and develop algorithms that provide fair and efficient outcomes. This project aims at investigating various economic and game theoretical properties of fair allocation to 1) devise new algorithms through AI techniques for reasoning over preferences, 2) provide theoretical guarantees and bounds, and 3) provide empirical evaluations of the proposed algorithms.
- Multiagent Bug Assignment [Hosseini]: Bug assignment in large software projects is typically a time-consuming and tedious task; effective assignment requires that bug triagers hold significant contextual information about both the reported bugs and the pool of available developers. In this project, the goal is to efficiently assign bugs to developers by 1) predicting the bug lifetime using machine learning techniques, 2) modeling developers’ expertise by learning their traits, and 3) utilizing techniques from multi-agent systems to efficiently and fairly assign bug reports to relevant developers. We will use data from various bug repositories, such as Eclipse and Firefox, to train our model and evaluate its efficiency against the current state-of-the-art approaches that rely solely on machine learning techniques.
- Open UAS Repository (OURepository) [Desell]: Management of large-scale remote sensing data, such as that gathered by unmanned aircraft systems (UAS), presents significant data management and analysis questions. The OURepository has been developed to provide researchers with an open repository to upload, share, annotate, and analyze large-scale imagery (orthomosaics) gathered by UAS. A major goal of this project is to make computer vision algorithms such as convolutional neural networks easy for non-computer scientists to use through this cloud-based system. https://digitalag.org/our/
- Label Distribution Learning [Homan]: Machine learning models learn from human-annotated labeled data. Annotation is often subjective and shaped by annotators' personal experiences. In supervised learning, the multiple annotations per data item are usually reduced to a single label representing the ground truth, which hides the diversity among the labels. Label distribution learning instead associates a probability distribution with each data item, preserving that diversity.
- Reading Assistance for DHH Technology Workers [Huenerfauth]: Low English reading literacy among some Deaf and Hard of Hearing (DHH) adults can be a particular barrier to computing professions, where workers must regularly "upskill" to learn about rapidly changing technologies throughout their career. We investigate the design of a web-browser plug-in to provide automatic English text simplification (on-demand) for DHH individuals, including providing simpler synonyms or sign-language videos of complex English words or simpler English paraphrases of sentences or entire documents. http://cair.rit.edu/projects.html#simplify
- Neurocognitively-Inspired Lifelong Machine Learning [Ororbia]: Neural architectures trained with back-propagation of errors are susceptible to catastrophic forgetting: old information acquired by these models is lost when new information for new tasks is acquired. This makes building models that continually learn extremely difficult, if not nearly impossible. The focus of our research is to draw inspiration from models of cognition and biological neurocircuitry, as well as theories of mind and brain functionality, to construct new learning procedures and architectures that generalize across tasks and continually adapt to novel situations, combining input from multiple modalities/sensory channels. https://www.cs.rit.edu/~ago/nac_lml.html
- Video-Based Search for ASL Dictionaries [Huenerfauth]: Looking up an unfamiliar word in a dictionary is a common activity in childhood or foreign-language education, yet there is no easy method for doing this in ASL. We are investigating a computer-vision-based sign-lookup interface for online ASL video dictionaries. http://cair.rit.edu/projects.html#dictionary
- ASL Corpora for Linguistic Research [Huenerfauth]: We collect video and motion-capture recordings of native sign-language users, in support of linguistic research and machine-learning modeling of aspects of ASL. http://cair.rit.edu/projects.html#linguisticcorpora
- Design of Future Captioning Technology [Huenerfauth]: We investigate the requirements and preferences of Deaf and Hard of Hearing (DHH) users for captioning technology for video programming or for real-time captioning in live meetings, including how the confidence scores for words or phrases identified through automatic speech recognition (ASR) could be revealed to users of such captions. http://cair.rit.edu/projects.html#captionmeetings
- Learning ASL through Real-Time Practice [Huenerfauth]: We are investigating how computer-vision technologies can enable students learning American Sign Language (ASL) to practice their signing independently, through a tool that provides feedback automatically based on a video of their signing. http://cair.rit.edu/projects.html#learn
- Generating ASL Animation from Motion-Capture Data [Huenerfauth]: We investigate techniques for making use of motion-capture data collected from native American Sign Language (ASL) signers to produce linguistically accurate animations of ASL. We produce machine-learning models of various phenomena, e.g. speed and timing during signing, and we use these models to partially automate the generation of animations, to reduce the cost in providing ASL content for Deaf users. http://cair.rit.edu/projects.html#mocap
- Word Importance in Captions for Deaf Users [Huenerfauth]: Automated Speech Recognition (ASR) technology is still imperfect in many settings. To evaluate the usefulness of ASR-based captions for Deaf users, simply counting the number of errors is insufficient, since some words contribute more to the meaning of the text. We study methods for automatically predicting the importance of individual words in a text, and we are using these models to develop alternative evaluation metrics for analyzing ASR accuracy, to predict how useful ASR-based captions would be for users. http://cair.rit.edu/projects.html#importance
- Visual Prediction using Multimodal Data [Kong]: This project develops deep learning-based methods for predicting future human actions and visual frames from large-scale video data. By incorporating extra data such as audio and text, we can create a knowledge base that provides rich prior information and helps achieve accurate and reliable visual prediction.
- Reliable Tensor Data Analytics (theory, algorithms, applications) [Markopoulos]: Data collected across diverse sensing configurations can be naturally organized and processed in the form of tensors, which are the direct generalization of matrices. Tensor entries can be severely faulty, due to errors in sensing, storing, or communication, or even deliberate corruption. In this line of research, we explore new theory, problem formulations, and optimization algorithms for corruption-resistant analysis of tensor data. This project can provide important new foundations for reliable analysis of diverse multi-modal and multi-aspect datasets.
- Data-Driven Adaptive and Robust Subspace Learning with Computer Vision Applications [Markopoulos]: We design Data-Driven Adaptive Learning (DDAL) frameworks for robust subspace tracking, combined with deep learning architectures. Motivating applications that we explore include people detection, object detection, and change detection in new domains that are markedly different from those used for training. Data-starved environments are also of interest.
- Signal Processing, Data Analysis, and Machine Learning for Indoors Radar-based Motion Recognition - Applications in Assisted Living [Markopoulos]: In this project, we apply adaptive machine learning methods for motion recognition based on micro-Doppler signatures, collected from indoors radar measurements. A key application is real-time fall detection towards safer self-dependent living and aging-in-place.
- Gait Recognition from Wearable Sensors with Application in Injury Prediction and Athlete Rehabilitation [Markopoulos]: This project focuses on gait motion classification based on acceleration signals collected from low-cost commercial wearable inertia measurement units. The project tasks span from dataset collection to multi-way data analysis and machine learning. Our final goal is to deliver adaptive and transferable machine learning for fatigue estimation and real-time injury prediction.
- Column2Vec [Mior, Ororbia]: Column2Vec is a distributed representation of database columns based on column metadata. Our distributed representation has several applications. Using known names for groups of columns (i.e., a table name), we train a model to generate an appropriate name for columns in an unnamed table. We demonstrate the viability of our approach using schema information collected from open source applications on GitHub. https://arxiv.org/abs/1903.08621
- Computational Models for Evaluating Interactional Synchrony [Nwogu]: Face-to-face meetings are still a very effective way to capture the attention of participants and drive useful collaborations. To this end, this project involves developing techniques to model the interactions and influence patterns occurring within small groups of people, specifically in educational settings. Because of the limited, but rich multimodal data, we are investigating the use of traditional probabilistic graphical modeling methods for better understanding the data. We are also exploring the use of dynamic neural networks. There will be continual data collection to improve inference from current models.
- Role of Emotional Regulation in the Workplace [Nwogu]: Emotional regulation refers to the ability to respond to a range of emotions in a manner that is controllable and socially tolerable. To this end, we designed an experiment in which participants experienced a range of emotions and were required to respond in pre-specified manners. Their neurological, physiological, and expressive manifestations of emotion were recorded, and we are now using deep learning and other statistical techniques to better understand how changes in sympathetic activations are exhibited across modalities.
- ASR for Resource-Constrained Languages [Ptucha]: Developing automatic speech recognition with little training data, using unique generative models, multiple rounds of transfer learning on acoustic models, and statistically based language models.
- Graph CNN [Ptucha]: While CNNs have transformed the machine learning landscape, they do not work with generic graphs such as those describing protein structures, social media graphs, or point clouds. This research is discovering new convolution and pooling methods which work generically on heterogeneous graphs.
- Common Vector Space [Ptucha]: Research into underlying low dimensional vector spaces of concepts across different modalities. The modalities include still and motion vision, speech, text, and graphs. The idea is that similar concepts lie close in this Common Vector Space (CVS), while dissimilar concepts lie apart. For example, pictures and sentences about elephants map close to one another, but far apart from sentences about flowers.
- Video Sign Language to Text [Ptucha]: Converting video streams of Chinese sign language into Mandarin and English text using various vector representations in a sequence-to-sequence recurrent neural network.
- Integrating Physics-based Models into Data-Driven Inference [Wang]: We develop theoretical and mathematical foundations that support the integration of physics-based modeling and data-driven inference methods to improve individualized assessment of systems. Our interests focus in particular on data-driven identification and adaptation of the errors in physics-based models within the statistical inference process.
- ECG-Imaging [Wang]: ECGi is an emerging imaging technique that allows beat-to-beat computational imaging of electrical activity in the heart using noninvasive surface ECG data in combination with patient-specific anatomical information. Over the last 10 years, we have been pushing the boundary of ECGi techniques through three general thrusts: methodological research to expand the capacity and improve the accuracy of ECGi, technical developments to lower the cost and improve the accessibility of ECGi, and experimental and clinical research to translate ECGi into improved management of a variety of heart diseases.
- Learning Disentangled Representations [Wang]: We develop deep representation learning methods that are able to separate inter-subject variations in clinical data. We work with clinicians to deploy such deep-learning-based software tools to guide clinicians progressively closer to the surgical target in real time during a procedure.
- End-to-End Uncertainty Quantification [Wang]: Mathematical models of a living system are always subject to epistemic uncertainties that represent our limited knowledge about the system. While personalized models have shown increasing potential in medicine, their uncertainties remain the main roadblock to widespread adoption in healthcare. We develop novel active-learning-based approaches to first infer the uncertainty within data-driven model elements, before propagating this uncertainty to model predictions.
- Transferring Simulation Data to Real Data [Wang]: A primary factor in the success of machine learning is the quality of labeled training data. However, in many fields, labeled data can be costly, difficult, or even impossible to acquire. In comparison, computer simulation data can now be generated in much higher abundance at much lower cost. We develop machine learning and deep learning techniques that leverage the knowledge in simulation data and transfer it to tasks on real data.
- Data quality and security evaluation framework for mobile device platforms [Reznik]: The project builds a proof-of-concept design that will be used to develop, verify, and promote a comprehensive methodology for data quality and cybersecurity (DQS) evaluation, focusing on integrating cybersecurity with other diverse metrics reflecting DQS, such as accuracy, reliability, timeliness, and safety, into a single methodological and technological framework. The framework will include generic data structures and algorithms covering DQS evaluation. While the developed evaluation techniques will cover a wide range of data sources, from cloud-based data systems to embedded sensors, the framework implementation will concentrate on ordinary users' own mobile devices, and Android-based smartphones in particular.
- Intelligent Security Systems [Reznik]: The project designs a curriculum, develops course materials, tests and evaluates them in real college classroom settings, and prepares them for dissemination as a college-level course on Intelligent Security Systems. To facilitate interconnections with other courses and inclusion in national cybersecurity curricula, the course is composed of nine separate modules. Five modules cover specialized topics: a review of the modern state of cybersecurity and its current problems and approaches; firewall design; intrusion detection systems; anti-malware methods and tools; and hacking activity and attack recognition and prevention. The other modules provide additional support for course preparation, such as test and exam questions, course project and research assignment specifications, and tool presentation descriptions. The course is innovative and unique in merging knowledge areas as diverse as artificial intelligence and machine learning with computer security systems and applications. It combines theoretical knowledge with practical skills development, instilling in students unique knowledge of a fast-moving domain, better preparing them for practical work ahead, and advancing their research, communication, and presentation skills.
- Testing graphical models [Bezáková]: For given parameters of a graphical model, can we efficiently test whether provided random samples are truly generated according to the distribution of the graphical model? https://arxiv.org/abs/1901.07361
- Computational Social Choice [Hemaspaandra]: Elections are broadly used in both human and computational settings, including a rapidly expanding range of applications in multiagent systems. It has been known since the mid-1970s (the Gibbard-Satterthwaite Theorem) that every reasonable election system has instances on which voters have an incentive to vote strategically. Computational social choice seeks to sidestep that impossibility result by making manipulation not impossible but rather computationally prohibitive.
- Projects in the REU in Computational Sensing for Human-aware AI [Alm, Bailey, Geigel, Huenerfauth, Ptucha, Shinohara]: The REU Site in Computational Sensing for Human-centered Artificial Intelligence recognizes that as the boundaries between HCI and AI blur, and AI grows increasingly agile and pervasive, the next generation of computational scientists must be capable of responsibly and effectively leveraging a spectrum of sensing data from data-generating humans. With this focus, the REU Site will expand its trajectory as an attractor for highly diverse students who will gain experience with sensing hardware and software towards transformative advances in intelligent systems focused on human behaviors and cognitive processes. Enabling diverse stakeholders early in their careers to discover how to collect, fuse, make inference with, and visualize multimodal human data can transform how humans and machines engage and collaborate. The research in the REU Site will address two limitations in AI: first, that underserved populations are at risk of being marginalized with the present focus on big data AI and, second, that AI trainees often lack experience with human data collection and critical thinking about human-elicited datasets. The REU Site will stimulate novel, safe ways for systems to team up with people to address society's vexing problems while remaining fair, attuned to bias, and representative of the diverse fabric of the general population. https://www.cs.rit.edu/~reu/projects.html
- Automated Analysis of Pragmatic Language Development in ASD [Alm, with external teams at BC, URMC]: Individuals with autism often experience challenges with language pragmatics or using language in a socially or contextually appropriate way. In this project, we plan to collect spontaneous language data from adults with autism, and to use computational methods to automatically analyze that language in order to learn more about the pragmatic language features that are associated with autism. We will compare the language of adults with Autism Spectrum Disorder (ASD) with the language of both young children and adolescents with ASD, which will help us determine whether the pragmatic deficits that characterize autism improve with age and development. The information about pragmatic ability gained from this work will have important implications for clinicians developing interventions for improving communication skills and social outcomes for individuals with autism.
- Affective Video Recommendation [Bailey, Alm]: Within a sensing and learning framework, this work studies multimodal affective reactions of observers towards automatically recommending content to viewers.
- Expanding the Use of Natural Language Processing AI in Linguistics Classrooms [Alm]: A growing trend is to enable general users to take advantage of and apply AI technology to aid reasoning and decision-making, without the requirement of in-depth knowledge of under-the-hood technical details. This effort extends a web-based system that makes a range of machine-intelligent computational linguistics methods accessible to users with language data and visualizations, focusing on educational use cases. https://nlp.rit.edu/linguine/
- MathSeer [Zanibbi]: We are creating a system to make finding mathematical information easier. https://www.cs.rit.edu/~dprl/mathseer
- Utilizing synergy between human and computer information processing for complex visual information organization and use [Qi Yu]: https://pht180.rit.edu/mining/research/
- A Multimodal Dynamic Bayesian Learning Framework for Complex Decision-making [Qi Yu]: https://pht180.rit.edu/mining/research/
- CAPTURE [Yang]: CAPTURE forecasts occurrences of future cyber attacks using a large number of seemingly irrelevant signals from publicly accessible domains. The project investigates machine learning techniques that deal with incomplete signals, insignificant lagged features, and imbalanced and non-stationary data.
- ASSERT [Yang]: ASSERT integrates Bayesian learning, clustering, and information theory to provide timely separation of intrusion alerts and empirically generated attack models to enhance cyber situation awareness.
- SynGAN [Yang]: SynGAN learns from a moderate amount of cyber intrusion logs to generate synthetic data that preserves critical and essential parts of the joint feature distribution.
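The Label Distribution Learning project described above can be made concrete with a minimal sketch: instead of collapsing multiple annotations to a majority label, each item keeps a probability distribution over labels. The function and label set here are illustrative assumptions, not the project's code:

```python
from collections import Counter

def label_distribution(annotations, labels):
    """Turn multiple annotators' labels for one item into a probability
    distribution over the full label set, rather than a single
    majority-vote 'ground truth' label.
    """
    counts = Counter(annotations)
    total = len(annotations)
    return {lab: counts[lab] / total for lab in labels}

# Five annotators disagree; the distribution preserves that disagreement,
# whereas majority voting would report only "neutral".
dist = label_distribution(
    ["neutral", "neutral", "positive", "negative", "neutral"],
    ["negative", "neutral", "positive"],
)  # → {"negative": 0.2, "neutral": 0.6, "positive": 0.2}
```

A model trained against such distributions (e.g., with a cross-entropy loss over the full distribution) can then express annotator diversity in its predictions.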
Research Labs
- Center for Accessibility and Inclusion Research (CAIR) [Huenerfauth]: http://cair.rit.edu/index.html#main
- Computational Biomedicine Lab (CBL) [Linwei Wang]: https://people.rit.edu/lxwast
- Computational Linguistics and Speech Processing Lab (CLaSP lab) [Alm]: https://www.rit.edu/clasp/
- Computer Graphics and Applied Perception Lab [Nwogu]: https://www.cs.rit.edu/~graphics/lab/
- Computer Vision and Learning Lab [Yu Kong]: https://people.rit.edu/yukics
- Robotic Collaboration and Autonomy Lab [Yangming Li]: https://rocalrobot.wordpress.com/
- Document and Pattern Recognition Lab [Zanibbi]: https://www.cs.rit.edu/~dprl
- Lab for Social Intelligence [Homan]: https://www.cs.rit.edu/~cmh/isi
- Linguistic and Assistive Technologies Laboratory (LATLab) [Huenerfauth]: http://latlab.ist.rit.edu
- Machine Intelligence Laboratory [Ptucha]: https://www.rit.edu/mil
- Machine Learning and Data Intensive Computing (Mining Lab) [Qi Yu]: https://pht180.rit.edu/mining/
- Multiagent Systems Lab [Hosseini]: https://www.cs.rit.edu/~hho/mas
- Signal Processing for Data Analysis and Learning Lab [Markopoulos]: https://sites.google.com/view/spanlab/
- Neural Adaptive Computing (NAC) Lab [Ororbia]: https://www.cs.rit.edu/~ago/
- Network and Information Processing (NetIP) Lab [Yang and Kwasinski]
- Human-Centric Multi-Modal Modeling Lab: http://hccl.gccis.rit.edu
- Lab of Use-inspired Computational Intelligence (LUCI): https://ruililuci.com