Research in AI:

College of Science

At the forefront of scientific research, our faculty are dedicated to advancing theoretical understanding and practical applications of AI technologies.

RIT's College of Science faculty are actively engaged in researching AI applications, covering a diverse range of areas including computational neuroscience, machine learning, computer vision, human-aware AI, cognitive science, remote sensing, image processing, eye tracking, hyperspectral imaging, space optics, sensor protection, and laser photonics.

Faculty Research in AI

Bartosz Krawczyk

Assistant Professor
585-475-2916

Bartosz Krawczyk heads the Machine Learning and Computer Vision (MLVision) Lab. His current research interests include continual learning, data streams, concept drift, class imbalance, ensemble learning, and explainable AI (XAI). Dr. Krawczyk’s team is working on novel ML algorithms designed for holistic continual learning from evolving data streams. These algorithms address robustness to catastrophic forgetting and the accumulation of knowledge over time, while ensuring adaptability to concept drift and data shift through proactive memory revisitation and updating of relevant past information.

Another vital part of Dr. Krawczyk’s research portfolio lies in the critical area of data imbalance and fairness, where he and his team are at the forefront of devising strategies to mitigate bias inherent in both data and algorithms. This research holds profound implications across numerous domains, particularly in contexts involving underrepresented groups and sensitive information, where biased decision-making can have significant ramifications. Dr. Krawczyk co-authored “Learning from Imbalanced Data Sets” (Springer, 2018), a seminal monograph in this field.

The MLVision team also explores methodologies for handling sparse access to data, a common challenge in real-world scenarios with limited ground truth or training examples. Dr. Krawczyk focuses on developing active and semi-supervised learning algorithms, as well as meta-models for few-, one-, and zero-shot learning, to accommodate these constraints effectively.

Beyond core ML/CV research, Dr. Krawczyk’s team applies their algorithms to practical challenges, particularly in medical image analysis and remote sensing. Through this interdisciplinary approach, they seek to translate theoretical innovations into tangible solutions with real-world impact.
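
To make the streaming setting concrete, the sketch below runs a minimal test-then-train (prequential) loop on a synthetic, imbalanced stream with an abrupt concept drift, using an incremental scikit-learn classifier and a crude accuracy-window drift check. The stream, weights, and thresholds are invented for illustration and do not represent the MLVision Lab's algorithms.

```python
"""Minimal test-then-train (prequential) loop on a synthetic, imbalanced
data stream with an abrupt concept drift. Illustrative only -- the stream,
weights, and thresholds are made up and are not the MLVision Lab's methods."""
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(loss="log_loss")                 # incremental linear model
classes = np.array([0, 1])
minority_weight = 10.0                               # crude class-imbalance correction
window, recent = 200, []

for t in range(5000):
    y = np.array([1 if rng.random() < 0.1 else 0])   # ~10% minority class
    shift = 1.0 if t < 2500 else -1.0                # abrupt concept drift halfway through
    x = rng.normal(size=(1, 2)) + shift * y          # class-1 cluster moves after the drift

    if t > 0:                                        # test-then-train: predict first
        recent.append(int(clf.predict(x)[0] == y[0]))
        recent = recent[-window:]
        if len(recent) == window and np.mean(recent) < 0.6:
            print(f"t={t}: windowed accuracy dropped, possible concept drift")
            recent = []                              # reset the monitoring window

    sw = np.array([minority_weight if y[0] == 1 else 1.0])
    clf.partial_fit(x, y, classes=classes, sample_weight=sw)
```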

Areas of Specialization: Machine Learning, Data Streams, Continual Learning, Ensemble Learning, Class Imbalance, XAI

Dimah Dera

Assistant Professor
585-475-2454

Dimah Dera specializes in robust and trustworthy modern machine learning (ML) solutions for real-world applications, including healthcare, cybersecurity, remote sensing, and surveillance systems. In the rapidly evolving landscape of artificial intelligence (AI) and autonomous systems, the integration of ML techniques has enabled unprecedented advances across many domains, and the robustness, safety, and reliability of AI systems have emerged as pivotal requirements. Her research develops innovative techniques to meet these requirements by integrating Bayesian theory and statistical signal processing foundations into modern ML frameworks. This work highlights the intricate connections between learning Bayesian uncertainty in ML models and their robustness and safety awareness under dynamically changing environments and system failures, advancing theoretical and algorithmic knowledge for the safe and reliable deployment of AI models in high-risk real-world applications.
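
As a rough illustration of uncertainty-aware prediction, the sketch below uses Monte Carlo dropout, a common stand-in for Bayesian deep learning, to obtain a predictive mean and a spread across stochastic forward passes. It is not the specific Bayesian methodology developed in Dr. Dera's group, and all layer sizes are arbitrary.

```python
"""Generic Monte Carlo dropout sketch for predictive uncertainty.
A common stand-in for Bayesian deep learning, not Dr. Dera's framework;
the network and sizes are arbitrary placeholders."""
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 3),                    # e.g., a 3-class classifier head
)

def predict_with_uncertainty(x, n_samples=50):
    model.train()                        # keep dropout active at test time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )                                # (n_samples, batch, classes)
    mean = probs.mean(dim=0)             # predictive distribution
    std = probs.std(dim=0)               # spread across passes ~ uncertainty proxy
    return mean, std

x = torch.randn(4, 16)                   # dummy batch of 4 inputs
mean, std = predict_with_uncertainty(x)
print(mean.argmax(dim=-1), std.max(dim=-1).values)
```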

Areas of Specialization: Machine Learning, Autonomous Systems, Bayesian Theory, Robustness, Safety and Reliability of AI Systems

Christopher Kanan co-founded RIT’s Center for Human-aware AI (CHAI) and served as its Associate Director for four years. He heads a deep learning lab at the University of Rochester that works toward artificial general intelligence, focusing on continual lifelong learning in neural networks, overcoming dataset bias, and multi-modal large language models, with many of the methods inspired by cognitive science. He has also worked on applied deep learning projects across medical computer vision, invasive species monitoring, and machine learning for scientific discovery.

Areas of Specialization: Human-aware AI, Machine Learning, Continual Deep Learning, Dataset Bias, Cognitive Science

Carl Salvaggio

Professor
585-475-6380

Carl Salvaggio heads the Digital Imaging and Remote Sensing Laboratory. His research leverages machine learning to solve remote sensing problems such as volume estimation of targets that are devoid of texture (e.g., condensed water vapor plumes) using real and synthetically generated data, investigation of application specificity for pre-training data, automated calibration of small unmanned aircraft data, semantic labeling of 3-dimensional point clouds, rooftop damage assessment, image quality and modulation transfer function (MTF) estimation using deep learning, identification of seismically vulnerable unreinforced masonry buildings from street-view imagery and tax records, theoretical research in batch-normalized recurrent highway networks, and general semantic segmentation of remote sensing data. Salvaggio and RIT are co-holders of a U.S. patent, with Kodak Alaris, on “System and method for batch-normalized recurrent highway networks” (US 2018/0322368 A1).
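
For readers unfamiliar with per-pixel semantic labeling, the sketch below shows the basic shape of the task: a small fully convolutional network that turns a multispectral tile into a class map. The band count, class count, and architecture are placeholders, not the DIRS lab's models.

```python
"""Minimal fully convolutional sketch of per-pixel semantic labeling for
multispectral imagery. Band count, classes, and architecture are arbitrary
placeholders for illustration only."""
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_bands=6, n_classes=4):     # e.g., roof / road / vegetation / water
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, n_classes, 1),              # per-pixel class logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

tile = torch.randn(1, 6, 128, 128)        # one 6-band image tile
logits = TinySegNet()(tile)               # (1, 4, 128, 128)
labels = logits.argmax(dim=1)             # semantic label map
print(labels.shape)
```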

Areas of Specialization: Remote Sensing, Image Processing, Computer Vision

Gabriel Diaz

Associate Professor
585-475-6215

Dr. Diaz is leveraging AI to improve the mobile eye-tracking technology that may make its way into your next VR or AR headset. The team’s most recent work is focused on an exciting new type of sensor: the event sensor. Unlike conventional “frame-based” cameras that form images only after sampling all pixels within the duration of the global or rolling shutter, a pixel in an event camera can be triggered independently of its neighbors when the log intensity change in sampled scene luminance exceeds a threshold. This unique feature provides several advantages over conventional sensors, including a high dynamic range (~120 dB), high temporal rate (~10,000 Hz), low latency (<1 ms), and low power draw (~10 mW). Despite their potential, the asynchronous, temporally dense, and spatially sparse nature of events poses challenges to event processing and interpretation. This is because most advanced image processing and computer vision algorithms are designed to work with conventional image formats and not with temporally dense streams of asynchronous pixel events (i.e., the event stream). To solve this problem, the team has turned to spiking neural networks, which naturally complement the spatially sparse, temporally dense, and asynchronous nature of the event stream and have demonstrated superior computational efficiency, lower energy consumption, and lower latency than traditional neural networks.
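
The thresholded log-intensity behavior described above can be mimicked in a few lines: the toy simulator below emits an event whenever a pixel's log intensity moves beyond a threshold from its last reference value, independently of its neighbors. It is purely illustrative and ignores the asynchronous timing of real event sensors and the lab's actual pipelines.

```python
"""Toy event-camera simulator: a pixel emits an event whenever its log
intensity changes by more than a threshold since its last event. Purely
illustrative; real sensors are asynchronous and far more involved."""
import numpy as np

def frames_to_events(frames, threshold=0.2, eps=1e-3):
    """frames: (T, H, W) array of intensities; returns (t, y, x, polarity) tuples."""
    log_ref = np.log(frames[0] + eps)             # per-pixel reference level
    events = []
    for t in range(1, len(frames)):
        log_now = np.log(frames[t] + eps)
        delta = log_now - log_ref
        fired = np.abs(delta) >= threshold        # pixels crossing the threshold
        ys, xs = np.nonzero(fired)
        for y, x in zip(ys, xs):
            events.append((t, y, x, int(np.sign(delta[y, x]))))
        log_ref[fired] = log_now[fired]           # each fired pixel resets independently
    return events

video = np.random.rand(10, 32, 32)                # stand-in for sampled scene luminance
print(len(frames_to_events(video)), "events")
```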

Areas of Specialization: Eye Tracking, Machine Learning, VR/AR, Neural Networks

Emmett Ientilucci

Associate Professor
585-475-7778

Dr. Ientilucci is the Gerald W. Harris Endowed Professor and has been active in the field of remote sensing since 2000, specifically in the area of spectral image analysis since 2004. His research interests include AI/ML/CV techniques for remote sensing, spectral image processing and exploitation, hyperspectral target detection, shadow detection and mitigation, radiative transfer, radiometric hardware and calibration, and atmospheric compensation. Dr. Ientilucci’s team collects, prepares, and publishes open-access imaging benchmark datasets, recently concentrating on industrial and mining applications. One of his current research projects involves real-time video segmentation using deep learning architectures and multi-view data fusion to improve the robustness of segmentation algorithms.
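
As a minimal example of hyperspectral target detection, the sketch below applies a spectral angle mapper: pixels whose spectra form a small angle with a reference target spectrum are flagged. The cube, target spectrum, and threshold are synthetic stand-ins, not Dr. Ientilucci's detection pipeline.

```python
"""Minimal spectral angle mapper (SAM) sketch for hyperspectral target
detection. Band count, data, and threshold are arbitrary stand-ins."""
import numpy as np

def spectral_angle_map(cube, target):
    """cube: (H, W, B) reflectance cube; target: (B,) reference spectrum."""
    pixels = cube.reshape(-1, cube.shape[-1])
    cos = (pixels @ target) / (
        np.linalg.norm(pixels, axis=1) * np.linalg.norm(target) + 1e-12
    )
    angles = np.arccos(np.clip(cos, -1.0, 1.0))      # small angle = close spectral match
    return angles.reshape(cube.shape[:2])

cube = np.random.rand(64, 64, 200)                   # synthetic 200-band cube
target = np.random.rand(200)                         # synthetic target spectrum
detections = spectral_angle_map(cube, target) < 0.1  # boolean detection mask
print(detections.sum(), "candidate target pixels")
```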

Areas of Specialization: Remote Sensing, Hyperspectral Imaging, Spectral Image Analysis, Radiometric and Spectral Calibration, Atmospheric Compensation

The Swartzlander Group makes use of their dedicated computer system, composed of eight A100 GPU boards, to conduct advanced imaging research. They have made breakthrough advances in solving a problem that has plagued imaging systems since the advent of the laser 65 years ago: how to create high-fidelity images in the face of intense laser irradiation. Lasers produce undesirable effects ranging from dazzle and saturation to damage, depending on beam and sensor characteristics. Unlike traditional approaches that introduce nonlinear optical materials to absorb or scatter light, their innovation makes use of a transparent linear phase mask, followed by the image reconstruction power of an end-to-end, physics-based trained network. Another recent project involves training a neural network to convert images distorted by shift-variant (nonlinear) Seidel aberrations into unaberrated images.
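
The end-to-end, physics-based training idea can be sketched as follows: a fixed differentiable forward model (here, convolution with a placeholder point-spread function standing in for the phase mask's effect) produces the measured image, and a small network is trained to invert it. The optics, noise model, and architecture below are assumptions for illustration, not the group's actual system.

```python
"""Schematic end-to-end training through a fixed, differentiable optical
forward model followed by a reconstruction network. The PSF, noise, and
architecture are placeholders, not the Swartzlander Group's optics."""
import torch
import torch.nn as nn
import torch.nn.functional as F

psf = torch.rand(1, 1, 7, 7)
psf = psf / psf.sum()                          # stand-in PSF induced by a phase mask

recon_net = nn.Sequential(                     # small reconstruction network
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(recon_net.parameters(), lr=1e-3)

for step in range(100):                        # toy training loop
    scene = torch.rand(8, 1, 32, 32)           # ground-truth scenes
    measured = F.conv2d(scene, psf, padding=3) # physics-based image formation
    measured = measured + 0.01 * torch.randn_like(measured)  # sensor noise
    restored = recon_net(measured)
    loss = F.mse_loss(restored, scene)         # end-to-end reconstruction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```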

Areas of Specialization: Advanced Imaging, Space Optics, Sensor Protection, Geometric Phase Optics

Jie Qiao

Associate Professor
585-475-5629

The Qiao group is developing a new optical differentiation wavefront sensor (ODWS) based on binary pixelated linear and nonlinear amplitude filtering in the far field. They trained and tested a convolutional neural network (CNN) that reconstructs a spatial phase map from nonlinear-filter-based ODWS data, for which no analytic reconstruction algorithm is available. The network shows accurate retrieval across different wavefront magnitudes and on randomly shaped wavefronts. This work paves the way for simultaneously sensitive, high-dynamic-range, and high-resolution wavefront sensing. Qiao’s group is also investigating AI-based femtosecond laser-materials interaction for fabricating optics and photonic devices.
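
A hedged sketch of the learning step is shown below: a small CNN maps ODWS measurement images (assumed here to be a two-channel format) to a reconstructed phase map. The input format, sizes, and architecture are illustrative assumptions, not the group's trained network.

```python
"""Sketch of a CNN regressing a wavefront phase map from ODWS measurements.
The two-channel input format and sizes are assumptions for illustration."""
import torch
import torch.nn as nn

class WavefrontCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),   # two filtered ODWS images
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),              # reconstructed phase map
        )

    def forward(self, x):
        return self.net(x)

model = WavefrontCNN()
odws_data = torch.randn(4, 2, 64, 64)     # batch of simulated measurements
phase_maps = model(odws_data)             # (4, 1, 64, 64) phase estimates
print(phase_maps.shape)                   # trained against known wavefronts in practice
```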

Areas of Specialization: Ultrafast Laser Photonics; 3D Writing of Integrated Photonic Devices; Ultrafast-Laser-Based Nanostructuring, Figuring, Polishing, and Bonding; Pump-Probe Imaging; Wavefront Sensing and Optical Beam Control

Anthony Vodacek

Professor
585-475-7816

Dr. Vodacek and collaborators are applying machine learning (ML) to imaging systems and sensor networks designed to learn more about both human-dominated and natural environments. One case is remote sensing for vehicle tracking, where ML methods extract information from the scene to drive process models, while feedback from the models is used to dynamically adjust the ML methods for improved accuracy. Another example is applying AI to the complex process of extracting information from the acoustic environment for biodiversity assessment via Internet of Things (IoT) networks of microphones. Acoustic data are particularly challenging for ML because bioacoustic signals can overlap, offer few features to analyze, are frequently unknown, and may be strongly interrelated. Further, many potential applications in biodiversity or agricultural monitoring are best accomplished with processing “at the edge,” where low-compute methods, such as TinyML, are required.
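
To illustrate what low-compute, at-the-edge acoustic processing can look like, the sketch below reduces an audio clip to a handful of spectral-band energies and scores it with a tiny linear model. The band layout, weights, and decision rule are invented and do not represent a deployed TinyML pipeline.

```python
"""Toy sketch of lightweight 'at the edge' acoustic scoring: summarize a clip
into a few spectral-band energies and apply a tiny linear model. Band edges,
clip length, and weights are placeholders, not a real TinyML deployment."""
import numpy as np

def band_energies(clip, n_bands=8):
    spectrum = np.abs(np.fft.rfft(clip)) ** 2          # power spectrum of the clip
    bands = np.array_split(spectrum, n_bands)          # coarse frequency bands
    return np.log1p([b.sum() for b in bands])          # small, fixed-size feature vector

rng = np.random.default_rng(0)
clip = rng.normal(size=16_000)                         # one second of stand-in "audio"
features = band_energies(clip)

weights = rng.normal(size=features.shape)              # stand-in for trained weights
score = features @ weights                             # tiny linear classifier
print("species present" if score > 0 else "no detection")
```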

Areas of Specialization: Remote Sensing, Aquatic Optics, Dynamic Data Driven Application Systems, Machine Learning for IoT Sensor Systems

Amir Hassanzadeh, Aaron Gerace, and Eon Rehman collaborated to harness boosting and ensemble learning algorithms for modeling total atmospheric water content. Leveraging Landsat 9 thermal band (TIRS) data alongside MODIS satellite data as ground truth, they developed a procedure to support the implementation of the Split Window algorithm for Landsat surface temperature products. This approach enables Landsat 9 to predict atmospheric water independently of other satellite sources such as MODIS. The algorithm is currently being implemented in the USGS workflow and will soon be available to the public.
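
The general recipe, regressing water vapor from TIRS observations with a boosted ensemble trained against MODIS-derived targets, can be sketched as follows with synthetic data. The features, their relationship to the target, and the model settings are placeholders, not the team's actual procedure or datasets.

```python
"""Hedged sketch of the general idea: fit a boosted ensemble mapping Landsat 9
TIRS observations to total column water vapor using MODIS-like training targets.
All data and the feature/target relationship here are synthetic stand-ins."""
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
tirs = rng.uniform(260, 310, size=(n, 2))        # two TIRS brightness temperatures (K)
split_diff = tirs[:, 0] - tirs[:, 1]             # split-window-like band difference
water_vapor = 0.8 * split_diff + 0.02 * (tirs[:, 0] - 280) + rng.normal(0, 0.1, n)

X = np.column_stack([tirs, split_diff])
X_tr, X_te, y_tr, y_te = train_test_split(X, water_vapor, random_state=0)

model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
model.fit(X_tr, y_tr)                            # trained against "MODIS-like" targets
print("R^2 on held-out data:", model.score(X_te, y_te))
```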

Areas of Specialization: Remote Sensing, Machine Learning, AI, Hyperspectral and Thermal Imaging, LiDAR, UAS

Jan van Aardt

Director of the Carlson Center for Imaging Science
585-475-4229

Dr. Jan van Aardt implements AI approaches for the assessment of natural resources, specifically crops and forests. In an effort spearheaded by Amir Hassanzadeh, the group investigated voxel content estimation of forest scenes using AI techniques. Evaluating Kernel Point Convolutions (KPConv) and other advanced algorithms, they addressed challenges associated with analyzing forest structure from light detection and ranging (LiDAR) data. Leveraging a simulated Harvard Forest scene generated with DIRSIG software, they estimated normalized areas for various targets, including Bark, Leaf, Soil, and Objects. This research provides a robust foundation for applying transfer learning to real-world LiDAR datasets, particularly in scenarios involving limited samples and imbalanced learning challenges. Other projects include crop disease assessment and yield modeling, carried out with PhD students in his research group.
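
As a simple illustration of voxel content estimation, the sketch below bins labeled LiDAR returns into a coarse 3D grid and reports per-voxel class fractions. The points, labels, and voxel size are invented; the group's work relies on learned models such as KPConv rather than simple counting.

```python
"""Toy voxelization sketch: bin labeled LiDAR returns into a 3D grid and tally
per-voxel class content (e.g., bark vs. leaf vs. soil). Points, labels, and
voxel size are invented for illustration."""
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(0, 10, size=(10_000, 3))        # x, y, z in meters
labels = rng.integers(0, 3, size=10_000)             # 0 = bark, 1 = leaf, 2 = soil
voxel_size, n_classes = 1.0, 3

idx = np.floor(points / voxel_size).astype(int)      # voxel index for each return
grid_shape = idx.max(axis=0) + 1
counts = np.zeros((*grid_shape, n_classes), dtype=int)
np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2], labels), 1)

fractions = counts / np.clip(counts.sum(axis=-1, keepdims=True), 1, None)
print("leaf fraction in voxel (0, 0, 0):", fractions[0, 0, 0, 1])
```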

Areas of Specialization: Forest structural assessment, species classification, biodiversity assessment, agricultural (crop) yield modeling, disease detection, and harvest scheduling using remote sensing (unmanned aerial system, UAS; airborne; and spaceborne systems)

Mekides Assefa Abebe

Visiting Assistant Professor
585-475-7189

The application of machine learning in color imaging and color science has increased due to recent rapid advancements in AI technologies. In high dynamic range (HDR) imaging, machine learning techniques have been utilized for various applications, including image and video compression, denoising, tone reproduction, and reconstruction. Machine learning algorithms can reconstruct HDR images from a series of differently exposed low dynamic range (LDR) images; deep learning models, such as convolutional neural networks (CNNs), can effectively map LDR images to corresponding HDR representations. Additionally, machine learning methods have been instrumental in developing advanced tone compression operators, ensuring the preservation of detail and visual quality when HDR images are converted to standard dynamic range (SDR) images.

Our current study focuses on generating HDR images from single SDR images captured using legacy imaging devices. With the growing adoption of HDR display technologies, we recognize the need to enhance visual quality, dynamic range, and image detail for improved user experience and computational performance. To achieve this, we primarily use generative adversarial networks (GANs) and Transformer-based architectures to expand dynamic range, reconstruct over-exposed regions, and increase the resolution of SDR images.

Furthermore, we believe that HDR imaging, along with color science, can enhance the performance of machine learning techniques in other computational imaging and computer vision fields. A better understanding and representation of real-world scenes can improve machine learning tasks such as scene classification, object detection, and semantic segmentation, and leveraging the richer information in HDR images can lead to enhanced accuracy in scene understanding and recognition. Our research therefore also investigates the unique color and dynamic range characteristics of HDR images and sequences and their impact on other computer vision applications.
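
For contrast with the learned approaches above, the sketch below shows a naive, fixed-curve inverse tone mapping from an 8-bit SDR image to a wider luminance range. It is only a baseline illustration of the SDR-to-HDR problem, with an arbitrary gamma and peak luminance; the research described here uses learned GAN and Transformer models instead of a fixed curve.

```python
"""Very small inverse-tone-mapping sketch: expand an 8-bit SDR image to a wider
dynamic range with a fixed gamma curve. A baseline illustration only; the gamma
and peak luminance are arbitrary assumptions."""
import numpy as np

def naive_sdr_to_hdr(sdr_uint8, gamma=2.2, peak_nits=1000.0):
    sdr = sdr_uint8.astype(np.float32) / 255.0        # normalize to [0, 1]
    linear = sdr ** gamma                             # undo the display encoding
    return linear * peak_nits                         # scale to an assumed HDR peak luminance

sdr = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
hdr = naive_sdr_to_hdr(sdr)
print(hdr.min(), hdr.max())                           # expanded luminance range
```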

Areas of AI Specialization: Machine Learning

Dr. Christopher Collison’s research is centered on machine learning (ML), explainable AI, and inverse design for enhancing the understanding and application of photoactive organic molecules, for example in solar cells.

Areas of AI Specialization: Machine Learning, Inverse Design

Emiliano Brini

Assistant Professor
585-475-2287

Dr. Emiliano Brini’s research integrates Molecular Dynamics, MELD (Modeling Employing Limited Data), and Artificial Intelligence to address the computational challenges of crystal structure prediction (CSP). Dr. Brini and Dr. Collison are working together to explore new methods for balancing molecular flexibility and rigidity, which is crucial for accurately and efficiently predicting crystal structures of organic compounds.

Areas of AI Specialization: Crystal Structure Prediction