Vision-based Cooperative Infrastructure Perception

Location

Golisano Hall (GOL/070) - Atrium 1940

This project focuses on enhancing real-time vehicle tracking at intersections by integrating multiple camera views into a unified spatial reference. The system leverages diverse methodologies to project the different perspectives onto a common top-down view and robustly track vehicles as they move through the intersection.

Motivation:

Improving situational awareness in urban traffic environments requires reliable vehicle tracking across multiple viewpoints. This project employs a cooperative infrastructure perception strategy that aligns the various camera feeds into a shared spatial domain. By combining advanced image processing techniques with robust tracking and data association methods, the project addresses challenges such as occlusion, perspective variation, and camera calibration.

Project Components:

The key components of the project's pipeline are:

* Calibrating our system: Humans, as visual learners, can easily recognize that a location (say, x) seen in all three cameras is the same spot in the intersection. But how can a computer do that? Calibration is the step in which we let the system learn this correspondence through explicit guidance.
* Object detection: The system must be able to detect every car in the intersection. We use state-of-the-art machine learning techniques that let the system filter out irrelevant objects and focus its attention on the vehicles in the intersection. This will be demonstrated in a video.
* Object tracking: It is imperative that the system track the path every vehicle takes through the intersection. This has many valuable applications, such as accident prevention and emergency response assistance. With the help of a video, we visually plot the path taken by each vehicle that enters and exits the intersection.
* Fusion: Because multiple cameras observe the same intersection, each camera initially treats every vehicle it sees as a separate object. Even though it is the same car, the system (seeing it through three different cameras) would treat it as three different vehicles. We enable the system to understand that a car, even though visible from three different cameras, is one and the same.

Conclusion:

This project paves the way toward more intelligent traffic management systems. By unifying multiple camera perspectives into a cohesive spatial framework and integrating advanced tracking and data association techniques, the system is set to improve real-time vehicle tracking at intersections. The current progress highlights its potential to enhance situational awareness and enable further advances in cooperative infrastructure perception.
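The calibration step described above is often realized as a planar homography: given a handful of reference points marked in both a camera image and the top-down map, a 3x3 matrix is fit that warps one view onto the other. The project page does not state how its calibration is implemented, so the following is only a minimal sketch of the standard direct linear transform (DLT) approach; the function names are our own.

```python
import numpy as np

def fit_homography(src, dst):
    """Fit a 3x3 homography H mapping src points to dst points (DLT).

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    Each correspondence (x, y) -> (u, v) contributes two linear
    constraints on the 9 entries of H; the least-squares solution
    is the right singular vector of the stacked constraint matrix.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize so the bottom-right entry is 1

def project(H, pt):
    """Apply homography H to a 2-D point (homogeneous divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

Once an H is fit per camera, any pixel on the ground plane can be mapped into the shared top-down frame, which is what allows the same location x to be recognized across all three views.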
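The object-tracking component can be illustrated with a simple nearest-neighbor centroid tracker: each new detection is matched to the closest existing track within a distance gate, and unmatched detections start new tracks. This is only an illustrative sketch (the project does not specify its tracker, and production systems typically add motion prediction, e.g. a Kalman filter); the class and parameter names are hypothetical.

```python
import math
from itertools import count

class CentroidTracker:
    """Assign persistent IDs to per-frame (x, y) detections by
    greedy nearest-neighbor matching within a distance gate."""

    def __init__(self, gate=3.0):
        self.gate = gate          # max match distance (same units as input)
        self.tracks = {}          # track id -> last known position
        self.paths = {}           # track id -> full list of positions
        self._ids = count()

    def update(self, detections):
        """Consume one frame of detections; return current tracks."""
        unmatched = dict(self.tracks)   # tracks not yet claimed this frame
        for (x, y) in detections:
            best, best_d = None, self.gate
            for tid, (tx, ty) in unmatched.items():
                d = math.hypot(x - tx, y - ty)
                if d < best_d:
                    best, best_d = tid, d
            if best is None:                # no track nearby: start a new one
                best = next(self._ids)
                self.paths[best] = []
            else:                           # claim the matched track
                del unmatched[best]
            self.tracks[best] = (x, y)
            self.paths[best].append((x, y))
        return self.tracks
```

The accumulated `paths` dictionary is exactly what would be plotted to visualize each vehicle's route through the intersection.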
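The fusion idea above can be sketched once all cameras report vehicle positions in the shared top-down frame: detections from different cameras that fall within a small distance of each other are treated as the same physical car and merged. This is a simplified greedy version for illustration only; the project's actual data-association method is not specified (a full implementation would more likely use optimal assignment, e.g. the Hungarian algorithm). The function name and `gate` threshold are assumptions.

```python
import math

def fuse_detections(cam_tracks, gate=2.0):
    """Merge per-camera top-down (x, y) positions into fused vehicles.

    cam_tracks: list with one list of (x, y) positions per camera.
    Positions within `gate` of an existing fused estimate are averaged
    into it; everything else becomes a new fused vehicle.
    """
    fused = []                           # each entry: [sum_x, sum_y, count]
    for tracks in cam_tracks:
        for (x, y) in tracks:
            best, best_d = None, gate
            for f in fused:
                fx, fy = f[0] / f[2], f[1] / f[2]   # current mean position
                d = math.hypot(x - fx, y - fy)
                if d < best_d:
                    best, best_d = f, d
            if best is None:
                fused.append([x, y, 1])              # new vehicle
            else:
                best[0] += x; best[1] += y; best[2] += 1   # same vehicle
    return [(f[0] / f[2], f[1] / f[2]) for f in fused]
```

With this step, a car seen by three cameras yields one fused position instead of three separate tracks.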


Exhibitor
Ananth Kamath
Dharmik Dineshkumar Patel
Hrishit Kotadia

Advisor(s)
Fawad Ahmad

Organization
We attempt an alternate implementation of an existing system.


Thank you to all of our sponsors!