Selected Publications

Recognizing abnormal events such as traffic violations and accidents in natural driving scenes is essential for successful autonomous and advanced driver assistance systems. However, most work on video anomaly detection suffers from one of two crucial drawbacks. First, it assumes cameras are fixed and videos have a static background, which is reasonable for surveillance applications but not for vehicle-mounted cameras. Second, it poses the problem as one-class classification, which relies on arduous human annotation and only recognizes categories of anomalies that have been explicitly trained. In this paper, we propose an unsupervised approach for traffic accident detection in first-person videos. Our major novelty is to detect anomalies by predicting the future locations of traffic participants and then monitoring the accuracy and consistency of these predictions using three different strategies. To evaluate our approach, we introduce a new dataset of diverse traffic accidents, AnAn Accident Detection (A3D), and also evaluate on another publicly available dataset. Experimental results show that our approach outperforms the state-of-the-art.
IROS2019
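
As a rough illustration of the prediction-monitoring idea described above, the sketch below shows two possible per-object anomaly scores computed from predicted bounding boxes: one from the spread (consistency) of overlapping predictions made at different past frames, and one from the mismatch (accuracy) between a predicted and an observed box. The function and argument names are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def consistency_anomaly_score(predicted_boxes):
    """Anomaly score from the spread of overlapping future-location predictions.

    predicted_boxes: (K, 4) array holding K predictions of the same object's
    box at the same future frame, each made from a different past time step,
    in (cx, cy, w, h) format normalized to [0, 1]. A larger spread (less
    consistent predictions) suggests more anomalous motion.
    """
    predicted_boxes = np.asarray(predicted_boxes, dtype=float)
    # Standard deviation of each box parameter across the K predictions,
    # averaged into a single scalar score.
    return float(predicted_boxes.std(axis=0).mean())

def accuracy_anomaly_score(predicted_box, observed_box):
    """Anomaly score as 1 - IoU between a predicted and an observed box,
    both in (x1, y1, x2, y2) format."""
    xa1, ya1, xa2, ya2 = predicted_box
    xb1, yb1, xb2, yb2 = observed_box
    inter_w = max(0.0, min(xa2, xb2) - max(xa1, xb1))
    inter_h = max(0.0, min(ya2, yb2) - max(ya1, yb1))
    inter = inter_w * inter_h
    union = (xa2 - xa1) * (ya2 - ya1) + (xb2 - xb1) * (yb2 - yb1) - inter
    iou = inter / union if union > 0 else 0.0
    return 1.0 - iou
```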

Predicting the future location of vehicles is essential for safety-critical applications such as advanced driver assistance systems (ADAS) and autonomous driving. This paper introduces a novel approach to simultaneously predict both the location and scale of target vehicles in the first-person (egocentric) view of an ego-vehicle. We present a multi-stream recurrent neural network (RNN) encoder-decoder model that separately captures both object location and scale and pixel-level observations for future vehicle localization. We show that incorporating dense optical flow improves prediction results significantly since it captures information about motion as well as appearance change. We also find that explicitly modeling future motion of the ego-vehicle improves the prediction accuracy, which could be especially beneficial in intelligent and automated vehicles that have motion planning capability. To evaluate the performance of our approach, we present a new dataset of first-person videos collected from a variety of scenarios at road intersections, which are particularly challenging moments for prediction because vehicle trajectories are diverse and dynamic.
ICRA2019
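
A minimal sketch of what a multi-stream RNN encoder-decoder of this kind could look like is given below, assuming PyTorch, illustrative layer sizes, and a simple way of conditioning the decoder on planned ego-motion; it is not the published architecture.

```python
import torch
import torch.nn as nn

class MultiStreamFutureLocalizer(nn.Module):
    """Sketch of a multi-stream RNN encoder-decoder for future bounding-box
    prediction: one GRU encodes past box location/scale, another encodes
    pooled optical-flow features; their states are fused and a GRU decoder
    rolls out future box offsets, conditioned on future ego-motion."""

    def __init__(self, flow_dim=50, ego_dim=3, hidden=128, horizon=10):
        super().__init__()
        self.horizon = horizon
        self.box_enc = nn.GRU(input_size=4, hidden_size=hidden, batch_first=True)
        self.flow_enc = nn.GRU(input_size=flow_dim, hidden_size=hidden, batch_first=True)
        self.fuse = nn.Linear(2 * hidden, hidden)
        self.dec = nn.GRUCell(input_size=4 + ego_dim, hidden_size=hidden)
        self.head = nn.Linear(hidden, 4)    # per-step box offset (dx, dy, dw, dh)

    def forward(self, past_boxes, past_flow, future_ego):
        # past_boxes: (B, T_in, 4), past_flow: (B, T_in, flow_dim),
        # future_ego: (B, horizon, ego_dim) planned/estimated ego-motion.
        _, h_box = self.box_enc(past_boxes)
        _, h_flow = self.flow_enc(past_flow)
        h = torch.tanh(self.fuse(torch.cat([h_box[-1], h_flow[-1]], dim=-1)))
        box = past_boxes[:, -1]             # start from the last observed box
        outputs = []
        for t in range(self.horizon):
            h = self.dec(torch.cat([box, future_ego[:, t]], dim=-1), h)
            box = box + self.head(h)        # accumulate predicted offsets
            outputs.append(box)
        return torch.stack(outputs, dim=1)  # (B, horizon, 4)

# Example forward pass with random tensors:
# model = MultiStreamFutureLocalizer()
# pred = model(torch.randn(2, 8, 4), torch.randn(2, 8, 50), torch.randn(2, 10, 3))
```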

Motivated by the need to develop simulation tools for verification and validation of autonomous driving systems operating in traffic consisting of both autonomous and human-driven vehicles, we propose a framework for modeling vehicle interactions at uncontrolled intersections. The proposed interaction modeling approach is based on game theory with multiple concurrent leader-follower pairs, and accounts for common traffic rules. We parameterize the intersection layouts and geometries to model uncontrolled intersections with various configurations, and apply the proposed approach to model the interactive behavior of vehicles at these intersections. Based on simulation results in various traffic scenarios, we show that the model exhibits reasonable behavior expected in traffic, including the capability of reproducing scenarios extracted from real-world traffic data and reasonable performance in resolving traffic conflicts. The model is further validated based on the level-of-service traffic quality rating system and demonstrates manageable computational complexity compared to traditional multi-player game-theoretic models.
T-ITS
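
The toy sketch below illustrates how a single leader-follower (Stackelberg) pair can be solved by enumeration: for each leader action the follower best-responds, and the leader then picks the action with the best resulting payoff. The action set and payoff numbers are made up purely to show the solution structure and are not taken from the paper.

```python
import numpy as np

# Actions for each vehicle at the conflict point.
ACTIONS = ["proceed", "yield"]

# Illustrative payoff tables (rows: leader action, cols: follower action),
# trading off progress against collision risk. Numbers are invented.
LEADER_PAYOFF = np.array([[-10.0, 5.0],    # proceed vs (proceed, yield)
                          [  1.0, 0.0]])   # yield   vs (proceed, yield)
FOLLOWER_PAYOFF = np.array([[-10.0, 1.0],
                            [  5.0, 0.0]])

def stackelberg_pair(leader_payoff, follower_payoff):
    """Solve one leader-follower pair by enumeration: for each leader action
    the follower best-responds; the leader picks the action whose
    best-response outcome gives it the highest payoff."""
    best = None
    for li, l_act in enumerate(ACTIONS):
        fi = int(np.argmax(follower_payoff[li]))   # follower's best response
        value = leader_payoff[li, fi]
        if best is None or value > best[0]:
            best = (value, l_act, ACTIONS[fi])
    return best[1], best[2]

if __name__ == "__main__":
    leader_action, follower_action = stackelberg_pair(LEADER_PAYOFF, FOLLOWER_PAYOFF)
    print(f"leader: {leader_action}, follower: {follower_action}")
```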

Projects

Egocentric on-road video anomaly detection

We are developing an unsupervised/weakly-supervised approach for on-road anomaly detection and anomalous object localization in first-person videos.

Rooftop landing site identification by scene understanding

We are developing a real-time rooftop landing site identification method based on poly-lidar polygon extraction, image semantic segmentation, and LiDAR-camera data fusion. Hardware-in-the-loop (HIL) tests have been performed on an NVIDIA Jetson TX2.
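
A hedged sketch of the LiDAR-camera fusion step is shown below: LiDAR points are projected into the image and only those falling on rooftop-labeled pixels are kept before planar extraction. The argument names, frames, and label convention are assumptions for illustration, not the project's actual interfaces.

```python
import numpy as np

def filter_roof_points(points_lidar, K, T_cam_lidar, seg_mask, roof_label=1):
    """Keep only LiDAR points that project onto pixels labeled as rooftop.

    points_lidar: (N, 3) points in the LiDAR frame.
    K:            (3, 3) camera intrinsic matrix.
    T_cam_lidar:  (4, 4) transform from the LiDAR frame to the camera frame.
    seg_mask:     (H, W) per-pixel semantic labels from the segmentation net.
    Returns the subset of points whose projection lands on roof pixels;
    a planar-extraction step would then run on this subset.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]            # into camera frame
    in_front = pts_cam[:, 2] > 0                          # points ahead of camera
    z = np.clip(pts_cam[:, 2:3], 1e-6, None)              # avoid divide-by-zero
    uv = (K @ pts_cam.T).T[:, :2] / z                     # perspective projection
    h, w = seg_mask.shape
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    in_image = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    keep = np.zeros(n, dtype=bool)
    keep[in_image] = seg_mask[v[in_image], u[in_image]] == roof_label
    return points_lidar[keep]
```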

Game-theoretic traffic participant modeling at uncontrolled intersections

We are proposing a pair-wise leader-follower model for autonomous driving in uncontrolled intersection scenarios.

Future object localization and interaction modeling

We are investigating future localization of on-road objects by modeling their appearance and motion, as well as their interactions with other traffic participants and the scene.

Coursework

I have taken the following courses at the University of Michigan:

  • ROB550: Robotics System Lab
  • ROB501: Math for Robotics
  • EECS568: Mobile Robotics
  • MECHENG542: Vehicle Dynamics
  • EECS545: Machine Learning
  • EECS592: Artificial Intelligence Foundations
  • EECS692: Advanced Artificial Intelligence
  • EECS542: Advanced Topics in Computer Vision
  • EECS598: Reinforcement Learning

Contact

  • brianyao@umich.edu
  • 1320 Beal Ave, Ann Arbor, Michigan, 48109, USA
  • Please feel free to email me anytime.