I was previously a Research Fellow in EECS and Naval Architecture and Marine Engineering (NAME) at the University of Michigan, working on the academic side of a UM-Ford-State Farm project to develop next-generation automated vehicles. My research there focused on sensor fusion for autonomous vehicles, in particular combining imaging data with other sensor information to improve a vehicle’s perception of the world around it.
The following video features me acting as the safety driver in our autonomous car during a test run at Mcity, the University of Michigan’s one-of-a-kind facility for testing autonomous and connected vehicles:
My research at Johns Hopkins University focused on strategies to improve human-robot interaction for telerobotic on-orbit satellite servicing operations, as part of NASA’s Robotic Refueling Mission. This work involved developing and testing visual and haptic user interfaces, as well as control strategies that mitigate the effects of latency during teleoperation.
My doctoral research focused on strategies to improve human-robot interaction for teleoperated mobile manipulation tasks using advanced visualization techniques and novel manual interfaces. I also worked to characterize the fundamental limitations inherent in mobile teleoperation systems that constrain overall performance, and developed strategies for efficiently mitigating those limitations. My dissertation leveraged this research to outline a system-level framework for increasing both the speed and ease with which teleoperated robot tasks can be performed. The following video provides a good introduction to the user interface aspects of my research: