Martin Jagersand, PhD
Professor, Faculty of Science - Computing Science
- M.Sc., Physics, Chalmers University of Technology (Sweden), 1991
- M.Sc., Computer Science, University of Rochester, 1994
- Licentiate of Engineering, Chalmers University of Technology, 1996
- Ph.D., Computer Science, University of Rochester, 1997
- Postdoctoral Fellow, Computer Science, Yale University, 1998-99
- Computer Vision, Graphics
My group researches and develops real-time 2D and 3D video tracking, 3D modeling of geometry and appearance from camera vision, on-line camera-scene-robot kinematic model estimation, and vision-guided motion control for robot arms and hands. Our aim is to make these methods robust enough to work in unstructured natural environments, such as outdoors or in human homes. Applications we work on include first-response robotics in dangerous or remote areas, to assess damage and shut down damaged infrastructure (close fluid valves, turn off electricity breakers, etc.); tele-robotic systems for on-orbit satellite repair; assistive robotics for the elderly and disabled; and better human-robot collaboration, so humans can work alongside robots rather than apart from robot work cells. We have developed 3D modeling from video and images for both objects and scenes, and we have expertise in extracting precise geometric measurements from images and video.
A human can easily pick up a visible object, and can even watch a whole task, learn it, and transform the visual information into the motor (muscle) movements needed to carry it out. In the other direction, a human can visually imagine the execution of a motor task. By contrast, conventional robotics solves these tasks neither easily nor reliably, nor can a human easily instruct a robot what to do. One pitfall is that typically all necessary information must be programmed a priori into a single, globally and metrically calibrated model.
The centerpiece of my research is instead to estimate, from observations alone, several local models and transforms on-line as they are needed. During a complete task, attention shifts between different objects and frames, and new models must be acquired quickly. Visual structure and appearance change is parameterized in a motor frame rather than in a purely visual basis, as in traditional viewing models. An advantage is that this automatically captures the actual degrees of freedom of the object; e.g., an arm is parameterized in its joint coordinates, not as separate links.
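One standard way to realize this kind of on-line visual-motor model estimation, without any a priori calibration, is to estimate the image Jacobian (the local linear map from joint motions to image-feature motions) with a rank-one Broyden update, and then servo the features toward a target using the estimated model. The sketch below is illustrative only, not the group's actual implementation; the linear "camera" map `A`, the step sizes, and all variable names are assumptions made for the toy example:

```python
import numpy as np

def broyden_update(J, dq, dy):
    """Rank-one (Broyden) update of the estimated image Jacobian J,
    given an observed joint step dq and the resulting feature change dy."""
    denom = dq @ dq
    if denom < 1e-12:          # skip negligible motions
        return J
    return J + np.outer(dy - J @ dq, dq) / denom

# Toy stand-in for the camera/robot: an unknown linear map y = A q
# (2 image features, 3 joints). In reality dy would come from tracking.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 3))
J = np.zeros((2, 3))           # start with no model at all

# Exploratory phase: small random joint motions refine the model on-line.
for _ in range(100):
    dq = 0.01 * rng.normal(size=3)
    dy = A @ dq                # feature change the tracker would observe
    J = broyden_update(J, dq, dy)

# Servo phase: drive the features to a target using only the estimated J.
q = np.zeros(3)
y_target = np.array([1.0, -0.5])
for _ in range(20):
    error = A @ q - y_target           # observed feature error
    q = q - 0.5 * np.linalg.pinv(J) @ error  # damped Jacobian-pseudoinverse step

final_error = np.linalg.norm(A @ q - y_target)
```

Because the Broyden step enforces the secant condition `J_new @ dq == dy`, repeated small motions make `J` converge to the true local map along the explored directions, so the controller needs no global metric calibration — consistent with the local-model philosophy described above.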
Algorithms and software paradigms for robot programming; mathematical modeling of robot arms and rovers, including kinematics and an introduction to dynamics and control; sensors, motors and their modeling; basics of image processing and machine vision; vision-guided motion control. Prerequisite: CMPUT 275. Corequisite: CMPUT 340 or 418, or ECE 240. Students with CMPUT 174, 175, 201, or 204 may seek consent of the instructor.
Introduction to the geometry and photometry of the 3D-to-2D image formation process, for the purpose of computing scene properties from camera images. Computing and analyzing motion in image sequences. Recognition of objects (what) and spatial relationships (where) from images, and tracking of these in video sequences. Prerequisites: CMPUT 201 or 275; one of CMPUT 340, 418, ECE 240, or equivalent knowledge; one of MATH 101, 115, 118, 136, 146, or 156; and one of MATH 102, 125, or 127.