Martin Jagersand, PhD

Professor, Faculty of Science - Computing Science





  • M.Sc., Physics, Chalmers University of Technology (Sweden), 1991
  • M.Sc., Computer Science, University of Rochester, 1994
  • Licentiate of Engineering, Chalmers University of Technology, 1996
  • Ph.D., Computer Science, University of Rochester, 1997
  • Postdoc, Computer Science, Yale University, 1998-99



Computer Graphics
Computer Vision and Multimedia Communications


My group researches and develops real-time 2D and 3D video tracking, 3D modeling of geometry and appearance from camera vision, on-line camera-scene-robot kinematic model estimation, and vision-guided motion control for robot arms and hands. Our aim is to make these methods robust enough to work in unstructured natural environments such as outdoors or in human homes. Applications we work on include: first-response robotics in dangerous or remote areas, to assess damage and shut down damaged infrastructure (close fluid valves, switch off electrical breakers, etc.); tele-robotic systems for on-orbit satellite repair; assistive robotics for the elderly and disabled; and better human-robot collaboration, so that humans can work alongside robots rather than separately from robot work cells. We have developed 3D modeling from video and images for both objects and scenes, and we have expertise in extracting precise geometric measurements from images and video.


A human can easily pick up a visible object, and can even watch a whole task, learn it, and transform the visual information into the motor (muscle) movements needed to carry out the task. In the other direction, a human can visually imagine the execution of a motor task. By contrast, conventional robotics does not solve these tasks easily or reliably, nor can a human easily instruct a robot what to do. One pitfall is that all necessary information typically has to be programmed a priori into a single, global, metrically calibrated model.

The centerpiece of my research is instead to estimate several local models and transforms on-line, from observations alone, as they are needed. During a complete task, attention shifts between different objects and frames, and new models need to be acquired quickly. Visual structure and appearance change is parameterized in a motor frame rather than in a purely visual basis, as in traditional viewing models. An advantage is that this automatically captures the actual degrees of freedom of the object; e.g., an arm is parameterized in its joint coordinates, not as separate links.
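To make the idea of on-line local model estimation concrete, here is a minimal sketch in the spirit of uncalibrated visual servoing: a visual-motor Jacobian is never calibrated a priori, but refined from observed joint motions and the resulting image-feature changes via a rank-one (Broyden-style) secant update, while a damped least-squares controller drives the feature error to zero. This is an illustrative toy (the linear "true" camera map, the gain, and all names are assumptions for the example), not an implementation of any specific published system.

```python
import numpy as np

def broyden_update(J, dq, dy):
    """Rank-one secant update of the visual-motor Jacobian estimate J
    from an observed joint step dq and resulting feature change dy."""
    denom = dq @ dq
    if denom < 1e-12:          # no motion: nothing to learn from
        return J
    return J + np.outer(dy - J @ dq, dq) / denom

def servo_step(J, error, gain=0.1):
    """Least-squares joint step that moves features toward the target."""
    dq, *_ = np.linalg.lstsq(J, -gain * error, rcond=None)
    return dq

# Toy world: the true visual-motor map is an unknown linear function
# (2 image features, 3 joints); the controller never sees J_true directly.
rng = np.random.default_rng(0)
J_true = rng.standard_normal((2, 3))
target = np.array([1.0, -0.5])      # desired image-feature values

q = np.zeros(3)                     # joint configuration
J = np.eye(2, 3)                    # crude initial Jacobian guess
y = J_true @ q                      # observed features
errors = []
for _ in range(50):
    error = y - target
    errors.append(np.linalg.norm(error))
    dq = servo_step(J, error)       # act using the current local model
    y_new = J_true @ (q + dq)       # observe the effect of the action
    J = broyden_update(J, dq, y_new - y)  # refine the model on-line
    q, y = q + dq, y_new
```

The point of the sketch is the division of labor: the controller only ever uses the current local estimate J, and every executed motion doubles as a measurement that improves J, so no global calibrated model is needed.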