SYLLABUS
Hand-Eye Coordination and Vision-based Interaction
Martin Jagersand
The course content covers four major areas:
- We start by studying the relationship between motion in the real world
and its image projection.
- Linearization of intensity variation, 6D general "optic flow".
- Couplings to motor models, dynamic systems, coordinate transforms (Hager).
- Subspace- and transform-based methods (Fourier: Nelson and Aloimonos;
  PCA: Fleet, Black, Jepson, Salgian, Jagersand).
- A comparison of the above to the human visual system.
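To make the first area concrete: linearizing the intensity variation gives the classic brightness-constancy equations, which can be solved for a translational motion by least squares over a window. The sketch below, assuming NumPy, is my own minimal illustration (the function name and window convention are not from the course materials), not the full 6D formulation listed above.

```python
import numpy as np

def flow_step(I0, I1, window):
    """Estimate one translational flow vector d = (u, v) over a window,
    from the first-order linearization I1(x) ~ I0(x) - grad(I0) . d."""
    ys, xs = window
    # Spatial gradients (np.gradient: axis 0 = rows/y, axis 1 = cols/x).
    Iy, Ix = np.gradient(I0)
    # Temporal intensity difference.
    It = I1 - I0
    # Stack one linearized constraint  Ix*u + Iy*v = I0 - I1  per pixel.
    A = np.stack([Ix[ys, xs].ravel(), Iy[ys, xs].ravel()], axis=1)
    b = -It[ys, xs].ravel()
    # Least-squares solution over all pixels in the window.
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # (u, v)
```

On a smooth image pair this recovers sub-pixel shifts well; the least-squares step is the same structure that the 6D "general optic flow" extends with more motion parameters.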
- Vision- and other sensory-based control.
- A simple visual servoing example.
- Spatial transforms, kinematics, dynamics and formulation of
the general visual control problem.
- Image-based vs. position-based approaches.
- Learning spatial transforms and visual-motor models.
- Task function formalism. (An equational formulation of
motion specifications by Samson, Espiau, Chaumette etc.)
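The servoing, model-learning, and task-function topics above fit together in a short loop: regulate an image-space error e = f(q) - y* with a proportional law dq = -lambda * pinv(J) e, while refining the visual-motor Jacobian estimate J online. The sketch below is a hedged illustration, assuming NumPy and a Broyden-style secant update (one common choice for uncalibrated servoing); all names, gains, and the interface f(q) are my own assumptions, not an implementation from the course.

```python
import numpy as np

def servo(f, y_goal, q0, J0, lam=0.1, steps=300, tol=1e-6):
    """Uncalibrated image-based servoing sketch.
    f: maps joint/pose coordinates q to image measurements y (hypothetical).
    Regulates e = f(q) - y_goal with dq = -lam * pinv(J) @ e, and refines
    the visual-motor Jacobian J with a Broyden rank-one secant update."""
    q, J = q0.astype(float).copy(), J0.astype(float).copy()
    y = f(q)
    for _ in range(steps):
        e = y - y_goal
        if np.linalg.norm(e) < tol:
            break
        dq = -lam * np.linalg.pinv(J) @ e    # proportional control on image error
        q = q + dq
        y_new = f(q)
        dy = y_new - y
        # Broyden update: correct J so it reproduces the observed motion dy.
        J = J + np.outer(dy - J @ dq, dq) / (dq @ dq)
        y = y_new
    return q, J
```

The point of the uncalibrated formulation is that even a crude initial Jacobian guess (e.g. the identity) suffices on a well-behaved plant, since the secant updates learn the visual-motor coupling from the observed image motion itself.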
- Human motor control.
- Overview of basic anatomy, physiology and neurophysiology.
- Reference frames for human hand-eye coordination.
- Neural coding of motion. Case study: the oculomotor system
  (which we know a little about).
- Characterizing human motion, hopefully with a guest lecture by Terry
  and/or others from his group.
- Ongoing efforts to merge human and robotic hand-eye coordination:
  UWO, TUM, ATR, etc.
- User interfaces for robots and other machines.
- Closing the feedback loop over the human, simulating and
  animating movement. (www.cs.yale.edu/~jag/, click on Perceptual Actions)
- Service robotics for the elderly and handicapped.
- Surgical robotics (See WWW pages: cisstweb.cs.jhu.edu)
- Industrial applications (Remember Pierre's interview talk?)
- Vision and motion based HCI for everyday applications.
(Again see our WWW pages www.cs.jhu.edu/CIPS)