Projects

The project in this course is equivalent to approximately two labs and is a chance for you to develop and demonstrate your ability to do independent work. This is an opportunity, but also a responsibility: take it seriously, and expect a significant time commitment. Start your work early, plan a schedule, and stick to it. You are encouraged to form groups of 2-4 students. You can do an individual project, but I recommend that you try to find a group. In a group project each student must have individually identifiable content, i.e. a particular technical part and the corresponding write-up in the proposal and report.

If you are unsure what would make a good project, feel free to email me and the TA with your thoughts. You could briefly outline one to three project ideas in a few sentences each, and ideally provide links to relevant papers or web pages. We will respond with feedback on each idea as soon as possible.

Deliverables

Suggested topics

You can propose your own topic, or pick one of those below. See also the lecture notes on projects (the "lectures" link in the left side frame).

Geometry

  1. Analysis, editing, and improvement of SFS geometry. This project is suitable for someone with some prior exposure to either computational geometry or graphics. You will study the particularities of geometries generated from SFS and how to improve them.
  2. Scene modeling from SFM: The 3D points obtained from an SFM and visual stereo system need analysis, correction, and triangulation before they can be used in e.g. rendering. One pipeline uses ARC 3D for 3D point generation, MeshLab for editing the point cloud and generating a surface, and capgui for texturing. The project consists of identifying the cumbersome steps in this chain and possible algorithmic improvements, then improving those steps by either implementing better functionality or evaluating the performance of available software packages.
  3. Study manual augmentation of captured point sets. In many cases a model in terms of simple geometric primitives is desired (e.g. a house could be modeled using 3D solid geometry primitives: parallelepipeds for the bottom part and a prism for the roof). The task is to go, with some (but minimal) user interaction, from a 3D point set of a scene to a model represented by the composition (union) of 3D solid geometry primitives (spheres, cylinders, parallelepipeds, prisms, etc.). Alternatively, one could use lines, planes, and curved surfaces.
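Fitting a primitive to a point set (topic 3) is often bootstrapped with RANSAC. The sketch below is a minimal, hypothetical starting point, assuming numpy and using a single plane as the primitive; a real tool would fit several primitive types and let the user guide segmentation.

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, threshold=0.05, rng=None):
    """Fit one plane n.x + d = 0 to an (N, 3) point set with RANSAC.

    Returns (normal, d, inlier_mask). Toy sketch: sample 3 points,
    build the plane through them, count points within `threshold`.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n.dot(sample[0])
        dist = np.abs(points @ n + d)   # point-to-plane distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model[0], best_model[1], best_inliers
```

Once the dominant plane is found, its inliers can be removed and the fit repeated on the remainder, which is one simple way to decompose a scan into primitives.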

Texturing and appearance:

Multi-texturing: The texturing in capgui is currently a form of multi-texturing: for a particular view, a single texture is generated by blending a basis of 20 textures. These basis textures are currently generated by PCA over all the input textures.
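The PCA-basis idea can be sketched in a few lines. The function below is a hypothetical stand-in for the capgui step, assuming numpy and flattened grayscale textures (a color pipeline would do the same per channel):

```python
import numpy as np

def texture_basis(textures, k=20):
    """PCA basis over input textures.

    textures: (n_views, h*w) array, one flattened texture per view.
    Returns (mean, basis, coeffs) so that textures ~ mean + coeffs @ basis.
    """
    mean = textures.mean(axis=0)
    centered = textures - mean
    # Rows of Vt are the principal "eigen-textures", ordered by variance.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:k]
    coeffs = centered @ basis.T   # per-view blending coefficients
    return mean, basis, coeffs
```

Rendering a novel view then amounts to interpolating the coefficients of nearby input views and reconstructing `mean + coeffs_new @ basis`.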
  1. One could consider what the best subset of view angles is for generating these textures, and partition the view sphere (or circle) into e.g. quadrants. Figure out how to handle overlap and the boundaries. Study references from the graphics literature, e.g. "A Theory of Locally Low Dimensional Light Transport" by Dhruv Mahajan, Ira Kemelmacher Shlizerman, Ravi Ramamoorthi, and Peter Belhumeur.
  2. In some cases (e.g. when the geometry is very inaccurate), other (e.g. non-linear) subspace methods may perform better (e.g. Isomap, kernel PCA). Try to find the properties of the texture manifold and match them to a suitable method.
  3. Write a texturing plugin for Blender using some method (e.g. the original one or one of the above) so that captured objects can be integrated and rendered in Blender scenes.
  4. Texture decomposition into low- and high-frequency components.

Single texturing:

  5. In some cases, when using models in legacy software that can only handle a single texture, one has to distill a single "best" color (by some measure) w.r.t. all the input data for each object point. There are both heuristic methods for this (Blending Images for Texturing 3D Models, Adam Baumberg, BMVC 2002) and globally optimal ones (Spatio-temporal Image-based Texture Atlases for Dynamic 3-D Models, Janko and Jean-Philippe Pons, 3DIM 2009; Seamless Image-based Texture Atlases Using Multi-band Blending, C. Allène, J.-P. Pons, and R. Keriven, ICPR 2008; Seamless Mosaicing of Image-Based Texture Maps, Victor Lempitsky and Denis Ivanov, CVPR 2007).
     Study these methods (we have some example code that can get you started) and see how they compare. See if you can integrate one into capgui.
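Texturing item 4 above (low/high-frequency decomposition) has a simple baseline: low-pass filter the texture and keep the residual as the high band. A minimal numpy sketch, using a box blur as a stand-in for a Gaussian (the function name is my own):

```python
import numpy as np

def split_frequencies(tex, radius=2):
    """Split a 2D texture into low- and high-frequency parts.

    Low band = box blur of the given radius (edge-clamped);
    high band = residual, so tex == low + high exactly.
    """
    pad = np.pad(tex, radius, mode="edge")
    # Integral image: two cumulative sums, zero-padded on top/left so
    # any window sum is four lookups.
    csum = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    csum = np.pad(csum, ((1, 0), (1, 0)))
    k = 2 * radius + 1
    low = (csum[k:, k:] - csum[:-k, k:]
           - csum[k:, :-k] + csum[:-k, :-k]) / (k * k)
    return low, tex - low
```

The low band could then be blended across views while the high band (fine detail) is taken from the single best view, which is one common motivation for such a split.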

Tracking:

  1. Study a different tracking modality, e.g. hyperplane tracking (Jurie and Dhome) or mean shift (Dorin Comaniciu and Peter Meer); integrate this tracking into XVision and compare its performance with the existing XVision trackers.
  2. Implement compositional tracking similar to the IFA system.
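To make the mean-shift option above concrete, here is one iteration of a histogram-based mean-shift tracker in the spirit of Comaniciu and Meer, as a didactic numpy sketch on a grayscale frame (the function and its simplifications are my own, not XVision code):

```python
import numpy as np

def mean_shift_step(frame, center, half, model_hist, bins=16):
    """One mean-shift iteration on a grayscale frame in [0, 1).

    Each pixel in the (2*half+1)^2 window is weighted by the
    sqrt(model/candidate) histogram ratio of its intensity bin; the
    weighted centroid of the window becomes the new center.
    """
    cy, cx = center
    win = frame[cy - half:cy + half + 1, cx - half:cx + half + 1]
    idx = np.minimum((win * bins).astype(int), bins - 1)
    cand = np.bincount(idx.ravel(), minlength=bins).astype(float)
    cand /= cand.sum()
    w = np.sqrt(model_hist / np.maximum(cand, 1e-12))[idx]
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    return (cy + int(round((w * ys).sum() / w.sum())),
            cx + int(round((w * xs).sum() / w.sum())))
```

A tracker repeats this step until the center stops moving, re-running it on each new frame; the full Comaniciu-Meer method additionally uses a spatial kernel and color histograms.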

How to go about it

For example, in the 3D capture project you would first study existing methods. The HZ (Hartley and Zisserman) and Szeliski books describe the state of the art and contain paper references. Try one or more of the existing systems to get a feel for how current methods work; systems you can try include our capgui (which will be shown in the lab), ARC 3D, and Photosynth. Try to find what types of models can be created and what the limitations are (e.g. how the images have to be taken). This reading and experimentation is the review part that goes with the proposal P1, to be handed in first.

Then find something to research and implement yourself. This could be a subtopic such as geometric refinement, or an application where you capture and use vision-based models for some effect in e.g. an animation or computer game.

In a tracking project, you can similarly first try our XVision system and/or other trackers you can download, then propose some extension or an additional tracking modality to implement.