Exercise 1: Fundamental Matrix, Epipole, Triangulation (4)
Reference: slides 62-66, 44-49 and 25-35 of the tutorial.
Take (or find) a pair of images of a scene from different viewpoints. The script will provide you with a number of matching points between the two images; some may be wrong/noisy. Do exercise 6.1 from the exercise file (select the 15 points for projection with the helper script yourself).
- Correction to part 6.1(a): normalized points x1c and x4c are not already computed - you need to compute these yourself.
Instead of using automatic matching, select the correspondences yourself. Which one is more accurate? Try to select only 8 points from suitable positions and compare the results against automatic matching with a larger number of matches (a sketch of the manual route follows this list). Then repeat the exercise using live tracking on camera-captured images:
- Select the points in the first image and track these as you move the camera slowly.
- Capture the first image and another one after the viewpoint has changed sufficiently.
- Try using fewer points if tracking 15 points in real time is too slow.
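If it helps as a starting point for the manual route, here is a minimal MATLAB sketch that picks correspondences with ginput and estimates F with the normalized 8-point algorithm. The names eight_point and hartley_normalize are illustrative, not from the exercise file; substitute the fundamental-matrix function you write for this exercise where appropriate.

    % Minimal sketch: click 8 correspondences by hand, then run the
    % normalized 8-point algorithm with the convention x2' * F * x1 = 0.
    % (Save as a script; local functions in scripts need R2016b+.)
    im1 = imread('view1.jpg');  im2 = imread('view2.jpg');  % your image pair
    figure; imshow(im1); [u1, v1] = ginput(8);   % click 8 points in image 1
    figure; imshow(im2); [u2, v2] = ginput(8);   % same points, same order
    x1 = [u1'; v1'; ones(1, 8)];                 % 3x8 homogeneous points
    x2 = [u2'; v2'; ones(1, 8)];
    F = eight_point(x1, x2);

    function F = eight_point(x1, x2)
    % Normalized 8-point algorithm; x1, x2 are 3xN with N >= 8.
    [x1n, T1] = hartley_normalize(x1);
    [x2n, T2] = hartley_normalize(x2);
    A = zeros(size(x1, 2), 9);
    for i = 1:size(x1, 2)
        A(i, :) = kron(x2n(:, i), x1n(:, i))';   % one row per x2n'*F*x1n = 0
    end
    [~, ~, V] = svd(A);
    F = reshape(V(:, end), 3, 3)';               % undo row-major vectorization
    [U, S, V] = svd(F);                          % enforce rank 2
    S(3, 3) = 0;
    F = T2' * (U * S * V') * T1;                 % undo the normalization
    end

    function [xn, T] = hartley_normalize(x)
    % Move the centroid to the origin; scale mean distance to sqrt(2).
    x = x ./ x(3, :);
    c = mean(x(1:2, :), 2);
    d = mean(sqrt(sum((x(1:2, :) - c).^2, 1)));
    T = [sqrt(2)/d, 0, -sqrt(2)/d*c(1); 0, sqrt(2)/d, -sqrt(2)/d*c(2); 0, 0, 1];
    xn = T * x;
    end

Eight well-spread manual clicks can beat a large set of noisy automatic matches, which is the comparison this exercise is after.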
Exercise 2: Epipolar Line (1)
Reference: slide 35 of the tutorial.
Do exercise 6.2 from the exercise file.
There is a typo in the description: you are not provided with a script called kdemo7 - you need to write one that implements its functionality. Use the fundamental-matrix function that you wrote in exercise 1 for this part. You can modify the helper script to show the epipolar lines, as in the sketch below.
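For the epipolar-line display, recall that with the convention x2' * F * x1 = 0 a point x1 in image 1 maps to the line l2 = F * x1 in image 2, and the epipoles are the null vectors of F and F'. A minimal sketch of what a kdemo7-style script could do (variable names are illustrative; im1, im2 and F are assumed to exist, e.g. from the sketch in exercise 1):

    figure; imshow(im1); [u, v] = ginput(5);      % pick a few points in image 1
    x1 = [u'; v'; ones(1, numel(u))];
    L2 = F * x1;                                  % epipolar lines a*x + b*y + c = 0
    figure; imshow(im2); hold on;
    for i = 1:size(L2, 2)
        a = L2(1, i); b = L2(2, i); c = L2(3, i);
        xs = [1, size(im2, 2)];                   % left and right image borders
        plot(xs, -(a * xs + c) / b, 'LineWidth', 1.5);  % assumes b is not ~ 0
    end
    [~, ~, V] = svd(F);   e1 = V(:, end) / V(3, end);  % epipole in image 1: F*e1 = 0
    [~, ~, V] = svd(F');  e2 = V(:, end) / V(3, end);  % epipole in image 2: F'*e2 = 0

All epipolar lines in image 2 should meet at e2; that intersection is a quick sanity check on your F.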
Exercise 4: 2D and 3D Augmented Reality (5)
Do exercise 6.3 from the exercise file.
Do exercise 6.4 from the exercise file.
Warm up: Project a 2D image onto a scene plane in a live video (1).
Track (or detect) 4 (or more) non-collinear points on a scene plane. Choices for this plane are as wide as your imagination; some suggestions: your ONEcard, a poster on a wall, the whiteboard, etc. Then choose an image to project onto the tracked plane, e.g. a smiley face on your ONEcard, the latest movie poster instead of that old Tomb Raider poster on your wall, or the homography factorization on the whiteboard. You can use your implementation for computing the homography matrix from lab 4; a sketch of the overlay step is given below.
(bonus) Instead of projecting a still image, try to project a video onto the plane (1).
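One possible shape for the overlay step, assuming the tracker already gives you the four plane corners in the current frame; corners, frame and the file name are placeholders, and the toolbox calls (fitgeotrans, imwarp) can be replaced by your lab-4 homography code plus a manual warp:

    % Paste an image onto the tracked plane via a homography (sketch).
    overlay = imread('smiley.png');              % image to project (placeholder)
    [h, w, ~] = size(overlay);
    src = [1 1; w 1; w h; 1 h];                  % overlay corners (x, y), clockwise
    dst = corners;                               % 4x2 tracked plane corners, same order
    tform = fitgeotrans(src, dst, 'projective'); % homography: overlay -> frame
    ref = imref2d([size(frame, 1), size(frame, 2)]);
    warped = imwarp(overlay, tform, 'OutputView', ref);
    mask = imwarp(ones(h, w), tform, 'OutputView', ref) > 0.5;
    out = frame;
    out(repmat(mask, [1 1 3])) = warped(repmat(mask, [1 1 3]));
    imshow(out)

For the video bonus, swap overlay for the current frame of the source video inside the tracking loop.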
Project a 3D object into multiple images taken from a scene (4).
To do so, track (or detect) at least 8 non-coplanar points in the images, and find the fundamental matrix and the camera projection matrices (a sketch follows the bonus items below). Then re-project your desired objects into all the frames. The point-cloud 3D objects from the second lab are a good start.
(Grads) Make it live - reproject your 3D object into the live camera feed.
(bonus) Try more interesting 3D objects than a point cloud. You can use any other framework (OpenCV, Unity, ARToolKit, ...) that you are more familiar with to obtain the 3D model (1).
(bonus) Trackers with higher degrees of freedom can track planes instead of points. Check out MTF to get this type of tracker (2).
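For the camera matrices, one standard construction (Hartley & Zisserman's canonical camera pair) gives projection matrices directly from F, and DLT triangulation then places your tracked points in the same projective frame. A minimal sketch with illustrative names, again using the convention x2' * F * x1 = 0:

    % Canonical cameras from F, then DLT triangulation (sketch).
    % x1, x2: 3xN homogeneous matches with x1(3,:) = x2(3,:) = 1.
    [~, ~, V] = svd(F');
    e2 = V(:, end);                              % epipole in image 2: F' * e2 = 0
    skew = @(a) [0 -a(3) a(2); a(3) 0 -a(1); -a(2) a(1) 0];
    P1 = [eye(3), zeros(3, 1)];                  % first camera, canonical form
    P2 = [skew(e2) * F, e2];                     % second camera (projective frame only)

    N = size(x1, 2);
    X = zeros(4, N);
    for i = 1:N
        A = [x1(1, i) * P1(3, :) - P1(1, :);     % DLT: two rows per view
             x1(2, i) * P1(3, :) - P1(2, :);
             x2(1, i) * P2(3, :) - P2(1, :);
             x2(2, i) * P2(3, :) - P2(2, :)];
        [~, ~, Va] = svd(A);
        X(:, i) = Va(:, end);
    end
    X = X ./ X(4, :);                            % dehomogenize the 3D points

    % Re-project 3D points (e.g. your lab-2 point cloud, expressed in this
    % same projective frame) into either image:
    proj = @(P, Y) P * Y ./ (P(3, :) * Y);
    x1_hat = proj(P1, X);                        % should land near x1
    x2_hat = proj(P2, X);

Keep in mind this reconstruction is only projective: a metrically defined object will look distorted unless you upgrade the frame or anchor the object to triangulated scene points.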
Tips: MATLAB has a fast and robust checkerboard detector that can be used instead of tracking points (note that we are NOT using the board's physical dimensions to calibrate the camera); a sketch follows below. Feature matching can also be used, with care: choose the scene so that it provides good matching candidates for the method.
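As a concrete version of the checkerboard tip (Computer Vision Toolbox; the webcam calls assume the MATLAB webcam support package is installed):

    % Use checkerboard corners as repeatable "tracked" points (sketch).
    cam = webcam;                                % assumes the webcam support package
    frame = snapshot(cam);
    [imagePoints, boardSize] = detectCheckerboardPoints(rgb2gray(frame));
    % imagePoints is Mx2; for an asymmetric board the corners come back in a
    % consistent order every frame, so they can serve directly as
    % correspondences across views -- no physical dimensions needed.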