CMPUT 412, Winter 2015

Soccer Robot Project

Michael Feist, Artem Chikin, Brad Simons

Motivation

For this project, we designed and implemented a differential drive mobile robot capable of locating a colored ball, acquiring it, and kicking it into the net. The robot locates both the ball and the net with an attached camera using computer vision techniques, and it kicks the ball into the net with a 1-DOF arm. The project was inspired by RoboCup, an international robot soccer competition with a strong focus on robot autonomy, and aims to emulate its participants on a small scale. Overall, the project was a success. The chosen task brings together a multitude of real-life problems in robotics, and tackling them was a valuable learning experience.
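As a rough illustration of the vision side, the sketch below shows one way a colored ball can be picked out of a camera frame with OpenCV: threshold the image in HSV space, then fit circles to the resulting mask with a Hough transform. The HSV bounds, Hough parameters, and function layout are illustrative assumptions, not the project's actual detection pipeline.

# One possible ball detector: HSV thresholding followed by a Hough circle fit.
# Assumes OpenCV (cv2) and NumPy; the colour bounds and Hough parameters below
# are placeholders and would need tuning for the real ball and lighting.
import cv2
import numpy as np

LOWER_HSV = np.array([5, 120, 120])    # assumed lower HSV bound for the ball colour
UPPER_HSV = np.array([25, 255, 255])   # assumed upper HSV bound for the ball colour

def find_ball(frame):
    """Return (x, y, radius) of the most likely ball in a BGR frame, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)      # keep only ball-coloured pixels
    mask = cv2.GaussianBlur(mask, (9, 9), 2)           # smooth the mask before circle fitting
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=100, param2=30, minRadius=5, maxRadius=200)
    if circles is None:
        return None
    x, y, r = max(circles[0], key=lambda c: c[2])      # take the largest detected circle
    return float(x), float(y), float(r)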

The Robot

Design

We use the LEGO Mindstorms EV3 kit to build a mobile differential drive robot. The platform uses all four available motor ports: two for the drive, one for the camera mount, which allows the camera to tilt up and down, and one for a 1-DOF arm that both grabs the ball and kicks it, as seen in figure 1. The full system also includes a host computer: the camera attached to the robot is tethered to the host via a USB cable, and the robot itself is connected to the host by a second USB cable. A separate, more powerful machine is needed because the computer vision techniques employed by the robot are computationally demanding and the EV3 hardware is relatively weak. The host machine also lets us offload other tasks from the EV3 brick, such as behavior state processing.
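To give a feel for the host-side processing, the sketch below shows a minimal behavior tick that turns a ball detection into left/right wheel speeds for the differential drive. The wheel geometry, the search/approach/kick states, and the helper names are assumptions for illustration; they stand in for the project's actual state machine and the EV3 motor interface used over the USB tether.

# Illustrative host-side behaviour tick, assuming vision and state processing
# run on the host and wheel-speed commands are relayed to the EV3 over USB.
# The geometry constants and the simple state machine are assumptions for the
# sketch, not the project's actual controller.

WHEEL_BASE = 0.12     # assumed distance between the drive wheels (m)
WHEEL_RADIUS = 0.03   # assumed wheel radius (m)
FRAME_CENTRE_X = 320  # assumed horizontal centre of the camera image (px)
NEAR_RADIUS = 80      # assumed ball radius (px) at which it is close enough to kick

def drive_to_wheel_speeds(v, w):
    """Map a linear (m/s) and angular (rad/s) command to (left, right) wheel speeds (rad/s)."""
    left = (v - w * WHEEL_BASE / 2.0) / WHEEL_RADIUS
    right = (v + w * WHEEL_BASE / 2.0) / WHEEL_RADIUS
    return left, right

def behaviour_tick(ball, state):
    """Advance a minimal search -> approach -> kick state machine by one step.

    `ball` is (x, y, radius) in image coordinates, or None if no ball was seen.
    Returns the next state and the wheel speeds to send to the EV3 drive motors.
    """
    if state == "SEARCH":
        if ball is not None:
            return "APPROACH", drive_to_wheel_speeds(0.0, 0.0)
        return "SEARCH", drive_to_wheel_speeds(0.0, 1.0)        # spin in place to look around
    if state == "APPROACH":
        if ball is None:
            return "SEARCH", drive_to_wheel_speeds(0.0, 0.0)
        x, _, r = ball
        if r > NEAR_RADIUS:                                      # ball fills enough of the frame
            return "KICK", drive_to_wheel_speeds(0.0, 0.0)       # stop; the arm motor takes over
        err = (x - FRAME_CENTRE_X) / FRAME_CENTRE_X              # steer towards the ball
        return "APPROACH", drive_to_wheel_speeds(0.15, -0.8 * err)
    return state, drive_to_wheel_speeds(0.0, 0.0)                # KICK / fallback: hold position

On each camera frame, the host would call behaviour_tick with the latest detection and send the returned wheel speeds to the EV3's drive motors, while the arm and camera-tilt motors are commanded separately.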

Demo


Supplementary

Presentation
Report
