

Vision Applications on Mobile using OpenCV

ECCV 2012 tutorial

Sunday October 7th, first half day


Gary Bradski, co-founder and CTO of Industrial Perception
Vincent Rabaud, Research Engineer at Willow Garage
Andrey Pavlenko, Sr SW Engineer at Itseez
Alexander Shishkov, Sr SW Engineer at Itseez


It is forecast that 450 million camera-equipped smartphones will be sold in 2012, increasing to 650 million units in 2013. Those with an interest in commercial applications of computer vision simply cannot afford to ignore this growth in “smart cameras” enabled by mobile devices. This tutorial is intended to be hands-on. We will:

  • Review OpenCV, the Open Source Computer Vision Library.
  • Cover some of the tools for developing vision applications on mobile devices, with a focus on OpenCV.
  • Show, step by step, how to implement vision applications on
    • Android and
    • iOS.
  • Walk through the implementation of a classical face detection sample.
  • This application can serve as a stub that attendees can modify for their own applications. As time permits, we will guide and advise attendees in starting their own applications.



Pre-requisites for the practice session

If you plan to attend the practice session, please set up the development environment in advance: this can take time and requires a good network connection. Below are links to the tutorials for Android and iOS developers.

iOS prerequisites

  1. Bring a Mac running OS X 10.7 with Xcode 4.3 or later. OS X 10.8 DP and Xcode 4.5 beta might work too. That is enough to run some of the samples in the iOS Simulator; note that the camera is not available in the simulator.
  2. To run the sample apps on a device, you must enroll as an iOS developer (which costs $99 per year) and then register your device using the Xcode Organizer.
  3. To check that everything is set up correctly, download, build, and run the GLCameraRipple sample on your device.
  4. Bring this registered device and its USB-to-Dock connector cable.

Android prerequisites

  1. Bring a laptop (running Windows 7 or Mac OS X; Linux may work too) with the Java Development Kit installed. We also suggest completing the “Introduction into Android Development” tutorial, which provides step-by-step instructions for setting up the Android development environment. That will be enough to run some of the samples on the emulator.
  2. Bring your device and a USB cable for it. If possible, learn how to run the standard Android samples on your device before the tutorial; because of the huge variety of devices, it may or may not work during the session otherwise.

Speaker biographies:

Dr. Gary Rost Bradski recently founded and works at Industrial Perception Inc., a spin-off from Willow Garage aimed at bringing 2D+3D perception to industrial robots in distribution and manufacturing. He holds a joint appointment as Consulting Professor in Stanford University’s Computer Science Department, where he teaches a course in robot perception. He has 69 publications and 32 patents. Dr. Bradski founded and still directs the Open Source Computer Vision Library (OpenCV), now a non-profit foundation, which is used globally in research, government, and commercial applications, with over 5M downloads to date. Dr. Bradski led the computer vision team for Stanley, the Stanford robot that won the $2M DARPA Grand Challenge, and more recently he helped found the Stanford Artificial Intelligence Robot (STAIR) project under the leadership of Professor Andrew Ng. Dr. Bradski published a book for O’Reilly Press, Learning OpenCV: Computer Vision with the OpenCV Library, which has been the best-selling text in computer vision and machine learning for three years now.

Dr. Vincent Rabaud joined Willow Garage in January 2011 as a research engineer in computer vision. With a background in structure from motion, his current focus is teaching a robot to recognize objects for grasping. Among other things, he is working on acquiring a database of household objects, developing 3D object recognition, and implementing fast feature detection on cellphones. His research interests include 3D, tracking, face recognition, and anything that involves underusing CPUs by feeding them very fast algorithms. Dr. Rabaud completed his PhD at UCSD, advised by Professor Belongie. He also holds an MS in space mechanics and space imagery from SUPAERO and a BS/MS in optimization from the Ecole Polytechnique.

Andrey Pavlenko has been working in the computer vision field for the last two years. He developed the Java API for the OpenCV library and made key contributions to OpenCV for Android development, including the Android samples and tutorials. Before joining Itseez, Andrey worked at the Intel Nizhny Novgorod Lab for 13 years on various projects, including video codecs and debugging and performance-analysis tools. He also holds an MS in Computer Science from Nizhny Novgorod State University.

Alexander Shishkov has been working in the computer vision field for the last five years. He has developed technologies for video-based people counting, object detection, and image retrieval systems. He also developed the continuous integration system for OpenCV as well as all of the OpenCV web resources. He holds an MS in computational mathematics from Nizhny Novgorod State University.