Interfacing with the Drone
Thanks to the efforts of psykokwak, there is a driver for the AR.Drone that interfaces with URBI, the Universal Real-time Behavior Interface.
The ARDrone object has been compiled for Mac OS X, so I didn't need to install a new operating system on my laptop. This ARDrone object allows you to control the drone with commands in UrbiScript.
There was a bit of hackery involved in deciphering the video feed: it is stored as a UBinary object, and documentation is sparse. Conveniently, it turns out that the binary format is the same as an OpenCV IplImage: various header fields followed by an array of uchars for pixel data. This means that we can create an IplImage header and point its imageData at the URBI image data location, so no copying is required!
Cylindrical Coordinates & Rotation Matching
In order to make a panorama wider than 180 degrees, the images need to be projected into cylindrical coordinates; otherwise, the projected image heights blow up toward infinity as the field of view approaches 180 degrees. Unfortunately, before projecting into cylindrical coordinates, we have to normalize the roll rotation between the two images, so that the horizon line in each image is horizontal.
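The cylindrical mapping itself is the standard one; a minimal sketch, assuming a focal length `f` in pixels and coordinates measured from the image center (the struct and function names are mine, not from the project code):

```cpp
#include <cmath>

struct CylPt { float x, y; };

// Standard forward cylindrical projection: x maps to an arc length along
// the cylinder, y is foreshortened by the length of the pixel's ray.
// f is the focal length in pixels; (x, y) are offsets from image center.
CylPt toCylindrical(float x, float y, float f) {
    return { f * std::atan(x / f),
             f * y / std::sqrt(x * x + f * f) };
}
```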
To accomplish this tricky task, I assume that the start image has zero roll, then go through the following set of steps:
Calculate features in first and second images.
Convert first image feature locations to cylindrical mapping.
For a range of rotation angles, go through the following steps:
- Rotate second image feature locations by this angle
- Convert second image feature locations into cylindrical mapping.
- Calculate homography to map second image onto first image
- Check distortion of resulting homography
The best rotation is the angle that minimizes resulting homography distortion.
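The search over angles can be a simple grid scan. In this sketch, `scoreAt` stands in for the whole per-angle pipeline described above (rotate features, reproject, fit a homography, score its distortion); it is a hypothetical callback, and higher score means less distortion, so the scan maximizes it:

```cpp
#include <functional>

// Return the angle in [lo, hi], scanned every `step` radians, that
// maximizes the per-angle alignment score.
double findBestRotation(const std::function<double(double)>& scoreAt,
                        double lo, double hi, double step) {
    double best = lo, bestScore = scoreAt(lo);
    for (double a = lo + step; a <= hi; a += step) {
        double s = scoreAt(a);
        if (s > bestScore) { bestScore = s; best = a; }
    }
    return best;
}
```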
To check the distortion of a homography, I transform a rectangle of the same size as the image through the homography, producing a quadrilateral, then measure what fraction of the quadrilateral's bounding box the quadrilateral fills. If there is very little distortion, the quadrilateral will be almost rectangular, so this fraction will be high.
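That check reduces to a few lines of geometry: push the four corners through the homography and compare the quadrilateral's area (via the shoelace formula) to its bounding box's area. A sketch with a hand-rolled 3x3 row-major homography; the function and parameter names are my own:

```cpp
#include <algorithm>
#include <cmath>

// Score in (0, 1]: fraction of the warped quadrilateral's bounding box
// that the quadrilateral covers. Near 1 means low distortion.
double homographyDistortionScore(const double H[9], double w, double h) {
    double cx[4] = {0, w, w, 0}, cy[4] = {0, 0, h, h};
    double qx[4], qy[4];
    for (int i = 0; i < 4; ++i) {
        double d = H[6] * cx[i] + H[7] * cy[i] + H[8];
        qx[i] = (H[0] * cx[i] + H[1] * cy[i] + H[2]) / d;
        qy[i] = (H[3] * cx[i] + H[4] * cy[i] + H[5]) / d;
    }
    double area = 0;  // shoelace formula over the quadrilateral
    for (int i = 0; i < 4; ++i) {
        int j = (i + 1) % 4;
        area += qx[i] * qy[j] - qx[j] * qy[i];
    }
    area = std::fabs(area) / 2.0;
    double bw = *std::max_element(qx, qx + 4) - *std::min_element(qx, qx + 4);
    double bh = *std::max_element(qy, qy + 4) - *std::min_element(qy, qy + 4);
    return area / (bw * bh);
}
```

An identity homography scores exactly 1; a strong shear or perspective warp pushes the score down toward 0.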
Image Combination Algorithm
The process of combining images uses the following steps:
Load in a set of images and undistort them to remove camera distortions.
Extract features from start image; convert their locations into cylindrical mapping
For every image in the list
- Extract features
- Find matches between extracted features and stored features
- Calculate best rotation and homography
- Combine images
- Transform extracted feature locations based on rotation and homography
- Copy extracted features into the set of stored features
Keep going until all images have been combined
Normalize & save combined image
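The "transform extracted feature locations" step is just the fitted homography applied to each point, with the perspective divide; a minimal sketch (row-major H, and the struct and function names are mine):

```cpp
struct FeaturePt { double x, y; };

// Push a feature location through a row-major 3x3 homography, including
// the perspective divide, so newly matched features land in the
// panorama's frame before being stored for the next iteration.
FeaturePt applyHomography(const double H[9], FeaturePt p) {
    double d = H[6] * p.x + H[7] * p.y + H[8];
    return { (H[0] * p.x + H[1] * p.y + H[2]) / d,
             (H[3] * p.x + H[4] * p.y + H[5]) / d };
}
```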
In the end, I decided to start the process from the center image, then build out both left and right. This reduces the total distortion at the edges of the panorama.
Conclusions
As you can see, the project was generally successful. Even when blown off-course partway through the image collection process, the system manages to make a reasonable panorama without major distortion.
Once I got it working, the AR.Drone was a good robotics platform to work with. Figuring out the URBI-to-OpenCV flow was a bit tricky, but turned out to be fairly simple in the end.
If I were to do the project again, I would do it pretty much the same way. The only thing I would add (time permitting) is more sophisticated blending between images, to make the transitions smoother.
Code Overview
This project was implemented using C++ and OpenCV. In the interest of speed, I used OpenCV functions for feature extraction and homography finding - the functions that I wrote myself for a previous assignment were not as fast, and the homography-finding operation needs to be run many times in the rotation matching process. Overall, the code is lacking in big-picture comments, but it's fairly clean.
There's a makefile to make compilation automatic. It will try to build two things: an URBI object named ArPan.so, and an executable named autoStitch. They share a lot of the same code, but the URBI object contains the code to connect to an URBI ARDrone object, while autoStitch takes a set of image filenames from the command line.
In order to get the code to run, you need URBI installed on your system, and the makefile needs to know where the URBI root directory is located. You also need OpenCV, and if you want to interface with URBI, it needs to be a 32-bit build, which you can install using Homebrew. My setup is a bit unusual because I wanted both 32- and 64-bit versions installed simultaneously; you might need to change opencv32 to opencv in the makefile if you want it to work.
Assuming you've gotten the code to compile, which is a non-trivial task, most of the interesting stuff is in the stitchPan files. The functions in these files implement the image combination algorithm described above. All of the other files contain helper functions and classes - to extract features, find matches, handle masks, distort & reproject images, etc.
Download
AR.Pan source and objects (zip file, 20.2 MB)