Vision Controlled Quadcopter Flight

Posted on December 9, 2015

Just completed (8th Dec, 2015) the Aerial Robotics course at HKUST (ELEC6910P) by Prof. SHEN Shaojie (my PhD supervisor). Most of the course credit was based on completion of the projects. Course TAs were Su Kunyue and Yang Zhengfei. My project partners were Sun Lin and Sun Ting.

The projects eventually connected together to form a vision-controlled autonomous aerial system. Below are some videos/photos taken during the project and a downloads section with the source code. Finally, I give high-level details on each phase of the project.

Gallery

[Video] A hover flight: a feedback controller on the quadrotor using pose feedback from OptiTrack.

Project 1

To build a 2nd-order PD controller for the quadrotor. The controller takes the desired position (with velocity and acceleration) and the current position (with velocity and acceleration) as input, and outputs the thrusts to be applied to each rotor of the quadcopter so that it follows the desired trajectory. The PD gains had to be tuned. A simulation was done in Matlab, with the system dynamics derived from the Newton-Euler equations of motion of the quadcopter. Red is the followed trajectory and blue is the desired trajectory. See [1] for details on the controller.

[Figure: helix trajectory]
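To make the control law concrete, below is a minimal C++/Eigen sketch of the position loop under the near-hover linearization of [1]. This is not the course Matlab code: the gains and variable names are assumptions, and the inner attitude loop plus the mixing from total thrust and moments to per-rotor thrusts are omitted.

```cpp
// Minimal PD position-controller sketch (near-hover linearization).
// Assumed names and gains; attitude loop and motor mixing omitted.
#include <Eigen/Dense>
#include <cmath>
#include <iostream>

struct Command { double thrust, roll, pitch; };  // total thrust + attitude setpoints

Command pdController(const Eigen::Vector3d& pos,    const Eigen::Vector3d& vel,
                     const Eigen::Vector3d& posDes, const Eigen::Vector3d& velDes,
                     const Eigen::Vector3d& accDes, double yaw,
                     const Eigen::Vector3d& kp,     const Eigen::Vector3d& kd,
                     double mass) {
    const double g = 9.81;
    // Commanded acceleration from the 2nd-order error dynamics:
    // a_cmd = a_des + Kd (v_des - v) + Kp (p_des - p)
    Eigen::Vector3d accCmd = accDes
                           + kd.cwiseProduct(velDes - vel)
                           + kp.cwiseProduct(posDes - pos);
    Command c;
    c.thrust = mass * (g + accCmd.z());  // total thrust along body z
    // Desired roll/pitch that realize the horizontal accelerations (small angles).
    c.roll  = (accCmd.x() * std::sin(yaw) - accCmd.y() * std::cos(yaw)) / g;
    c.pitch = (accCmd.x() * std::cos(yaw) + accCmd.y() * std::sin(yaw)) / g;
    return c;
}

int main() {
    // Hovering 10 cm below the setpoint: expect extra upward thrust.
    Eigen::Vector3d pos(0, 0, 0.9), vel(0, 0, 0);
    Eigen::Vector3d posDes(0, 0, 1.0), velDes(0, 0, 0), accDes(0, 0, 0);
    Eigen::Vector3d kp(5, 5, 10), kd(3, 3, 6);  // example gains (assumed)
    Command c = pdController(pos, vel, posDes, velDes, accDes, 0.0, kp, kd, 0.5);
    std::cout << "thrust " << c.thrust << " N, roll " << c.roll
              << " rad, pitch " << c.pitch << " rad\n";
}
```

Position and velocity errors produce a commanded acceleration; its vertical component sets the total thrust, and its horizontal components map to the desired roll and pitch tracked by the attitude loop.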

Project 2

Generating optimal trajectories. The target of this project was to generate smooth trajectories. The basic idea is to set up waypoints and then fit polynomials through them such that the trajectory is continuous and differentiable at the segment boundaries. Although this relatively simple approach works, it is not an optimal one for the quadrotor to follow.

Minimum-snap trajectories (snap is the 4th-order derivative of position) tend to be energy efficient. The way to go about it is to form the energy functional of the 4th derivatives of the polynomial segments and minimize it to obtain the polynomial coefficients of each segment. This yields a Quadratic Programming (QP) problem, roughly stated below. For more details see [2].
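In my notation (loosely following [2]), stacking the per-segment polynomial coefficients into a vector c makes the snap cost quadratic in c, with the waypoints and derivative continuity entering as linear constraints:

```latex
\min_{c}\; J(c) = \int_{0}^{T} \left\| \frac{d^{4}p(t)}{dt^{4}} \right\|^{2} dt
                = c^{\top} Q\, c
\qquad \text{subject to} \qquad A\,c = b
```

Here Q is assembled from integrals of products of the 4th-derivative basis polynomials, and the rows of A, b pin the trajectory to the waypoints and match the derivatives at segment boundaries; any off-the-shelf QP solver then returns the coefficients.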

Green is the generated minimum-snap trajectory. Blue is the trajectory followed using the controller from Project 1.

[Figure]

Project 3

Tag detection from an RGB camera and camera pose estimation. A large grid of AR markers (similar to the one shown below) was placed on the ground. The ArUco library provides an efficient way to generate a grid of tags (for printing) and a C++ interface to detect the tags imaged by a camera.

[Figure: grid of ArUco AR markers]
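For illustration, here is a short detection sketch. The project used the standalone ArUco library; this sketch uses OpenCV's aruco contrib module instead (an equivalent, swapped-in interface), and the dictionary choice and file name are assumptions.

```cpp
// Tag detection sketch using OpenCV's aruco contrib module
// (the project itself used the standalone ArUco library).
#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::Mat image = cv::imread("tag_grid.png");  // hypothetical input image
    if (image.empty()) return 1;

    cv::Ptr<cv::aruco::Dictionary> dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
    std::vector<int> ids;                           // detected tag IDs
    std::vector<std::vector<cv::Point2f>> corners;  // 4 image corners per tag
    cv::aruco::detectMarkers(image, dict, corners, ids);

    std::cout << "detected " << ids.size() << " tags" << std::endl;
    // The corner pixel coordinates, paired with the known grid layout,
    // give the 2D-3D correspondences used for pose estimation below.
    return 0;
}
```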

Knowing the 3D coordinates of the tags and their imaged points, we can solve the PnP (Perspective-n-Point) problem from computer vision to determine the pose of the camera relative to the ground frame. We implemented a simple linear algorithm. However, for real applications (like quadrotor control) it is reasonable to use the non-linear version of the algorithm. We used the one provided by OpenCV: cv::solvePnP().
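A minimal sketch of the non-linear pose recovery with cv::solvePnP(); the correspondences, intrinsics, and tag size below are placeholder values, not from the project:

```cpp
// Camera pose from known 2D-3D tag-corner correspondences via cv::solvePnP().
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    // Known 3D corners of one tag in the ground (world) frame, 10 cm square.
    std::vector<cv::Point3f> objectPoints = {
        {0.0f, 0.0f, 0.0f}, {0.1f, 0.0f, 0.0f},
        {0.1f, 0.1f, 0.0f}, {0.0f, 0.1f, 0.0f}};
    // Corresponding detected corners in the image (hypothetical values).
    std::vector<cv::Point2f> imagePoints = {
        {320.0f, 240.0f}, {380.0f, 241.0f},
        {379.0f, 300.0f}, {321.0f, 299.0f}};

    // Intrinsics from camera calibration (placeholder values).
    cv::Mat K = (cv::Mat_<double>(3, 3) << 500, 0, 320,
                                           0, 500, 240,
                                           0,   0,   1);
    cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64F);

    // Iterative (non-linear) PnP: rotation (Rodrigues vector) and translation
    // taking world-frame points into the camera frame.
    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, imagePoints, K, distCoeffs, rvec, tvec);

    // Camera pose in the world frame: R_wc = R^T, t_wc = -R^T t.
    cv::Mat R;
    cv::Rodrigues(rvec, R);
    cv::Mat camPos = -R.t() * tvec;
    std::cout << "Camera position: " << camPos.t() << std::endl;
    return 0;
}
```

cv::solvePnP() returns the transform taking world points into the camera frame; inverting it gives the camera pose in the ground frame, which is what the flight controller needs.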

The image on the left is captured by the camera viewing the AR-tag grid. In the 3D visualization, green points show the poses estimated by the implemented linear algorithm and red points show the poses estimated by cv::solvePnP(). The plot on the right shows the re-projection error for each captured image; here, blue is due to the linear algorithm and red due to cv::solvePnP(). Note that, using visual measurements alone, we can generate pose estimates at no more than the frame rate of the camera (25 Hz).

[Figure]

Project 4

EKF (Extended Kalman Filter) fusion of visual and IMU pose information. An IMU (Inertial Measurement Unit) provides the instantaneous linear acceleration and angular velocity of the body (here, the quadcopter). These measurements can also be used to determine the motion of the quadcopter, and the IMU provides them at 200 Hz. However, this motion estimate, while accurate in the short term (around 1 s), is often unreliable in the long term.

A Kalman filter (here, an EKF) provides a simple, fast, and reliable framework for fusing the information from the IMU and the camera. The state vector consists of the position, orientation, linear velocity, and IMU biases. The IMU drives the prediction step, and periodic observations from the camera drive the update step; the mean and covariance of the state are maintained throughout. The main point of this step is to provide state-estimation feedback to the controller at a high rate (200 Hz), since 25 Hz is not sufficient for flight control.

For details, see standard Kalman filter/EKF theory and then [1].
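To make the prediction/update loop concrete, here is a heavily simplified sketch (not the course code): the state is reduced to position and velocity, the accelerometer reading is assumed to be already rotated into the world frame and gravity-compensated, and the orientation and bias states are dropped. All names and noise values are assumptions.

```cpp
// Simplified EKF fusion loop: 200 Hz IMU prediction, 25 Hz camera update.
#include <Eigen/Dense>
#include <iostream>

using Vec6 = Eigen::Matrix<double, 6, 1>;
using Mat6 = Eigen::Matrix<double, 6, 6>;

struct Ekf {
    Vec6 x = Vec6::Zero();     // state: [position; velocity]
    Mat6 P = Mat6::Identity(); // state covariance

    // Prediction with world-frame acceleration from the IMU (dt ~ 1/200 s).
    void predict(const Eigen::Vector3d& acc, double dt) {
        Mat6 F = Mat6::Identity();
        F.block<3,3>(0,3) = Eigen::Matrix3d::Identity() * dt;
        x.head<3>() += x.tail<3>() * dt + 0.5 * acc * dt * dt;
        x.tail<3>() += acc * dt;
        Mat6 Q = Mat6::Identity() * 1e-3;  // process noise (assumed)
        P = F * P * F.transpose() + Q;
    }

    // Update with a camera position measurement from PnP (25 Hz).
    void update(const Eigen::Vector3d& zPos) {
        Eigen::Matrix<double, 3, 6> H = Eigen::Matrix<double, 3, 6>::Zero();
        H.block<3,3>(0,0) = Eigen::Matrix3d::Identity();
        Eigen::Matrix3d R = Eigen::Matrix3d::Identity() * 1e-2; // meas. noise (assumed)
        Eigen::Matrix3d S = H * P * H.transpose() + R;
        Eigen::Matrix<double, 6, 3> K = P * H.transpose() * S.inverse();
        x += K * (zPos - H * x);
        P = (Mat6::Identity() - K * H) * P;
    }
};

int main() {
    Ekf ekf;
    // Eight IMU prediction steps between consecutive camera frames (200 Hz / 25 Hz).
    for (int i = 0; i < 8; ++i)
        ekf.predict(Eigen::Vector3d(0, 0, 0.1), 1.0 / 200.0);
    ekf.update(Eigen::Vector3d(0, 0, 0.001));
    std::cout << "position: " << ekf.x.head<3>().transpose() << std::endl;
}
```

The full filter additionally propagates orientation with the gyroscope and estimates the IMU biases, but the structure is the same: many cheap prediction steps between each camera update, so the controller always has a fresh 200 Hz state estimate.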

Below, red points show the trajectory estimated by cv::solvePnP(); notice the occasional errors and the low frame rate of 25 Hz. Blue points show the trajectory after fusing the IMU and camera information, which generates pose estimates at 200 Hz.

[Figure]

References

[1] N. Michael, D. Mellinger, Q. Lindsey, and V. Kumar, “The GRASP multiple micro-UAV testbed,” IEEE Robotics and Automation Magazine, vol. 17, no. 3, pp. 56–65, 2010.

[2] C. Richter, A. Bry, and N. Roy, “Polynomial trajectory planning for aggressive quadrotor flight in dense indoor environments,” in Proceedings of the International Symposium on Robotics Research (ISRR), 2013.
