IMU Preintegration Theory

A while ago, I explored the theory of IMU preintegration in the context of sensor fusion with a factor graph. The basic issue is that to use IMU measurements as between factors for optimization, we need to know the previous state, which is not convenient. A mathematical trick, however, lets us build a between factor between two adjacent states that relies only on the IMU measurements between the two timestamps. Yet another way to use IMU measurements in a sensor-fusion context is to add a state at every IMU measurement, but this makes the problem size explode quickly, since IMU measurements arrive at approximately 100 Hz.
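The "mathematical trick" is to accumulate the raw measurements between timestamps i and j into relative motion increments that do not depend on the state at time i (only on the bias estimate there). Following [2], with gyroscope/accelerometer measurements w_k, a_k, biases b, and IMU sampling interval Δt, the preintegrated deltas are:

```latex
\Delta R_{ij} = \prod_{k=i}^{j-1} \mathrm{Exp}\!\big( (\tilde{\omega}_k - b^{g}_{i}) \, \Delta t \big)

\Delta v_{ij} = \sum_{k=i}^{j-1} \Delta R_{ik} \, (\tilde{a}_k - b^{a}_{i}) \, \Delta t

\Delta p_{ij} = \sum_{k=i}^{j-1} \Big[ \Delta v_{ik} \, \Delta t + \tfrac{1}{2} \, \Delta R_{ik} \, (\tilde{a}_k - b^{a}_{i}) \, \Delta t^2 \Big]
```

Since these depend only on the measurements and the bias estimate at time i, they can be computed once as the measurements arrive and reused throughout the optimization (with a first-order correction when the bias estimate changes).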

In this presentation, I try to summarize my understanding in a bite-sized set of slides. I also try to show how to make use of all this in your application using GTSAM, and there are some pointers on how to implement it from scratch as well. The material is drawn mainly from [2]. Hope this helps; any comments/corrections are welcome!


#pragma once
/** @file This is a collection of functions related to
          initialization of the IMU pose. Given only the accelerometer
          measurement, it is possible to infer FLU_T_imu0. This assumes
          the IMU is at rest.

          There are methods which can recover FLU_T_imu while the IMU is
          moving. Currently I do not have an understanding of these.
          Maybe in the future, when I have a use case for this, I will
          understand and implement them.

          Date: June 2020
          Author: Manohar Kuse <>
*/

#include <iostream>
using namespace std;

#include <Eigen/Dense>
#include <Eigen/Geometry>
using namespace Eigen;

#include "ImuPoseUtils.h"

class ImuInitialPose
{
public:
    /// @brief Given the accelerometer measurement, computes the initial pose
    ///        of the IMU in the FLU frame. FLU = x:front y:left z:up
    static Matrix3d compute_FLU_R_imu( const Vector3d& lin_acc_measurement, bool _debug=false )
    {
        Vector3d lin_acc_normed = lin_acc_measurement.normalized();
        Vector3d up; up << 0.0, 0.0, 1.0;

        Matrix3d FLU_R_imu0 = Eigen::Quaterniond::FromTwoVectors(lin_acc_normed, up).toRotationMatrix();

        // zero out the yaw
        Vector3d _rpy = ImuPoseUtils::rotation2EulerZXY( FLU_R_imu0 );
        _rpy(2) = 0.0;
        FLU_R_imu0 = ImuPoseUtils::eulerZXY2Rotation( _rpy );

        // In the general case, 2 angle solutions are possible.
        // Also, the above way of zeroing the yaw is prone to gimbal lock.
        // TODO: deal with the above issues.

        if( _debug )
        {
            cout << "---ImuInitialPose::compute_FLU_R_imu debug ---" << __FILE__ << ":" << __LINE__ << "\n";
            cout << "\tinput lin_acc_measurement=" << ImuPoseUtils::prettyPrintVector(lin_acc_measurement) << "\t norm=" << lin_acc_measurement.norm() << endl;
            cout << "\tFLU_R_imu0=" << ImuPoseUtils::prettyPrint( FLU_R_imu0 );
            cout << endl;

            Vector3d FLU_g = Vector3d( 0, 0, -9.805 ); // I use a Z-up co-ordinate frame.
            Vector3d FLU_a = Vector3d( 0, 0, 0 );
            cout << "\tFLU_g: " << ImuPoseUtils::prettyPrintVector(FLU_g) << endl;
            cout << "\tFLU_R_imu0.transpose() * ( FLU_a - FLU_g ) : "
                 << ( FLU_R_imu0.transpose() * ( FLU_a - FLU_g ) ).transpose()
                 << endl;
            cout << "\t  ^^^^ should be about the same as the input linear_acc\n";
            cout << "---END ImuInitialPose::compute_FLU_R_imu debug ---\n";
        }

        return FLU_R_imu0;
    }

    static Matrix4d compute_FLU_T_imu( const Vector3d& lin_acc_measurement, bool _debug=false )
    {
        Matrix3d R = compute_FLU_R_imu( lin_acc_measurement, _debug );
        Matrix4d T = Matrix4d::Identity();
        T.topLeftCorner(3,3) = R;
        return T;
    }
};


  1. Lupton, Todd, and Salah Sukkarieh. “Visual-inertial-aided navigation for high-dynamic motion in built environments without initial conditions.” IEEE Transactions on Robotics 28.1 (2011): 61-76.
  2. Forster, Christian, et al. “On-Manifold Preintegration for Real-Time Visual–Inertial Odometry.” IEEE Transactions on Robotics 33.1 (2016): 1-21.
  3. Qin, Tong, Peiliang Li, and Shaojie Shen. “Vins-mono: A robust and versatile monocular visual-inertial state estimator.” IEEE Transactions on Robotics 34.4 (2018): 1004-1020.
