
Many improvements regarding prior information (to speed up optimization and allow usage of prior estimates) #41

Open
peci1 wants to merge 12 commits into Unsigned-Long:master from peci1:knots_priori

Conversation

@peci1
Contributor

@peci1 peci1 commented Oct 13, 2025

This PR adds several improvements. They are mostly independent, but for simplicity I've chained them all on a single branch. Each improvement is a separate commit, though, so it should be easy to cherry-pick only the ones you want.

Improvements:

  • Also includes #40 (Added support for Ouster scans with uint16_t ring fields).
  • Allow optimizing IMU nonlinearity.
  • Fix for cases when RefIMU is given as Sen1 in SpatTempPriori (the RefImu extrinsics should never be optimized).
  • Allow specifying priors on gravity direction.
  • Allow specifying a lower bound on visual scale for pos cameras.
  • Allow specifying a weight for intrinsics priors so that the optimized values do not diverge too far.
  • Added the option to load already computed knots from a previous run and use them either just as an initialization, or also as a constraint.

This PR brings a few changes to the config and spat-temp-priori YAML files. They have all been documented, and all YAMLs in this repo were updated accordingly.

It also changes the format of knots.yaml by adding the initial timestamp, so that the knots can be successfully merged with knots from a different run (this is mostly needed because of topic alignment).
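To make the role of the added initial timestamp concrete, here is a minimal, hypothetical sketch (the function names and the tuple layout are illustrative assumptions, not the actual knots.yaml schema): without a stored start time, two runs' knot indices cannot be placed on a common time axis, so overlapping knots cannot be merged.

```python
# Hypothetical illustration (NOT the actual knots.yaml schema): why storing the
# spline's initial timestamp lets knots from two runs be merged on one time axis.

def knot_times(start_time, dt, n):
    """Absolute timestamps of n uniformly spaced spline knots."""
    return [start_time + i * dt for i in range(n)]

def merge_knots(run_a, run_b):
    """Merge two runs' knots by absolute time; run_b wins on overlap.

    Each run is (start_time, dt, values). Without start_time saved alongside
    the knot values, the two index ranges could not be aligned.
    """
    start_a, dt, vals_a = run_a
    start_b, _, vals_b = run_b
    # Round to nanoseconds so float noise does not split identical timestamps.
    merged = dict(zip((round(t, 9) for t in knot_times(start_a, dt, len(vals_a))), vals_a))
    merged.update(zip((round(t, 9) for t in knot_times(start_b, dt, len(vals_b))), vals_b))
    return sorted(merged.items())

run_a = (100.0, 0.1, [1.0, 2.0, 3.0])   # knots at t = 100.0, 100.1, 100.2
run_b = (100.2, 0.1, [30.0, 4.0])       # overlaps run_a at t = 100.2
print(merge_knots(run_a, run_b))
```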

Signed-off-by: Martin Pecka <peckama2@fel.cvut.cz>
The optimization of IMU nonlinearities will only be enabled in the last batch optimization.
If priors for all extrinsics and the time offset are given for a sensor, there is no need to initialize them using visual odometry or other approaches.

If you want the priors to be non-absolute (i.e. still allow their optimization), set them as RefImu-Sen2 or Sen1-Sen2 priors. If the priors should be absolute (non-optimizable), set them as Sen1-RefImu.
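The difference between an optimizable (soft) prior and an absolute (fixed) one can be sketched with a toy 1-D least-squares problem; the function names and weights below are purely illustrative, not iKalibr's API.

```python
# Toy 1-D illustration of soft vs. hard priors (NOT iKalibr's API).
# A measurement says x ~ 2.0; a prior says x ~ 1.0.

def solve_soft(meas, prior, w_meas, w_prior):
    """Weighted least squares: minimize w_meas*(x-meas)^2 + w_prior*(x-prior)^2.
    The prior pulls the estimate, but the data can still move it."""
    return (w_meas * meas + w_prior * prior) / (w_meas + w_prior)

def solve_hard(prior):
    """Absolute prior: the parameter is simply held fixed at the prior value."""
    return prior

print(solve_soft(2.0, 1.0, w_meas=1.0, w_prior=1.0))    # 1.5: a compromise
print(solve_soft(2.0, 1.0, w_meas=1.0, w_prior=100.0))  # ~1.01: prior dominates
print(solve_hard(1.0))                                  # 1.0: never optimized
```

A RefImu-Sen2 or Sen1-Sen2 prior behaves like `solve_soft`; a Sen1-RefImu prior behaves like `solve_hard`.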
This speeds up optimization and prevents a wrongly estimated visual scale in cases where the optimization tends to squeeze the camera down to a scale of 0.001.
This can be used e.g. for multi-stage optimization when you know that adding a sensor would confuse the optimization: first optimize with the other sensors, save the splines, and then run a second stage with the problematic sensor, fixing the splines so that they cannot be disturbed.
@364700045-prog

Hi, I’d like to ask: after applying your modifications and introducing prior extrinsic constraints, is the resulting calibration error high?
Also, thank you for your high-quality contributions!

@peci1
Contributor Author

peci1 commented Apr 11, 2026

Hmm, that's a good question. I definitely don't remember a number. I don't even know where to look for one. But I have the folder with results saved, so if it's stored somewhere, I can look it up.

I was mostly checking the results visually via the built TF tree and reprojection errors...

The thing is that without these improvements, I couldn't run iKalibr at all on our robot, which has 3 IMUs, one lidar, and 11 or so cameras. Not only did the whole thing take ages, but even when I let it run for as long as it needed, it always diverged.

I ended up doing the calibration in stages, starting with a simple set of sensors, fixing them, and then adding other sensors with the already estimated knots as priors. This was the only way to get the whole thing to converge.

The most problematic (divergent) sensors were the IMU in the Ouster lidar, a sky-facing camera (which was, however, the same model as other cameras that had no problems) and a thermal camera (I used a false-color output for the calibration). This is why I needed the lower bounds on scale; otherwise it always converged to 0.0.

I remember I was missing one more thing that could help with the optimization: when calibrating a stereo camera like a Luxonis OAK, there is already a quite good extrinsics estimate from the camera maker. It would be great if this calibration could be fed into iKalibr as fixed, i.e. fixed cam-to-cam offsets, only seeking the transform of the whole camera body relative to the rest of the sensors.
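The idea above can be sketched as rigid transform composition (hypothetical, this is a feature request, not an existing iKalibr capability; all names and numbers are made up): keep the manufacturer's cam-to-cam transform fixed and estimate only one rig-to-body pose.

```python
import numpy as np

# Sketch of the stereo-rig idea (hypothetical, NOT an existing iKalibr feature):
# the factory-calibrated cam-to-cam transform stays fixed, and only a single
# rig-to-body transform would be estimated.

def pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Fixed, manufacturer-given pose of cam1 in the rig frame (cam0 = rig frame):
cam1_in_rig = pose(np.eye(3), np.array([0.075, 0.0, 0.0]))  # 7.5 cm baseline

# The only quantity left to calibrate: the rig pose in the sensor body frame.
rig_in_body = pose(np.eye(3), np.array([0.2, 0.0, 0.1]))

cam0_in_body = rig_in_body                # cam0 defines the rig frame
cam1_in_body = rig_in_body @ cam1_in_rig  # follows rigidly, never optimized

print(cam1_in_body[:3, 3])  # translation: 0.275, 0.0, 0.1
```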

@364700045-prog

With prior extrinsic parameters, are the final calibration results more accurate? And in your team's practice, do you ultimately adopt the results calibrated this way? How accurate are they, and is the time-offset estimation reliable?

In my case, I am trying to calibrate a system that includes an IMU, a camera, a MID360 LiDAR, a solid-state LiDAR, and a millimeter-wave radar. I would really appreciate any suggestions you might have.

If possible, could we exchange contact information for further academic discussion? It would be great to communicate more about calibration as well as follow-up experiments after calibration.

Thank you very much for your reply!

@364700045-prog

Hi, could you provide a Docker image for this project?
I attempted to build it from source, but some required third-party modules (e.g., CTraj and veta) are not available, so the build cannot be completed.

