First, download the tiny dataset:
curl -s https://tri-ml-public.s3.amazonaws.com/github/packnet-sfm/datasets/KITTI_tiny.tar | tar -xv -C /data/datasets/
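If the download succeeded, the archive unpacks into /data/datasets/KITTI_tiny; a quick sanity check (the directory name is an assumption based on the archive name):

ls /data/datasets/KITTI_tiny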
Run the visualization demo with vidar and CamViz:
python thirdparty/vidar/scripts/launch.py demo_configs/camviz_demo.yaml
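If the demo cannot find the data, check that the config points at the dataset extracted above (the grep pattern is an assumption about the config contents):

grep -n "KITTI_tiny" demo_configs/camviz_demo.yaml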
Using vidar, pre-train the depth, ego-motion, and intrinsics networks:
python thirdparty/vidar/scripts/launch.py demo_configs/selfsup_resnet18_vo_calib.yaml
The checkpoint files will then be stored at /data/checkpoints/vo_demo/<DATE>/models/###.ckpt.
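Since <DATE> and the checkpoint number differ per run, the newest checkpoint can be located with a glob (the pattern is an assumption based on the path above):

ls -t /data/checkpoints/vo_demo/*/models/*.ckpt | head -n 1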
Set the above checkpoint path in the SELFSUP_CKPT_OVERRIDE= line of demo_selfsup_vo_integration.sh, then run:
./shells/demo_selfsup_vo_integration.sh
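To avoid editing the script by hand, you can also substitute the newest checkpoint automatically; this sketch assumes the script contains a literal SELFSUP_CKPT_OVERRIDE= assignment:

# Pick the most recent checkpoint from the training run above
CKPT=$(ls -t /data/checkpoints/vo_demo/*/models/*.ckpt | head -n 1)
# Rewrite the override line in place, then launch the demo
sed -i "s|^SELFSUP_CKPT_OVERRIDE=.*|SELFSUP_CKPT_OVERRIDE=${CKPT}|" shells/demo_selfsup_vo_integration.sh
./shells/demo_selfsup_vo_integration.sh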