How to run ORB SLAM with your own data in Ubuntu 16.04
1. Record your video at a resolution of 640×480 (VGA).
2. Save each frame of your video as a separate image in PNG format:
sudo apt install ffmpeg
ffmpeg -i testvideo.mp4 frame%d.png
My video is named testvideo.mp4, and I wanted to convert every frame of it into frame%d.png files. The PNG files are created in the same folder as the video.
3. Generate a text file like the example below. The file contains the timestamp and filename of each image (frame). You can choose the timestamps yourself; there is no single correct way to set them, but a smaller time gap makes the SLAM run faster.
[Image: the text file of the TUM datasets]
# Write one "<timestamp> rgb/frame<i>.png" line per image
frame_num = 235
f = open('rgb.txt', 'w+')
for i in range(frame_num):
    f.write('%f rgb/frame%d.png\n' % (0.4*(i+1), i+1))
f.close()
I made the 'rgb.txt' file with the script above.
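If you know the frame rate of your recording, you can also derive the timestamps from it instead of picking an arbitrary gap. A minimal sketch, assuming a 30 fps camera (substitute your own rate and frame count):

```python
# Sketch: write a TUM-style rgb.txt whose timestamps follow the
# video's real frame rate. 30 fps and 235 frames are assumed values.
frame_num = 235
fps = 30.0  # assumption: replace with your camera's frame rate

with open('rgb.txt', 'w') as f:
    for i in range(1, frame_num + 1):
        # one line per image: "<timestamp> rgb/frame<i>.png"
        f.write('%f rgb/frame%d.png\n' % (i / fps, i))
```

This keeps the playback speed of the sequence close to the speed of the original recording.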
4. Save the images in a folder named 'rgb' inside the main folder (let us name it 'test'), and save the text file as 'rgb.txt' in the 'test' folder.
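Arranging the files into this layout can be scripted. A sketch, assuming the frames and rgb.txt sit in the current directory and 'test' is the folder name used above:

```python
import os
import shutil

# Sketch: arrange extracted frames into the layout mono_tum expects:
#   test/rgb.txt
#   test/rgb/frame1.png, frame2.png, ...
# The folder name 'test' and the source folder '.' are assumptions.
os.makedirs(os.path.join('test', 'rgb'), exist_ok=True)

for name in os.listdir('.'):
    # move only the ffmpeg output images, frame<N>.png
    if name.startswith('frame') and name.endswith('.png'):
        shutil.move(name, os.path.join('test', 'rgb', name))

if os.path.exists('rgb.txt'):
    shutil.move('rgb.txt', os.path.join('test', 'rgb.txt'))
```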
[Image: file location]
5. Go to your ORB SLAM folder and run:
./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUM1.yaml ./test
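Before launching mono_tum, it can help to verify that every image listed in rgb.txt actually exists, since a missing frame makes the run fail partway through. A small sketch (the helper name and the 'test' path are illustrative, not part of ORB SLAM):

```python
import os

def check_sequence(seq_dir):
    """Return the images referenced by rgb.txt that are missing on disk."""
    missing = []
    with open(os.path.join(seq_dir, 'rgb.txt')) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue  # TUM files may begin with comment lines
            _, rel_path = line.split()
            if not os.path.exists(os.path.join(seq_dir, rel_path)):
                missing.append(rel_path)
    return missing

# e.g. check_sequence('./test') should return an empty list
```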
[Image: result]
We ran the example with our own video.