Posts

Tuesday, 19 February 2019

RC Car (notes saved for now)

Saving these notes for now.

Pixhawk2 Calibration

  1. Install Mission Planner (Windows) / QGroundControl (Ubuntu)
  2. Install the firmware
    • Using the custom-firmware download option, download and install the ArduRover firmware built for FMUv3 (the Pixhawk1 build does not work)
  3. Follow Mission Planner's instructions to calibrate the accelerometer, compass, radio, and ESC (the ESC is optional)
    • During radio calibration, Mission Planner requires at least four channels (roll, pitch, yaw, throttle), but the controller we use provides only two channels by default.
    • Assign the AUX1~3 channels arbitrarily to fill the remaining slots (the two basic channels are mapped to roll and throttle, which the rover control needs).
    • By default Mission Planner uses the channel 3 signal for throttle control, so change the parameter so that channel 2 controls the throttle (see the parameter sketch below).
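A hedged sketch of that parameter change, assuming ArduRover's RCMAP_* parameters (verify the exact names in your firmware's full parameter list):

RCMAP_ROLL = 1      # channel 1 keeps driving steering (roll)
RCMAP_THROTTLE = 2  # channel 2 drives the throttle (the default is 3)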

Installing MAVROS on the TK1

  1. Install ros-indigo on the TK1
    • See the ROS wiki
  2. Install MAVROS on the TK1
sudo apt-get install python-catkin-tools
sudo apt-get install ros-indigo-mavros

Serial, USB

Figure 1

  1. To control the Pixhawk from the TK1 board, connect the Pixhawk's TELEM2 port through a USB-to-TTL device as shown in Figure 1.
  2. In QGroundControl, set the SYS_COMPANION parameter to "Companion Link (921600 baud, 8N1)" (in Mission Planner the default is 57600).
ls /dev/ttyUSB*
sudo chmod 666 /dev/ttyUSB* #for permission
roscore
rosrun mavros mavros_node _fcu_url:="/dev/ttyUSB0:921600" # use the device name found above
    • The TK1 did not have a serial-to-USB driver installed (it is usually built into Ubuntu).
    • The driver depends on the manufacturer of the USB-to-TTL device (in this case the CP210x driver had to be installed).
After downloading the driver from the manufacturer:
make
cp cp210x.ko /lib/modules/<kernel-version>/kernel/drivers/usb/serial
insmod /lib/modules/<kernel-version>/kernel/drivers/usb/serial/usbserial.ko # was not present in this case
insmod cp210x.ko # load the driver
When running make on the TK1, an error occurs because the GENMASK macro is not defined (GENMASK is defined from kernel version 3.19 on, and the TK1 runs 3.13).
(So GENMASK has to be defined in the file where the error occurs.)
(The code for GENMASK can be found on the internet.)

Back                                                                       Front

Arming
To operate the Pixhawk, it first needs to be armed. If an error occurs at this point, set the parameter that skips the preflight check stage during arming.

roscore
rosrun mavros mavros_node _fcu_url:="/dev/ttyUSB0:57600"
rosservice call /mavros/set_mode "{base_mode: 0, custom_mode: 'GUIDED'}" # switch to GUIDED (the mode that lets an external board control the Pixhawk)
rosservice call /mavros/cmd/arming "value: true" # arming
# Press the arming switch (blinking -> solid)
# Afterwards the rover can be controlled through various topics (see the sketch below).
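As a sketch of that last step: a minimal rospy node that drives the rover by publishing RC overrides through MAVROS. It assumes the channel mapping from the calibration section (channel 1 = steering, channel 2 = throttle) and the mavros_msgs/OverrideRCIn message; the PWM values are only illustrative.

#!/usr/bin/env python
# Minimal sketch: drive the rover via /mavros/rc/override after arming.
import rospy
from mavros_msgs.msg import OverrideRCIn

rospy.init_node('rc_override_example')
pub = rospy.Publisher('/mavros/rc/override', OverrideRCIn, queue_size=10)
rate = rospy.Rate(10)  # overrides should be refreshed continuously

msg = OverrideRCIn()
# ch1 = steering (1500 = neutral), ch2 = throttle (1600 = slow forward), 0 = release a channel
msg.channels = [1500, 1600, 0, 0, 0, 0, 0, 0]

while not rospy.is_shutdown():
    pub.publish(msg)
    rate.sleep()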



Monday, 18 February 2019

[ORB SLAM] What is in the code



ORB_SLAM2/Examples/Monocular


I read 'mono_tum.cc', but steps 1~7 below are the same in every .cc file.

  1. Retrieve paths to images.
  2. Create a SLAM system. It initializes all system and gets ready to process frames.
  3. Create a vector for tracking time statistics.
  4. Main loop
    1. Read an image from file
    2. Pass the image to the SLAM system
    3. Wait to load the next frame
    4. repeat!
  5. Stop all threads
  6. Tracking time statistics
  7. Save the keyframe trajectory. After running, the keyframe trajectory will be saved as 'KeyFrameTrajectory.txt', so rename it or move it to another directory if you want to keep it.

# There is a part that skips the first three lines of the 'rgb.txt' file (its header). There is no part like that in 'mono_kitti.cc'.

In the '.yaml' file there are camera parameters, including the camera calibration and distortion parameters, the camera frames per second, and the color order of the images. There are also ORB parameters for SLAM, such as the number of features per image, the scale factor between levels in the scale pyramid, and the number of levels in the scale pyramid. A small reader sketch follows below.
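As a quick way to inspect those values, here is a small sketch that reads a settings file with OpenCV's FileStorage. The ORB_SLAM2 .yaml files use OpenCV's YAML dialect (the '%YAML:1.0' header), so a plain YAML parser may reject them; this needs a recent OpenCV build whose Python bindings expose FileStorage, and the key names below are the ones used in the TUM example files.

import cv2

# Read the ORB_SLAM2 settings file with OpenCV's YAML reader
fs = cv2.FileStorage('Examples/Monocular/TUM1.yaml', cv2.FILE_STORAGE_READ)
print('fx:', fs.getNode('Camera.fx').real())
print('fps:', fs.getNode('Camera.fps').real())
print('nFeatures:', fs.getNode('ORBextractor.nFeatures').real())
print('scaleFactor:', fs.getNode('ORBextractor.scaleFactor').real())
fs.release()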


Wednesday, 13 February 2019

ImportError after pip install on Ubuntu


Traceback (most recent call last):
  File "/usr/bin/pip", line 9, in <module>
    from pip import main
ImportError: cannot import name main

In the terminal, running
hash -d pip
solved it (bash caches command locations; 'hash -d pip' drops the stale cached entry for pip).

Thanks!

Tuesday, 12 February 2019

How to run ORB SLAM with KITTI Dataset




Monocular Examples

1. Download the dataset (grayscale images) from http://www.cvlibs.net/datasets/kitti/eval_odometry.php

2. Execute the following command. Change KITTIX.yaml to KITTI00-02.yaml, KITTI03.yaml, or KITTI04-12.yaml for sequences 0 to 2, 3, and 4 to 12 respectively. Change PATH_TO_DATASET_FOLDER to the uncompressed dataset folder. Change SEQUENCE_NUMBER to 00, 01, 02, .., 11.


./Examples/Monocular/mono_kitti Vocabulary/ORBvoc.txt Examples/Monocular/KITTIX.yaml PATH_TO_DATASET_FOLDER/dataset/sequences/SEQUENCE_NUMBER



Map I got

In the KeyFrameTrajectory.txt file, every row has 8 entries containing the timestamp (in seconds), position, and orientation: 'timestamp x y z q_x q_y q_z q_w'.
To extract the trajectory, you can, for example, load the file as a table (similar to a .csv file); columns 2 to 4 are then your x, y, z values (or 1 to 3 if you count from 0), as in the sketch below.
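A minimal sketch with numpy (assuming the file is in the current directory):

import numpy as np

# Each row: timestamp x y z q_x q_y q_z q_w
data = np.loadtxt('KeyFrameTrajectory.txt')
xyz = data[:, 1:4]  # columns 1..3 (0-indexed) are x, y, z
print(xyz.shape)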

Stereo Examples


1. Download the dataset above.

2. Execute the following command. Change KITTIX.yaml to KITTI00-02.yaml, KITTI03.yaml, or KITTI04-12.yaml for sequences 0 to 2, 3, and 4 to 12 respectively. Change PATH_TO_DATASET_FOLDER to the uncompressed dataset folder. Change SEQUENCE_NUMBER to 00, 01, 02, .., 11.

./Examples/Stereo/stereo_kitti Vocabulary/ORBvoc.txt Examples/Stereo/KITTIX.yaml PATH_TO_DATASET_FOLDER/dataset/sequences/SEQUENCE_NUMBER

How to convert ogv to mp4 in Ubuntu 16.04

How to convert ogv file to mp4 file

I am using Ubuntu 16.04 and RecordMyDesktop to record my screen. This program produces an .ogv file as its output, so I wanted to convert it to an mp4 file.

wget http://ffmpeg.gusari.org/static/32bit/ffmpeg.static.32bit.latest.tar.gz
tar xzvf ffmpeg.static.32bit.latest.tar.gz

Get a static ffmpeg build like above, or install it with apt like below.

sudo apt-get install ffmpeg

Then run it. 'input.ogv' is the input file. The value 5 after -crf controls the quality: the larger the number, the lower the quality (and the smaller the file). If you want to shrink the video file, raise this number.

ffmpeg -i input.ogv -aq 80 -vcodec libx264 -preset slow -crf 5 -threads 0 output.mp4



Friday, 8 February 2019

How to run ORB SLAM with your own data in Ubuntu 16.04




1. Take your video at a resolution of 640x480 (VGA).

2. Save each frame of your video as a separate image in the PNG format.

sudo apt install ffmpeg
ffmpeg -i testvideo.mp4 frame%d.png
My video is testvideo.mp4, and I wanted to convert every frame of it to frame%d.png files. The PNG files are created in the same folder as the video.

3. Generate a text file like the picture below. The file contains the timestamp and filename of each image (frame). You can generate your own timestamps; there is no single ideal way to set them, but a small time gap makes the SLAM run fast.
one of the text files of the TUM datasets
frame_num = 235
f = open('rgb.txt', 'w+')

for i in range(frame_num):
    f.write('%f rgb/frame%d.png\n' %(0.4*(i+1), i+1))

f.close()

I made the 'rgb.txt' file like above.

4. The images should be saved in a folder named 'rgb' inside the main folder (let us name it 'test'), and the text file should be named 'rgb.txt' and saved in the 'test' folder.

file location
I set files like above.

5. Go to your SLAM folder and run.

./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUM1.yaml ./test
Result

Thursday, 7 February 2019

How to install ORB_SLAM2 and test it on Ubuntu 16.04

< What I did >


1. Create a new folder

mkdir ORB_SLAM
cd ORB_SLAM

2. Install the prerequisite software

(1) Update apt library
sudo apt-get update
(2) Install git
sudo apt-get install git
(3) Install cmake
sudo apt-get install cmake
(4) Install Pangolin
Installation dependencies:
OpenGL / GLEW:
sudo apt-get install libglew-dev
Boost:
sudo apt-get install libboost-dev libboost-thread-dev libboost-filesystem-dev
Python2 / Python3:
sudo apt-get install libpython2.7-dev
To compile the base library:
sudo apt-get install build-essential
How to build Pangolin:
git clone https://github.com/stevenlovegrove/Pangolin.git
cd Pangolin
mkdir build
cd build
cmake ..
cmake --build .
(5) Install Eigen
cd ~/ORB_SLAM
sudo apt install mercurial
hg clone https://bitbucket.org/eigen/eigen/
cd eigen
mkdir build
cd build
cmake ..
make
sudo make install
(6) Install the BLAS and LAPACK library
sudo apt-get install libblas-dev
sudo apt-get install liblapack-dev
(7) Install OpenCV
# Install Dependencies

sudo apt-get update
sudo apt-get install -y build-essential
sudo apt-get install -y cmake
sudo apt-get install -y libgtk2.0-dev
sudo apt-get install -y pkg-config
sudo apt-get install -y python-numpy python-dev
sudo apt-get install -y libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install -y libjpeg-dev libpng-dev libtiff-dev libjasper-dev
sudo apt-get -qq install libopencv-dev build-essential checkinstall cmake pkg-config yasm libjpeg-dev libjasper-dev libavcodec-dev libavformat-dev libswscale-dev libdc1394-22-dev libxine2 libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libv4l-dev python-dev python-numpy libtbb-dev libqt4-dev libgtk2.0-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils

# Download opencv-2.4.11

wget http://downloads.sourceforge.net/project/opencvlibrary/opencv-unix/2.4.11/opencv-2.4.11.zip
unzip opencv-2.4.11.zip
cd opencv-2.4.11
mkdir release
cd release
cd ~/ORB_SLAM
git clone https://github.com/opencv/opencv_extra.git
cd opencv-2.4.11/release
Change the -DOPENCV_TEST_DATA_PATH line below to your own path.
  • Set OPENCV_TEST_DATA_PATH to <path to opencv_extra/testdata>.

cmake \
      -DBUILD_EXAMPLES=ON                                                     \
      -DBUILD_OPENCV_JAVA=OFF                                                 \
      -DBUILD_OPENCV_JS=ON                                                    \
      -DBUILD_OPENCV_NONFREE=ON                                               \
      -DBUILD_OPENCV_PYTHON=ON                                                \
      -DCMAKE_BUILD_TYPE=RELEASE                                              \
      -DCMAKE_INSTALL_PREFIX=$INSTALL_PATH                                    \
      -DCMAKE_LIBRARY_PATH=$CUDA_PATH/lib64/stubs/                            \
      -DCUDA_CUDA_LIBRARY=$CUDA_PATH/lib64/stubs/libcuda.so                   \
      -DCUDA_FAST_MATH=ON                                                     \
      -DCUDA_TOOLKIT_ROOT_DIR=$CUDA_PATH                                      \
      -DENABLE_CCACHE=ON                                                      \
      -DENABLE_FAST_MATH=ON                                                   \
      -DENABLE_PRECOMPILED_HEADERS=OFF                                        \
      -DINSTALL_C_EXAMPLES=ON                                                 \
      -DINSTALL_PYTHON_EXAMPLES=ON                                            \
      -DINSTALL_TESTS=ON                                                      \
      -DOPENCV_EXTRA_MODULES_PATH=$DOWNLOAD_PATH/opencv_contrib/modules/      \
      -DOPENCV_ENABLE_NONFREE=ON                                              \
      -DOPENCV_TEST_DATA_PATH=$HOME/ORB_SLAM/opencv_extra/testdata/           \
      -DWITH_CUBLAS=ON                                                        \
      -DWITH_CUDA=ON                                                          \
      -DWITH_FFMPEG=ON                                                        \
      -DWITH_GDAL=ON                                                          \
      -DWITH_GSTREAMER=ON                                                     \
      -DWITH_LIBV4L=ON                                                        \
      -DWITH_NVCUVID=ON                                                       \
      -DWITH_OPENCL=ON                                                        \
      -DWITH_OPENGL=ON                                                        \
      -DWITH_OPENMP=ON                                                        \
      -DWITH_QT=ON                                                            \
      -DWITH_TBB=ON                                                           \
      -DWITH_V4L=ON                                                           \
      -DWITH_VTK=ON                                                           \
      -DWITH_XINE=ON                                                          \
..

make all -j12 #12 cores
sudo make install
sudo apt-get install python-opencv
I got the CMake error shown below; the solution is at the link that follows.

Error :
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUDA_nppi_LIBRARY (ADVANCED)

Solution :
https://stackoverflow.com/questions/46584000/cmake-error-variables-are-set-to-notfound

And then, I did cmake again and it worked!


3. Install ORB_SLAM

(1) Clone the repository:
git clone https://github.com/raulmur/ORB_SLAM2.git ORB_SLAM2
(2) Compile:
cd ORB_SLAM2
chmod +x build.sh
./build.sh
I got an error after this, so I added '#include <unistd.h>' to the 'system.h' file.

Error:
‘usleep’ was not declared in this scope usleep(3000);

Solution:
Modify ORB_SLAM2-master/include/system.h, add "#include <unistd.h>" in header.

And then I deleted every build folder in ORB_SLAM2, DBoW2, and g2o, and ran 'build.sh' again.
It worked!

4. Test ORB_SLAM2

A test data set for a monocular camera was downloaded for testing.

(1) Download the test data set
Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it.
wget https://vision.in.tum.de/rgbd/dataset/freiburg1/rgbd_dataset_freiburg1_xyz.tgz

(2) Execution command:
Execute the following command. Change TUMX.yaml to TUM1.yaml, TUM2.yaml or TUM3.yaml for freiburg1, freiburg2 and freiburg3 sequences respectively. Change PATH_TO_SEQUENCE_FOLDER to the uncompressed sequence folder.

./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUMX.yaml PATH_TO_SEQUENCE_FOLDER


The result looks OK; research notes on the ORB-SLAM algorithm will follow in the future. Stay tuned.



Wednesday, 30 January 2019

Use Korean in Ubuntu 16.04

Use Korean in Ubuntu 16.04

 sudo apt-get install fonts-nanum*  
 sudo apt-get install nabi  
 sudo apt-get install im-config  
 im-config  

Change 'im-config' setting to Hangul and reboot!

Setting Up My GPU: NVIDIA Graphics Drivers / CUDA


Install NVIDIA Graphic Drivers

Find your NVIDIA driver on the NVIDIA website and download it. I downloaded NVIDIA-Linux-x86_64-410.93. It should be installed with lightdm stopped. You should open a TTY with Ctrl + Alt + F1 before you stop lightdm. TTYs are text-only terminals commonly used as a way to get access to the computer to fix things, without actually logging into a possibly broken desktop.

 sudo service lightdm stop  
 chmod +x ./NVIDIA-Linux-x86_64-410.93  
 sudo ./NVIDIA-Linux-x86_64-410.93  
 sudo reboot  

Install CUDA 9.0 and cuDNN 7.0 on Ubuntu 16.04

Find the CUDA version you want on the NVIDIA website, or if you are looking for the same version as me, just follow along from the first row. If you downloaded a different one, start from the second row and change the file names to yours.

 # Uninstall Old Version  
 sudo apt-get purge cuda  
 sudo apt-get purge libcudnn6  
 sudo apt-get purge libcudnn6-dev  

 # Install CUDA toolkit 9.0 and cuDNN 7.0  
 wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_9.0.176-1_amd64.deb  
 wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/libcudnn7_7.0.5.15-1+cuda9.0_amd64.deb  
 wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/libcudnn7-dev_7.0.5.15-1+cuda9.0_amd64.deb  
 wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/libnccl2_2.1.4-1+cuda9.0_amd64.deb  
 wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/libnccl-dev_2.1.4-1+cuda9.0_amd64.deb  

If you just formatted your desktop, you need to do this before dpkg.

 sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub  

 sudo dpkg -i cuda-repo-ubuntu1604_9.0.176-1_amd64.deb  
 sudo dpkg -i libcudnn7_7.0.5.15-1+cuda9.0_amd64.deb  
 sudo dpkg -i libcudnn7-dev_7.0.5.15-1+cuda9.0_amd64.deb  
 sudo dpkg -i libnccl2_2.1.4-1+cuda9.0_amd64.deb  
 sudo dpkg -i libnccl-dev_2.1.4-1+cuda9.0_amd64.deb

 sudo apt-get update  
 sudo apt-get install cuda=9.0.176-1  
 sudo apt-get install libcudnn7-dev  
 sudo apt-get install libnccl-dev  

Modify the PATH: open the '.bashrc' file and add the two export lines below.

 gedit .bashrc
 export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}  
 export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
 source .bashrc  

And then reboot your desktop.

 reboot  

Verify CUDA installation.

 jihyo@jihyo-desktop:~$ nvidia-smi
 Wed Jan 30 15:19:03 2019
 +-----------------------------------------------------------------------------+
 | NVIDIA-SMI 410.48                 Driver Version: 410.48                    |
 |-------------------------------+----------------------+----------------------+
 | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
 | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
 |===============================+======================+======================|
 |   0  GeForce RTX 2070    Off  | 00000000:01:00.0  On |                  N/A |
 |  0%   49C    P8    25W / 215W |    737MiB /  7944MiB |      2%      Default |
 +-------------------------------+----------------------+----------------------+

 +-----------------------------------------------------------------------------+
 | Processes:                                                       GPU Memory |
 |  GPU       PID   Type   Process name                             Usage      |
 |=============================================================================|
 |    0      1021    G    /usr/lib/xorg/Xorg                           550MiB |
 |    0      1581    G    compiz                                       125MiB |
 |    0      1980    G    ...uest-channel-token=15398769083618213155    59MiB |
 +-----------------------------------------------------------------------------+
 jihyo@jihyo-desktop:~$ nvcc -V
 nvcc: NVIDIA (R) Cuda compiler driver
 Copyright (c) 2005-2015 NVIDIA Corporation
 Built on Tue_Aug_11_14:27:32_CDT_2015
 Cuda compilation tools, release 7.5, V7.5.17







Super Easy Way to Install Chrome on Ubuntu




 wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
 sudo dpkg -i google-chrome-stable_current_amd64.deb

That's it.

Friday, 18 January 2019

Understanding Kalman Filter in case of 1D! (Super Easy)

Previous Story...

Kalman Filter

The Kalman Filter is a type of Bayes Filter. It is the estimator for the linear Gaussian case: it gives the optimal solution for linear models with Gaussian distributions. The Kalman Filter calculates the current position of the robot by recursive state prediction and correction. In the state prediction step, it calculates the robot's state from the previous state and its motion input. In the correction step, it checks whether the state prediction is reasonable using the sensor observation.

Kalman Filter in 1 Dimension

It is easy to understand in the one-dimensional case. In the prediction step, the mean and the variance of the distribution can be obtained by simple addition.


In the prediction step, predict the next state from the robot's motion. Because there is only one dimension, the model is already linear, so the prediction is just an addition of means and variances. Next, in the correction step, compute the new state from the sensor observation. This Gaussian combination is more involved; you can read more here (the product of two Gaussian pdfs is not itself a pdf, but it is proportional to a Gaussian, a.k.a. loving algebra). The update equations are summarized below.
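For reference, these are the standard one-dimensional equations that the code below implements; the prediction adds means and variances, and the correction is the normalized Gaussian product:

Prediction: $\mu' = \mu + \mu_{motion}$, $\sigma'^2 = \sigma^2 + \sigma_{motion}^2$

Correction: $\mu'' = \frac{\sigma_{sensor}^2\,\mu' + \sigma'^2\,\mu_{sensor}}{\sigma'^2 + \sigma_{sensor}^2}$, $\sigma''^2 = \frac{\sigma'^2\,\sigma_{sensor}^2}{\sigma'^2 + \sigma_{sensor}^2}$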

Python Code

I made a simple code of these two steps in python.

# Kalman Filter 1D
def correction(mean, var, sensor_mean, sensor_var):
    new_mean = (sensor_mean*var + mean*sensor_var)/(var + sensor_var)
    new_var = var*sensor_var/(var + sensor_var)
    return new_mean, new_var

def prediction(mean, var, motion_mean, motion_var):
    new_mean = mean + motion_mean
    new_var = var + motion_var
    return new_mean, new_var

There are two functions, named 'correction' and 'prediction'. In 'prediction', the input values 'mean' and 'var' are the previous state of the robot, and 'motion_mean' and 'motion_var' describe the robot's motion. In 'correction', the input values 'mean' and 'var' are the predicted values returned by 'prediction', and 'sensor_mean' and 'sensor_var' are the observation values from the sensor. The contents match the equations above.

sensor = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
sensor_var = 2
motion = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
motion_var = 1
mean = 0
var = 100

I set values like above.

for i in range(len(sensor)):
    # Step 1: State Prediction
    mean, var = prediction(mean, var, motion[i], motion_var)
    print i
    print "- prediction -"
    print "mean:%d var:%d" % (mean, var)
    # Step 2: Correction
    mean, var = correction(mean, var, sensor[i], sensor_var)
    print "- correction -"
    print "mean:%d var:%d" % (mean, var)
    print '-------------------------------------------'


0
- prediction -
mean:1 var:101
- correction -
mean:1 var:1
-------------------------------------------
1
- prediction -
mean:2 var:2
- correction -
mean:2 var:1
-------------------------------------------
2
- prediction -
mean:3 var:2
- correction -
mean:3 var:1
-------------------------------------------
3
- prediction -
mean:4 var:2
- correction -
mean:4 var:1
-------------------------------------------
4
- prediction -
mean:5 var:2
- correction -
mean:5 var:1
-------------------------------------------
5
- prediction -
mean:6 var:2
- correction -
mean:6 var:1
-------------------------------------------
6
- prediction -
mean:7 var:2
- correction -
mean:7 var:1
-------------------------------------------
7
- prediction -
mean:8 var:2
- correction -
mean:8 var:1
-------------------------------------------
8
- prediction -
mean:9 var:2
- correction -
mean:9 var:1
-------------------------------------------
9
- prediction -
mean:10 var:2
- correction -
mean:10 var:1
-------------------------------------------

Process finished with exit code 0


Tuesday, 15 January 2019

Introduction to the Bayes Filter and Related Models

Previous Story ...

Introduction to the Bayes Filter and Related Models


As I mentioned above, we estimate the state x of a system given observations z and robot controls u. The Bayes filter is a framework for recursive state estimation.


There are two steps in the Bayes filter: the prediction step and the correction step.


In the prediction step, thanks to the recursive form, the state at time t can be estimated from the state at time t-1. In the correction step, the sensor observation checks that the prediction is reasonable. We call the probabilistic model used in the prediction step the motion model, and the one used in the correction step the sensor (or observation) model. Because the Bayes filter is only a framework for recursive state estimation, it is best to use the specialization of the Bayes filter that fits each problem. Consider: is the system linear or nonlinear? Gaussian distributions only? Parametric or non-parametric? The recursion itself is written out below.
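In its standard recursive form, with bel denoting the belief over the state and $\eta$ a normalization constant, the integral is the prediction step and the multiplication by the observation likelihood is the correction step:

$bel(x_t) = \eta\, p(z_t \mid x_t) \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$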



Motion Models

Robot motion is inherently uncertain. How can we model this uncertainty? We will specify a posterior probability that action u carries the robot from x to x'.


In practice, one often finds two types of motion models, odometry-based and velocity-based. 

Odometry Model

The odometry model is for systems that are equipped with wheel encoders. It is also called the rotation-translation-rotation model, because the robot's motion is explained in terms of Cartesian coordinates and angles. The robot pose is (x_bar, y_bar, theta_bar) and it moves to (x_bar_prime, y_bar_prime, theta_bar_prime). The odometry information u is (delta_rot1, delta_trans, delta_rot2). Because it is in Cartesian coordinates, the odometry information can be written as the formulas below.
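The original formula image is missing here; the standard decomposition (the odometry motion model in Thrun et al.'s Probabilistic Robotics) is:

$\delta_{rot1} = \operatorname{atan2}(\bar{y}' - \bar{y},\ \bar{x}' - \bar{x}) - \bar{\theta}$
$\delta_{trans} = \sqrt{(\bar{x}' - \bar{x})^2 + (\bar{y}' - \bar{y})^2}$
$\delta_{rot2} = \bar{\theta}' - \bar{\theta} - \delta_{rot1}$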


Do not forget we are working with probability distributions! So the noise in the odometry is also modeled as a probability distribution, for example Gaussian noise.


Velocity-Based Model

The velocity-based model is usually used when no wheel encoders are available. The robot pose is (x_bar, y_bar, theta_bar), it moves to (x_bar_prime, y_bar_prime, theta_bar_prime), and the velocity information u is (v, w), a translational and a rotational velocity.
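The formula image is missing here as well; under the usual circular-arc assumption of the velocity motion model (writing w as $\omega$), the noise-free pose after a time step $\Delta t$ is:

$\bar{x}' = \bar{x} - \frac{v}{\omega}\sin\bar{\theta} + \frac{v}{\omega}\sin(\bar{\theta} + \omega\Delta t)$
$\bar{y}' = \bar{y} + \frac{v}{\omega}\cos\bar{\theta} - \frac{v}{\omega}\cos(\bar{\theta} + \omega\Delta t)$
$\bar{\theta}' = \bar{\theta} + \omega\Delta t$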


But there is a problem with the velocity-based model: because the robot moves on a circular arc, the final orientation is completely determined by the arc. An additional noise term on the final orientation is introduced to fix this problem.

Sensor Model

The sensor model is used in the correction step to check whether the predicted belief is consistent with the observation the sensor made. Common choices include:
  • Model for Laser Scanners
  • Beam-Endpoint Model
  • Ray-Cast Model
  • Model for Perceiving Landmarks with Range-Bearing Sensors

Introduction to SLAM

Let's start with a simple introduction to SLAM.

Wednesday, 19 September 2018

Introduction to SQL #1


The IBM Sequel language was developed as part of the System R project at the IBM San Jose Research Laboratory. It was later renamed SQL, short for Structured Query Language, but it is still sometimes called Sequel. Commercial systems offer most, if not all, SQL-92 features, plus varying feature sets from later standards and special proprietary features, so not all examples here may work on your particular system. FYI, I will use SQLite from now on. :)

The SQL data-definition language (DDL) allows the specification of information about relations, including the schema for each relation, the domain of values associated with each attribute, integrity constraints, and other information such as the set of indices to be maintained for each relation, security and authorization information for each relation, and the physical storage structure of each relation on disk.


Domain Types in SQL

  • char(n): Fixed-length character string, with user-specified length n.
  • varchar(n): Variable length character strings, with user-specified maximum length n.
  • int: Integer (a finite, machine-dependent subset of the integers).
  • smallint: Small integer (a machine-dependent subset of the integer domain type).
  • numeric(p, d): Fixed-point number, with user-specified precision of p digits, d of them to the right of the decimal point.
    • ex) numeric(3, 1): 44.5 can be stored exactly, but neither 444.5 nor 0.32 can be stored exactly.
  • real, double precision: Floating point and double-precision floating point numbers, with machine-dependent precision.
  • float(n): Floating-point number, with a user-specified precision of at least n digits.
  • ...

Create Table Construct

An SQL relation is defined using the create table command: 


ex) create table instructor ( ID char(5), name varchar(20), dept_name varchar(20), salary numeric(8, 2))

If you want to insert the data into the table using this command:
ex) insert into instructor values ( '10211', 'Smith', 'Biology', 66000)


And * means all, so the command - select * from 'table name' - will show us the whole table. I ran the insert twice, so there are 2 Davids in the table.

If you want to delete the table, use the 'drop' command.
ex) drop table instructor

Integrity Constraints in Create Table

The [not null] constraint guarantees that the attribute cannot take the null value.
There are also commands for primary keys and foreign keys. For example, declare dept_name as the primary key for 'department'; a primary key declaration on an attribute automatically ensures not null. In the case of a foreign key, the values of attributes (A_m, ..., A_n) for any tuple in the relation must correspond to the values of the primary key attribute of some tuple in relation r. For example, declare dept_name as the foreign key for 'instructor': for each 'instructor' tuple, the department name specified in the tuple must exist in the primary key attribute (dept_name) of the department relation. A SQLite sketch follows below.
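Since I said I would use SQLite, here is a minimal sketch of these constraints in practice. This is my own illustration (table contents are made up), and note that SQLite only enforces foreign keys when the pragma below is enabled:

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA foreign_keys = ON')  # SQLite enforces foreign keys only when enabled

conn.execute('''create table department (
    dept_name varchar(20) primary key,
    building varchar(15))''')
conn.execute('''create table instructor (
    ID char(5) primary key,
    name varchar(20) not null,
    dept_name varchar(20) references department(dept_name),
    salary numeric(8, 2))''')

conn.execute("insert into department values ('Biology', 'Watson')")
conn.execute("insert into instructor values ('10211', 'Smith', 'Biology', 66000)")

# This would raise sqlite3.IntegrityError: 'History' is not a dept_name in department
# conn.execute("insert into instructor values ('10212', 'Kim', 'History', 50000)")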


Thursday, 13 September 2018

Relational Model #2


Relational Query Languages


  • Selection of tuples
  • Relation r
    after selection


    • If I want to select tuples with A=B and D>5 in relation r,
    • σA=B and D>5 (r)

  • Selection of Attributes
    • Projection
    • Π A, C (r)
after projection

  • Cartesian Product - Cross product!
relations r and s

    •  Cartesian product returns every possible pair in the result.

r ╳ s

  • Union of two relations
relations r and s

    • r ∪ s =

Union of r and s
  • Set difference between two relations
relations r and s
    • r - s = 

    differences
    • Set intersection between two relations
    relations r and s
      • r ∩ s =
    intersection

    • Natural Join
      • Let r and s be relations on schemas R and S respectively. Then the 'natural join' of r and s is a relation on schema R ∪ S obtained as follows:
        1. Consider each pair of tuples t_r from r and t_s from s.
        2. If t_r and t_s have the same value on each of the attributes in R ∩ S, add a tuple t to the result, where
          • t has the same value as t_r on r
          • t has the same value as t_s on s
    relations r and s

      •  r ⋈ s

    Natural join r and s


    Relational Model #1



    Attribute Types


    • The set of allowed values for each attribute is called the domain of the attribute.
    • Attribute values are (normally) required to be atomic; indivisible.
      • The important issue is not what the domain itself is, but rather how we use domain elements in our database.
        • ex) Phone number
          • not splitting the value into a country code, an area code, and a local number,
          • treating it as a nonatomic value
    • The special value null is a member of every domain
    • The null value causes complications in the definition of many operations.

    Relation Schema and Instance
    • If A1, A2, ..., An are attributes, R=(A1, A2, ..., An) is a relation schema.
      • ex) Instructor = (ID, name, dept_name, salary)
    Formally, given sets D1, D2, ..., Dn, a relation r is a subset of D1 × D2 × ... × Dn. Thus, a relation is a set of n-tuples (a1, a2, a3, ..., an) where ai ∈ Di.
    • The current values (relation instance) of a relation are specified by a table
    • An element t of r is a tuple, represented by a row in a table


    Database

    • A database consists of multiple relations
    • Information about an enterprise is broken up into parts - instructor, student, advisor
    • Bad design: univ(instructor-ID, name, dept_name, salary, student_ID, ...) results in
      • repetition of information
      • the need for null values
    • Good schemas? => Normalization theory in Chapter 7!
    Keys
    • Let K⊆R
    • K is a superkey of R if values for K are sufficient to identify a unique tuple of each possible relation r(R)
      • ex) {ID} and {ID, name} are both superkeys of instructor.
    • Superkey K is a candidate key if K is minimal
      • ex) {ID} is a candidate key for the relation named Instructor
    • One of the candidate keys is selected to be the primary key.
      • determined by the database administrator
    • Foreign key
      • A relation r1 may include among its attributes the primary key of another relation r2. This attribute is called a foreign key from r1, referencing r2.
        • ex) If there are 2 relations Instructor( ID, name, dept_name, salary) and Department( dept_name, building, budget), dept_name is a foreign key from Instructor, referencing department, since dept_name is the primary key of Department.
      • In the schema diagram, foreign key dependencies appear as arrows from the foreign key attributes of the referencing relation to the primary key of the referenced relation.

    Schema Diagram for University Data

    • classroom( building, room number, capacity)
    • department(dept_name, building, budget)
    • course(course_ID, title, dept_name, credits)
    • instructor(ID, name, dept_name, salary)
    • section(course_id, sec_id, semester, year, building, room_number, time_slot_id)
    • teaches(ID, course_id, sec_id, semester, year)
    • student(ID, name, dept_name, tot_cred)
    • takes(ID, course_id, sec_id, semester, year, grade)
    • advisors(s_ID, i_ID)
    • time_slot(time_slot_id, day, start_time, end_time)
    • prereq(course_id, prereq_id)





    [ new blog ]

    new blog https://jihyo-jeon.github.io/