Running ORB-SLAM2 with the CSI Camera on the Jetson Nano


1. Calibrate the Camera

After the camera is correctly installed, enter the following in a terminal and the camera will start:

nvgstcapture-1.0

To close the camera, type q in the terminal and press Enter. To capture an image, type j and press Enter; the image is saved to the current directory.

Calibrate the camera with Zhang Zhengyou's (Zhang's) method:

First print a chessboard pattern for calibration. Then, using the capture procedure above, photograph the chessboard from different angles and distances (about 20 images), and run the calibration in the MATLAB camera calibration toolbox.

Reference: https://blog.csdn.net/heroacool/article/details/51023921
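
If you would rather calibrate with OpenCV than MATLAB, here is a minimal sketch of the same Zhang-style chessboard calibration. The 9x6 inner-corner count and 25 mm square size are assumptions; match them to your printed pattern. It prints fx, fy, cx, cy and the distortion coefficients [k1 k2 p1 p2 k3], in the order the yaml file below expects.

// calibrate.cpp - minimal chessboard calibration sketch (OpenCV, Zhang's method)
// Assumed board: 9x6 inner corners, 25 mm squares; pass the ~20 photos as arguments
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main(int argc, char** argv)
{
    const cv::Size boardSize(9, 6);   // inner corners per row and column (assumption)
    const float squareSize = 25.0f;   // square edge in mm (assumption; scales extrinsics only)

    // The same 3D corner grid is reused for every view (Z = 0 plane)
    std::vector<cv::Point3f> corners3d;
    for (int i = 0; i < boardSize.height; ++i)
        for (int j = 0; j < boardSize.width; ++j)
            corners3d.emplace_back(j * squareSize, i * squareSize, 0.0f);

    std::vector<std::vector<cv::Point2f>> imagePoints;
    std::vector<std::vector<cv::Point3f>> objectPoints;
    cv::Size imageSize;

    for (int k = 1; k < argc; ++k) {
        cv::Mat img = cv::imread(argv[k], cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();
        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(img, boardSize, corners)) {
            // Refine detected corners to sub-pixel accuracy
            cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
                cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
            imagePoints.push_back(corners);
            objectPoints.push_back(corners3d);
        }
    }
    if (imagePoints.empty()) { std::cerr << "no chessboards found" << std::endl; return -1; }

    cv::Mat K, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize, K, distCoeffs, rvecs, tvecs);

    // K(0,0)=fx, K(1,1)=fy, K(0,2)=cx, K(1,2)=cy; distCoeffs = [k1 k2 p1 p2 k3]
    std::cout << "RMS reprojection error: " << rms << "\nK =\n" << K << "\ndist = " << distCoeffs << std::endl;
    return 0;
}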

2. Write a New Entry Point to Call ORB-SLAM2

For the dataset examples, ORB-SLAM2 first reads images from the dataset and then passes them to the SLAM system for processing. Our own camera can be handled the same way, so the most convenient approach is to add our program as a new executable inside the ORB-SLAM2 project.

cd Examples/Monocular/
vim myslam.cc
vim myslam.yaml

Write the following into myslam.cc:

// This file opens the camera and feeds its images to ORB-SLAM2 for localization
// Requires OpenCV
#include <opencv2/opencv.hpp>

// ORB-SLAM2 system interface
#include "System.h"

#include <string>
#include <chrono>   // for time stamps
#include <iostream>

using namespace std;

int main(int argc, char **argv)
{
    // Settings file and vocabulary file passed on the command line
    string parameterFile = argv[1];
    string vocFile = argv[2];

    // Declare the ORB-SLAM2 system
    ORB_SLAM2::System SLAM(vocFile, parameterFile, ORB_SLAM2::System::MONOCULAR, true);

    // Open the camera
    cv::VideoCapture cap(0);    // change to 1 if you want to use a USB camera

    // Set the resolution to 640x480
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 640);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 480);

    // Record the system start time
    auto start = chrono::system_clock::now();

    while (1) {
        cv::Mat frame;
        cap >> frame;   // read a frame from the camera
        auto now = chrono::system_clock::now();
        auto timestamp = chrono::duration_cast<chrono::milliseconds>(now - start);
        SLAM.TrackMonocular(frame, double(timestamp.count()) / 1000.0);
    }

    return 0;
}

Write your calibrated camera intrinsics into myslam.yaml:

%YAML:1.0

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------

# Camera calibration and distortion parameters (OpenCV)
# Fill in your own calibration results here; the easiest way is to copy a
# dataset yaml file and edit it
Camera.fx:
Camera.fy:
Camera.cx:
Camera.cy:

Camera.k1:
Camera.k2:
Camera.p1:
Camera.p2:
Camera.k3:

# Camera frames per second
Camera.fps: 30.0

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 0

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000

# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 10
ORBextractor.minThFAST: 5

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
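
ORB-SLAM2 reads this settings file through OpenCV's cv::FileStorage, so once the values are filled in you can sanity-check that they parse. A small sketch (run next to myslam.yaml):

// check_yaml.cpp - verify that myslam.yaml opens and the intrinsics parse
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::FileStorage fs("myslam.yaml", cv::FileStorage::READ);
    if (!fs.isOpened()) { std::cerr << "cannot open myslam.yaml" << std::endl; return -1; }
    float fx = fs["Camera.fx"], fy = fs["Camera.fy"];
    float cx = fs["Camera.cx"], cy = fs["Camera.cy"];
    std::cout << "fx=" << fx << " fy=" << fy << " cx=" << cx << " cy=" << cy << std::endl;
    return 0;
}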

Then edit CMakeLists.txt in the ORB_SLAM2 folder and append at the end:

add_executable(myslam Examples/Monocular/myslam.cc)
target_link_libraries(myslam ${PROJECT_NAME})

At this point the entry point is in place, but on the Jetson Nano you cannot simply use cv::VideoCapture cap(0). The build will still succeed, but at runtime the camera feed shows up as solid gray and tracking does not work. Opening the onboard CSI camera requires GStreamer; if OpenCV was compiled without GStreamer support, the camera cannot be opened.
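
Before rebuilding anything, you can confirm whether the OpenCV you currently link against was built with GStreamer: print the build configuration and look for "GStreamer: YES" under the Video I/O section.

// check_build.cpp - print OpenCV's compiled-in configuration
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}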

3. Build OpenCV with GStreamer

Install the dependencies:

sudo apt-get install -y build-essential cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install -y libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
sudo apt-get install -y python2.7-dev python3.6-dev python-dev python-numpy python3-numpy
sudo apt-get install -y libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
sudo apt-get install -y libv4l-dev v4l-utils qv4l2 v4l2ucp
sudo apt-get install -y curl

Download the OpenCV source code. Note that you also need the opencv_contrib-3.4.0 folder, placed side by side with opencv-3.4.0 (make sure the version numbers match).
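
For example, the 3.4.0 release archives can be fetched from GitHub (one way of doing it; any matching source archives work):

curl -L https://github.com/opencv/opencv/archive/3.4.0.zip -o opencv-3.4.0.zip
curl -L https://github.com/opencv/opencv_contrib/archive/3.4.0.zip -o opencv_contrib-3.4.0.zip
unzip opencv-3.4.0.zip
unzip opencv_contrib-3.4.0.zip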

Build from source:

cd opencv-3.4.0/
sudo mkdir build
cd build/
cmake -D WITH_CUDA=ON \
      -D CUDA_ARCH_BIN="5.3" \
      -D CUDA_ARCH_PTX="" \
      -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-3.4.0/modules \
      -D WITH_GSTREAMER=ON \
      -D WITH_LIBV4L=ON \
      -D BUILD_opencv_python2=ON \
      -D BUILD_opencv_python3=ON \
      -D BUILD_TESTS=OFF \
      -D BUILD_PERF_TESTS=OFF \
      -D BUILD_EXAMPLES=OFF \
      -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local ..
make -j3
sudo make install

During the build, a libGL.so linking problem came up:

1. Find the path of the libGL.so file (see the lookup command after this list). On this machine, for example, the path is /usr/lib/libGL.so.

2. Create a symlink: sudo ln -s /usr/lib/libGL.so.1 /usr/lib/aarch64-linux-gnu/libGL.so

(The link points at libGL.so.1 because that is the actual versioned library file present on the system, while the unversioned name libGL.so is what the linker searches for at build time.)

3. If you hit the error: ln: failed to create symbolic link '/usr/lib/aarch64-linux-gnu/libGL.so': File exists

then remove the existing link first: sudo rm /usr/lib/aarch64-linux-gnu/libGL.so

4. Re-run step 2 to create the symlink.
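
For the lookup in step 1, asking the dynamic linker for its registered libraries is one quick way (a filesystem search such as sudo find /usr -name "libGL.so*" also works):

ldconfig -p | grep libGL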

Reference: http://www.mamicode.com/info-detail-2779089.html

Testing the onboard camera:

There is a ready-made test script on GitHub:

git clone https://github.com/JetsonHacksNano/CSI-Camera.git
cd CSI-Camera
python3 simple_camera.py

Reference: https://www.squirrelzoo.com/archives/1805

Also test the C++ support:

g++ -std=c++11 -Wall -I/usr/lib/opencv simple_camera.cpp -L/usr/lib -lopencv_core -lopencv_highgui -lopencv_videoio -o simple_camera
./simple_camera

The CSI camera opens successfully from the C++ code, so let's look at the simple_camera.cpp source:

// simple_camera.cpp
// MIT License
// Copyright (c) 2019 JetsonHacks
// See LICENSE for OpenCV license and additional information
// Using a CSI camera (such as the Raspberry Pi Version 2) connected to a
// NVIDIA Jetson Nano Developer Kit using OpenCV
// Drivers for the camera and OpenCV are included in the base image

#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>

std::string gstreamer_pipeline (int capture_width, int capture_height, int display_width, int display_height, int framerate, int flip_method) {
    return "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)" + std::to_string(capture_width) +
           ", height=(int)" + std::to_string(capture_height) +
           ", format=(string)NV12, framerate=(fraction)" + std::to_string(framerate) +
           "/1 ! nvvidconv flip-method=" + std::to_string(flip_method) +
           " ! video/x-raw, width=(int)" + std::to_string(display_width) +
           ", height=(int)" + std::to_string(display_height) +
           ", format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink";
}

int main()
{
    int capture_width = 1280;
    int capture_height = 720;
    int display_width = 1280;
    int display_height = 720;
    int framerate = 60;
    int flip_method = 0;

    std::string pipeline = gstreamer_pipeline(capture_width, capture_height, display_width, display_height, framerate, flip_method);
    std::cout << "Using pipeline: \n\t" << pipeline << "\n";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cout << "Failed to open camera." << std::endl;
        return (-1);
    }

    cv::namedWindow("CSI Camera", cv::WINDOW_AUTOSIZE);
    cv::Mat img;

    std::cout << "Hit ESC to exit" << "\n";
    while (true) {
        if (!cap.read(img)) {
            std::cout << "Capture read error" << std::endl;
            break;
        }
        cv::imshow("CSI Camera", img);
        int keycode = cv::waitKey(30) & 0xff;
        if (keycode == 27) break;
    }

    cap.release();
    cv::destroyAllWindows();
    return 0;
}

The key point is that it builds a GStreamer pipeline string and then opens it with cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER).

This gives us the approach: modify the earlier myslam.cc in the same way.
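
Before wiring the pipeline into myslam.cc, the camera path can also be smoke-tested outside OpenCV (a minimal check; nvoverlaysink renders the camera feed directly to the screen):

gst-launch-1.0 nvarguscamerasrc ! nvoverlaysink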

4. Rewrite the Entry Point After Building OpenCV

Rewrite myslam.cc as:

#include <opencv2/opencv.hpp>
#include "System.h"
#include <string>
#include <chrono>
#include <iostream>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>

using namespace std;
using namespace cv;

std::string gstreamer_pipeline (int capture_width, int capture_height, int display_width, int display_height, int framerate, int flip_method) {
    return "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)" + std::to_string(capture_width) +
           ", height=(int)" + std::to_string(capture_height) +
           ", format=(string)NV12, framerate=(fraction)" + std::to_string(framerate) +
           "/1 ! nvvidconv flip-method=" + std::to_string(flip_method) +
           " ! video/x-raw, width=(int)" + std::to_string(display_width) +
           ", height=(int)" + std::to_string(display_height) +
           ", format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink";
}

int main(int argc, char** argv)
{
    // Settings file and vocabulary file passed on the command line
    string parameterFile = argv[1];
    string vocFile = argv[2];

    ORB_SLAM2::System SLAM(vocFile, parameterFile, ORB_SLAM2::System::MONOCULAR, true);

    int capture_width = 640;
    int capture_height = 480;
    int display_width = 640;
    int display_height = 480;
    int framerate = 30;
    int flip_method = 0;

    std::string pipeline = gstreamer_pipeline(capture_width, capture_height, display_width, display_height, framerate, flip_method);
    std::cout << "Using pipeline: \n\t" << pipeline << "\n";

    VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cout << "failed to open cam" << endl;
        return -1;
    }

    auto start = chrono::system_clock::now();
    while (1) {
        Mat frame;
        cap.read(frame);
        if (frame.empty()) {
            cerr << "error, blank frame." << endl;
            break;
        }
        auto now = chrono::system_clock::now();
        auto timestamp = chrono::duration_cast<chrono::milliseconds>(now - start);
        SLAM.TrackMonocular(frame, double(timestamp.count()) / 1000.0);
    }

    cap.release();
    SLAM.Shutdown();
    return 0;
}

Then rebuild the project.
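
A typical rebuild, assuming the project was originally built via build.sh into the standard build/ directory:

cd ORB_SLAM2/build
cmake ..
make -j3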

5. Run the Test

./Examples/Monocular/myslam Examples/Monocular/myslam.yaml Vocabulary/ORBvoc.txt
