Detecting the 3D Coordinates of a Circle Center in an Image with ROS

it · 2025-11-01


Contents: Algorithm idea and overall pipeline · 1. Real-time detection of the 2D circle-center x, y coordinates · 2. Adding depth information · 3. Porting to ROS · Summary


Algorithm idea and overall pipeline

The overall approach is OpenCV-based: first detect the 2D coordinates of the circle center in the camera image, then add depth information. The sensor used throughout is a MYNT EYE depth camera (50 version).

1. Real-time detection of the 2D circle-center x, y coordinates

First, OpenCV is used to discriminate by color. The color image is converted to the LAB color space and the three channels are split; the B channel turns out to be especially sensitive to yellow and blue, so a yellow circle is chosen as the target. The B channel is then thresholded at 160 (binary, max value 250), which filters out essentially all of the background except the yellow region, leaving the yellow circle. Opening and closing operations produce a clean black-and-white image, Canny extracts the edges, findContours traces the contours, and finally a least-squares fit recovers the circle and its x, y center. Sample code:

```cpp
void image_process(cv::Mat img, cv::Mat& depth)
{
    cv::Mat imgLAB;
    std::vector<cv::Mat> channels;
    cv::Mat imageLChannel, imageAChannel, imageBChannel;
    cv::cvtColor(img, imgLAB, cv::COLOR_BGR2Lab);   // BGR -> LAB
    cv::Mat imgThresholded;
    cv::split(imgLAB, channels);
    imageLChannel = channels.at(0);
    imageAChannel = channels.at(1);
    imageBChannel = channels.at(2);
    cv::threshold(imageBChannel, imgThresholded, 160, 250, cv::THRESH_BINARY);
    cv::Mat element = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::morphologyEx(imgThresholded, imgThresholded, cv::MORPH_OPEN, element);   // opening
    cv::morphologyEx(imgThresholded, imgThresholded, cv::MORPH_CLOSE, element);  // closing
    cv::Mat edgeImage;
    cv::Canny(imgThresholded, edgeImage, 10, 20);
    std::vector<std::vector<cv::Point>> contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(edgeImage, contours, hierarchy, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
    std::vector<cv::Point> centers(contours.size());
    std::vector<double> r(contours.size());
    // Re-threshold the B channel (max value 255) for the display image
    cv::threshold(imageBChannel, imgThresholded, 160, 255, cv::THRESH_BINARY);
    cv::morphologyEx(imgThresholded, imgThresholded, cv::MORPH_OPEN, element);   // opening
    cv::morphologyEx(imgThresholded, imgThresholded, cv::MORPH_CLOSE, element);  // closing
    for (size_t i = 0; i < contours.size(); i++)
    {
        cv::Rect rect = cv::boundingRect(contours[i]);
        float aspect = (rect.width * 1.0f) / rect.height;
        if (aspect <= 1.3f && aspect >= 0.8f)   // keep only roughly circular contours
        {
            bool flag = LeastSquaresCircleFitting(contours[i], centers[i], r[i]);
            if (!flag) continue;
            std::cout << "x: " << centers[i].x << " y: " << centers[i].y
                      << " r: " << r[i] << std::endl;
            if (centers[i].x < 0 || centers[i].x >= imgThresholded.cols ||
                centers[i].y < 0 || centers[i].y >= imgThresholded.rows)
                continue;
            cv::circle(imgThresholded, centers[i], r[i], cv::Scalar(0, 0, 255), 2);
            cv::circle(imgThresholded, centers[i], 1, cv::Scalar(0, 0, 255), 2);
        }
    }
    cv::imshow(INPUT, img);
    cv::imshow(OUTPUT, imgThresholded);
    cv::waitKey(100);
}
```

Least-squares circle fitting. Sample code:

```cpp
bool LeastSquaresCircleFitting(std::vector<cv::Point>& m_Points,
                               cv::Point& Centroid, double& dRadius)
{
    if (m_Points.empty()) return false;
    int iNum = (int)m_Points.size();
    if (iNum < 3) return false;
    // Accumulate the moment sums needed by the normal equations
    double X1 = 0.0, Y1 = 0.0, X2 = 0.0, Y2 = 0.0, X3 = 0.0, Y3 = 0.0;
    double X1Y1 = 0.0, X1Y2 = 0.0, X2Y1 = 0.0;
    for (const cv::Point& p : m_Points)
    {
        X1   += p.x;
        Y1   += p.y;
        X2   += p.x * p.x;
        Y2   += p.y * p.y;
        X3   += p.x * p.x * p.x;
        Y3   += p.y * p.y * p.y;
        X1Y1 += p.x * p.y;
        X1Y2 += p.x * p.y * p.y;
        X2Y1 += p.x * p.x * p.y;
    }
    // Normal equations of the algebraic fit x^2 + y^2 + a*x + b*y + c = 0
    double C = iNum * X2 - X1 * X1;
    double D = iNum * X1Y1 - X1 * Y1;
    double E = iNum * X3 + iNum * X1Y2 - (X2 + Y2) * X1;
    double G = iNum * Y2 - Y1 * Y1;
    double H = iNum * X2Y1 + iNum * Y3 - (X2 + Y2) * Y1;
    double a = (H * D - E * G) / (C * G - D * D);
    double b = (H * C - E * D) / (D * D - G * C);
    double c = -(a * X1 + b * Y1 + X2 + Y2) / iNum;
    // Center and radius follow from completing the square
    Centroid.x = a / (-2);
    Centroid.y = b / (-2);
    dRadius = sqrt(a * a + b * b - 4 * c) / 2;
    return true;
}
```
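For reference, the function above implements the standard algebraic (Kåsa) least-squares circle fit. Writing the circle as \(x^2 + y^2 + a x + b y + c = 0\), the code minimizes the sum of squared algebraic residuals:

```latex
E(a,b,c) = \sum_{i=1}^{N}\bigl(x_i^2 + y_i^2 + a x_i + b y_i + c\bigr)^2 .
% Setting \partial E/\partial a = \partial E/\partial b = \partial E/\partial c = 0
% and eliminating c yields the 2x2 linear system solved in the code:
\begin{aligned}
C\,a + D\,b &= -E, \qquad D\,a + G\,b = -H,\\
C &= N\textstyle\sum x^2 - \bigl(\sum x\bigr)^2, \qquad
D = N\textstyle\sum xy - \sum x \sum y, \qquad
G = N\textstyle\sum y^2 - \bigl(\sum y\bigr)^2,\\
E &= N\textstyle\sum x^3 + N\sum xy^2 - \bigl(\sum x^2 + \sum y^2\bigr)\sum x, \qquad
H = N\textstyle\sum x^2 y + N\sum y^3 - \bigl(\sum x^2 + \sum y^2\bigr)\sum y.
\end{aligned}
% Completing the square then gives center and radius:
(x_0, y_0) = \Bigl(-\tfrac{a}{2},\, -\tfrac{b}{2}\Bigr), \qquad
R = \tfrac{1}{2}\sqrt{a^2 + b^2 - 4c}.
```

These are exactly the quantities C, D, E, G, H and the final A, B, R computed in the code.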

2. Adding depth information

Because the camera is the MYNT EYE depth 50 version, it comes already calibrated, so under ROS it is enough to subscribe to its depth-image topic. Then, in the middle of the image-processing function, add:

```cpp
unsigned short d = depth.ptr<unsigned short>(centers[i].y)[centers[i].x];
cv::Point3f p;
p.z = static_cast<float>(d) / 1000.0;  // depth is in millimeters; convert to meters
//p.x = (n - cam_in_.cx) * p.z / cam_in_.fx;
//p.y = (m - cam_in_.cy) * p.z / cam_in_.fy;
std::cout << "d:" << p.z << endl;
```

This snippet goes inside image_process, right after the `if (!flag) continue;` line.

3. Porting to ROS

This step requires some basic knowledge of the ROS system. First, include the headers we need:

```cpp
#include <ros/ros.h>                          // ROS standard header
#include <iostream>                           // C++ standard I/O
#include "stdint.h"
#include <cv_bridge/cv_bridge.h>              // cv_bridge contains the CvBridge library
#include <sensor_msgs/image_encodings.h>      // ROS image encoding constants
#include <image_transport/image_transport.h>  // publish/subscribe image topics in ROS
#include <sensor_msgs/Image.h>
// OpenCV2 standard headers
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>                             // containers
// message_filters: time synchronization
#include <message_filters/sync_policies/approximate_time.h>
#include <message_filters/time_synchronizer.h>
#include <message_filters/subscriber.h>
#include <boost/thread/thread.hpp>            // boost threads
#include <boost/bind.hpp>                     // boost::bind

using namespace cv;
using namespace std;
using namespace message_filters;
```

Next, we create the class we need:

```cpp
class RGB_GRAY
{
public:
    RGB_GRAY();                           // constructor
    ~RGB_GRAY() {}                        // destructor
    ros::NodeHandle nh_;                  // ROS node handle
    image_transport::ImageTransport it_;  // image_transport instance
    // Two message_filters subscribers: img_depth_sub_ and img_color_sub_
    message_filters::Subscriber<sensor_msgs::Image> img_depth_sub_;
    message_filters::Subscriber<sensor_msgs::Image> img_color_sub_;
    typedef message_filters::sync_policies::ApproximateTime<sensor_msgs::Image, sensor_msgs::Image> syncpolicy;
    typedef message_filters::Synchronizer<syncpolicy> Sync;
    boost::shared_ptr<Sync> sync_;
    // Member functions defined below
    void convert_callback(const sensor_msgs::ImageConstPtr& msg,
                          const sensor_msgs::ImageConstPtr& depth_msg);
    void image_process(cv::Mat img, cv::Mat& depth);
};
```

In it, the constructor needs to initialize a few things:

```cpp
RGB_GRAY::RGB_GRAY()
    : it_(nh_)   // constructor
{
    // Subscribe to the MYNT EYE color image and depth image
    img_depth_sub_.subscribe(nh_, "/mynteye/depth/image_raw", 1);
    img_color_sub_.subscribe(nh_, "/mynteye/left/image_color", 1);
    sync_.reset(new Sync(syncpolicy(10), img_color_sub_, img_depth_sub_));
    sync_->registerCallback(boost::bind(&RGB_GRAY::convert_callback, this, _1, _2));
}
```

Next is the callback. Since both the color image and the depth image are subscribed, the callback must receive both messages:

```cpp
void RGB_GRAY::convert_callback(const sensor_msgs::ImageConstPtr& msg,
                                const sensor_msgs::ImageConstPtr& depth_msg)
{
    cv_bridge::CvImagePtr cv_ptr;   // CvImage pointer for the color image
    cv_bridge::CvImagePtr depth;
    try
    {
        // Extract the image data from the ROS messages into new cv-type
        // images, copied into the CvImage pointers
        cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::TYPE_8UC3);
        depth  = cv_bridge::toCvCopy(depth_msg, sensor_msgs::image_encodings::TYPE_16UC1);
    }
    catch (cv_bridge::Exception& e)   // exception handling
    {
        ROS_ERROR("cv_bridge exception: %s", e.what());
        return;
    }
    // The cv::Mat images live in the CvImage pointers' image members;
    // hand them to the processing function
    image_process(cv_ptr->image, depth->image);
    cv::waitKey(5);
}
```

Finally, the main function:

```cpp
int main(int argc, char** argv)
{
    ros::init(argc, argv, "RGB");
    RGB_GRAY obj;
    ros::spin();
    return 0;
}
```

Summary

This article is fairly rough; follow-up posts will go into the theory and explain the role of each module in more detail.
