"悉灵杯" Research Report: A Robotic-Arm Material Sorting System Based on an RGB-D Camera
This project was developed on Windows 11, using shoe soles, colored building blocks, and fruit as the experimental objects. A HIKROBOT RGB-D camera captures point clouds of multiple randomly stacked items in the field of view; the clouds are processed to obtain the grasp position and orientation of each object; a socket connection links the host computer to the robot; and a 6-axis collaborative robot performs the sorting.

1. Background

Robot guidance: 3D-vision-guided robot solutions are being adopted by more and more industries, for example parcel induction in express sorting, loading and unloading of automotive parts, depalletizing and palletizing in warehouses, and order picking in fulfillment. A 3D vision system provides the three-dimensional position and orientation of target objects in real time, so the robot can locate and manipulate them accurately and execute tasks with high precision.

2. Environment

(1) Block sorting

1. PCL 1.11.1

2. OpenCV

3. Visual Studio 2019

4. VTK 8.2

5. Qt 5.13.2

(2) Shoe-sole sorting / fruit sorting

1. Halcon 21.05

2. OpenCvSharp

3. Visual Studio 2019

4. WinForms

3. Hand-Eye Calibration

Hand-eye calibration determines the transformation between the camera coordinate system and the robot coordinate system. Here we use a simple, direct "light-plane" method to obtain that transformation.

First, place a rectangular object aligned with the robot's X and Y travel directions (the long edge of the rectangle along the robot's X axis, the short edge along the Y axis) and capture its point cloud with the RGB-D camera, as shown below.


Filter the point cloud to remove the irrelevant background and extract the rectangular object, then fit the extracted points to a plane in Halcon, as shown below:

The relative position of the original camera coordinate system and the fitted rectangular plane cloud is shown below.

Compute the plane's three principal axes to obtain a pose centered at the mean of the points: the corresponding coordinate system has its x-axis along the first principal axis, its y-axis along the second, and its z-axis along the third.
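Formally, this pose comes from a principal component analysis of the points: with

$$\bar{p}=\frac{1}{N}\sum_{i=1}^{N}p_i,\qquad C=\frac{1}{N}\sum_{i=1}^{N}(p_i-\bar{p})(p_i-\bar{p})^{\mathsf{T}},$$

the eigenvectors of the covariance matrix $C$, ordered by decreasing eigenvalue, give the x-, y-, and z-axes of the pose, and the mean $\bar{p}$ gives its origin.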

Rigidly transform the cloud with this pose (pose0) to obtain ObjectModel3DStandard, compute the smallest enclosing box of ObjectModel3DStandard, invert the box's pose to obtain pose1, and rigidly transform ObjectModel3DStandard with pose1.

The transformed result is shown below.


Because the rectangle's long edge corresponds to the robot's X axis and its short edge to the robot's Y axis, the camera coordinate axes are now parallel to the rectangle's edges; that is, the camera and robot coordinate axes share the same orientation. pose0 and pose1 together encode the camera-to-robot transformation, so we save them as two fixed poses:

create_pose (-43.9651,19.352,1031.71,359.001,359.198,185.332,'Rp+T', 'gba', 'point', PoseA)
create_pose (-70.6744,-70.6666,2061.22,181.31,1.74528,169.493,'Rp+T', 'gba', 'point', PoseB)

Use PoseA and PoseB to rigidly transform the original rectangle cloud into the robot coordinate frame:

rigid_trans_object_model_3d (ObjectModel3D, PoseA, A)
rigid_trans_object_model_3d (A, PoseB,ObjectModel3DResult)
write_object_model_3d (ObjectModel3DResult,'ply', 'D:/项目程序/项目程序/海康3D/jingsai/BasicDemo_Callback/bin/win64/debug/point/unionpoint.ply',[],'false')

The camera and robot coordinate axes now have the same orientation, but a translational offset between the two frames remains in space; this offset must be measured and compensated.

Read a marker point's coordinates (X, Y, Z) from the transformed cloud, then jog the robot to that point with the teach pendant to obtain its true coordinates (jx, jy, jz) in the robot coordinate system.


jx:=744.50
jy:=304.77
jz:=-38.90
cx:=105.976
cy:=6.337556
cz:=4.744299
px:=jx-cx
py:=jy-cy
pz:=jz-cz

(px, py, pz) is that offset; with it, the transformation from the camera coordinate system to the robot coordinate system is complete.
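Putting the pieces together: with $H_A$ and $H_B$ the homogeneous matrices of PoseA and PoseB, a camera-frame point $p_c$ maps to the robot frame as

$$p_r = H_B H_A\, p_c + \begin{pmatrix} p_x \\ p_y \\ p_z \end{pmatrix},\qquad \begin{pmatrix} p_x \\ p_y \\ p_z \end{pmatrix} = \begin{pmatrix} j_x - c_x \\ j_y - c_y \\ j_z - c_z \end{pmatrix},$$

where $(c_x, c_y, c_z)$ is the marker point after the $H_B H_A$ transform and $(j_x, j_y, j_z)$ is its teach-pendant reading.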

4. Shoe-Sole Sorting / Fruit Sorting

(1) Parsing the image and point-cloud data

The C# interface of the SDK cannot display the depth image and the RGB image at the same time, so we parse the RGB and depth data ourselves and show them in WinForms PictureBox controls.

Parsing and displaying the RGB image:

// Copy the RGB frame (planar R, G, B) out of the SDK buffer.
Marshal.Copy(stFrameData.stImageData[1].pData, m_pcDataBuf, 0, (int)stFrameData.stImageData[1].nDataLen);
int m = (int)m_stImageInfo.nHeight;
int n = (int)m_stImageInfo.nWidth;
Mat image = new Mat(new OpenCvSharp.Size(n, m), MatType.CV_8UC3);
// Repack the planar data into BGR pixel order for OpenCV.
for (int y = 0; y < m; y++)
{
    for (int x = 0; x < n; x++)
    {
        image.Set(y, x, new Vec3b(m_pcDataBuf[x + y * n + 2 * m * n], m_pcDataBuf[x + y * n + m * n], m_pcDataBuf[x + y * n]));
    }
}
System.Drawing.Bitmap bitmap = OpenCvSharp.Extensions.BitmapConverter.ToBitmap(image);
pictureBox2.Image = bitmap;

Parsing and displaying the depth image:

Mat depthData = new Mat((int)stFrameData.stImageData[0].nHeight, (int)stFrameData.stImageData[0].nWidth, MatType.CV_16UC1, stFrameData.stImageData[0].pData);
// Map roughly 0-1000 mm of depth onto 0-255 for display.
double scaleFactor = 255.0 / 1000.0;
Mat normalizedDepth = new Mat();
Cv2.ConvertScaleAbs(depthData, normalizedDepth, scaleFactor);
Mat depthImage8Bit = new Mat();
normalizedDepth.ConvertTo(depthImage8Bit, MatType.CV_8U);
// Binarize for a simple foreground/background view.
double thresholdValue = 128;
double maxValue = 255;
Cv2.Threshold(depthImage8Bit, depthImage8Bit, thresholdValue, maxValue, ThresholdTypes.Binary);
Bitmap bmp1 = new Bitmap(depthImage8Bit.Cols, depthImage8Bit.Rows, (int)depthImage8Bit.Step(), System.Drawing.Imaging.PixelFormat.Format8bppIndexed, depthImage8Bit.Data);
pictureBox3.Image = bmp1;
pictureBox3.SizeMode = PictureBoxSizeMode.StretchImage;

Parse the point-cloud data, convert it to Halcon HTuple values, and display the cloud in an hWindowControl:

                    // Convert the depth frame to an XYZ point cloud, then copy the buffer out.
                    Mv3dRgbdSDK.MV3D_RGBD_MapDepthToPointCloud(m_handle, stFrameData.stImageData[0], pointcloud);
                    Marshal.Copy(pointcloud.pData, pointData, 0, (int)pointcloud.nDataLen);
                    float[] floatArray = new float[pointData.Length / sizeof(float)];
                    for (int g = 0; g < floatArray.Length; g++)
                    {
                        floatArray[g] = BitConverter.ToSingle(pointData, g * sizeof(float));
                    }
                    int nPointNum = (int)pointcloud.nDataLen / (sizeof(float) * 3);
                    List<float> vecX = new List<float>(nPointNum);
                    List<float> vecY = new List<float>(nPointNum);
                    List<float> vecZ = new List<float>(nPointNum);
                    for (int nPntIndex = 0; nPntIndex < nPointNum; ++nPntIndex)
                    {
                        vecX.Add(floatArray[nPntIndex * 3 + 0]);
                        vecY.Add(floatArray[nPntIndex * 3 + 1]);
                        vecZ.Add(floatArray[nPntIndex * 3 + 2]);
                    }
                    HTuple a = new HTuple();
                    HTuple b = new HTuple();
                    HTuple c = new HTuple();
                    for (int r = 0; r < nPointNum; r++)
                    {
                        a[r] = vecX[r];
                        b[r] = vecY[r];
                        c[r] = vecZ[r];
                    }
                    HOperatorSet.GenObjectModel3dFromPoints(a, b, c, out hv_selectObjectModel3D);
                    HOperatorSet.WriteObjectModel3d(hv_selectObjectModel3D, "ply", "oud_zzss.ply", new HTuple(), "false");
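Building an HTuple element by element crosses the managed/native boundary once per point and is slow for large clouds. A minimal alternative sketch, assuming HalconDotNet's HTuple(double[]) constructor (which copies the whole array in one call); floatArray and nPointNum are taken from the parsing code above:

    // Bulk-convert the packed XYZXYZ... float array into three HTuples.
    double[] xs = new double[nPointNum];
    double[] ys = new double[nPointNum];
    double[] zs = new double[nPointNum];
    for (int i = 0; i < nPointNum; i++)
    {
        xs[i] = floatArray[i * 3 + 0];
        ys[i] = floatArray[i * 3 + 1];
        zs[i] = floatArray[i * 3 + 2];
    }
    HTuple hx = new HTuple(xs);
    HTuple hy = new HTuple(ys);
    HTuple hz = new HTuple(zs);
    HOperatorSet.GenObjectModel3dFromPoints(hx, hy, hz, out HTuple model3d);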

(2) Point-cloud preprocessing

sample_object_model_3d (cloud_a, 'fast', 1.5, [], [], SampledObjectModel3D)
get_object_model_3d_params (cloud_a, 'num_points', GenParamValuess)
get_object_model_3d_params (SampledObjectModel3D, 'num_points', GenParamValue1aa)
select_points_object_model_3d (SampledObjectModel3D, 'point_coord_z',500, 760, ObjectModel3D)
select_points_object_model_3d (ObjectModel3D, 'point_coord_x', -500, 325, ObjectModel3D1)
select_points_object_model_3d (ObjectModel3D1, 'point_coord_y', -350, 200, ObjectModel3D2)
connection_object_model_3d (ObjectModel3D2, 'distance_3d',2.2, ObjectModel3DConnected)
get_object_model_3d_params (ObjectModel3DConnected, 'num_points', GenParamValue1)
select_object_model_3d (ObjectModel3DConnected, 'num_points', 'and',2000,15000, ObjectModel3DSelected)
write_object_model_3d (ObjectModel3D2, 'ply', 'D:/项目程序/项目程序/海康3D/jingsai/BasicDemo_Callback/bin/win64/debug/point/g3.ply', [], 'false')

(3) Loading the shoe-sole point-cloud templates

A sole can be left or right and face-up or face-down, so four templates are built: left face-up, left face-down, right face-up, and right face-down.

read_object_model_3d ('C:/Users/93733/Desktop/分拣鞋/新建文件夹/hkmodel/zuozheng.ply', 'm', [], [], zuo_zheng, Status)
read_object_model_3d ('C:/Users/93733/Desktop/分拣鞋/新建文件夹/hkmodel/zuo_fan.ply', 'm', [], [], zuo_fan, Status)
read_object_model_3d ('C:/Users/93733/Desktop/分拣鞋/新建文件夹/hkmodel/youzheng.ply', 'm', [], [], you_zheng, Status)
read_object_model_3d ('C:/Users/93733/Desktop/分拣鞋/新建文件夹/hkmodel/you_fan.ply', 'm', [], [], you_fan, Status)
rigid_trans_object_model_3d (zuo_zheng, PoseA, ObjectModel3DStandard2)
rigid_trans_object_model_3d (ObjectModel3DStandard2, PoseB, ObjectModel3D_zuozheng)
rigid_trans_object_model_3d (zuo_fan, PoseA, ObjectModel3DStandard2)
rigid_trans_object_model_3d (ObjectModel3DStandard2, PoseB, ObjectModel3D_zuo_fan)
rigid_trans_object_model_3d (you_zheng, PoseA, ObjectModel3DStandard2)
rigid_trans_object_model_3d (ObjectModel3DStandard2, PoseB, ObjectModel3D_you_zheng)
rigid_trans_object_model_3d (you_fan, PoseA, ObjectModel3DStandard2)
rigid_trans_object_model_3d (ObjectModel3DStandard2, PoseB, ObjectModel3D_you_fan)
create_pose (743, 158.23,-131.77, 179, 0, 0, 'Rp+T', 'gba', 'point', Pose_youzheng)
create_pose (751, 218.35,-131.66, 179, 0, 0, 'Rp+T', 'gba', 'point', Pose_youfan)
create_pose (726.22, 239.44,-131.15, 179, 0, 0, 'Rp+T', 'gba', 'point', Pose_zuozheng)
create_pose (689.30,216.93,-138.92, 179, 0, 0, 'Rp+T', 'gba', 'point', Pose_zuofan)

(4) 3D surface matching

Using the camera-to-robot transformation obtained above, rigidly transform the scene cloud into the robot frame and match each of the four templates against it with 3D surface matching. The match with the highest score is taken as the grasp target; its pose transforms the template onto the target cloud, from which the centroid and grasp pose are computed.

jx:=813.54
jy:=33.9
jz:=-48.53
offset_x:=jx-(109.878)
offset_y:=jy-(5.18)
offset_z:=jz-(1.9068)
grap_pose:=[]
grap_score:=[]
grap_obk:=[]
grap_point:=[]
intput_object:=[ObjectModel3D_zuozheng,ObjectModel3D_zuo_fan,ObjectModel3D_you_zheng,ObjectModel3D_you_fan]
pp_pose:=[Pose_zuozheng,Pose_zuofan,Pose_youzheng,Pose_youfan]
for IndexX := 0 to |intput_object|-1 by 1
    
      union_object_model_3d (ObjectModel3DSelected, 'points_surface', UnionObjectModel3D)
      surface_normals_object_model_3d (intput_object[IndexX], 'mls', [], [], ObjectModel3DNormals)
      create_surface_model (ObjectModel3DNormals, 0.03, [], [], SurfaceModelID)
      * SurfaceModelID is the surface-matching template
      find_surface_model (SurfaceModelID, UnionObjectModel3D,0.005, 0.005, 0.5, 'true', 'num_matches', 4, Pose_find, Score, SurfaceMatchingResultID)
      tuple_length (Score, A)
      if(A==0)
       Score:=0
      endif
      tuple_max (Score, Max)
      tuple_find (Score, Max, Indices)
      selectNUM:=Indices
      if(Max>0.7)
         selectPose:=Pose_find[selectNUM*7:selectNUM*7+6]
         pose_compose (selectPose,pp_pose[IndexX*7:IndexX*7+6],  PoseCompose)
        * pose_compose (selectPose, Pose_zuofan, PoseCompose)
         convert_pose_type (PoseCompose, 'Rp+T', 'abg', 'point', PoseOut1)
         pose_invert (PoseCompose, PoseResult)
         rigid_trans_object_model_3d (intput_object[IndexX], selectPose, ObjectScene3DRigidTrans)
          grap_pose:=[grap_pose,PoseOut1]
          grap_score:=[grap_score,Max]
          grap_obk:=[grap_obk,ObjectScene3DRigidTrans]
          grap_point:=[grap_point,Pose_find]
        *  disp_object_model_3d (WindowHandle,[ObjectScene3DRigidTrans,UnionObjectModel3D], [], [],['color_0','color_1','alpha_1','disp_pose'], ['green','gray',0.5,'true'])
      else
          * zero_pose is assumed to be defined earlier as a 7-element placeholder pose
          grap_pose:=[grap_pose,zero_pose]
          grap_point:=[grap_point,zero_pose]
          grap_score:=[grap_score,0]
          grap_obk:=[grap_obk,0]
      endif
  endfor
      
   tuple_max (grap_score, grap_score_Max)
   tuple_find (grap_score, grap_score_Max, Indices_Max)
   if(grap_score_Max!=0)
   final_Pose:=grap_pose[Indices_Max*7:Indices_Max*7+6]
   get_object_models_center (grap_obk[Indices_Max], CenterS)
   x:=CenterS[0]+offset_x
   y:=CenterS[1]+offset_y
   z:=CenterS[2]+offset_z
   rx:=final_Pose[3]
   ry:=final_Pose[4]
   rz:=final_Pose[5] 
   shose_flag:=Indices_Max
   StartPose:=[x,y,z,0,0,0,0]
   gen_sphere_object_model_3d (StartPose,20, StartPoint)
*  visualize_object_model_3d (WindowHandle,[grap_obk[Indices_Max],UnionObjectModel3D,StartPoint],[],[],['color_0','color_1','color_2','alpha_1','disp_pose'], ['green','gray','red',0.5,'true'], [], [], [], PoseOut)

   disp_object_model_3d (WindowHandle,[grap_obk[Indices_Max],UnionObjectModel3D,StartPoint],[],[],['color_0','color_1','color_2','alpha_1','disp_pose'], ['green','gray','red',0.5,'true'])
   endif
  

(5) Fruit sorting

create_pose (-43.9651,19.352,1031.71,359.001,359.198,185.332,'Rp+T', 'gba', 'point', PoseA)
create_pose (-70.6744,-70.6666,2061.22,181.31,1.74528,169.493,'Rp+T', 'gba', 'point', PoseB)
pose_to_hom_mat3d (PoseA, HomMat3DA)
pose_to_hom_mat3d (PoseB, HomMat3DB)

rigid_trans_object_model_3d (cloud_a, PoseA, A)  
rigid_trans_object_model_3d (A, PoseB, B) 

sample_object_model_3d (B, 'fast', 1.5, [], [], SampledObjectModel3D)
get_object_model_3d_params (cloud_a, 'num_points', GenParamValuess)
get_object_model_3d_params (SampledObjectModel3D, 'num_points', GenParamValue1aa)
*visualize_object_model_3d (WindowHandle,cloud_a, [], [], ['red_channel_attrib','green_channel_attrib','blue_channel_attrib'], ['red','green','blue'], [], [], [], PoseOut)
select_points_object_model_3d (SampledObjectModel3D, 'point_coord_z',0, 100, ObjectModel3D)
select_points_object_model_3d (ObjectModel3D, 'point_coord_x', -130, 240, ObjectModel3D1)
select_points_object_model_3d (ObjectModel3D1, 'point_coord_y', -120, 120, ObjectModel3D2)
connection_object_model_3d (ObjectModel3D2, 'distance_3d',2.3, ObjectModel3DConnected)
get_object_model_3d_params (ObjectModel3DConnected, 'num_points', GenParamValue1)
select_object_model_3d (ObjectModel3DConnected, 'num_points', 'and',1000,5000, ObjectModel3DSelected)
union_object_model_3d (ObjectModel3DSelected, 'points_surface', UnionObjectModel3D1)

*visualize_object_model_3d (WindowHandle, ObjectModel3DSelected, [], [], ['red_channel_attrib','green_channel_attrib','blue_channel_attrib'], ['red','green','blue'], [], [], [], PoseOut)
jx:=744.50
jy:=304.77
jz:=-38.90
cx:=105.976
cy:=6.337556
cz:=4.744299
offset_x:=jx-cx
offset_y:=jy-cy
offset_z:=jz-cz
grap_flag:=[]
x:=[]
y:=[]
z:=[]
pp:=[]
if(|ObjectModel3DSelected|>0)
for Index := 0 to |ObjectModel3DSelected|-1 by 1
    get_object_model_3d_params (ObjectModel3DSelected[Index], 'num_points', va)
    if(va>500)
    smallest_bounding_box_object_model_3d (ObjectModel3DSelected[Index], 'oriented', PoseBox,Length1, Length2, Length3)
    get_object_models_center (ObjectModel3DSelected[Index], Center)
    point:=[Center[0],Center[1],Center[2], 0,0,0,0]
    gen_sphere_object_model_3d (point, 4, pPoint)
    pp:=[pp,pPoint]
    x:=Center[0]+offset_x
    y:=Center[1]+offset_y
    z:=Center[2]+offset_z
    disp_object_model_3d (WindowHandle,[UnionObjectModel3D1,ObjectModel3DSelected[Index]], [], [],['color_0','color_1','alpha_1','disp_pose'], ['green','red',0.5,'true'])
    if(Length1>=70 and Length2>=70)
        grap_flag:=0
       break
    else
       grap_flag:=1
       break
    endif
    endif
endfor
endif


The Halcon code above is wrapped as external procedures and called through the HDevEngine.
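The later C# snippets use an HEngine object whose construction is not shown in the article; a minimal sketch of that setup, assuming the standard HALCON/.NET HDevEngine API (the "halcon" procedure folder is this project's own convention):

    using HalconDotNet;

    // One engine per process; it resolves .hdvp procedure files by name.
    HDevEngine HEngine = new HDevEngine();
    // Point the engine at the folder holding hk_math.hdvp / fruite.hdvp.
    HEngine.SetProcedurePath(System.AppDomain.CurrentDomain.BaseDirectory + @"halcon");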



(6) Host-computer program

The robot acts as the client and the host computer as the server. When the robot leaves the designated region, the client sends a photo-trigger signal. A thread on the host listens for this signal; on receiving it, the camera takes a shot and the point cloud is handed to the HDevEngine, which processes it to obtain the grasp position and orientation. Once the robot has returned to its home position, the next grasp pose is sent.

The teach-pendant program is shown in the figure below.


Host-robot communication program:

void Robot_Init()
        {
            robot_socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            try
            {
                IPAddress ip = IPAddress.Parse("192.168.1.200");
                IPEndPoint point = new IPEndPoint(ip, 6000);
                robot_socket.Bind(point);
                robot_socket.Listen(1);
                listBox1.Items.Add("服务器创建成功");
            }
            catch
            {
                listBox1.Items.Add("请检查robot的IP和端口");
            }
            Thread listen_th = new Thread(listenA);
            listen_th.IsBackground = true;
            listen_th.Start(robot_socket);
        }

        private void listenA(object o)
        {
           // listBox1.Items.Clear();
            Socket socketwatch = o as Socket;
            try
            {
                while (true)
                {
                    mysocket = socketwatch.Accept();
                    //  MessageBox.Show("有客户端接入");
                    listBox1.Items.Add("有客户端接入");

                    Thread receiveer = new Thread(receive);
                    receiveer.IsBackground = true;
                    receiveer.Start(mysocket);
                }
            }
            catch
            {
                listBox1.Items.Add("错误");
            }
        }
        private void receive(object o)
        {
            string str = "";
            byte[] buffer = new byte[1024];
            int len = 0;
            int time_m = 0;
            Socket mysocket = o as Socket;
            bool fg = false;
            while (true)
            {
                len = mysocket.Receive(buffer, 0, buffer.Length, SocketFlags.None);
                if (len == 0)
                {
                    break; // the client closed the connection
                }
                str = Encoding.UTF8.GetString(buffer, 0, len);
                if (str == "6")
                {
                    robot_flag = true; // photo-trigger signal from the robot
                }
                if (str == "success")
                {
                    label3.Text = "success";
                }
            }
        }

Halcon expresses orientation with Euler angles, while our collaborative robot uses a rotation vector (axis-angle), so the Euler angles must be converted to axis-angle. The conversion is implemented as follows:
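The conversion implemented below is the standard axis-angle extraction from a rotation matrix $R$:

$$\theta=\arccos\!\left(\frac{\operatorname{tr}(R)-1}{2}\right),\qquad \begin{pmatrix} x \\ y \\ z \end{pmatrix}=\frac{1}{2\sin\theta}\begin{pmatrix} R_{32}-R_{23} \\ R_{13}-R_{31} \\ R_{21}-R_{12} \end{pmatrix},\qquad \text{rotation vector}=\theta\,(x, y, z)^{\mathsf{T}},$$

with the degenerate cases $\theta \to 0$ and $\theta \to \pi$ handled separately, exactly as in the code below.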

public static double[,] matrix_multiply(double[,] a, double[,] b)
        {
            int n = a.GetLength(0);
            int m = b.GetLength(1);
            int q = a.GetLength(1);
            double[,] result = new double[n, m];
            if (a.GetLength(1) != b.GetLength(0))
            {
                throw new Exception("matrix1 的列数与 matrix2 的行数不相等");
            }
            for (int i = 0; i < n; i++)
            {
                for (int j = 0; j < m; j++)
                {
                    for (int k = 0; k < q; k++)
                    {
                        result[i, j] += a[i, k] * b[k, j];
                    }
                }
            }
            return result;
        }
        public static double[,] Rx(double r)
        {
            double r1 = r / 180 * Math.PI;
            double[] x1 = { 1, 0, 0 };
            double[] x2 = { 0, Math.Cos(r1), -Math.Sin(r1) };
            double[] x3 = { 0, Math.Sin(r1), Math.Cos(r1) };
            double[,] data = new double[3, 3];
            for (int i = 0; i < 3; i++)
            {
                data[0, i] = x1[i];
                data[1, i] = x2[i];
                data[2, i] = x3[i];
            }
            return data;
        }
        public static double[,] Ry(double r)
        {
            double r1 = r / 180 * Math.PI;
            double[] x1 = { Math.Cos(r1), 0, Math.Sin(r1) };
            double[] x2 = { 0, 1, 0 };
            double[] x3 = { -Math.Sin(r1), 0, Math.Cos(r1) };
            double[,] data = new double[3, 3];
            for (int i = 0; i < 3; i++)
            {
                data[0, i] = x1[i];
                data[1, i] = x2[i];
                data[2, i] = x3[i];
            }
            return data;
        }
        public static double[,] Rz(double r)
        {
            double r1 = r / 180 * Math.PI;
            double[] x1 = { Math.Cos(r1), -Math.Sin(r1), 0 };
            double[] x2 = { Math.Sin(r1), Math.Cos(r1), 0 };
            double[] x3 = { 0, 0, 1 };
            double[,] data = new double[3, 3];
            for (int i = 0; i < 3; i++)
            {
                data[0, i] = x1[i];
                data[1, i] = x2[i];
                data[2, i] = x3[i];
            }
            return data;
        }

        public static double[] RotationMatrixTo_Vec(double[,] R)
        {
            Double[] vec = new Double[3];
            double x, y, z, theta;      // (x, y, z): rotation axis; theta: rotation angle
            double epsilon = 1E-12;
            double v = (R[0, 0] + R[1, 1] + R[2, 2] - 1.0f) / 2.0f;  // cos(theta) from the matrix trace
            if (Math.Abs(v) < 1 - epsilon)
            {
                theta = Math.Acos(v);
                x = 1 / (2 * Math.Sin(theta)) * (R[2, 1] - R[1, 2]);
                y = 1 / (2 * Math.Sin(theta)) * (R[0, 2] - R[2, 0]);
                z = 1 / (2 * Math.Sin(theta)) * (R[1, 0] - R[0, 1]);
            }
            else
            {
                if (v > 0.0f)
                {
                    // \theta = 0, diagonal elements approaching 1
                    theta = 0;
                    x = 0;
                    y = 0;
                    z = 1;
                }
                else
                {

                    theta = Math.PI;
                    if (R[0, 0] >= R[1, 1] && R[0, 0] >= R[2, 2])
                    {
                        x = Math.Sqrt((R[0, 0] + 1) / 2);
                        y = R[0, 1] / (2 * x);
                        z = R[0, 2] / (2 * x);
                    }
                    else if (R[1, 1] >= R[0, 0] && R[1, 1] >= R[2, 2])
                    {
                        y = Math.Sqrt((R[1, 1] + 1) / 2);
                        x = R[1, 0] / (2 * y);
                        z = R[1, 2] / (2 * y);
                    }
                    else
                    {
                        z = Math.Sqrt((R[2, 2] + 1) / 2);
                        x = R[2, 0] / (2 * z);
                        y = R[2, 1] / (2 * z);
                    }
                }
            }
            vec[0] = x * theta;
            vec[1] = y * theta;
            vec[2] = z * theta;
            return vec;
        }
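The UI code later calls a helper Rzyx_to_vec that the article does not show. A plausible reconstruction from the pieces above, assuming the robot expects the Z-Y-X composition implied by the name (verify the order against your robot's convention):

    public static double[] Rzyx_to_vec(double rx, double ry, double rz)
    {
        // Compose R = Rz(rz) * Ry(ry) * Rx(rx) from Euler angles in degrees,
        // then extract the equivalent rotation vector (axis * angle).
        double[,] R = matrix_multiply(matrix_multiply(Rz(rz), Ry(ry)), Rx(rx));
        return RotationMatrixTo_Vec(R);
    }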

A radio button chooses between sorting shoe soles and sorting fruit:

  if (radioButton1.Checked)
                    {
                        point_cloud_shose_select(hv_selectObjectModel3D);
                    }

                    else if (radioButton3.Checked)
                    {
                       point_cloud_fruit(hv_selectObjectModel3D);
                    }
 private void button5_Click(object sender, EventArgs e)
        {
            _zidong_img = new Thread(z_d_camera);
            _zidong_img.Start();
        }
  void z_d_camera()
        {

            while (cam_flag == true)
            {
                if ((firstcamera == true))
                {
                    robot_flag = false;
                    firstcamera = false;
                    m_bGrabbing = true;
                    _zidong_img = new Thread(ReceiveThreadProcess);
                    _zidong_img.Start();
                }

                else if ((firstcamera == false && robot_flag == true) && ca_fl == true)
                {
                    ca_fl = false;
                    robot_flag = false;
                    m_bGrabbing = true;
                    _zidong_img = new Thread(ReceiveThreadProcess);
                    _zidong_img.Start();
                }
            }
        }

Calling the external procedures through the HDevEngine:

  public void point_cloud_shose_select(HTuple pCloud)
        {

            HEngine.SetProcedurePath(System.AppDomain.CurrentDomain.BaseDirectory + @"halcon");
            Window1 = hWindowControl2.HalconWindow;
            setwindow(Window1);
            //new_duo_math
            var sm1 = new HDevProcedureCall(new HDevProcedure("hk_math"));
            sm1.SetInputCtrlParamTuple("cloud_a", pCloud);
            sm1.SetInputCtrlParamTuple("WindowHandle", Window1);
            sm1.Execute();
            data_x = sm1.GetOutputCtrlParamTuple("x");
            data_y = sm1.GetOutputCtrlParamTuple("y");
            data_z = sm1.GetOutputCtrlParamTuple("z");
            x = data_x / 1000;
            y = data_y / 1000;
            z = data_z / 1000;
            data_rx = sm1.GetOutputCtrlParamTuple("rx");
            data_ry = sm1.GetOutputCtrlParamTuple("ry");
            data_rz = sm1.GetOutputCtrlParamTuple("rz");
            try
            {
                // Euler angles -> rotation vector (axis-angle)
                double[] angle = Rzyx_to_vec(data_rx, data_ry, data_rz);
                string txt = "(" + x.ToString() + "," + y.ToString() + "," + z.ToString() + "," + angle[0].ToString() + "," + angle[1].ToString() + "," + angle[2].ToString() + ")";
                showTxt.Text = "x: " + x.ToString() + "\r\n" + "y: " + y.ToString() + "\r\n" + "z: " + z.ToString() + "\r\n " + "rx:"+angle[0].ToString() + "\r\n" + "ry:" + angle[1].ToString() + "\r\n"+ "rz:" + angle[2].ToString() + "\r\n";
                hv_selectObjectModel3D.Dispose();
                mysocket.Send(Encoding.UTF8.GetBytes(txt));

            }
            catch
            {
               ca_fl = true;
            }

        }
  public void point_cloud_fruit(HTuple pCloud)
        {
            HEngine.SetProcedurePath(System.AppDomain.CurrentDomain.BaseDirectory + @"halcon");
            Window1 = hWindowControl2.HalconWindow;
            setwindow(Window1);
            //new_duo_math
            var sm1 = new HDevProcedureCall(new HDevProcedure("fruite"));
            sm1.SetInputCtrlParamTuple("cloud_a", pCloud);
            sm1.SetInputCtrlParamTuple("WindowHandle", Window1);
            sm1.Execute();
            data_x = sm1.GetOutputCtrlParamTuple("x");
            data_y = sm1.GetOutputCtrlParamTuple("y");
            data_z = sm1.GetOutputCtrlParamTuple("z");
            x = data_x / 1000;
            y = data_y / 1000;
            z = data_z / 1000;
            data_rx = sm1.GetOutputCtrlParamTuple("rx");
            data_ry = sm1.GetOutputCtrlParamTuple("ry");
            data_rz = sm1.GetOutputCtrlParamTuple("rz");
            area_flag = sm1.GetOutputCtrlParamTuple("grap_flag");
            string fruit_a;
            if (area_flag == 0)
            {
                fruit_a = "苹果";
            }
            else
            {
                fruit_a = "橘子";
            }
            try
            {
                string txt = "(" + area_flag.ToString()+"," +x.ToString() + "," + y.ToString() + "," + z.ToString() + "," +"0" + "," + "3.14" + "," +"0" + ")";
                showTxt.Text = "水果种类:"+ fruit_a + "\r\n"  + "x: " + x.ToString() + "\r\n" + "y: " + y.ToString() + "\r\n" + "z: " + z.ToString() + "\r\n " ;
                hv_selectObjectModel3D.Dispose();
                mysocket.Send(Encoding.UTF8.GetBytes(txt));

            }
            catch
            {

                ca_fl = true;
            }

        }
  
The host-computer UI is shown below; the green region in the fourth view is the sole currently selected for grasping.

5. Block Sorting

HK_Robot_Materials_Sorting.h:
#pragma once
#include <QtWidgets/QWidget>
#include "ui_HK_Robot_Materials_Sorting.h"
#include "common/common.hpp"
#include <vtkFileOutputWindow.h>
#include <vtkOutputWindow.h>
#include <QDebug>
#include <QColorDialog>
#include <QMessageBox>
#include <QFileDialog>
#include <QTime>
#include <QDir>
#include <QFile>
#include <QtMath>
#include <QDirIterator>
#include <pcl/io/io.h>
#include <pcl/io/pcd_io.h>
#include <pcl/io/ply_io.h>
#include <pcl/PolygonMesh.h>
#include <pcl/common/io.h>
#include <pcl/surface/gp3.h>
#include <pcl/io/vtk_lib_io.h>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <vtkRenderWindow.h>
#include <pcl/filters/passthrough.h>
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/io/obj_io.h>
#include <pcl/surface/gp3.h>
#include <pcl/features/normal_3d.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/surface/mls.h>
#include <pcl/common/transforms.h>
#include <pcl/common/common.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/features/moment_of_inertia_estimation.h>
#include <vector>
#include <list>
#include <cstring>
#include "MyRobot.h"
#define	QT_ASSERT_OK(x)  (int(x)!=MV3D_RGBD_OK)
using namespace std;
class HK_Robot_Materials_Sorting : public QWidget
{
    Q_OBJECT
public:
    SOCKET robot_socket; // socket used to talk to the robot
    HK_Robot_Materials_Sorting(QWidget *parent = nullptr);
    ~HK_Robot_Materials_Sorting();
    unsigned int nDevNum = 0;
    Mv3dRgbdDeviceType device_type; // how the device is connected
    std::vector<MV3D_RGBD_DEVICE_INFO> devs; // device list
    boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer;
    boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer1;
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud1;
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud2;
    pcl::PointCloud<pcl::PointXYZRGB> cloud;
    typedef pcl::PointXYZRGB PointT;
    typedef pcl::PointCloud<PointT>PointCloud;
    PointCloud::Ptr cloudRGB;
    typedef pcl::PointXYZ PointTA;
    typedef pcl::PointCloud<PointTA>PointCloudA;
    PointCloudA::Ptr cloudD;
private slots:
    void on_btnFinddevice_clicked();
    void on_btnOpenDevice_clicked();
    void on_btngrap_clicked();
    void on_btnSelectBegin_clicked();
    void on_pushButton_2_clicked();
    void on_btn_stop_clicked();
    void on_btn_auto_clicked();
    void onServerStatusChanged(const QString& statusMessage);
public:
    void grap_show(void * handle );
    void grap_show_auto(void* handle);
    void* handle = NULL;
    unsigned char* g_pRgbData = NULL;
    bool ConvertRGB8Planner2BGR8Packed(const unsigned char* pSrcData, int nWidth,int nHeight,unsigned char* pDstData);
    void VTK_Init();
    MV3D_RGBD_IMAGE_DATA imgCloud;
    void color_selece();
    Eigen::Matrix4f Matrix_A;
    Eigen::Matrix4f Matrix_B;
    bool robotflag;
    MyRobot* robot;

private:
    Ui::HK_Robot_Materials_SortingClass ui;
};
Since the point clouds are displayed with VTK inside Qt, VTK must first be initialized:
       viewer.reset(new pcl::visualization::PCLVisualizer("viewer", false));
	viewer->setBackgroundColor(0, 0.3, 0.4);
	ui.qvtkWidget->SetRenderWindow(viewer->getRenderWindow());
	viewer->setupInteractor(ui.qvtkWidget->GetInteractor(), ui.qvtkWidget->GetRenderWindow());
	ui.qvtkWidget->update();
	viewer1.reset(new pcl::visualization::PCLVisualizer("viewer1", false));
	viewer1->setBackgroundColor(0, 0.3, 0.4);
	ui.qvtkWidget_2->SetRenderWindow(viewer1->getRenderWindow());
	viewer1->setupInteractor(ui.qvtkWidget_2->GetInteractor(), ui.qvtkWidget_2->GetRenderWindow());
	ui.qvtkWidget_2->update();

To sort the colored blocks into different regions by color, the RGB image and the point cloud are both parsed and fused into a cloud carrying RGB information:
       g_pRgbData = (unsigned char*)malloc(stFrameData.stImageData[1].nDataLen);
	 memset(g_pRgbData, 0, stFrameData.stImageData[1].nDataLen);
         ConvertRGB8Planner2BGR8Packed(stFrameData.stImageData[1].pData, stFrameData.stImageData[1].nWidth, stFrameData.stImageData[1].nHeight, g_pRgbData);
	 cv::Mat  mCvmat = cv::Mat(stFrameData.stImageData[1].nHeight, stFrameData.stImageData[1].nWidth, CV_8UC3, g_pRgbData);
	 cv::Mat bgrImage;
	 cvtColor(mCvmat, bgrImage,cv::COLOR_RGB2BGR);
	 QImage Qmtemp = QImage((const unsigned char*)(bgrImage.data), bgrImage.cols, bgrImage.rows, bgrImage.step, QImage::Format_RGB888);
	 Qmtemp = Qmtemp.scaled(ui.label_2->size(), Qt::IgnoreAspectRatio, Qt::SmoothTransformation);
	 ui.label_2->setPixmap(QPixmap::fromImage(Qmtemp));
	 ui.label_2->resize(Qmtemp.size());
	 ui.label_2->show();
        int nPointNum = imgCloud.nDataLen / (sizeof(float) * 3);
	  float* pSrcValue = (float*)imgCloud.pData;
	  uchar* rgbvalue = (uchar*)mCvmat.data;
	  PointCloud::Ptr cloud(new PointCloud);
	  for (int nPntIndex = 0; nPntIndex < nPointNum;++nPntIndex)
	  {
		  PointT p;
		  p.x = pSrcValue[nPntIndex*3+0];
		  p.y = pSrcValue[nPntIndex * 3 + 1];
		  p.z = pSrcValue[nPntIndex * 3 + 2];
		  p.r = rgbvalue[nPntIndex * 3 + 2];
		  p.g = rgbvalue[nPntIndex * 3 + 1];
		  p.b = rgbvalue[nPntIndex * 3 + 0];
		  cloud->points.push_back(p);
	  }
	  cloud->height = 1;
	  cloud->width = cloud->points.size();
	  cloudRGB = cloud;
	  viewer->removeAllPointClouds();
         viewer->removeAllShapes();
	  pcl::visualization::PointCloudColorHandlerRGBField<pcl::PointXYZRGB> rgb(cloudRGB);
	  viewer->addPointCloud<pcl::PointXYZRGB>(cloudRGB, rgb, "point_cloud");
	  viewer->setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 1, "point_cloud");
	  ui.qvtkWidget->update();


The cloud contains a large amount of irrelevant background and many outliers, so it is preprocessed with pass-through filtering and statistical outlier removal:
    ui.textBrowser->clear();
	pcl::PassThrough<pcl::PointXYZRGB> pass;
	pass.setInputCloud(cloudRGB);
	pass.setFilterFieldName("z");
	pass.setFilterLimits(500, 750);
	pass.setFilterLimitsNegative(false);
	pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud_out(new pcl::PointCloud<pcl::PointXYZRGB>());
	pass.filter(*cloud_out);
	pass.setInputCloud(cloud_out);
	pass.setFilterFieldName("x"); 
	pass.setFilterLimits(-330,70);
	pass.filter(*cloud_out); 
	pass.setInputCloud(cloud_out);
	pass.setFilterFieldName("y"); 
	pass.setFilterLimits(-400, 100); 
	pass.filter(*cloud_out); 
	pcl::io::savePCDFileASCII("xx.pcd", *cloud_out);
	 //统计滤波
	pcl::StatisticalOutlierRemoval<pcl::PointXYZRGB> outrem;
	outrem.setInputCloud(cloud_out);
	outrem.setMeanK(800);
	outrem.setStddevMulThresh(1);
	pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud_out1(new pcl::PointCloud<pcl::PointXYZRGB>);
	outrem.filter(*cloud_out1);
Blocks of different colors go to different drop-off regions, so the cloud is segmented into per-color clouds using fixed RGB thresholds:

pcl::PointCloud<pcl::PointXYZRGB>::Ptr redfilteredCloud(new pcl::PointCloud<pcl::PointXYZRGB>);
	pcl::PointCloud<pcl::PointXYZRGB>::Ptr greenfilteredCloud(new pcl::PointCloud<pcl::PointXYZRGB>);
	pcl::PointCloud<pcl::PointXYZRGB>::Ptr bluefilteredCloud(new pcl::PointCloud<pcl::PointXYZRGB>);
	pcl::PointCloud<pcl::PointXYZRGB>::Ptr orginfilteredCloud(new pcl::PointCloud<pcl::PointXYZRGB>);
	pcl::PointCloud<pcl::PointXYZRGB>::Ptr allpointcloud(new pcl::PointCloud<pcl::PointXYZRGB>);
	std::vector<pcl::PointCloud<pcl::PointXYZRGB>::Ptr> cloudList;
	Eigen::Vector4f centroid;					// centroid
	string color_flag;
	pcl::PointXYZ center;
	// red blocks
	for (size_t i = 0; i < cloud_out1->points.size(); ++i) {
		const pcl::PointXYZRGB& point = cloud_out1->points[i];
		if (point.r >= 200 && point.r <= 255 && point.g >= 60 && point.g <= 100 && point.b >= 10 && point.b <= 60)
		{
			redfilteredCloud->points.push_back(point);
			allpointcloud->points.push_back(point);
		}
	}
	// green blocks
	for (size_t i = 0; i < cloud_out1->points.size(); ++i) {
		const pcl::PointXYZRGB& point = cloud_out1->points[i];
		if (point.r >= 140 && point.r <= 190 && point.g >= 210 && point.g <= 255 && point.b >= 80 && point.b <= 130) {
			greenfilteredCloud->points.push_back(point);
			allpointcloud->points.push_back(point);
		}
	}
	// blue blocks
	for (size_t i = 0; i < cloud_out1->points.size(); ++i) {
		const pcl::PointXYZRGB& point = cloud_out1->points[i];
		if (point.r >= 80 && point.r <= 120 && point.g >= 180 && point.g <= 210 && point.b >= 230) {
			bluefilteredCloud->points.push_back(point);
			allpointcloud->points.push_back(point);
		}
	}
	// orange blocks
	for (size_t i = 0; i < cloud_out1->points.size(); ++i) {
		const pcl::PointXYZRGB& point = cloud_out1->points[i];
		if (point.r >= 230 && point.g >= 160 && point.g <= 190 && point.b >= 60 && point.b <= 100) {
			orginfilteredCloud->points.push_back(point);
			allpointcloud->points.push_back(point);
		}
	}
	cloudList.push_back(redfilteredCloud);
	cloudList.push_back(greenfilteredCloud);
	cloudList.push_back(bluefilteredCloud);
	cloudList.push_back(orginfilteredCloud);
	allpointcloud->width = allpointcloud->points.size();
	allpointcloud->height = 1;
	redfilteredCloud->width = redfilteredCloud->points.size();
	redfilteredCloud->height = 1;
	greenfilteredCloud->width = greenfilteredCloud->points.size();
	greenfilteredCloud->height = 1;
	bluefilteredCloud->width = bluefilteredCloud->points.size();
	bluefilteredCloud->height = 1;
	orginfilteredCloud->width = orginfilteredCloud->points.size();
	orginfilteredCloud->height = 1;
Each block's centroid is needed for grasping, but a per-color cloud may contain several blobs of the same color. The point count is used to decide whether a color cloud holds more than one block; if it does, Euclidean cluster extraction splits it into individual block clouds, whose centroids are then computed. vecflag records the color class of each centroid:
for (size_t k = 0; k < cloudList.size(); ++k) {
	// Color tag for this entry of cloudList: R, G, B, O.
	char flag = (k == 0) ? 'R' : (k == 1) ? 'G' : (k == 2) ? 'B' : 'O';
	size_t numPts = cloudList[k]->points.size();
	// A single block is roughly 900-1400 points.
	if (numPts > 900 && numPts <= 1400)
	{
		pcl::compute3DCentroid(*cloudList[k], centroid);
		vecX.push_back(centroid(0));
		vecY.push_back(centroid(1));
		vecZ.push_back(centroid(2));
		vecflag.push_back(flag);
	}
	else if (numPts > 1400)
	{
		// More than one block of this color: split the per-color cloud
		// with Euclidean cluster extraction.
		pcl::search::KdTree<pcl::PointXYZRGB>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZRGB>);
		tree->setInputCloud(cloudList[k]);
		std::vector<pcl::PointIndices> cluster_indices;
		pcl::EuclideanClusterExtraction<pcl::PointXYZRGB> ec;
		ec.setClusterTolerance(1.4);
		ec.setMinClusterSize(650);
		ec.setMaxClusterSize(2000);
		ec.setSearchMethod(tree);       // search structure for the clustering
		ec.setInputCloud(cloudList[k]); // cluster the per-color cloud
		ec.extract(cluster_indices);    // extract the clusters
		std::vector<pcl::PointCloud<pcl::PointXYZRGB>::Ptr> Eucluextra; // extracted clusters

		for (std::vector<pcl::PointIndices>::const_iterator it = cluster_indices.begin(); it != cluster_indices.end(); ++it)
		{
			pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud_cluster1(new pcl::PointCloud<pcl::PointXYZRGB>);
			for (std::vector<int>::const_iterator pit = it->indices.begin(); pit != it->indices.end(); pit++)
				cloud_cluster1->points.push_back(cloudList[k]->points[*pit]);
			cloud_cluster1->width = cloud_cluster1->points.size();
			cloud_cluster1->height = 1;
			cloud_cluster1->is_dense = true;
			Eucluextra.push_back(cloud_cluster1);
		}
		// One centroid and one color tag per extracted block.
		for (size_t t = 0; t < Eucluextra.size(); t++)
		{
			pcl::compute3DCentroid(*Eucluextra[t], centroid);
			vecX.push_back(centroid(0));
			vecY.push_back(centroid(1));
			vecZ.push_back(centroid(2));
			vecflag.push_back(flag);
		}
	}
}

The robot is the client and the host computer the server. When the user clicks the auto-run button, the camera takes a shot, PCL processes the cloud, and the resulting grasp coordinates are sent to the robot. All captures after the first are triggered by the client: once every coordinate from the previous batch has been grasped, the robot, while moving to the drop-off region of the last item, sends the digit 6 to the host. Receiving it emits the serverStatusChanged signal, whose slot captures and processes a new cloud and sends the new grasp coordinates to the client, which parses the data and guides the robot to grasp.

MyRobot.cpp
#include "MyRobot.h"
#include <qtextbrowser.h>

MyRobot::MyRobot(QObject* parent) : QObject(parent),
m_server(new QTcpServer(this))
{
    connect(m_server, &QTcpServer::newConnection, this, &MyRobot::onNewConnection);
}

void MyRobot::startServer()
{
    if (m_server->listen(QHostAddress("192.168.1.200"), 6000))
    {
        emit serverStatusChanged(u8"服务器创建成功\n");
        qDebug() << u8"服务器创建成功";
    }
    else
    {
        emit serverStatusChanged(u8"请检查robot的IP和端口\n");
        qDebug() << u8"请检查robot的IP和端口";
  
    }
}

void MyRobot::onNewConnection()
{
    QTcpSocket* client = m_server->nextPendingConnection();
    m_clients.append(client);

    connect(client, &QTcpSocket::readyRead, this, &MyRobot::onSocketReadyRead);

    qDebug() << u8"有客户端接入";
    emit serverStatusChanged(u8"有客户端接入\n");
}

void MyRobot::onSocketReadyRead()
{
    QTcpSocket* client = qobject_cast<QTcpSocket*>(sender());
    if (client)
    {
        QByteArray data = client->readAll();
        QString str = QString::fromUtf8(data);
        if (str == "6")
        {
            emit serverStatusChanged(u8"6");
     
        }
    }
}
void MyRobot::Send(QTcpSocket* socket, const QString& message)
{
    socket->write(message.toUtf8());
    socket->waitForBytesWritten();
}
void MyRobot::sendDataToClient(int clientIndex, const QString& message)
{
    if (clientIndex >= 0 && clientIndex < m_clients.size())
    {
        Send(m_clients.at(clientIndex), message);
    }
}
void HK_Robot_Materials_Sorting::onServerStatusChanged(const QString& statusMessage)
{
	if (statusMessage == "6" && firstcamera == false && btn_flag == true && ca_fl == true)
	{
		grap_show_auto(handle);
	}
	ui.textBrowser_3->insertPlainText(statusMessage);
}

void HK_Robot_Materials_Sorting::on_btn_stop_clicked()
{
	ca_fl =false;

}
void  HK_Robot_Materials_Sorting::on_btn_auto_clicked()
{
	ca_fl = true;
	btn_flag = true;
	if ((firstcamera == true && btn_flag == true))
	{
		firstcamera = false;
		grap_show_auto(handle);
	}

}

Convert the extracted centroid coordinates into robot coordinates for grasping:

      Matrix_A = Eigen::Matrix4f::Identity();
	Matrix_A(0, 0) = -0.995575;
	Matrix_A(0, 1) = 0.0929176;
	Matrix_A(0, 2) = -0.139971;
	Matrix_A(0, 3) = -43.9651;
	Matrix_A(1, 0) = -0.0931555;
	Matrix_A(1, 1) = -0.995499;
	Matrix_A(1, 2) = 0.0174332;
	Matrix_A(1, 3) = 19.352;
	Matrix_A(2, 0) = -0.0123142;
	Matrix_A(2, 1) = 0.01866;
	Matrix_A(2, 2) = 0.99975;
	Matrix_A(2, 3) = 1031.71;
	Matrix_A(3, 0) = 0;
	Matrix_A(3, 1) = 0;
	Matrix_A(3, 2) = 0;
	Matrix_A(3, 3) = 1;
       Matrix_B = Eigen::Matrix4f::Identity();
	Matrix_B(0, 0) = -0.982777;
	Matrix_B(0, 1) = -0.182271;
	Matrix_B(0, 2) = -0.0304562;
	Matrix_B(0, 3) = -70.6744;
	Matrix_B(1, 0) = -0.181623;
	Matrix_B(1, 1) = -0.983103;
	Matrix_B(1, 2) = 0.0228512;
	Matrix_B(1, 3) = -70.6666;
	Matrix_B(2, 0) = -0.0341067;
	Matrix_B(2, 1) = 0.0169261;
	Matrix_B(2, 2) = -0.999275;
	Matrix_B(2, 3) = 2061.22;
	Matrix_B(3, 0) = 0;
	Matrix_B(3, 1) = 0;
	Matrix_B(3, 2) = 0;
	Matrix_B(3, 3) = 1;
	for (size_t i = 0; i < vecX.size(); ++i) {
		char color =vecflag[i];
		QString colorchinese;
		if (color == 'R')
		{
			colorchinese = u8"红色";
		}
		else if (color == 'G')
		{
			colorchinese = u8"绿色";
		}
		else if (color == 'B')
		{
			colorchinese = u8"蓝色";
		}
		else if (color == 'O')
		{
			colorchinese = u8"橙色";
		}
		pcl::PointXYZ o;
		 o.x = vecX[i];
		 o.y = vecY[i];
		 o.z = vecZ[i];
		viewer1->addSphere(o, 3, 255, 255, 101, std::to_string(i*3), 0);
		Eigen::Vector4f eigen_vector(vecX[i], vecY[i], vecZ[i], 0);
		Eigen::Vector4f resulat = Matrix_A* eigen_vector;
		Eigen::Vector4f resultS = Matrix_B * resulat;
		float jx = 744.50;
		float jy = 304.77;
		float jz = -38.90;
		float cx = 105.976;
		float cy = 6.337556;
		float cz = 4.744299;
		float px = jx - cx;
		float py = jy - cy;
		float pz = jz - cz;
		float x = resultS[0] + px;
	       float y= resultS[1] + py;
		float z = resultS[2] + pz;
	       QString canshu = QString(u8"颜色:""%1""   X:""%2""   Y:""%3""   Z:""%4\n").arg(colorchinese).arg(x).arg(y).arg(z);
		ui.textBrowser->insertPlainText(canshu);
		QString txt = "(" + QString::number(color) + "," + QString::number(x/1000) + "," + QString::number(y/1000) + "," + QString::number(z/1000) + "," + QString::number(0) + "," + QString::number(3.14) + "," + QString::number(0) + ")";
		robot->sendDataToClient(0,txt);
}
	viewer1->addPointCloud(allpointcloud, std::to_string(8));
	viewer1->setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 1, std::to_string(8));
	ui.qvtkWidget_2->update();

Research progress posts

"悉灵杯" progress report (1): Acquiring point clouds with RGB information from the HIKROBOT RGB-D camera    https://www.v-club.com/home/article/2108
"悉灵杯" progress report (2): RGB-D camera guides a robot arm to sort materials    https://www.v-club.com/home/article/2126
"悉灵杯" progress report (3): RGB-D camera guides a robot arm to sort materials    https://www.v-club.com/home/article/2132

Code

Shoe-sole / fruit sorting: https://gitee.com/zyw888/hk_shose_fruite

Block sorting: https://gitee.com/zyw888/hk_jm

Team

Team name: 工大智能团

Members: 张跃威, 黄萌

The demo video is available in the attachment.