"Xiling Cup" Research Project: OpenNI Integration and Sample Code Development
This article briefly introduces the environment configuration and usage of OpenNI2, presents an OpenNI2 sample demo, and re-implements the OpenNI2 driver framework on top of the Mv3d SDK.

I. Environment Setup

1. Search GitHub for RealSense and download the repository as a ZIP. Under "librealsense/wrappers/openni2/", the README links to the OpenNI2 SDK and the RealSense2 SDK; the OpenNI2 SDK can also be downloaded from the official site, which offers every version of the SDK.

2. Install both SDK packages without changing their default directories, then set up the SDK environment variables as described in the README.

3. Download the CMake desktop application, extract the repository ZIP, locate the CMakeLists.txt under the openni2 wrapper directory, and use CMake to generate a Visual Studio solution.

4. Open the generated solution (Rs2driver.sln) and build the required driver DLL (Rs2driver.DLL).

5. The installed OpenNI2 includes a samples folder, "OpenNI2\Samples\"; place the DLL where the README from step 1 requires.


II. How OpenNI Calls Into the Driver Layer

    OpenNI2 performs its calls through the rs2Driver interface mentioned above, so developers work only against the OpenNI2 API.

    rs2Driver implements three classes, all placed in the namespace oni::driver: DriverBase, DeviceBase, and StreamBase. Every device driver must implement these classes and export its driver class via the ONI_EXPORT_DRIVER macro. All three inherit from the common parent class Base; for details see items 3 and 4 in Section VI (Additional Notes).

    When Intel implemented the three rs2Driver classes, it wrapped the IntelRealSense SDK, producing a simple, easy-to-follow OpenNI framework. Our development goal is therefore to keep the existing OpenNI side, swap the IntelRealSense SDK for the Mv3d SDK, and re-implement the relevant functions according to our feature requirements.


III. Re-implementing OpenNI: from the RS Version to the MV3D Version

1. Rs2Driver class: inherits from DriverBase

    The RS version first creates a context from the API version number, then uses that context both as the callback parameter for watching device changes and to enumerate the devices currently online.

OniStatus Rs2Driver::initialize(
	DeviceConnectedCallback connectedCallback, 
	DeviceDisconnectedCallback disconnectedCallback, 
	DeviceStateChangedCallback deviceStateChangedCallback, 
	void* cookie)
{
	rsTraceFunc("");

	for (;;)
	{
		Rs2ScopedMutex lock(m_stateMx);

		if (m_context)
		{
			rsTraceError("Already initialized");
			break;
		}

		if (DriverBase::initialize(connectedCallback, disconnectedCallback, deviceStateChangedCallback, cookie) != ONI_STATUS_OK)
		{
			rsTraceError("DriverBase::initialize failed");
			break;
		}

		Rs2Error e;        // create the context from the camera API version number
		m_context = rs2_create_context(RS2_API_VERSION, &e);
		if (!e.success())
		{
			rsTraceError("rs2_create_context failed: %s", e.get_message());
			break;
		}
        // enumerate online devices through the context
		enumerateDevices();
        // register a devices-changed callback on the context to watch for devices
		rs2_set_devices_changed_callback(m_context, devicesChangedCallback, this, &e);
		if (!e.success())
		{
			rsTraceError("rs2_set_devices_changed_callback failed: %s", e.get_message());
			break;
		}

		rsLogDebug("Rs2Driver INITIALIZED");
		return ONI_STATUS_OK;
	}

	shutdown();
	return ONI_STATUS_ERROR;
}

void Rs2Driver::enumerateDevices()
{
	rsTraceFunc("");

	Rs2Error e;
	rs2_device_list* deviceList = rs2_query_devices(m_context, &e);
	if (!e.success())
	{
		rsTraceError("rs2_query_devices failed: %s", e.get_message());
	}
	else
	{
		devicesChanged(nullptr, deviceList);
		rs2_delete_device_list(deviceList);
	}
}

    The MV3D version uses the function at 1-1 to obtain the SDK version, so that the camera version number is available when the OpenNI interface is invoked; 1-2 initializes the MV3D API; 1-3 retrieves the list of all online devices through the MV3D enumeration functions; and 1-4 passes the retrieved device list to the callback in order to connect to the target device.

OniStatus Rs2Driver::initialize(
	DeviceConnectedCallback connectedCallback, 
	DeviceDisconnectedCallback disconnectedCallback, 
	DeviceStateChangedCallback deviceStateChangedCallback, 
	void* cookie)
{
	rsTraceFunc("");
	
	for (;;)
	{

		Rs2ScopedMutex lock(m_stateMx);

		if (DriverBase::initialize(connectedCallback, disconnectedCallback, deviceStateChangedCallback, cookie) != ONI_STATUS_OK)
		{
			rsTraceError("DriverBase::initialize failed");
			break;
		}
		// 1-1 get the MV3D SDK version
		MV3D_RGBD_VERSION_INFO stVersion;
		MV3D_RGBD_GetSDKVersion(&stVersion);
		// 1-2 initialize the MV3D SDK
		MV3D_RGBD_Initialize();
		// 1-3 enumerateDevices() uses the MV3D online-device-list functions
		enumerateDevices();

		if (devs.size() != nDevNum)
		{
			// 1-4 pass the MV3D device list to the callback to connect the target device
			devicesChangedCallback(nullptr, &devs, this);
		}
		rsLogDebug("Rs2Driver INITIALIZED");
		return ONI_STATUS_OK;
	}

	shutdown();
	return ONI_STATUS_ERROR;
}

void Rs2Driver::enumerateDevices()
{
	rsTraceFunc("");
	//2-1
	nDevNum = 0;
	// 2-2 MV3D function: get the number of online devices
	MV3D_RGBD_GetDeviceNumber(DeviceType_Ethernet | DeviceType_USB, &nDevNum);
	//2-3
	devs.resize(nDevNum);
	// 2-4 MV3D function: get the online device list
	MV3D_RGBD_GetDeviceList(DeviceType_Ethernet | DeviceType_USB, &devs[0], nDevNum, &nDevNum);
	//2-5
	devicesChanged(nullptr, &devs);
	
	devs.clear();
	
}


2. Rs2Device class: inherits from DeviceBase

    initialize() is implemented identically in the RS and MV3D versions; the difference lies in the implementations of the functions it calls. The two versions of initialize() are compared below.

RS version:

OniStatus Rs2Device::initialize()
{
	rsTraceFunc("");

	for (;;)
	{
		Rs2ScopedMutex lock(m_stateMx);

		if (m_thread.get())
		{
			rsTraceError("Already initialized");
			break;
		}
        // check that the target device exists
		if (queryDeviceInfo(m_device, &m_info) != ONI_STATUS_OK)
		{
			rsTraceError("queryDeviceInfo failed");
			break;
		}

		{
			Rs2ScopedMutex streamLock(m_streamsMx);
            // initialize the info for all of the camera's streams
			if (initializeStreams() != ONI_STATUS_OK)
			{
				rsTraceError("initializeStreams failed");
				break;
			}
		}

		m_configId = 0;
		m_runFlag = true;

		try {            // enter the loop that pulls stream data from the camera
			m_thread.reset(new std::thread(&Rs2Device::mainLoop, this));
		}
		catch (std::exception& ex) {
			rsTraceError("std::thread failed: %s", ex.what());
			break;
		}

		return ONI_STATUS_OK;
	}

	shutdown();
	return ONI_STATUS_ERROR;
}

MV3D version:

OniStatus Rs2Device::initialize()
{
	rsTraceFunc("");

	for (;;)
	{
		Rs2ScopedMutex lock(m_stateMx);

		if (m_thread.get())
		{
			rsTraceError("Already initialized");
			break;
		}
		// 1-1 check that the target device exists
		if (queryDeviceInfo(m_device, &m_info) != ONI_STATUS_OK)
		{
			rsTraceError("queryDeviceInfo failed");
			break;
		}
		
		{
			Rs2ScopedMutex streamLock(m_streamsMx);
			// 1-2 initialize the info for all of the camera's streams
			if (initializeStreams() != ONI_STATUS_OK)
			{
				rsTraceError("initializeStreams failed");
				break;
			}
		}
		
		m_configId = 0;
		m_runFlag = true;

		try {
			// 1-3 enter the loop that pulls stream data from the camera
			m_thread.reset(new std::thread(&Rs2Device::mainLoop, this));
		}
		catch (std::exception& ex) {
			rsTraceError("std::thread failed: %s", ex.what());
			break;
		}

		return ONI_STATUS_OK;
	}
	//1-4
	shutdown();
	return ONI_STATUS_ERROR;
}

The three functions called from initialize() differ between the versions, as described next:

The implementation of queryDeviceInfo(m_device, &m_info) is much the same in both versions; for the full MV3D implementation see Section VII (OpenNI2 Sample Development).

    1. In the RS version, initializeStreams() first queries the sensor list and counts the sensors, then tallies the stream profiles per sensor and stores the different streams in the profile list. Block 1-1 loops over the stored stream objects and fills in the sensor information for each kind of stream.

OniStatus Rs2Device::initializeStreams()
{
	rsTraceFunc("");

	std::map<int, rs2_stream> sensorStreams;
	
	Rs2Error e;    // query the sensors
	rs2_sensor_list* sensorList = rs2_query_sensors(m_device, &e);
	if (sensorList)
	{
		const int nsensors = rs2_get_sensors_count(sensorList, &e);
		// number of sensors
		for (int sensorId = 0; sensorId < nsensors; sensorId++)
		{
			rsLogDebug("SENSOR %d", sensorId);

			rs2_sensor* sensor = rs2_create_sensor(sensorList, sensorId, &e);
			if (sensor)
			{
				sensorStreams.clear();

				rs2_stream_profile_list* profileList = rs2_get_stream_profiles(sensor, &e);
				if (profileList)
				{
					const int nprofiles = rs2_get_stream_profiles_count(profileList, &e);
					// count the stream profiles
					for (int profileId = 0; profileId < nprofiles; profileId++)
					{
						const rs2_stream_profile* profile = rs2_get_stream_profile(profileList, profileId, &e);
						if (profile)
						{
							Rs2StreamProfileInfo spi;
							spi.profile = profile;
							spi.sensorId = sensorId;                            // read the profile data
							rs2_get_stream_profile_data(profile, &spi.streamType, &spi.format, &spi.streamId, &spi.profileId, &spi.framerate, &e);

							if (e.success() && isSupportedStreamType(spi.streamType) && isSupportedPixelFormat(spi.format))
							{
								rs2_get_video_stream_resolution(profile, &spi.width, &spi.height, &e);
								if (e.success())
								{
									#if 1
									rsLogDebug("\ttype=%d sensorId=%d streamId=%d profileId=%d format=%d width=%d height=%d framerate=%d", 
										(int)spi.streamType, (int)spi.sensorId, (int)spi.streamId, (int)spi.profileId, (int)spi.format, (int)spi.width, (int)spi.height, (int)spi.framerate);
									#endif

									m_profiles.push_back(spi);                                    // store the profile object
									sensorStreams[spi.streamId] = spi.streamType;
								}
							}
						}
					}
					rs2_delete_stream_profiles_list(profileList);
				}

				for (auto iter = sensorStreams.begin(); iter != sensorStreams.end(); ++iter)
				{
					rsLogDebug("UNIQ streamId (%d) -> type (%d)", iter->first, (int)iter->second);
				}

				for (auto iter = sensorStreams.begin(); iter != sensorStreams.end(); ++iter)
				{
					const OniSensorType oniType = convertStreamType(iter->second);

					std::vector<Rs2StreamProfileInfo> profiles;
					findStreamProfiles(&profiles, oniType, iter->first);

					if (addStream(sensor, oniType, sensorId, iter->first, &profiles) == ONI_STATUS_OK)
					{
						sensor = nullptr;
					}
				}

				if (sensor) { rs2_delete_sensor(sensor); }
			}
		}
		rs2_delete_sensor_list(sensorList);
	}

	rsLogDebug("FILL OniSensorInfo");    //1-1
	for (auto iter = m_streams.begin(); iter != m_streams.end(); ++iter)
	{
		Rs2Stream* stream = *iter;
		#if 1
		rsLogDebug("STREAM type=%d sensorId=%d streamId=%d", (int)stream->getRsType(), stream->getSensorId(), stream->getStreamId());
		#endif

		std::vector<Rs2StreamProfileInfo> profiles;
		findStreamProfiles(&profiles, stream->getOniType(), stream->getStreamId());

		OniSensorInfo info;
		info.sensorType = stream->getOniType();
		info.numSupportedVideoModes = (int)profiles.size();
		info.pSupportedVideoModes = nullptr;

		if (info.numSupportedVideoModes > 0)
		{
			info.pSupportedVideoModes = new OniVideoMode[info.numSupportedVideoModes];
			int modeId = 0;

			for (auto p = profiles.begin(); p != profiles.end(); ++p)
			{
				OniVideoMode& mode = info.pSupportedVideoModes[modeId];
				mode.pixelFormat = convertPixelFormat(p->format);
				mode.resolutionX = p->width;
				mode.resolutionY = p->height;
				mode.fps = p->framerate;
				modeId++;

				#if 1
				rsLogDebug("\ttype=%d sensorId=%d streamId=%d profileId=%d format=%d width=%d height=%d framerate=%d",
					(int)p->streamType, (int)p->sensorId, (int)p->streamId, (int)p->profileId, (int)p->format, (int)p->width, (int)p->height, (int)p->framerate);
				#endif
			}

			m_sensorInfo.push_back(info);
		}
	}

	return ONI_STATUS_OK;
}

    2. In the MV3D version, 2-1 uses the API functions to read the camera's various parameters, and 2-2 builds the stream parameters (this camera has no stream information of its own, so the three kinds of image within one frame are treated directly as three streams). 2-3 checks whether the target stream type is among the supported stream types; if a stream type is supported, its profile object is stored so it can be looked up later. 2-4 iterates over the discovered stream types, 2-5 looks up the profile information for each stream, and 2-6 registers the stream object. 2-7 loops over the stream-object list to fill in the sensor information for each kind of stream (on Intel cameras different streams may be produced by different sensors; the Xiling camera has no concept of streams, only frames, but the code still holds because the earlier code treated each image type within a frame as a stream). 2-8 again looks up the profiles for the stream, 2-9 checks the stream's available video modes, and 2-10 converts the pixel information obtained from the stream and copies the values into the stream's video-mode structure. For a more detailed implementation see Section VII (OpenNI2 Sample Development).

OniStatus Rs2Device::initializeStreams()
{
	rsTraceFunc("");
	std::map<int, Mv3dRgbdImageType> Mv3d_sensorStreams;
	
			RIFrameInfo MV3D_spi;
			
			//2-1
			MV3D_RGBD_PARAM pstValue;
			int nRet = MV3D_RGBD_OK;
			char * pParamName = MV3D_RGBD_INT_WIDTH;

			// get the width
			memset(&pstValue, 0, sizeof(MV3D_RGBD_PARAM));
			nRet = MV3D_RGBD_GetParam(handle, MV3D_RGBD_INT_WIDTH, &pstValue);
			printf("width = %d\n", pstValue.ParamInfo.stIntParam.nCurValue);
			if (MV3D_RGBD_OK != nRet)
			{
				return ONI_STATUS_ERROR;
			}
			MV3D_spi.nWidth = pstValue.ParamInfo.stIntParam.nCurValue;
			
			// get the height
			nRet = MV3D_RGBD_GetParam(handle, MV3D_RGBD_INT_HEIGHT, &pstValue);
			printf("height = %d\n", pstValue.ParamInfo.stIntParam.nCurValue);
			if (MV3D_RGBD_OK != nRet)
			{
				return ONI_STATUS_ERROR;
			}
			MV3D_spi.nHeight = pstValue.ParamInfo.stIntParam.nCurValue;
			
			// get the pixel format
			nRet = MV3D_RGBD_GetParam(handle, MV3D_RGBD_ENUM_PIXELFORMAT, &pstValue);
			printf("pixel format = %d\n", (Mv3dRgbdImageType)pstValue.ParamInfo.stIntParam.nCurValue);
			if (MV3D_RGBD_OK != nRet)
			{
				return ONI_STATUS_ERROR;
			}
			
			// get the frame rate
			nRet = MV3D_RGBD_GetParam(handle, MV3D_RGBD_FLOAT_FRAMERATE, &pstValue);
			printf("frame rate = %f\n", pstValue.ParamInfo.stFloatParam.fCurValue);
			if (MV3D_RGBD_OK != nRet)
			{
				return ONI_STATUS_ERROR;
			}
			MV3D_spi.framerate = pstValue.ParamInfo.stFloatParam.fCurValue;
			printf("frame parameters read\n");

			//2-2
			// initialize each of the three streams in turn
			for (int streamNumber = 0; streamNumber < 3; streamNumber++)
			{
				
				switch (streamNumber)
				{
					case 0: {
						// stream type
						MV3D_spi.streamType = ImageType_Depth;
						// stream id
						MV3D_spi.streamId = 0;
						// sensor id
						MV3D_spi.sensorId = ImageType_Depth;
						break;
					}
					case 1: {
						MV3D_spi.streamType = ImageType_YUV422;
						MV3D_spi.streamId = 1;
						// sensor id
						MV3D_spi.sensorId = ImageType_YUV422;
						break;
					}
					case 2: {
						MV3D_spi.streamType = ImageType_Mono8;
						MV3D_spi.streamId = 2;
						// sensor id
						MV3D_spi.sensorId = ImageType_Mono8;
						break;
					}
				}
				
			//2-3
				if (isSupportedStreamType(MV3D_spi.streamType))
				{
					
						#if 1
					rsLogDebug("\ttype=%d sensorId=%d streamId=%d width=%d height=%d framerate=%d 2\n", 
						(int)MV3D_spi.streamType, (int)MV3D_spi.sensorId, (int)MV3D_spi.streamId, (int)MV3D_spi.nWidth, (int)MV3D_spi.nHeight,(int)MV3D_spi.framerate);
					#endif

					m_profiles.push_back(MV3D_spi);
					Mv3d_sensorStreams[MV3D_spi.streamId] = MV3D_spi.streamType;
					
				}
				
			}
			//2-4
			for (auto iter = Mv3d_sensorStreams.begin(); iter != Mv3d_sensorStreams.end(); ++iter)
			{
				const OniSensorType oniType = convertStreamType(iter->second);
				printf("\ttype=%d Mv3dRgbdImageType=%d 2\n", (int)oniType, iter->second);

				std::vector<RIFrameInfo> profiles;
				//2-5
				findStreamProfiles(&profiles, oniType, iter->first);
				//2-6
				if (addStream(oniType, iter->second, iter->first, &profiles) == ONI_STATUS_OK)
				{
					//printf("addStream success\n");
				}
			}

	rsLogDebug("FILL OniSensorInfo");
	//2-7
	for (auto iter = m_streams.begin(); iter != m_streams.end(); ++iter)
	{
		Rs2Stream* stream = *iter;
		#if 1
		rsLogDebug("STREAM type=%d sensorId=%d streamId=%d", (int)stream->getRsType(), stream->getSensorId(), stream->getStreamId());
		#endif

		std::vector<RIFrameInfo> profiles;
		//2-8
		findStreamProfiles(&profiles, stream->getOniType(), stream->getStreamId());

		OniSensorInfo info;
		info.sensorType = stream->getOniType();
		info.numSupportedVideoModes = (int)profiles.size();
		info.pSupportedVideoModes = nullptr;
		
		//2-9
		if (info.numSupportedVideoModes > 0)
		{
			info.pSupportedVideoModes = new OniVideoMode[info.numSupportedVideoModes];
			int modeId = 0;

			for (auto p = profiles.begin(); p != profiles.end(); ++p)
			{
				
				OniVideoMode& mode = info.pSupportedVideoModes[modeId];
				//2-10
				mode.pixelFormat = convertPixelFormat(p->streamType);
				
				mode.resolutionX = p->nWidth;
				mode.resolutionY = p->nHeight;
				mode.fps = p->framerate;
				modeId++;

				#if 1
				rsLogDebug("\ttype=%d sensorId=%d streamId=%d width=%d height=%d framerate=%d",
					(int)p->streamType, (int)p->sensorId, (int)p->streamId, (int)p->nWidth, (int)p->nHeight, (int)p->framerate);
				#endif
			}

			m_sensorInfo.push_back(info);
		}
	}

	return ONI_STATUS_OK;
}

    3. In the RS version, frames are obtained by first acquiring the streams and then reading the frame data from them. Because a stream carries many frames, a Pipeline is used so the frames can be processed serially.

void Rs2Device::mainLoop()
{
	rsTraceFunc("");

	try
	{
		int configId = 0;
		while (m_runFlag)
		{
			const int curConfigId = m_configId;
			if (configId != curConfigId) // configuration changed since last tick
			{
				configId = curConfigId;                // restart the pipeline that serializes frame data
				restartPipeline();
			}

			if (m_pipelineProfile)
			{                // fetch frame data
				waitForFrames();
			}
			else
			{
				std::this_thread::sleep_for(std::chrono::milliseconds(WORKER_THREAD_IDLE_MS));
			}
		}
	}
	catch (...)
	{
		rsTraceError("Unhandled exception");
	}

}

void Rs2Device::waitForFrames()
{
	SCOPED_PROFILER;

	Rs2Error e;
	rs2_frame* frameset;
	{
		NAMED_PROFILER("rs2_pipeline_wait_for_frames");
		frameset = rs2_pipeline_wait_for_frames(m_pipeline, WAIT_FRAMESET_TIMEOUT_MS, &e);
	}
	if (!e.success())
	{
		rsTraceError("rs2_pipeline_wait_for_frames failed: %s", e.get_message());
		return;
	}
    // count the frames embedded in the frameset
	const int nframes = rs2_embedded_frames_count(frameset, &e);
	//rsLogDebug("frameset %llu (%d)", m_framesetId, nframes);

	if (m_registrationMode == ONI_IMAGE_REGISTRATION_DEPTH_TO_COLOR)
	{
		rs2_frame_add_ref(frameset, &e);
		{
			NAMED_PROFILER("rs2_process_frame");
			rs2_process_frame(m_alignProcessor, frameset, &e);
			if (!e.success())
			{
				rsTraceError("rs2_process_frame failed: %s", e.get_message());
			}
		}
	}

	for (int i = 0; i < nframes; ++i)
    {
		rs2_frame* frame = rs2_extract_frame(frameset, i, &e);
		if (frame)
		{            // parse each individual frame
			processFrame(frame);
			releaseFrame(frame);
		}
	}

	releaseFrame(frameset);
	++m_framesetId;

	if (m_registrationMode == ONI_IMAGE_REGISTRATION_DEPTH_TO_COLOR)
	{
		waitAlignedDepth();
	}
}

    4. The MV3D mainLoop() drops the Pipeline, because the Xiling camera packs four images into a single frame of data and one stream's data is therefore just one frame. 2-1 calls the Mv3d SDK to fetch one frame, 2-2 reflects that a stream contains only one frame, 2-3 parses the contents of the frame, and 2-4 waits for the aligned depth data according to the registration mode.

void Rs2Device::mainLoop()
{
	rsTraceFunc("");
	
	try
	{
		int configId = 0;
		while (m_runFlag)
		{
			
			if (m_runFlag)
			{
				//1-1
				waitForFrames();
				
			}
			else
			{
				
				std::this_thread::sleep_for(std::chrono::milliseconds(WORKER_THREAD_IDLE_MS));
			}
		}
	}
	catch (...)
	{
		rsTraceError("Unhandled exception");
	}

	stopPipeline();
}

void Rs2Device::waitForFrames()
{
	
	SCOPED_PROFILER;
	
	MV3D_RGBD_FRAME_DATA frameset;
	{
		
		NAMED_PROFILER("rs2_pipeline_wait_for_frames");
		//2-1
		MV3D_RGBD_FetchFrame(handle, &stFrameData, 5000);
		
		memcpy(&frameset, &stFrameData, sizeof(frameset));
		
	}
	//2-2
	const int nframes = 1;

	for (int i = 0; i < nframes; ++i)
    {
		MV3D_RGBD_FRAME_DATA* frame = &frameset;
		if (frame)
		{
			for (int framecount = 0; framecount < frame->nImageCount; ++framecount)
			{
				//2-3
				processFrame(frame, framecount);
			}
			
			//printf("releaseFrame success\n");
			//releaseFrame(frame);
		}
	}

	++m_framesetId;
	printf("++m_framesetId=%d \n", m_framesetId);
	// align the depth map to color?
	if (m_registrationMode == ONI_IMAGE_REGISTRATION_DEPTH_TO_COLOR)
	{
		//2-4
		waitAlignedDepth();
	}
}


3. Rs2Stream class: inherits from StreamBase

    The Stream class implementations in the RS and MV3D versions are similar; the MV3D version changes some pixel/image formats and removes some unnecessary functions. The concrete implementation is omitted here; see Section VII (OpenNI2 Sample Development).


IV. The OpenNI2 API Call Flow

1. Load the API with OpenNI2's initialize() function; to end the OpenNI2 session, shut it down with shutdown().

Status rc = openni::OpenNI::initialize();
openni::OpenNI::shutdown();


2. Communication with the physical device goes through its sensors: connect to the device and read its information via the Device class, and disconnect with close() when done.

Device device;

device.open(ANY_DEVICE);

device.close();

When only one device is connected, the calls above are all that is needed. When several devices are connected, use OpenNI2's Array together with enumerateDevices() to obtain the information for every device and pick one by its details (such as its URI).

Device device;

const char* deviceURI = openni::ANY_DEVICE;

openni::Array<openni::DeviceInfo> deviceInfoList;

openni::OpenNI::enumerateDevices(&deviceInfoList);

device.open(deviceURI);
//device.open(deviceInfoList[i].getUri());

device.close();


3. Use OpenNI2's VideoStream class to read the camera's data streams. Each kind of stream needs its own stream object, which is then used to start the corresponding stream; stop a stream with stop() and destroy the stream object with destroy(). Depth-stream and color-stream examples are given below.

Depth:

VideoStream depth;
rc = depth.create(device, openni::SENSOR_DEPTH);
rc = depth.start();
depth.stop();
depth.destroy();

Color:

VideoStream color;

rc = color.create(device, openni::SENSOR_COLOR);

rc = color.start();

color.stop();

color.destroy();

4. Use VideoStream to detect the newest data frame: call waitForAnyStream() to remove latency, then create a frame object with VideoFrameRef, fetch the new frame and initialize the VideoFrameRef object with readFrame(), and read the depth data in the frame with getData().

VideoFrameRef frame;

int changedStreamDummy;

VideoStream* pStream = &depth;

rc= OpenNI::waitForAnyStream(&pStream,1,&changedStreamDummy,SAMPLE_READ_WAIT_TIMEOUT);

rc = depth.readFrame(&frame);

DepthPixel* pDepth = (DepthPixel*)frame.getData();

The usage code above, together with the various OpenNI2 sample demos, can be found in the OpenNI2 samples folder "OpenNI2\Samples\".


V. OpenNI Sample Demo

#include <stdio.h>
#include <OpenNI.h>

#include "OniSampleUtilities.h"

#define SAMPLE_READ_WAIT_TIMEOUT 2000 //2000ms

using namespace openni;

int main()
{
	Status rc = OpenNI::initialize();
	if (rc != STATUS_OK)
	{
		
		printf("Initialize failed\n%s\n", OpenNI::getExtendedError());
		return 1;
		
	}
	printf("Initialize success\n");
	Device device;
	rc = device.open(ANY_DEVICE);
	if (rc != STATUS_OK)
	{
		printf("Couldn't open device\n%s\n", OpenNI::getExtendedError());
		return 2;
	}
	printf(" open device success*-*-*-*-*-*-*-*-*\n");

	VideoStream depth;

	// sensor type
	if (device.getSensorInfo(SENSOR_DEPTH) != NULL)
	{
		rc = depth.create(device, SENSOR_DEPTH);
		if (rc != STATUS_OK)
		{
			printf("Couldn't create depth stream\n%s\n", OpenNI::getExtendedError());
			return 3;
		}
	}

	printf("depth.create success*-*-*-*-*-*-*\n");

	rc = depth.start();
	if (rc != STATUS_OK)
	{
		printf("Couldn't start the depth stream\n%s\n", OpenNI::getExtendedError());
		return 4;
	}
	printf("depth.start success*-*-*-*-*-*-*\n");


	VideoFrameRef frame;

	while (!wasKeyboardHit())
	{
		int changedStreamDummy;
		VideoStream* pStream = &depth;

		rc = OpenNI::waitForAnyStream(&pStream, 1, &changedStreamDummy, SAMPLE_READ_WAIT_TIMEOUT);
		if (rc != STATUS_OK)
		{
			printf("Wait failed! (timeout is %d ms)\n%s\n", SAMPLE_READ_WAIT_TIMEOUT, OpenNI::getExtendedError());
			continue;
		}
		// max and min depth values
		int maxDepth = pStream->getMaxPixelValue();
		int minDepth = pStream->getMinPixelValue();

		rc = depth.readFrame(&frame);
		if (rc != STATUS_OK)
		{
			printf("Read failed!\n%s\n", OpenNI::getExtendedError());
			continue;
		}

		if (frame.getVideoMode().getPixelFormat() != PIXEL_FORMAT_DEPTH_1_MM && frame.getVideoMode().getPixelFormat() != PIXEL_FORMAT_DEPTH_100_UM)
		{
			printf("Unexpected frame format\n");
			continue;
		}

		// get the depth data array
		DepthPixel* pDepth = (DepthPixel*)frame.getData();
		printf("frame.getHeight() %d frame.getWidth() %d \t", frame.getHeight(), frame.getWidth());

		// index of the center pixel after flattening the 2-D image
		int middleIndex = (frame.getHeight()+1)*frame.getWidth()/2;

		int point = 0;
		// print the depth value of every point in the flattened image
		// (the buffer is row-major: height rows of width pixels each)
		for (int y = 0; y < frame.getHeight(); y++)
		{
			for (int x = 0; x < frame.getWidth(); x++)
			{
				printf("point%d %8d \t", point, pDepth[point]);
				point++;
			}
			printf("\n");
		}

		printf("******************************************************\n");
		printf("**     [%08llu]         %8d     %d      %d    %d    **\n", (long long)frame.getTimestamp(), pDepth[middleIndex],maxDepth,minDepth, middleIndex);
		printf("******************************************************\n");
	}

	depth.stop();
	depth.destroy();
	device.close();
	OpenNI::shutdown();
	system("pause");
	return 0;
}


VI. Additional Notes

1. An API reference document is available under "OpenNI2\Documentation\".

2. "openni初步认识" (repost)

3. "OpenNI2 开发者指南" (OpenNI2 Developer Guide, repost)

4. "浅析OpenNI2---Driver开发(1)" (repost)

5. "浅析OpenNI2---Driver开发(2)" (repost)


VII. OpenNI2 Sample Development

Because the code is lengthy, the implementation is not walked through in detail here; the link below points to the re-implemented OpenNI code.

OpenNI2 Sample Development

Copyright notice: this article is original content by a V社区 (V-Club) user. Reposts must credit the source (V社区), the article link, and the author; otherwise the author and the community reserve the right to pursue liability. To report suspected plagiarism, email v-club@hikrobotics.com with supporting evidence; confirmed infringing content will be removed immediately.