Shift in Point Cloud acquired using Kinect v2 API

I am acquiring a point cloud using the Kinect v2 API on Windows 10 (64-bit). Below is the code snippet:

    depthFrame = multiSourceFrame.DepthFrameReference.AcquireFrame();
    colorFrame = multiSourceFrame.ColorFrameReference.AcquireFrame();
    if (depthFrame == null || colorFrame == null) return;
    depthFrame.CopyFrameDataToArray(depthData);
    coordinateMapper.MapDepthFrameToCameraSpace(depthData, cameraSpacePoints);
    coordinateMapper.MapDepthFrameToColorSpace(depthData, colorSpacePoints);
    colorFrame.CopyConvertedFrameDataToArray(pixels, ColorImageFormat.Rgba);
    for (var...
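For context, here is a minimal sketch of how such a loop typically assembles the colored point cloud from the arrays above. This is an illustrative continuation, not the poster's elided code; it assumes a 512x424 depth frame, a 1920x1080 color frame, and the RGBA pixel buffer copied above:

    // Illustrative continuation only (the original loop is elided above).
    // Pairs each depth pixel's camera-space position with the color it maps to,
    // skipping pixels the coordinate mapper could not resolve.
    var points = new List<(CameraSpacePoint Position, byte R, byte G, byte B)>();
    for (var i = 0; i < cameraSpacePoints.Length; ++i)
    {
        CameraSpacePoint p = cameraSpacePoints[i];
        if (float.IsInfinity(p.X) || float.IsInfinity(p.Y) || float.IsInfinity(p.Z))
            continue;                               // no valid depth at this pixel

        ColorSpacePoint c = colorSpacePoints[i];
        int cx = (int)(c.X + 0.5f);
        int cy = (int)(c.Y + 0.5f);
        if (cx < 0 || cx >= 1920 || cy < 0 || cy >= 1080)
            continue;                               // maps outside the color image

        int ci = (cy * 1920 + cx) * 4;              // RGBA, 4 bytes per pixel
        points.Add((p, pixels[ci], pixels[ci + 1], pixels[ci + 2]));
    }

Invalid depth pixels come back from MapDepthFrameToCameraSpace as -Infinity, so filtering them out before building the cloud avoids spurious points.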

kinect - Maya MEL project lookAt target into place after motion capture import

I have a facial animation rig which I am driving in two different ways: it has an artist UI in the Maya viewports, as is common for interactive animation, and I've connected it to the FaceShift markerless motion capture system. I envision a workflow where a performance is captured, imported into Maya, the sample data is smoothed and reduced, and then an animator takes over for finishing. Our face rig has the eye gaze controlled by a mini-hierarchy of three objects (a global lookAtTarget and a left and right eye offset). Because the eye gazes are ...

How to perform face recognition using Kinect?

I'm trying to develop software that can perform face recognition using the Kinect. So far, I only know that I can use OpenCV for this. I just want the camera to be able to save a face and recognize that person later. I've only used the Kinect SDK, so I need some help or orientation to implement this feature...

kinect - How to segment depth image faster?

I need to segment a depth image captured from a Kinect device in real time (30 fps). Currently I am using EuclideanClusterExtraction from PCL; it works, but it is very slow (1 fps). Here is a paragraph from the PCL tutorial: “Unorganized” point clouds are characterized by non-existing point references between points from different point clouds due to varying size, resolution, density and/or point ordering. In the case of “organized” point clouds, often based on a single 2D depth/disparity image with fixed width and height, a differential analysis of the corres...
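The quoted distinction is the key to the speed-up: in an organized cloud, the neighbours of a point are simply the adjacent pixels, so no k-d tree or radius search is needed. As a rough illustration of that idea (this is not PCL code; PCL has organized segmentation classes that exploit the same structure in C++), a minimal connected-component sketch over the raw depth grid might look like the following, assuming a row-major ushort[] frame in millimetres with 0 meaning "no reading" and the usual System / System.Collections.Generic usings:

    // Illustration of organized-grid segmentation (not PCL): a flood fill that
    // treats adjacent pixels as neighbours and splits regions at depth jumps.
    static int[] SegmentDepth(ushort[] depth, int width, int height, int maxJumpMm)
    {
        var labels = new int[depth.Length];         // 0 = unlabelled
        var stack = new Stack<int>();
        int nextLabel = 0;

        for (int seed = 0; seed < depth.Length; ++seed)
        {
            if (depth[seed] == 0 || labels[seed] != 0) continue;
            labels[seed] = ++nextLabel;
            stack.Push(seed);

            while (stack.Count > 0)
            {
                int i = stack.Pop();
                int x = i % width;
                // 4-connected neighbours in the organized grid (-1 = off the row).
                int[] neighbours = { x > 0 ? i - 1 : -1,
                                     x < width - 1 ? i + 1 : -1,
                                     i - width, i + width };
                foreach (int j in neighbours)
                {
                    if (j < 0 || j >= depth.Length) continue;
                    if (depth[j] == 0 || labels[j] != 0) continue;
                    if (Math.Abs(depth[j] - depth[i]) <= maxJumpMm)
                    {
                        labels[j] = labels[i];      // same surface, same cluster
                        stack.Push(j);
                    }
                }
            }
        }
        return labels;                              // per-pixel cluster id, 0 = invalid
    }

Each pixel is visited a constant number of times, which is what makes frame-rate segmentation plausible, whereas Euclidean clustering on the unorganized cloud has to do a neighbour search per point.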

Raspberry Pi with Kinect

Has anyone managed to get the camera data from the Kinect using a Raspberry Pi? We would like to make a wireless Kinect by connecting it over Ethernet or WiFi. Otherwise, let me know if you have a working alternative...

Kinect 360 point cloud resolution increase - is it possible with more infrared projectors in the room?

If I scatter more infrared points around the room from separate infrared laser speckle projectors, and in doing so increase the point cloud resolution spread over objects in the room, will this result in a higher resolution 3D scan captured by the infrared camera on the Kinect 360? Or are there limitations on how much point cloud data the IR camera and/or Kinect software/hardware can process?...

ros - how to run Kinect and ASUS cameras simultaneously

I can run just one camera alone. I am following this: enter link description here. I need to run both cameras simultaneously. When running the second camera, the output is:

    [ INFO] [1550737732.951379947]: Initializing nodelet with 4 worker threads.
    [FATAL] [1550737734.651595532]: Failed to load nodelet '/camera/rgb_rectify_mono' of type 'image_proc/rectify' to manager 'camera_nodelet_manager'
    [camera/rgb_rectify_mono-4] process has died [pid 12535, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load image_proc/rectify camera_nodelet_manager --no-bond...

Kinect V2 - how to extract the player/user from the background at the original 1920x1080 resolution

In Kinect V2, as we know, the depth and color resolutions are different. With the mapping API available in the SDK it is easy to take the color value from the color frame and put it on the depth frame, as shown by many posts on the internet. That gives a final image of size 512x424. But I want to extract the player/user in the original color frame, so that the final image has a resolution of 1920x1080. I can think of using the mapping API to mark the color frame with user/player pixels, then applying some heuristic to expose the RGB values of neighboring pixels and complete the user/player image...
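One way to do this with the same mapping API, sketched below under a few assumptions (a MultiSourceFrameReader that also delivers the BodyIndexFrame, and depthData, bodyIndexData and a BGRA colorPixels buffer already copied from their frames), is to map every color pixel into depth space and keep only those that land on a body-index pixel:

    // Hedged sketch: cut the player out of the full 1920x1080 color frame.
    // MapColorFrameToDepthSpace gives, for every color pixel, the depth pixel
    // it corresponds to; the body index frame then says whether that depth
    // pixel belongs to a tracked body (255 = no body).
    DepthSpacePoint[] colorToDepth = new DepthSpacePoint[1920 * 1080];
    coordinateMapper.MapColorFrameToDepthSpace(depthData, colorToDepth);

    byte[] playerPixels = new byte[1920 * 1080 * 4];    // BGRA, transparent by default
    for (int colorIndex = 0; colorIndex < colorToDepth.Length; ++colorIndex)
    {
        DepthSpacePoint p = colorToDepth[colorIndex];
        if (float.IsNegativeInfinity(p.X) || float.IsNegativeInfinity(p.Y))
            continue;                                   // no depth sample behind this color pixel

        int depthX = (int)(p.X + 0.5f);
        int depthY = (int)(p.Y + 0.5f);
        if (depthX < 0 || depthX >= 512 || depthY < 0 || depthY >= 424)
            continue;

        if (bodyIndexData[depthY * 512 + depthX] != 255) // a tracked player
        {
            int i = colorIndex * 4;
            playerPixels[i]     = colorPixels[i];       // B
            playerPixels[i + 1] = colorPixels[i + 1];   // G
            playerPixels[i + 2] = colorPixels[i + 2];   // R
            playerPixels[i + 3] = 255;                  // opaque where the player is
        }
    }

Because the depth camera sees less than the color camera, the silhouette edge will be slightly ragged at 1920x1080; the neighboring-pixel heuristic described in the question (or a small dilation/erosion pass) is a reasonable way to clean it up.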

cinder - Kinect Color Point to Depth/Camera Point

I am working on a Cinder and Kinect v2 app, and currently I am stuck on mapping a color point to a depth point. I searched through a color frame for a specific point of a certain color, so I have the color frame point's x and y. I would like to get the depth at this point, but of course the depth frame has a different resolution and viewpoint, so you can't just index into it. I couldn't find any mapper from a color point to a depth point, or even to a camera point. Is there a simple way of doing this other than taking the measurements yourself? My problem is simila...
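As far as the v2 SDK goes, there is no single-point color-to-depth mapper; in that direction the coordinate mapper works per frame, so the usual workaround is to map the whole color frame once and then index the result at the pixel you found. A minimal sketch of that, written against the managed API (the native ICoordinateMapper used from Cinder/C++ exposes equivalent calls; colorX, colorY and the depthData buffer are assumed to exist already):

    // Hedged sketch: find the depth (and 3D camera-space point) behind one
    // color pixel by mapping the whole color frame into depth space first.
    DepthSpacePoint[] colorToDepth = new DepthSpacePoint[1920 * 1080];
    coordinateMapper.MapColorFrameToDepthSpace(depthData, colorToDepth);

    DepthSpacePoint d = colorToDepth[colorY * 1920 + colorX];
    if (!float.IsNegativeInfinity(d.X) && !float.IsNegativeInfinity(d.Y))
    {
        int depthX = Math.Min(Math.Max((int)(d.X + 0.5f), 0), 511);
        int depthY = Math.Min(Math.Max((int)(d.Y + 0.5f), 0), 423);
        ushort depthMm = depthData[depthY * 512 + depthX];   // depth in millimetres

        // And on to a 3D point if the camera-space position is wanted too.
        CameraSpacePoint camera = coordinateMapper.MapDepthPointToCameraSpace(d, depthMm);
    }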

kinect - How can I use depth data in CAD-60

I want to get the distance from a depth image. I have a 3D position and I convert this point to 2D. Now I read the coordinate data from the RGB and depth images. In this picture I marked 5 points and looked at their values in the depth image. Why is this coordinate data not correct? Or why is this data 0? How can I get real data from the depth image?...
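A value of 0 in a Kinect depth map means the sensor had no reading at that pixel (IR shadow, too close or too far, an absorbing or reflective surface), and because the RGB and depth cameras are not aligned, a pixel marked in the RGB image does not generally land on the same (u, v) in the depth image unless the frames are registered. A purely illustrative sketch of reading a depth value more robustly, assuming the frame is loaded as a row-major ushort[] in millimetres (the helper name and the neighbourhood fallback are assumptions, not part of CAD-60):

    // Illustrative helper: read the depth at a projected 2D point, falling back
    // to the median of valid neighbours when the pixel itself is 0 (no reading).
    static ushort DepthAt(ushort[] depth, int width, int height, int u, int v, int radius = 2)
    {
        if (u < 0 || u >= width || v < 0 || v >= height) return 0;

        ushort d = depth[v * width + u];
        if (d != 0) return d;                        // valid reading, use it directly

        var valid = new List<ushort>();
        for (int dy = -radius; dy <= radius; ++dy)
            for (int dx = -radius; dx <= radius; ++dx)
            {
                int x = u + dx, y = v + dy;
                if (x < 0 || x >= width || y < 0 || y >= height) continue;
                ushort n = depth[y * width + x];
                if (n != 0) valid.Add(n);
            }

        if (valid.Count == 0) return 0;              // whole neighbourhood is invalid
        valid.Sort();
        return valid[valid.Count / 2];               // median of the valid neighbours
    }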

skinning - Changing the avatar in Avateering sample (Kinect Developer Toolkit v1.7.0)

I've been trying to make a virtual dressing room, but I am not able to change the avatar in the Avateering sample. I even tried making minor changes to the mesh of the avatar in Autodesk Maya, but couldn't get it to work. When I run the code with this new avatar, no avatar gets displayed on the screen. I've changed the Content Processor to SkinnedModelProcessor as well, as mentioned in a post I found on the web. I also followed the following blog post, but again some random parts of the new avatar show up in the game environment, at an incorrect locati...

kinect - Augmented Reality project for virtual dressing room

I'm doing a project on augmented reality whose final target is to implement a virtual dressing room. I'm also hoping to use a Kinect device to capture the motion of the body and map the dress onto it. The thing is, I don't know how to start and I don't know anything about the Kinect API. Can you help me get my feet wet with AR and the Kinect? Thanks in advance...