The Depth and Color streams are not taken from the same point, so they do not correspond to each other perfectly. Their FOV (field of view) also differs.
- cameras:
  - IR/Depth FOV: 58.5° x 45.6°
  - Color FOV: 62.0° x 48.6°
  - distance between cameras: 25 mm
- my corrections for 640×480 resolution (for both streams):
  ```
  if (valid depth)
    {
    ax=(((x+10-xs2)*241)>>8)+xs2;
    ay=(((y+30-ys2)*240)>>8)+ys2;
    }
  ```

  where `x,y` are the input coordinates in the depth image, `ax,ay` are the output coordinates in the color image, `xs,ys = 640,480` is the stream resolution, and `xs2,ys2 = 320,240` is half of it.
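  If it helps, here is the same mapping wrapped into a self-contained C++ function. This is only a sketch of the snippet above: the function name `depth_to_color` and the validity convention (treating `rawdepth==0` as invalid) are my assumptions, not something fixed by the Kinect API.

  ```cpp
  #include <cstdint>

  const int xs = 640, ys = 480;         // stream resolution (both streams)
  const int xs2 = xs / 2, ys2 = ys / 2; // half resolution

  // Map a depth-image pixel (x,y) to the matching color-image pixel (ax,ay).
  // Returns false for an invalid depth sample (rawdepth==0 is my assumption;
  // use whatever "valid depth" test your driver provides).
  bool depth_to_color(int x, int y, uint16_t rawdepth, int &ax, int &ay)
      {
      if (rawdepth == 0) return false;
      ax = (((x + 10 - xs2) * 241) >> 8) + xs2; // +10 px offset, scale 241/256
      ay = (((y + 30 - ys2) * 240) >> 8) + ys2; // +30 px offset, scale 240/256
      return true;
      }
  ```

  The integer scales are just the FOV ratios in fixed point: 58.5/62.0 ≈ 241/256 and 45.6/48.6 ≈ 240/256, so the `>>8` replaces a floating-point division.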
  As you can see, my Kinect also has a y-offset, which is weird (it is even bigger than the x-offset). My conversion works well at ranges up to 2 m; I did not measure beyond that, but it should still work there.
- do not forget to correct the space coordinates computed from the depth value and depth-image coordinates:
  ```
  pz=0.8+(float(rawdepth-6576)*0.00012115165336374002280501710376283);
  px=-sin(58.5*deg*float(x-xs2)/float(xs))*pz;
  py=+sin(45.6*deg*float(y-ys2)/float(ys))*pz;
  pz=-pz;
  ```

  where `px,py,pz` is the point coordinate in space, in [m], relative to the Kinect. I use a coordinate system for the camera with the opposite Z direction, hence the sign negation.
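  Put together as a self-contained C++ sketch (the wrapper name `depth_to_point` and the definition of the `deg` constant are mine; the magic numbers are the calibration of my own unit, so treat them as assumptions for any other device):

  ```cpp
  #include <cstdint>
  #include <cmath>

  const double deg = 3.14159265358979323846 / 180.0; // degrees -> radians
  const int xs = 640, ys = 480;                      // depth stream resolution
  const int xs2 = xs / 2, ys2 = ys / 2;              // optical center (half res)

  // Convert a depth-image pixel (x,y) with its raw depth value into a 3D point
  // (px,py,pz) in meters, relative to the Kinect.
  void depth_to_point(int x, int y, uint16_t rawdepth,
                      float &px, float &py, float &pz)
      {
      // linear raw-depth -> meters calibration (measured on my model 1414)
      pz = 0.8f + float(rawdepth - 6576) * 0.00012115165336374002280501710376283f;
      // spread the pixel across the 58.5° x 45.6° depth FOV
      px = -sin(58.5 * deg * double(x - xs2) / double(xs)) * pz;
      py = +sin(45.6 * deg * double(y - ys2) / double(ys)) * pz;
      pz = -pz; // my camera coordinate system points Z the other way
      }
  ```

  A typical loop over a depth frame would then call `depth_to_color` to fetch the matching RGB texel and `depth_to_point` to build the point cloud.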
PS. I have the old model 1414, so newer models probably have different calibration parameters.