John Tsiombikas nuclear@mutantstargoat.com
3 February 2008
After the first successful test of my webcam marker tracking algorithm, it's now time for the real deal.
The purpose of my experiment is to detect the position of my head in 3D space, by processing the webcam-captured frames, locating the two markers, and then performing an inverse projection from 2D image space back to 3D space. That information can be used to set the viewpoint of a 3D environment to follow the motions of the user's head, increasing the user's immersion in the 3D world considerably. Simple, natural motions of the user's head are carried over into the virtual world, making the screen act as a window into that 3D environment.
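To give a rough idea of how such an inverse projection can work: if the physical distance between the two markers is known, along with the camera's field of view, a simple pinhole-camera model recovers the depth from the apparent marker separation, and the 2D midpoint between the markers can then be un-projected back to 3D. Here's a minimal sketch of that idea; the constants and names below are my own illustration, not the actual code:

#include <math.h>

/* assumed constants -- adjust for the actual camera and marker rig */
#define MARKER_DIST   0.14f     /* physical distance between the markers (meters) */
#define CAM_HALF_TAN  0.4663f   /* tan(hfov/2), for an assumed ~50 degree hfov */
#define CAM_ASPECT    (4.0f / 3.0f)

typedef struct { float x, y; } vec2;
typedef struct { float x, y, z; } vec3;

/* m1, m2: marker centers in normalized image coordinates, in [-1, 1] */
vec3 head_position(vec2 m1, vec2 m2)
{
    vec3 pos;
    float dx = m2.x - m1.x;
    float dy = (m2.y - m1.y) / CAM_ASPECT;  /* bring y to the same angular scale as x */
    float sep = sqrtf(dx * dx + dy * dy);   /* apparent marker separation */

    /* pinhole model: apparent size is inversely proportional to depth */
    pos.z = MARKER_DIST / (sep * CAM_HALF_TAN);

    /* un-project the midpoint between the two markers back to 3D */
    pos.x = 0.5f * (m1.x + m2.x) * pos.z * CAM_HALF_TAN;
    pos.y = 0.5f * (m1.y + m2.y) * pos.z * CAM_HALF_TAN / CAM_ASPECT;
    return pos;
}

This assumes the line between the markers stays roughly parallel to the image plane, which is good enough for head tracking in front of a monitor.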
The point tracking code from my previous test is unchanged. However, I modified the tracking program to accept local connections from client programs that need the tracking information (the normalized x, y position of each marker). Then I wrote a test program that renders a simple OpenGL "world" (a bunch of balls and a couple of coordinate grids), uses the marker positions from the tracker to calculate the user's 3D head position, and sets up the virtual camera to coincide with it.
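For the curious, the client side of that connection looks roughly like the following; the socket path and wire format here are made up for illustration (the actual protocol may differ), and it reuses the vec2 type from the sketch above:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

#define SOCK_PATH "/tmp/camtrack.sock"   /* assumed socket path */

/* connect to the tracker and wrap the socket in a stdio stream */
FILE *tracker_connect(void)
{
    struct sockaddr_un addr;
    int s;

    if ((s = socket(AF_UNIX, SOCK_STREAM, 0)) == -1) {
        return 0;
    }
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, SOCK_PATH, sizeof addr.sun_path - 1);

    if (connect(s, (struct sockaddr*)&addr, sizeof addr) == -1) {
        close(s);
        return 0;
    }
    return fdopen(s, "r");
}

/* read one frame's marker positions (assumed format: "x1 y1 x2 y2" per line) */
int tracker_read(FILE *fp, vec2 *m1, vec2 *m2)
{
    char line[256];
    if (!fgets(line, sizeof line, fp)) {
        return -1;
    }
    return sscanf(line, "%f %f %f %f", &m1->x, &m1->y, &m2->x, &m2->y) == 4 ? 0 : -1;
}

Each frame, the viewer reads one such tuple, feeds it through the inverse projection, and passes the resulting position to gluLookAt as the eye point.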
Once again, you may watch the result on YouTube. There's still some way to go, and some details to be ironed out... I'll keep you posted on anything new with this experiment :)
Oh, and of course, the code is always available in my Subversion repository:
svn://mutantstargoat.com/nuclear/compvis/cam_test
svn://mutantstargoat.com/nuclear/compvis/vr_test
Also, the webcam capture library (used by cam_test): svn://mutantstargoat.com/nuclear/libwcam
This was initially posted in my old WordPress blog. Visit the original version to see any comments.