Kinect is still quite young in the technology world, yet developers are already having a go at the device. Just three days after its release, an open source Kinect driver appeared. Since then, people have been experimenting with the device's capabilities, such as a method of interacting with photos.
Yet another Kinect concept has come onto the scene, demoed on YouTube. Oliver Kreylos has developed C++ software for Kinect that allows a room to be reconstructed in 3D using a single Kinect system. It works by taking the color video stream seen by the camera and merging that data with the depth video stream. The depth camera yields points that the software can interpret to build a 3D image that one can move around in.
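The core of that merging step is turning each depth pixel into a 3D point, which the color stream then paints. Here is a minimal sketch of that back-projection using a standard pinhole camera model; the `Intrinsics` values and function names are illustrative assumptions, not Kreylos's actual code or calibration.

```cpp
#include <cassert>

// Hypothetical pinhole intrinsics for the Kinect's depth camera.
// These fields are assumptions for illustration, not real calibration data.
struct Intrinsics {
    double fx, fy;   // focal lengths in pixels
    double cx, cy;   // principal point (roughly the image center)
};

struct Point3 { double x, y, z; };

// Back-project one depth pixel (u, v) with metric depth z into a 3D point
// in the camera's coordinate frame. Each such point, colored by the
// matching pixel of the color stream, becomes one sample of the live model.
Point3 backProject(const Intrinsics& k, double u, double v, double z) {
    return { (u - k.cx) * z / k.fx,
             (v - k.cy) * z / k.fy,
             z };
}
```

A pixel at the principal point maps straight down the optical axis, so `backProject(k, k.cx, k.cy, 1.0)` lands at (0, 0, 1); pixels farther from the center fan out proportionally to their depth.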
The resulting product is an awesome live 3D "model" of a room and everything inside of it. The question now is just how sophisticated this system can get. Now that the difficult part of mapping the streams together is essentially out of the way, the possibility of adding in more Kinect systems looks like something that could very well happen. Just one additional Kinect, plus an algorithm to mesh the resulting four streams (two color, two depth), would help fill in the shadows a single camera cannot see and create an even better image. The more cameras, the better the 3D image quality, although the processing power and meshing demands would climb quickly.
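Meshing a second Kinect's points into the first camera's view amounts to applying a rigid transform (rotation plus translation) obtained from an extrinsic calibration step, then pooling the point clouds. The sketch below shows the idea under those assumptions; the types and names are hypothetical, not part of Kreylos's software.

```cpp
#include <cassert>
#include <vector>

struct Point3 { double x, y, z; };

// Rigid transform taking points from a second Kinect's coordinate frame
// into the first Kinect's frame. The actual values would come from an
// extrinsic calibration; this struct is an illustrative assumption.
struct RigidTransform {
    double r[3][3];   // rotation matrix
    double t[3];      // translation vector
};

Point3 apply(const RigidTransform& m, const Point3& p) {
    return {
        m.r[0][0]*p.x + m.r[0][1]*p.y + m.r[0][2]*p.z + m.t[0],
        m.r[1][0]*p.x + m.r[1][1]*p.y + m.r[1][2]*p.z + m.t[1],
        m.r[2][0]*p.x + m.r[2][1]*p.y + m.r[2][2]*p.z + m.t[2],
    };
}

// Merge the second camera's cloud into the first camera's frame, filling
// in regions ("shadows") the first camera cannot see from its viewpoint.
std::vector<Point3> mergeClouds(const std::vector<Point3>& cam1,
                                const std::vector<Point3>& cam2,
                                const RigidTransform& cam2ToCam1) {
    std::vector<Point3> merged = cam1;
    merged.reserve(cam1.size() + cam2.size());
    for (const Point3& p : cam2)
        merged.push_back(apply(cam2ToCam1, p));
    return merged;
}
```

This per-frame merge is cheap, which is why the real cost of adding cameras lies in the extra streams to decode and the calibration, not the pooling itself.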
Kreylos is also looking at how this technology could be applied to mixing people with computer-generated imagery, quite a step up from simple webcam effects. He says he is still working on that, but we can be sure that more will come from him in the realm of Kinect modding. Kreylos also demonstrates how objects can be virtually measured, all using the software he has created.
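Virtual measurement falls out of the reconstruction almost for free: because the depth camera reports metric depth, the straight-line distance between any two reconstructed points corresponds to a real-world length. A rough sketch of that idea (not Kreylos's actual code):

```cpp
#include <cassert>
#include <cmath>

struct Point3 { double x, y, z; };

// Euclidean distance between two reconstructed 3D points. With metric
// depth data, this distance is a real-world length in the same units.
double distanceBetween(const Point3& a, const Point3& b) {
    double dx = a.x - b.x;
    double dy = a.y - b.y;
    double dz = a.z - b.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}
```

Picking two points on an object in the live model and computing this distance is, in essence, a virtual tape measure.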