Open Kinect Project

“Audi City London”, An Immersive Digital Dealership Experience.

After a solid year of work, Audi and Razorfish have completed a new flagship showroom for Audi that will open near Piccadilly Circus in London, just ahead of the 2012 Olympic Games.

Developed using the Kinect, Razorfish has created an immersive world that lets potential buyers learn even more about Audi’s line of cars by interacting directly with touch-screen panels, interactive video walls, and physical surfaces and objects that respond to touch and on-screen content simultaneously. Watch the video below to get a full sense of what they have created. There is even more information available on Razorfish’s Emerging Experiences Blog.

“Audi City London is a groundbreaking dealership experience delivered by one of the most technologically advanced retail environments ever created. The digital environment features multi-touch displays for configuring your Audi vehicle from millions of possible combinations. Your personalized car is visualized in photorealistic 3D using real-time render technology, making the Audi City vehicle configurator the most advanced in the world. After personalizing your Audi, you can toss your vehicle onto one of the floor-to-ceiling digital “powerwalls” to visualize your car configuration in life-size scale. From here, you can use gestures to interact with your personalized vehicle, exploring every angle and detail in high resolution using Kinect technology.”

Geek Out Monday. 3D Video on an iPad via Kinect.

It’s Monday, so I thought I would start the week with a geek fest featuring some 3D video captured with a Microsoft Kinect and played back on an iPad.

LAAN Labs used the String Augmented Reality SDK to display video and audio recorded with the Kinect. Working with libfreenect from the Open Kinect Project, they recorded the incoming Kinect data and then built a textured mesh of the subject from calibrated RGB and depth data. This was done for each frame in the sequence, which allowed the video to be played back in real time. Using a simple depth cutoff, they were able to isolate the person in the video from the walls and other objects in the room.
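To make that per-frame step concrete, here is a minimal sketch of grabbing one Kinect frame with the libfreenect Python bindings and applying a depth cutoff to keep only the foreground. This is not LAAN Labs’ actual code: the camera intrinsics and the raw-depth-to-metres conversion are rough published approximations, and the RGB lookup ignores the offset between the two cameras that their calibrated pipeline would correct.

```python
# Sketch: capture one RGB + depth frame and keep only nearby (foreground) pixels.
# Assumes the libfreenect Python bindings ("freenect") and NumPy are installed.
import freenect
import numpy as np

FX, FY = 594.2, 591.0   # approximate Kinect depth-camera focal lengths (pixels)
CX, CY = 339.5, 242.7   # approximate principal point

def raw_depth_to_metres(raw):
    """Convert raw 11-bit Kinect disparity values to approximate metres."""
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

def capture_foreground(max_depth_m=1.5):
    """Grab one frame and keep only pixels nearer than the depth cutoff."""
    raw_depth, _ = freenect.sync_get_depth()   # 480x640 raw disparity values
    rgb, _ = freenect.sync_get_video()         # 480x640x3 RGB image

    depth_m = raw_depth_to_metres(raw_depth.astype(np.float32))

    # Simple depth cutoff: pixels with no reading (2047) or beyond the cutoff
    # (walls, furniture) are dropped, leaving the person in front of the camera.
    mask = (raw_depth < 2047) & (depth_m < max_depth_m)

    # Back-project the surviving pixels into 3D points, one vertex per pixel.
    # Triangulating neighbouring vertices would give the textured mesh.
    v, u = np.nonzero(mask)
    z = depth_m[v, u]
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=1)
    colors = rgb[v, u]                          # per-vertex colour for texturing
    return points, colors

if __name__ == "__main__":
    pts, cols = capture_foreground()
    print(f"kept {len(pts)} foreground vertices this frame")
```

Recording a sequence of these per-frame point sets (or meshes built from them) is what allows the clip to be replayed later in real time on the iPad.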

Using the String SDK, the reconstructed video was anchored to a printed image marker in the real world; the iPad reads that marker, much like a QR code, to position and display the 3D footage.

While this is pretty rough, the result is still impressive, and it really shows off the power of the Kinect’s open source community, the String SDK, and the Open Kinect Project. I can’t wait to see how this develops. The potential for content development here is huge.