Kinect

“Audi City London”, An Immersive Digital Dealership Experience.

After a solid year of work, Audi and Razorfish have completed a new flagship showroom for Audi that opens near Piccadilly Circus in London, just ahead of the 2012 Olympic Games.

Developed using the Kinect, Razorfish has created an immersive world that allows potential buyers to glean even more information about Audi’s line of cars by interacting directly with touch-screen panels, interactive video walls, and physical surfaces and objects that react to touch and on-screen activity simultaneously. Watch the video below to get a full sense of what they have created. There is even more information available on Razorfish’s Emerging Experiences Blog.

“Audi City London is a groundbreaking dealership experience delivered by one of the most technologically advanced retail environments ever created. The digital environment features multi-touch displays for configuring your Audi vehicle from millions of possible combinations. Your personalized car is visualized in photorealistic 3D using real-time render technology, making the Audi City vehicle configurator the most advanced in the world. After personalizing your Audi, you can toss your vehicle onto one of the floor-to-ceiling digital “powerwalls” to visualize your car configuration in life-size scale. From here, you can use gestures to interact with your personalized vehicle, exploring every angle and detail in high resolution using Kinect technology.”

A Great Leap Forward. Gesture Based Computer Control.

I’ve been pretty fascinated with the Microsoft Kinect for creating Minority Report-style user interfaces and computer input. There is a new device that does what the Kinect does, and possibly jumps ahead of it by being available for desktop systems running Windows and Mac OS X.

Leap is about the size of an iPhone dock or a large flash drive. It’s easy to use: you simply install the software, plug the device in, wave your hands to calibrate the system, and you’re off and running.

From the examples in the video, there are a ton of possibilities here, but where I see the biggest growth potential is point-of-sale and visual merchandising solutions.

The fact that you can use your hands, or simply a finger, to input information without making physical contact with the device is huge. Imagine being able to interact with a screen behind a storefront window, or in a store display. Leap has an interactive window of about 8 cubic feet (roughly a 2-foot cube) around the actual computer setup. That’s plenty of room to create an interactive bubble in the environment.

Originally designed to aid developers with 3D modeling, Leap has expanded to allow control of a wide variety of applications. Leap is 200 times more sensitive than existing touch-free products and technologies, and can track movements as fine as 1/100th of a millimeter. Pretty impressive if you have ever played with the Kinect development kit and know its limitations. The other nice thing about Leap is that you can develop and define custom gestures for specific applications designed to take advantage of the hardware.
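To make that concrete, here is a minimal Python sketch of how a custom gesture might be recognized from a stream of tracked fingertip positions. This is purely illustrative: the fingertip samples stand in for whatever tracking call the Leap SDK actually exposes, and the thresholds are invented.

```python
# Hypothetical sketch of a custom "swipe" gesture built on top of a
# tracker that reports fingertip positions (x, y, z) in millimeters.
from collections import deque

WINDOW = 10           # number of recent samples to examine
MIN_TRAVEL_MM = 80.0  # horizontal distance that counts as a swipe
MAX_DRIFT_MM = 25.0   # allowed vertical wobble during the swipe

history = deque(maxlen=WINDOW)

def update(sample):
    """Feed one (x, y, z) fingertip sample; return a gesture name or None."""
    history.append(sample)
    if len(history) < WINDOW:
        return None
    xs = [p[0] for p in history]
    ys = [p[1] for p in history]
    travel = xs[-1] - xs[0]
    drift = max(ys) - min(ys)
    if abs(travel) >= MIN_TRAVEL_MM and drift <= MAX_DRIFT_MM:
        history.clear()  # avoid re-firing on the same motion
        return "swipe_right" if travel > 0 else "swipe_left"
    return None
```

The same window-of-recent-samples pattern extends to pinches, circles, or whatever application-specific gestures a developer wants to define.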

Oh, and the cost of the device is $69.99, which makes it extra affordable.

Nice To Meet You. A Social Construction Site.

In Hornstull, Sweden, Bonnier Properties is building a new shopping center, with all the noise and mess that usually go with major construction. To soften the impact of the construction on the surrounding neighborhoods and people, Bonnier has developed an interactive art installation with hooks into social networks.

In a construction tunnel they installed projectors, speakers, and Kinect sensors to create a virtual, interactive forest. The sensors registered people passing by the projection surfaces, which made the forest grow and change over time. If you stopped and interacted directly with the Kinect inputs, your actions were passed to Facebook, Twitter, and Google+, which acted as entry points into the forest and helped create additional content.
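The installation’s source code isn’t published, but the basic sensing loop is easy to imagine. Below is a hedged Python sketch using the OpenKinect (libfreenect) bindings: threshold the depth image to decide whether someone is standing in front of a projection surface, and use that presence signal to drive the forest. The grow_forest() hook and all the threshold values are assumptions.

```python
# Rough sketch of the presence loop, assuming the libfreenect Python
# bindings; the installation's actual code has not been published.
import time
import numpy as np
import freenect

# sync_get_depth() returns raw 11-bit depth values (0-2047); the band
# below is a guessed range for "a person standing near the wall".
NEAR_RAW, FAR_RAW = 400, 900
PRESENCE_PIXELS = 5000  # in-band pixels needed to count as "someone there"

def grow_forest():
    """Hypothetical hook: advance the projected forest scene one step."""
    print("forest grows...")

def someone_present():
    depth, _ = freenect.sync_get_depth()
    in_band = np.logical_and(depth > NEAR_RAW, depth < FAR_RAW)
    return np.count_nonzero(in_band) > PRESENCE_PIXELS

while True:
    if someone_present():
        grow_forest()
    time.sleep(0.1)
```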

NuFormer Interactive Projection Mapping.

As projection mapping gets more sophisticated, and the processing power of tablets and smartphones increases, we are going to start seeing some really cool uses of the technology. In the video below, NuFormer shows off some pretty slick uses of iPads, iPhones, and gesture-based manipulation (via Kinect) of the projection surfaces.

The real highlight for me isn’t the tablet or smartphone integration; it’s the gesture-based demo. That alone allows projection mapping events to become a seamless interactive experience with the audience. It’s no longer simply passive viewing, like watching your TV. The gaming demo is pretty impressive for the same reason: it allows the audience to engage with and participate in the sensory experience that makes projection mapping work.

Geek Out Monday. 3D Video on an iPad via Kinect.

It’s Monday, so I thought I would start the week with a geek fest featuring some 3D video built with a Microsoft Kinect and played back on an iPad.

LAAN Labs used the String Augmented Reality SDK to display video and audio recorded with the Kinect. Working with the OpenKinect project’s libfreenect library, they recorded the incoming data from the Kinect and built a textured mesh of the subject from calibrated RGB and depth data sets. This was done for each frame in the sequence, which allowed the video to be played back in real time. Using a simple depth cutoff, they were able to isolate the person in the video from the walls and other objects in the room.
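LAAN Labs hasn’t released this exact code, but the depth-cutoff step is easy to approximate with the libfreenect Python bindings: grab an RGB frame and a depth frame, then mask out everything beyond a cutoff so only the foreground subject survives. A minimal sketch, with the cutoff value assumed (their per-frame textured-mesh construction is not shown):

```python
# Sketch of the simple depth cutoff described above, using the
# libfreenect Python bindings. CUTOFF_RAW is an assumed value.
import numpy as np
import freenect

CUTOFF_RAW = 800  # raw 11-bit depth units; anything farther is background

def capture_foreground_frame():
    depth, _ = freenect.sync_get_depth()  # (480, 640) raw depth
    rgb, _ = freenect.sync_get_video()    # (480, 640, 3) RGB
    mask = depth < CUTOFF_RAW             # True where the subject is
    fg = rgb.copy()
    fg[~mask] = 0                         # black out walls and clutter
    return fg, depth, mask

# Recording a sequence is then just a loop collecting frames, from
# which a textured mesh can be built for each frame on playback.
frames = [capture_foreground_frame() for _ in range(30)]
```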

Using the String SDK, the reconstructed video was anchored to a printed image marker in the real world. The iPad reads that marker, much like a QR code, and displays the 3D video on top of it.

While this is pretty rough, the result is still impressive, and it really shows off the power of the Kinect’s open source community, the String SDK, and the OpenKinect project. I can’t wait to see how this develops. The potential for content development here is huge.