Kinect to the web browser
The opportunities for motion sensing development in the browser are plentiful. Mairead Buchan details how to get a browser to react to 3D motion using Kinect technology
This article first appeared in issue 240 of .net magazine – the world's best-selling magazine for web designers and developers.
The Microsoft Xbox Kinect was the first 3D motion sensing device to reach a mass market. As soon as it hit the shelves, the internet was awash with hacks to hook it up to a computer. The first JavaScript library I found to expose data from a motion sensing device in a way that a web developer could take advantage of was DepthJS. Produced by a team at the Massachusetts Institute of Technology (MIT), DepthJS allows the browser to respond to predefined gestures as browser events, in the same way that touch screen devices register the touchstart and touchend events.
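To give a flavour of that event model, here's a minimal sketch. The event names and the event.detail payload below are illustrative placeholders rather than DepthJS's actual API; the point is simply that gesture data arrives as DOM events you can listen for, just as you would for touchstart and touchend.

// Illustrative only: 'handmove' and 'swipeleft' are placeholder event names,
// not DepthJS's real API. A plug-in would dispatch CustomEvents carrying the
// gesture data in event.detail.
window.addEventListener('handmove', function (event) {
  console.log('Hand at', event.detail.x, event.detail.y, event.detail.z);
});

window.addEventListener('swipeleft', function () {
  console.log('User swiped left');
});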
The opportunities for motion sensing development in the browser have increased over recent years, which makes it an exciting area for anyone keen to work at the cutting edge of web development. There is a bewildering array of software choices, so it's important to understand your requirements from the outset. Different devices offer different interaction paradigms, and there's also a choice of libraries to tie the Kinect into the browser.
Getting started
First, you want to decide what kind of application you're making. I see motion sensing applications as largely split into two separate camps: apps requiring hand or body detection in 3D space (such as online gaming), and gesture-controlled websites where hand gestures map to standard website interactions (such as scrolling the page, clicking a link or swiping a carousel). The latter are ideal when your users are browsing from a distance, say in an art gallery or museum. For a brilliant example of a gesture-controlled web application, check out the project created by the Southeastern Pennsylvania Transportation Authority, which allows users to browse the bus timetable from inside the depot.
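As a rough illustration of that mapping, the sketch below translates swipe gestures into ordinary page scrolling. The 'swipeup' and 'swipedown' event names are hypothetical placeholders; substitute whatever events your chosen plug-in actually dispatches.

// Hypothetical gesture events mapped to a standard website interaction.
// 'swipeup' and 'swipedown' are placeholder names, not a real plug-in API.
window.addEventListener('swipedown', function () {
  window.scrollBy(0, window.innerHeight); // scroll one screen down
});

window.addEventListener('swipeup', function () {
  window.scrollBy(0, -window.innerHeight); // scroll one screen back up
});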
To get started with motion sensing web development you will essentially need three things: a hardware device, a software driver that reads the data coming over USB, and a browser plug-in to convert that data into something JavaScript can set event listeners for.
The Kinect itself comes in two different flavours: the Xbox Kinect and the Kinect for Windows. The Kinect for Windows has near-field detection, allowing finger and gesture detection from as little as 40 centimetres from the device. The Xbox Kinect, on the other hand, requires the user to be at least three or four feet away. Obviously, this has ramifications for the kind of application you may be building. A game requiring full body detection is perfect for the Xbox Kinect. However, a website requiring fine gesture recognition will only be possible using the Kinect for Windows device – and that comes at a price: it's a hefty £160, nearly double its Xbox counterpart.
Other options for hardware include PrimeSense’s own Asus Xtion device (PrimeSense are the company that helped Microsoft develop the Kinect) as well as the recently announced Leap Motion controller, which is intended solely for hand and finger gesture detection and near-field interaction. Your choice of middleware is largely dependent on what operating system you’re using. This is also likely to dictate how easy you find it to get up and running.
If you're using Windows, you can use Microsoft's own drivers and SDK. This is definitely the easiest install option. For open source drivers you have two choices: Libfreenect, developed by the OpenKinect community, and OpenNI, developed by PrimeSense. Both support the Xbox Kinect. Apparently OpenNI support for the Kinect for Windows device performs poorly on Mac/Linux, and Libfreenect requires a patched version of the library to get near-field detection working. In short, if you want near-field mode, you may prefer to stick with the Windows drivers for now.
Installing the OpenNI library is fairly straightforward: a script provided with the download can be run from the command line. For comprehensive instructions on installing Libfreenect, head to the OpenKinect site and read the getting started guide. As this is experimental technology, there is no software installer icon to double-click; this is a roll-up-your-sleeves, compile-from-source installation. As a frontend developer this was a little outside my comfort zone, and it resulted in well over 20 hours of teeth-gnashing frustration as I Googled CMake error messages and repeatedly reinstalled software packages.
If you're working with Linux, you're probably familiar with apt-get, and this may not pose many problems for you. If you're working with Mac OS X, you'll need a package installer, and the two frontrunners are MacPorts and Homebrew. Homebrew looks more user-friendly, but if you're using someone else's formulae to install packages with several dependencies, you don't have any real control over what is being installed. I opted for MacPorts in the end. At one point I had to remove a package and install an earlier version to resolve conflicting dependencies, and that just wasn't possible via the Homebrew installation.
Your choice of browser plug-in is likely to be dictated by both the type of application you want to create and your chosen technology stack. You currently have the following options: DepthJS, Kinected Browser, Kinesis.io and Zigfu.
Browser plug-ins
DepthJS relies on the OpenNI driver, but it also stipulates Libfreenect as a dependency, so I ended up installing both. The DepthJS plug-in currently only runs in WebKit browsers, but Firefox support is expected.
If you've chosen to go with the Kinect for Windows SDK and drivers, the Microsoft Research team has developed its own Kinected Browser ActiveX plug-in. Sadly it will only run on Windows 7+ and in Internet Explorer 9+, which is a little disappointing.
An alternative is Kinesis.io, which also runs off the Kinect for Windows driver, giving you the option of near-field development. Kinesis.io says it supports all major browsers, but its installation requirements stipulate Windows 7 as the operating system.
Of all the options, Zigfu has the largest browser and platform support. You can use either the Kinect for Windows driver or the open source OpenNI alternative. The Zigfu API is mainly concerned with tracking visible users and skeleton joints for full body detection, as well as providing interaction widgets for in-game navigation.
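The general shape of skeleton-tracking code is a per-user callback that delivers named joint positions on every frame. The sketch below uses entirely hypothetical names (fakeSensor, onUserFound, onFrame) and a simulated data source so that it runs without any hardware; consult the Zigfu documentation for the library's real API.

// All names here are hypothetical; the fake sensor simply simulates the flow
// of skeleton data (one callback per visible user, joint positions per frame).
var fakeSensor = {
  onUserFound: function (callback) {
    var user = {
      onFrame: function (frameCallback) {
        setInterval(function () {
          frameCallback({
            rightHand: { x: Math.random() * 2 - 1, y: Math.random() * 2 - 1, z: 1.5 }
          });
        }, 33); // roughly 30 frames per second
      }
    };
    callback(user);
  }
};

fakeSensor.onUserFound(function (user) {
  user.onFrame(function (joints) {
    console.log('Right hand at', joints.rightHand.x, joints.rightHand.y, joints.rightHand.z);
  });
});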
I came across a lot of teething problems in the early stages of prototyping. There is a point in the development workflow where you can't access any debugging information from the hardware. At times it was difficult to determine whether the errors I received were down to environmental factors (such as lighting, positioning, wiring or sunlight levels affecting the infrared), a malfunctioning Kinect device, or simply incorrectly implemented code.
Despite this, when I finally created an object in a 3D environment in the browser that moved in tandem with my hand in space, I felt a huge rush of achievement. It was worth the hours of installation woes and blind trial-and-error debugging for that feeling alone.
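For anyone wanting to reproduce that final step, here is a minimal sketch of the idea, assuming three.js is loaded on the page. The getHandPosition() function is a hypothetical stand-in for whichever plug-in supplies the tracked hand coordinates; everything else is standard three.js.

// Assumes three.js is loaded. getHandPosition() is a hypothetical stand-in
// for whichever plug-in (DepthJS, Zigfu and so on) reports the tracked hand.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 5;

var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

var hand = new THREE.Mesh(new THREE.SphereGeometry(0.2, 16, 16), new THREE.MeshNormalMaterial());
scene.add(hand);

function getHandPosition() {
  // Hypothetical: return {x, y, z} supplied by your sensor plug-in
  return window.latestHandPosition || { x: 0, y: 0, z: 0 };
}

function animate() {
  requestAnimationFrame(animate);
  var pos = getHandPosition();
  // Map sensor space into scene space; the scale factor is tuned by eye
  hand.position.set(pos.x * 2, pos.y * 2, -pos.z);
  renderer.render(scene, camera);
}
animate();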
I’m excited to see what other people will be creating with this technology and to see these web applications finally out in the wild – so get making!