The web is supposed to be the great democratiser, but what happens if people don't have access to a web browser? Anybody should have the ability to access any web content, service or application by using any type of connected device, from anywhere at any time. This is a principle defined by the W3C under the banner 'One Web'. However, the reality is far from this.
The current state of responsive design (the approach of tailoring the layout and style of a web page so that it adapts to the specific characteristics of the device it's being viewed on) is making good progress on this road, but we're only just getting started and are barely scratching the surface of what needs to be done. In order to offer our sites, services and applications to everybody, everywhere, we need to not only tailor and adapt existing user interfaces, but also augment them, or add entirely new interfaces.
The One Web vision is defined by the W3C in technical architecture terms as Multimodal Interaction. This is where a website's underlying data and services are treated as entirely distinct from a (potential) multitude of user interfaces built on top, known as 'modality components', of which the web browser is just one. These interfaces should all be built using open web standards and separated from the data services by a controller system known as an 'interaction manager'. The end user then has the ability to select, or be presented with, the most appropriate user interface according to their context, needs, abilities, or their individual preferences. A well-crafted responsive site built using HTML5 and CSS3 already allows us to present an appropriate user interface based on device, albeit only within the context of a web browser and only within a very narrow set of parameters.
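Within the browser, that narrow adaptation is typically done with CSS media queries. As a minimal sketch (the class name and breakpoint value here are illustrative, not prescriptive), the same markup can be restyled per device class, or suppressed entirely for a given output medium:

```css
/* One set of markup, restyled according to the device viewing it.
   The .content class and 480px breakpoint are illustrative only. */
.content {
  width: 60%;
  margin: 0 auto;
}

/* Narrow viewports, such as phones: let the content fill the screen */
@media screen and (max-width: 480px) {
  .content {
    width: 100%;
  }
}

/* Print output: hide interactive controls that make no sense on paper */
@media print {
  .content .controls {
    display: none;
  }
}
```

Useful as this is, it only ever selects between visual presentations inside a browser; it cannot offer a spoken or tactile interface to the same underlying service.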
The reason for building multiple user interfaces is to support the widest variety of input and output devices, which often require different underlying markup languages to construct them. In a multimodal world, depending upon the device, users will be able to interact with online services by providing input via speech, handwriting, touchscreens, keystrokes or gestures, with the output presented via displays, pre-recorded and synthetic speech, audio, and tactile mechanisms such as mobile phone vibration motors and Braille strips. Alongside the obvious accessibility benefits of multimodal interaction, it's about bringing the user interface to the user, rather than forcing them to come to you on a restricted set of conditions that you've enforced.
Now imagine an online booking system. In addition to a responsive website, why not create a VoiceXML document driven by the same set of data, and connect it to a voice server to provide an automated speech interface via telephone or some future in-car dashboard system? This would reach a whole new set of users with an interface better suited to them.
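As a sketch, the VoiceXML dialogue for such a booking system might look something like the following. The field names and the submit URL are hypothetical; the point is that the `<submit>` at the end posts to the same data service that backs the website:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- A minimal VoiceXML sketch for the booking example.
     Field names and the submit URL are hypothetical. -->
<vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
  <form id="booking">
    <field name="party_size" type="number">
      <prompt>How many people is the booking for?</prompt>
    </field>
    <field name="booking_date" type="date">
      <prompt>What date would you like to book?</prompt>
    </field>
    <filled>
      <!-- Submit to the same data service the responsive website uses -->
      <submit next="https://example.com/bookings"
              namelist="party_size booking_date"/>
    </filled>
  </form>
</vxml>
```

A voice server interprets this document, speaks each prompt using synthetic speech, recognises the caller's answers against the built-in `number` and `date` grammars, and submits the collected values exactly as an HTML form would.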
Embrace open standards
Native apps for mobile and desktop operating systems (OS), and voice-activated services such as Apple Siri and Google Voice Search, are paving the way to this future. However, these apps are often built using proprietary technologies. We must embrace open web standards such as HTML5, CSS, VoiceXML, Extensible MultiModal Annotation markup language (EMMA) and others proposed by the W3C to provide the most open and connected future for our world wide web.
Take the time to study the sites and connected applications you are personally responsible for. Ensure data is accessible via services that aren't directly tied into a user interface. Then, start to envision how you could use open standard web technologies to create user interfaces that reach users according to their personal context, needs, abilities, or preferences. It's time to consider the world outside and beyond the web browser. The One Web future is about the user rather than their device.
Words: Den Odell
This article originally appeared in net magazine issue 248.