Product user interfaces have changed dramatically over the years, to the point where interaction with our various devices is completely unrecognisable from that of even 20 years ago. We have moved from the first iterations of the classic WIMP interface, on to increasingly advanced GUI solutions, right through to the paradigm-shifting touchscreen interface popularised a decade ago by the first iPhone.
As our technology continues to become more advanced, new user interfaces seem to be appearing absolutely everywhere. From our vehicles, to our thermostats, to even our fridges, we're reaching the point where everything in our lives can be controlled at the touch of a button or the swipe of a finger.
However, no matter how advanced these interfaces get, they still represent a barrier between us and our technology – a barrier that is quickly coming to look obsolete in our interconnected world.
Celebrated design guru Donald Norman put it best in 1990 when he said: "The real problem with the interface is that it is an interface. Interfaces get in the way. I don't want to focus my energies on an interface. I want to focus on the job… I don't want to think of myself as using a computer, I want to think of myself as doing my job."
That's the key. We are moving towards a future without any traditional interface, where we move away from the touchscreen – or any screen at all, in fact.
As our world fills with more and more interconnected devices, so too will our day-to-day lives. Because of this, the interfaces we use will need to evolve in step – interfaces that are not fragmented and distracting, but rather designed to be effective, seamless and, most importantly of all… invisible.
Designing beyond the screen
The question is how do we make interfaces invisible and also begin to move beyond the screen? There is no doubt that the smartphone is a wondrous invention, which has revolutionised the way we socialise, work, and live.
However, we can also agree that this 24/7 tether to the outside world can occasionally be a distraction at best, and downright intrusive at worst. The perennial beeps, buzzes, red dots and blue ticks serve as – ironically – a constant barrier between us and the world around us, taking us away from the tasks we're trying to accomplish.
This is why the natural next phase in user experience design will be about moving beyond the screen and interfacing with the devices around us through more natural means, such as computer vision, artificial intelligence (AI) and voice control. The concept behind these invisible interfaces – also known as Zero UI – is essentially designing for where all of these disparate elements converge, in order to provide a more intuitive experience for the end user.
Zero UI introduces us to a new world where our natural gestures, voice, glances and even our thoughts can be used to communicate with our devices in a seamless, non-intrusive way – leading us towards a user experience that makes us feel like we're communicating not with a machine, but rather with another person.
The aim of invisible interfaces is to help facilitate a world wherein our devices find the balance between presence and discretion; always being peripherally present and ready to accept user input, while never distracting or demanding our attention.
The key change in designing for this will be anticipatory design: the process in which a designer – with the help of artificial intelligence – anticipates the needs and tasks of the user and makes pre-emptive decisions on their behalf, in order to simplify the user journey and reduce cognitive load.
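In its simplest form, anticipatory design means using a signal such as past behaviour to pre-select a sensible default, so the user is offered one likely answer instead of a full menu. Here is a minimal, hypothetical sketch of that idea – the function name and the coffee-ordering scenario are illustrative, not taken from any real product:

```python
from collections import Counter

def anticipate_choice(history, options):
    """Pre-select the option the user picks most often, so the
    interface can present one default instead of a full menu."""
    counts = Counter(item for item in history if item in options)
    if not counts:
        return None  # No signal yet - fall back to asking the user.
    return counts.most_common(1)[0][0]

# A user who almost always orders a flat white sees it pre-selected:
history = ["flat white", "flat white", "espresso", "flat white"]
print(anticipate_choice(history, {"flat white", "espresso", "latte"}))
# -> flat white
```

Real anticipatory systems replace the frequency count with far richer models, but the design goal is the same: fewer decisions presented to the user, lower cognitive load.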
Here, we'll take a look at the main areas in which invisible interfaces are poised to revolutionise the way we interact with our devices, as well as where the transition to invisible interfaces could take the field of UX design in future.
Designing for voice
The most popular invisible interface by far is that of voice, with recent investment in this area from tech titans Apple, Amazon and Google having thrust this technology firmly into the mainstream. This burgeoning market for voice interfaces creates an intriguing new challenge for UX designers everywhere. In fact, some are already hailing the sector as the natural next step forward in UX design.
The reason so many have adopted voice as the de facto standard for invisible design is simple. As we've moved through the different ways of interacting with our devices, the common goal has been to increase speed and usability while reducing user friction.
We started with complicated strings of keystroke commands, then moved on to incorporate the mouse, and more recently the rise of the smartphone has led to touchscreens taking over. As natural as the touchscreen is, though, it's still a screen, so the obvious next step is something that continues to evolve the user journey, being quicker and easier to use while further reducing friction. And what could be quicker or easier than voice?
In terms of how we implement voice in a mainstream capacity, we should first cast our minds back. In the 90s, what drove people to embrace the internet? The availability of internet access on our home computers; unified devices we could all get behind.
A decade ago, what drove us to embrace touchscreen UI? The iPhone; another unified device we could all get behind.
So, what invention is going to be the driving force for the mainstream adoption of talking to your devices?
If you were caught talking to your computer even a few years ago, you'd likely attract more than a few funny looks. Now, the explosion in popularity of voice-controlled devices such as the Amazon Echo and Google Home, suggests a future wherein this becomes the normal process, and menus, screens, pointers and commands are replaced by simple spoken language.
Controlling our devices with nothing but the spoken word might seem far-fetched – a pipe dream reserved for fanciful sci-fi films. However, this may be a reality far sooner than you think.
The reason this has been a pipe dream up until now is because the computing power required to process, break down and interpret human speech is huge, requiring more resources than were previously available in a mainstream capacity. However, numerous breakthroughs were made in this field in 2016, and we are now at the point where there's enough computational power available to us to make speech recognition and interaction a viable alternative to visual interfaces.
On top of this, we're lucky enough to be living in a time when around one in three people carries a smartphone – essentially a mini computer with a microphone attached – in their pocket; a figure that will continue to rise in the years to come.
Designing from a technical standpoint
In terms of how we go about designing for this, the most fundamental element to consider is that because voice-controlled interfaces are invisible, users will not have the benefit of images, buttons or clickable links to guide them. Because of this, developers and designers must ensure the voice assistant is providing users with constant feedback and support so they are not left in the dark.
Simplified, brief interactions need to be weighed up against people feeling lost or not in control. In a traditional screen interaction, visual cues such as buttons, tick boxes, links, or error messages provide a virtual breadcrumb trail for users, letting them know exactly where they are in a given process.
Similar to screen-based design patterns, it's important to consider that users will need voice patterns they are familiar with every time they operate a new app or program, in order to acclimatise them to the new software. Above all, this process must be simple and intuitive, using universal conversation patterns as a method for executing commands.
It is also important to bear in mind that users have to use their short-term memory to remember key phrases to interact with the device. Therefore conversational exchanges need to be kept short and sweet to lessen the cognitive load and avoid confusing users.
Overall, while voice UIs may be simpler on the surface, they require reassurance and pacing to be built into the interaction if they are to provide the best possible user experience, particularly for those with cognitive impairments or lower levels of confidence.
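The principles above – constant feedback, short exchanges and built-in reassurance – can be sketched in code. This is a deliberately minimal illustration, not tied to any real assistant SDK; the intent names and phrases are invented for the example:

```python
def handle_turn(command, known_intents):
    """One dialogue turn: always respond with explicit feedback so the
    user knows where they are, and keep each exchange short."""
    intent = known_intents.get(command.lower().strip())
    if intent is None:
        # Reassure rather than fail silently: acknowledge the miss and
        # offer a short recovery prompt using familiar phrasing.
        return "Sorry, I didn't catch that. You can say 'play music' or 'set a timer'."
    # Confirm the action back to the user - the spoken equivalent
    # of a button-press acknowledgement.
    return f"OK, {intent} now."

known_intents = {
    "play music": "playing your music",
    "set a timer": "setting a timer",
}
print(handle_turn("Play music", known_intents))  # -> OK, playing your music now.
print(handle_turn("open pod bay doors", known_intents))
```

Even in this toy form, the two rules are visible: every utterance gets an acknowledgement (the breadcrumb trail), and no response runs longer than a sentence or two (the reduced memory load).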
Designing for language
Considering the range of functions that voice devices will need to be able to accomplish in order to effectively take over from the tech we currently use, as well as the fact that the end user will be working entirely without visual cues, language is crucially important.
Natural language, tone of voice and accent are all vital. Currently, home assistants feel a little pre-programmed and artificial (it's still obvious you're speaking to a robot), so to alleviate that we need to look at the language people feel comfortable with.
Overly assertive, imperative language can be off-putting, particularly to less tech-savvy users. Colloquial terms can be more reassuring; however, overly quirky communication can come across as condescending. There is therefore still considerable user research and usability testing required in this area.
Another critical design point to consider is avoiding bias. In traditional UI – and, come to think of it, in any software design – there can often be a disconnect, where the designers are not necessarily representative of the people who will use the software.
This is especially important in voice UIs because people tend to be sensitive to language. For example, the language of a white male, graduate software developer could be quite different to the conversational tone between young girls or older adults. Of course, that's not intended as a generalisation about developers, but a reminder that as with any software, we aren't always the users of our products. Bringing a whole new meaning to 'tone of voice'.
This technology can also be leveraged in a positive way, and there is actually a fantastic opportunity here for forward-thinking designers to build specifically for greater personalisation and also customer engagement.
We hear the phrase 'tone of voice' – in relation to both a brand's personality and how it communicates with its audience and clients – a lot when we're talking about a brand's communications strategy, and this is another area that is set to be completely turned on its head by the mainstream adoption of invisible interfaces.
Whereas previously a brand's tone of voice may have largely been restricted to written communications, conversational interfaces provide an entirely new way to communicate and shape the overall experience a customer has with a brand.
This, of course, presents a brand new set of considerations for UX designers – namely the literal tone of voice brands choose to employ in these devices, including gender, dialect and expression – but also a wealth of new opportunities. Imagine a world where each of the companies with which you communicate daily (perhaps the businesses you shop with, or those that deliver your news) had their own distinct personality and voice – perhaps even one which you, the user, could control. That would truly be a tone of voice for the digital age.
Which areas of business will be most affected?
The early success that Amazon, Apple and Google have enjoyed in this area has helped to raise consumer awareness of just how useful these devices could be in everyday life. It looks as though this technology could be set to explode in popularity in 2018, with a host of innovative new software and hardware products coming to the market.
This technology can also be incredibly useful from an accessibility standpoint. By negating our current reliance on screens and creating the ability to control devices with nothing but your voice, users with visual or physical impairments will be able to access and use devices completely independently, with no need for external assistance – some for the first time ever.
In terms of where we'll see the tech take-off, a sector that could benefit hugely is digital health products, particularly fitness trackers and related health monitoring devices.
Our health sector is currently feeling the pressure from rising numbers of patients, in addition to an ageing population presenting more complex cases. Because of this, a greater focus has naturally been put on convincing people to take a more proactive stance in managing their own health.
Invisible interface devices such as personal fitness wristbands could be key to this, providing people with an unobtrusive way to measure their vitals, as well as other health related statistics such as steps, calorie intake and their heart rate.
Another sector that could be transformed by invisible interface devices such as the Amazon Echo is retail. Voice search in particular could be a game-changer; indeed, we're already beginning to see significant pickup in this area. A recent study from Google revealed that more than half of teens (55%) now use voice search on a daily basis – a strong statistic that goes some way to showcase the current penetration of invisible interfaces in everyday life.
This tech looks set to really take off in retail as more brands make the leap and begin proactively making use of it to engage customers in new and exciting ways. A great example here is Ocado, who recently made the headlines by becoming one of the first retailers to offer a dedicated app enabling customers to shop using voice commands.
While this is a tremendously exciting time for UX design, we as designers must adapt accordingly. Voice interaction represents an exhilarating new challenge to UX designers, one that we must acknowledge and learn quickly from if – as a community – we are to take full advantage of the opportunities this new, seamless technology presents.
This article was originally published in issue 301 of net, the world's best-selling magazine for web designers and developers.