The future of adaptive user interfaces

We sit down with web standards advocate Aaron Gustafson to talk about creating adaptive interfaces.

Aaron Gustafson, web standards advocate at Microsoft and author of Adaptive Web Design, will open Generate San Francisco on 9 June. The conference will also feature Rachel Nabors, Stephanie Rewis, Steve Souders, Josh Brewer and nine other great speakers covering prototyping, animations, performance, design systems, artificial intelligence, and much more. Get your ticket today!

How are our interfaces changing and adapting?
The interfaces and means we use to access content and services provided on the web have expanded greatly as we have imbued more and more devices with connectivity. When I started out on the web, screens were small – 800x600 was considered large – connections were slow, and folks were either accessing the web via a terminal interface like Gopher or Lynx or they were using a very early graphical browser on their desktop. Most screens only supported about 256 colours and interaction was only possible via keyboard and mouse and generally required round-trips to the server (or refreshes of a frame within the web page).

Things have obviously changed a lot since then in terms of how we interact with the web. We’ve still got mice and keyboards, but computers can also respond to our touch, our gestures, our voices, and physical implements like dials and pens. Some computers have tiny screens, some have giant ones, others have no screens at all. Over the years, the practice of designing for the web has generally followed a consistent path of taking advantage of more and more screen real estate, but with the advent of mobile, many of us shifted our focus to enabling users to accomplish core tasks like reading an article or purchasing a product.

Media queries and design approaches like responsive web design have allowed us to adjust our layout and designs to provide experiences that were more tailored to the amount of screen real estate (and its orientation), but we’ve only begun to scratch the surface when it comes to how we adapt our interfaces beyond visual design.
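To make that concrete, here's a minimal sketch of the kind of layout adaptation media queries enable; the class names and the breakpoint are illustrative, not from any particular project:

```html
<!-- A minimal sketch of the kind of adaptation media queries enable:
     a streamlined single-column default that gains a sidebar once the
     viewport allows. Class names and the breakpoint are illustrative. -->
<style>
  .page { display: block; } /* single-column default */

  /* Wider viewports get a two-column layout */
  @media screen and (min-width: 40em) {
    .page {
      display: grid;
      grid-template-columns: 2fr 1fr; /* main content plus sidebar */
      gap: 1.5rem;
    }
  }
</style>
<div class="page">
  <main>Article content…</main>
  <aside>Related links…</aside>
</div>
```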

What’s the first step to creating a great adaptive interface?
Planning is absolutely the best first step. Think about each component part of your interface and brainstorm the different ways it may need to be experienced. Iterate on that. Ask tough questions.

Being mindful of things like source order and how each component is explained via assistive technologies – screen readers, yes, but also digital assistants like Alexa, Bixby, Cortana, Google Assistant, and Siri – should be part of this discussion. Of course you’ll want to think about how the purpose of the component can be achieved at various screen sizes, with and without JavaScript, and via other interaction methods as well.
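As a rough illustration of what that mindfulness can look like in markup (the structure and labels here are hypothetical), semantic landmarks and a sensible source order give assistive technologies something meaningful to announce and navigate:

```html
<!-- A hypothetical page skeleton: landmarks let screen readers and other
     assistive technologies announce, list, and skip sections, and the
     primary content arrives early in the source order. -->
<header>
  <h1>Example Store</h1>
  <nav aria-label="Main">…</nav>
</header>
<main>
  <article>
    <h2>Product name</h2>
    <p>The content users came for sits early in the source order.</p>
  </article>
</main>
<aside aria-label="Related products">…</aside>
<footer>…</footer>
```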

Consider the performance implications of your choices. Can you provide a default state that is streamlined and lightweight? When might it make sense to incorporate richer imagery and the like? Are there alternative ways you can approach that enrichment?
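One way to approach that enrichment, sketched here with placeholder file names and breakpoints, is the picture element: the plain img stays as the lightweight default, and the browser only fetches a heavier asset when the display warrants it:

```html
<!-- A hedged sketch of opt-in enrichment: the img is the lightweight
     default, and larger displays fetch richer artwork. File names and
     breakpoints are placeholders. -->
<picture>
  <source media="(min-width: 60em)" srcset="hero-large.jpg">
  <source media="(min-width: 30em)" srcset="hero-medium.jpg">
  <img src="hero-small.jpg" alt="A description of the image">
</picture>
```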

Taking the time to ask questions and plan out the experience ahead of time – even in broad strokes – will pay dividends when it comes to copywriting, design, development, and testing.

Aaron Gustafson will deliver the opening keynote at Generate San Francisco on 9 June

What are some recurring mistakes you see in regards to interfaces and how can we avoid them?
One of the issues I see time and time again in web projects is improper use of semantics. Whether this comes from a lack of understanding of the purpose each element in HTML serves or a lack of concern for the implications of poor element choices, it’s a problem.

As a simple example, consider a form. Users need to submit that form. I’ve seen developers use button, input, a, and even div elements to provide a clickable button. But these choices are not equal. An input or button element, when given a type of submit, can provide this functionality easily.

Anchors and divs need help. Neither will look like a button without CSS and neither can submit the form without JavaScript. And then there’s keyboard focus and interactions. Choosing either of these latter two elements necessitates a whole lot of extra work and code to fulfil an otherwise simple requirement. And on top of that, if any of their dependencies are not met, the interface is rendered unusable.
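To put that contrast in markup terms (the form action, labels, and class name below are hypothetical), compare what the native element gives you for free with the scaffolding a div demands:

```html
<form action="/subscribe" method="post">
  <!-- The native element: focusable, keyboard-operable, and it submits
       the form with no extra code -->
  <button type="submit">Subscribe</button>

  <!-- The div needs a role, a tabindex, button-like CSS, and the script
       below just to approximate the same behaviour - and it still fails
       if the JavaScript never arrives -->
  <div class="fake-button" role="button" tabindex="0">Subscribe</div>
</form>
<script>
  // The extra work the div demands (a sketch; production code needs more care)
  const fake = document.querySelector('.fake-button');
  const submitForm = () => fake.closest('form').submit();
  fake.addEventListener('click', submitForm);
  fake.addEventListener('keydown', (event) => {
    if (event.key === 'Enter' || event.key === ' ') {
      event.preventDefault(); // keep the spacebar from scrolling the page
      submitForm();
    }
  });
</script>
```

Note that even then, the scripted form.submit() call skips the constraint validation a genuine submit button would trigger.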

The elements we choose matter.

What can people expect to take away from your talk at Generate San Francisco? 
My hope is that folks who see my talk will have their perspective broadened, if only a little bit. I want them to become more aware of the ways in which real people use the products we create: folks who can only afford older or lower-end hardware, folks without constant network connectivity, and folks who rely on keyboard commands or their voice or their eyes to browse and interact with the web.

When you become aware of the myriad ways people can and will access the web, your work naturally becomes more inclusive. And that’s my goal: increasing the inclusiveness of the web.

Generate San Francisco will give you exclusive insights into design and development at Netflix, Uber, Airbnb, Twitter, Salesforce, Huge and more. If you can't make it to SF, there's also a Generate conference in London in September, which will feature workshops and talks about design and content sprints, responsive CSS components, UX strategy, conversational interfaces, accessibility and tons more.