How to build trust in emerging tech

In the early days of the internet, the web was largely open and decentralised. But in the roughly 25 years since, the consumer internet has changed: today, the web is more consolidated and centralised in its structures than ever before. Network effects tend to favour a winner-takes-all dynamic and so we have, by and large, one really big search engine, one really big social network and one really big ecommerce site.

But consolidation isn’t the only thing that has changed. Over time, security and privacy safeguards have been added, such as end-to-end encryption for web traffic (though far less so for email). These safeguards were tacked onto existing structures and retrofitted into standards; they weren’t part of the internet’s original design because they simply weren’t necessary in the web’s original, academically focused ecosystem.

For emerging tech today, especially the Internet of Things (IoT) and artificial intelligence (AI), it’s very different. We are now creating a data layer that extends to, and shapes, our physical environments. 

In this context, openness and safeguards for security and privacy are essential. We now casually embed internet-connected microphones and cameras in living rooms and bedrooms. This different context requires different thinking. We need to be able to trust the technology we live with.

To think this through, consider three different contexts: the smart home, the smart city and algorithmic decision-making (AKA artificial intelligence or AI).

Emerging tech in context

Let’s first look at IoT in the smart home. Voice assistants have microphones that are, by design, always listening to some degree – or at the very least could be. In political science, the potential or threat of abuse is considered nearly as harmful as abuse itself, because it can lead to chilling effects: if people feel they might be spied on, they change their behaviour.

How is this relevant to how we design connected products? As we add more and more microphones (and other sensors) to our physical environment, we multiply the potential for abuse. If we want folks to use connected products, we need to ensure they know they can trust them. Otherwise the privacy of our homes is a thing of the past.

Now zoom out of the home and onto the city: when smart-city technology with all its sensors and algorithms is rolled out across the urban fabric, it applies to everyone. Nobody can opt out of public space. 

So this had better work – and work well – for everyone. Rather than optimising for efficiency alone, smart cities should promote openness, be transparent and allow for well-intentioned ‘hacking’ (in the sense of adapting systems to unexpected needs).

Finally, the third frontier: algorithmic decision-making or AI. Algorithms make decisions that impact all areas of our lives, from managing resource allocation to predictive policing. And so we need to make sure that we understand the algorithms – effectively making them more open – in order to guarantee appropriate mechanisms for governance, accountability and recourse. Governments need to understand that algorithmic decision-making directly affects people’s lives.

People are wary of emerging technologies and you can’t blame them. Large-scale data-driven systems with little openness, oversight, accountability or transparency – in other words, systems that aren’t built within an ethical, healthy framework – are likely to cause massive damage and unintended consequences. So let’s do better.

More trustworthy tech

To be clear, this isn’t an exercise in making consumers trust emerging technologies more – it’s an exercise in making emerging technologies more trustworthy. Today’s consumers don’t have good ways to make informed decisions about, say, a connected device’s trustworthiness. In his book Radical Technologies, Adam Greenfield sums up the dilemma: “Let’s be clear: none of our instincts will guide us in our approach to the next normal.” Gut feeling won’t cut it. We need better mechanisms, design practices and tools. 

Luckily, there are promising approaches to tackle this. As an industry, we must follow through with best practices in all things data-related. As consumers, we need to demand better from industry. And as citizens we need policy makers to get smart about regulation. Fortunately, after the Snowden revelations shook consumer trust in connected devices like never before, things have been looking up. 

Policy makers are slowly starting to get ahead of technology rather than play catch-up. The European Union’s General Data Protection Regulation (GDPR) was the first major regulatory initiative in this space to protect consumer data at scale. (How it will play out over time remains to be seen.) California followed with the California Consumer Privacy Act, which offers GDPR-like provisions.

Digital wellbeing

In the tech industry, there is a growing awareness of the need to design emerging tech to be better and more open – the digital wellbeing initiatives by Apple and Google and the debates on how to thwart fake news are just two current examples of the industry trying to get its house in order.

Consumers benefit from all of this, but they still lack good tools to evaluate which products or companies deserve their trust. This, too, can change. Take, as an example, a concrete project we initiated this year: the Trustable Tech Mark, a consumer trust mark for connected devices. Developed by the ThingsCon network with support from Mozilla, the Trustable Tech Mark will soon offer an assessment framework to determine which connected devices are trustworthy. It looks at five dimensions: openness, privacy and data practices, security, transparency and stability.
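
To make the idea of a multi-dimensional assessment more concrete, here is a minimal sketch of how a device could be scored against those five dimensions. The dimension names come from the article; the 0–5 scale, the pass threshold and the example device are illustrative assumptions, not the actual Trustable Tech Mark methodology.

```typescript
// Hypothetical sketch: scoring a device against the five dimensions named
// above. Scale, threshold and example data are assumptions for illustration.

type Dimension =
  | 'openness'
  | 'privacyAndDataPractices'
  | 'security'
  | 'transparency'
  | 'stability';

// One score per dimension, assumed to be on a 0–5 scale set by an assessor.
type Assessment = Record<Dimension, number>;

const MINIMUM_PER_DIMENSION = 3; // assumed pass threshold

function isTrustworthy(assessment: Assessment): boolean {
  // Every dimension has to clear the bar on its own: a strong privacy score
  // can't offset weak security, for example.
  return Object.values(assessment).every(
    (score) => score >= MINIMUM_PER_DIMENSION,
  );
}

// Example: a connected speaker that processes audio on-device.
const speaker: Assessment = {
  openness: 4,
  privacyAndDataPractices: 5,
  security: 4,
  transparency: 3,
  stability: 4,
};

console.log(isTrustworthy(speaker)); // true
```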

The Trustable Tech Mark aims not just to weed out the worst products at the bottom of the pile but also to highlight the ones that are truly trustworthy and that employ – or establish – best practices for user rights. For example, imagine a smart-home assistant that does all its data processing on the device, without sending sensitive data to the cloud. Or smart lighting that avoids privacy risks by not putting microphones in its light bulbs. Or a company that ensures that, in case of bankruptcy or an acquisition, user data remains safe and the code is released as open source, so the product keeps working even after the company is gone.
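
As a sketch of that first, on-device example: the pattern is simply that raw sensor data never leaves the device, and only a coarse, derived result ever crosses the network. Everything named below (the intent type, the local recogniser, the hub call) is hypothetical and only illustrates where the boundary sits.

```typescript
// Hypothetical sketch of the "on-device first" pattern: raw audio never
// leaves the home, and only a coarse, non-sensitive result is passed on.
// All names here are illustrative, not a real product API.

interface Intent {
  action: 'lights_on' | 'lights_off' | 'unknown';
}

// Stand-in for an embedded speech/intent model that runs entirely on the
// device; the audio frame never gets serialised or uploaded.
function recogniseLocally(audioFrame: Float32Array): Intent {
  void audioFrame; // on-device inference would happen here
  return { action: 'unknown' };
}

// Hypothetical transport to a hub on the local network (not a cloud service).
function sendToLocalHub(intent: Intent): void {
  console.log('forwarding intent:', intent.action);
}

function handleAudioFrame(audioFrame: Float32Array): void {
  const intent = recogniseLocally(audioFrame);

  // Only the derived intent – never the recording – may cross the network
  // boundary, and even then only to a hub the user controls.
  if (intent.action !== 'unknown') {
    sendToLocalHub(intent);
  }
}
```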

The Trustable Tech Mark is only one of what we hope will be many initiatives to empower consumers to make better-informed decisions and to make emerging tech more open. If industry, policy makers and consumers can all agree that transparency, decentralisation, accountability and openness are the conditions that enable trust in technology, then we can look forward to an exciting – rather than scary – decade of emerging technology. As designers, developers and technologists, we have an outsized role to play in this, but we can – and should – also demand better as consumers. Industry and policy makers will respond to that pressure. In the end, all parties benefit from better, more trustworthy emerging tech.

Illustration by Kym Winters

This article was originally published in issue 312 of net, the world's best-selling magazine for web designers and developers.

Peter explores the impact of emerging technologies. He is a Mozilla Fellow, founder and managing director of The Waving Cat and co-founder of ThingsCon.