What if AI worked differently?

Text by Becca Ricks and Mark Surman,
from the Creating Trustworthy AI white paper

The prevailing computing environment of any particular era shapes what people think is
possible and, in turn, what technologies we build.

When the personal computer became mainstream in the 1980s, we saw the invention of
spreadsheets and word processors and, with these inventions, the transformation of the
workplace. When the web became widely accessible in the late 1990s and early 2000s, the
browsers and, eventually, cloud-based apps became ubiquitous, leading to huge shifts in how
we entertain ourselves, collaborate with colleagues, and do business.

Today’s computing environment is increasingly shaped by AI and the data that powers it. From
recommendation engines to smart email filters to predictive text, AI-powered systems have
become ubiquitous in modern society. The norms around how AI is developed shape what kinds of tools, platforms, and experiences we end up building. As more of our lives move online, these norms will increasingly shape our everyday experiences.

For example, one current norm is that companies develop products and services that collect as
much data about people as possible, and then use sophisticated models to analyze that data
and provide personalized experiences. The results of this norm can sometimes be delightful:
Spotify suggests songs we like, and Gmail’s autocomplete feature finishes our sentences. But
the results can also be harmful: There is evidence that video recommendation engines like YouTube's, which optimize for user engagement, profit by introducing people to increasingly
extreme viewpoints. In addition, targeted advertisements on Facebook have been shown to
manipulate people and exclude vulnerable communities.

Another computing norm is that companies with access to the most data have a competitive
advantage in the AI landscape, incentivizing further data collection. Big tech companies have an
outsized advantage over both smaller competitors and the people who use tech. Smaller
companies find it almost impossible to access enough data to compete on the personalization
or recommendation front, and people are often locked into one platform.

As these two examples illustrate, our current paradigms for building technology limit what we
think is possible. What if we radically adjusted these norms in AI development? What would it
look like if people had greater control over the data collected about them? What kinds of
processes and tools in the AI development pipeline would lead to greater accountability?

If our current computing environment is not working, then we must invent a new one. If people feel they have no control over their own data, we can incentivize companies to build technologies that give them more agency. By changing the rules around how data is collected
and stored, we can invite smaller players to participate. By imagining new processes for how
technology is developed, we can shape the platforms, tools, and products that strengthen
collective well-being.

Changes like these are necessary if we want AI that strengthens – rather than harms – society
and communities.