Make something people want themselves to want

Sep 21st, 2019

algorithmic ethics

In his chair lecture “Love the Processor, Hate the Process”, Jonathan Zittrain outlined a set of ethical conundrums relating to decisions made by tech companies about their data-driven products. Should Google have allowed anti-Semitic websites to show up in searches for “Jew”? Should Facebook have allowed researchers to manipulate its news feeds to study the effect of positively or negatively valenced posts on our mood? The intuitive answer, he claimed, was “yes” in the former case and “no” in the latter. Google’s search engine is a tool, and as such should work as expected and be relatively immune from tweaking. The Facebook news feed, on the other hand, is more like a friend, and should act as friends would: in our best interest.

We’re in the midst of a Cambrian explosion of these machine-learning-driven technical “friends”, charged with serving up everything from news articles to potential mates. As they proliferate, they have increasing power to manipulate and guide our behavior in ways counter to our best interest. They can offer us news stories that stoke partisan animosity, videos that take us further down the path of radicalization, or snack-size bites of content that make us unable to put down our phones.

Naturally, as these negative examples have piled up, so have the calls for action. Law professor Jack Balkin, for example, has proposed the idea of an “information fiduciary”: a designation, analogous to a financial fiduciary, that would legally require data-holding organizations to act in our best interest. And after intense pressure, in January 2018 Mark Zuckerberg announced modifications to Facebook's news feed intended to reduce “fake news” and promote things that made time on the site “well-spent”.

Most such proposals, however, focus on what these companies are supposed to know and do, which is odd given that it's our best interest under discussion. They are (somehow) supposed to know what’s in our best interest and serve that up to us. This may be appropriate in the negative; some offerings, like advertisements for fake political rallies created by a hostile government, are clearly not in our best interest. But it's much more difficult in the positive, especially given the impoverished measures these systems use to judge our preferences.

What's missing is the ability to bring our higher interests and goals to them. In the absence of this ability, companies are consigned to using behavioral proxies like views, shares, likes, and comments, which are almost always misleading signals of our best interest. These provide only partial and inaccurate information for algorithms to optimize on, leading to feeds filled with clickbait instead of substance. Rather than lash us to the mast, our algorithmic crews steer us toward the Sirens.

Users need interfaces and affordances that allow them to define and communicate these higher interests to algorithmic systems in ways those systems can use. We need to be able to provide additional inputs that these systems can use to offer us things actually in line with our best interest. A product that genuinely benefits users is not just “something people want”, but something people want themselves to want. To make products like that, companies need to incorporate ways for users to tell them what they value, so that these products can actually serve users' best interests.
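
To make that feedback channel concrete, here is a minimal sketch in Python of what such an input might look like. Everything in it is hypothetical: the Item fields, the engagement-only baseline, and the user_values weighting are illustrative stand-ins, not any real feed's ranking code. The point is only that an explicit, user-declared statement of priorities can enter the optimization alongside behavioral proxies like clicks and dwell time.

```python
from dataclasses import dataclass

@dataclass
class Item:
    """A candidate piece of content. All fields are hypothetical."""
    title: str
    topics: list[str]                 # e.g. ["long-form science"]
    predicted_clicks: float           # behavioral proxy: expected engagement
    predicted_dwell_minutes: float    # behavioral proxy: expected time spent

def engagement_score(item: Item) -> float:
    """The status quo: rank purely on behavioral proxies."""
    return item.predicted_clicks + 0.1 * item.predicted_dwell_minutes

def value_aligned_score(item: Item, user_values: dict[str, float]) -> float:
    """Blend engagement with weights the user has declared directly.

    `user_values` is the missing input argued for above: an explicit
    statement of what the user wants to want, e.g.
    {"long-form science": 2.0, "celebrity gossip": 0.2}.
    """
    declared = sum(user_values.get(t, 1.0) for t in item.topics) / max(len(item.topics), 1)
    return declared * engagement_score(item)

# The same candidate pool, ranked with and without the user's declared values.
candidates = [
    Item("You won't believe #7", ["celebrity gossip"], 9.0, 1.0),
    Item("How mRNA vaccines work", ["long-form science"], 2.0, 8.0),
]
my_values = {"long-form science": 2.0, "celebrity gossip": 0.2}

by_engagement = sorted(candidates, key=engagement_score, reverse=True)
by_values = sorted(candidates, key=lambda i: value_aligned_score(i, my_values), reverse=True)
print([i.title for i in by_engagement])  # clickbait ranks first
print([i.title for i in by_values])      # the piece the user said they value ranks first
```

The specific arithmetic is beside the point; what matters is that the ranking now has access to something the user said about what they want to want, rather than inferring it entirely from what they reflexively click.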