The Center for Humane Technology Doesn’t Want Your Attention

Glass Rooms

There has been a steady stream of articles about and by “reformed techies” who are coming to terms with the Silicon Valley ‘Frankenstein’ they’ve spawned. Regret is transformed into something more missionary with the recently launched Center for Humane Technology.

In this post I want to focus on how the Center has constructed what they perceive as the problem with the digital ecosystem: the attention economy and our addiction to it. I question how they’ve framed this problem in terms of individual behaviour and design rather than structural failures and gaps, and I consider the challenges of disconnecting from the attention economy. I end my questioning, however, with an invitation to them to engage with organisations and networks who are already working on addressing problems with the attention economy.

Sean Parker and Chamath Palihapitiya, early Facebook investors and developers, are worried about the platform’s effects on society.

The Center for Humane Technology identifies social media – the drivers of the attention economy – and the dark arts of persuasion, or UX, as culprits in the weakening of democracy, children’s well-being, mental health and social relations. Led by Tristan Harris, aka “the conscience of Silicon Valley”, the Center wants to disrupt how we use tech, and get us off all the platforms and tools most of them worked to get us on in the first place. They define the problem as follows:

“Snapchat turns conversations into streaks, redefining how our children measure friendship. Instagram glorifies the picture-perfect life, eroding our self worth. Facebook segregates us into echo chambers, fragmenting our communities. YouTube autoplays the next video within seconds, even if it eats into our sleep. These are not neutral products. They are part of a system designed to addict us.”

“Pushing lies directly to specific zip codes, races, or religions. Finding people who are already prone to conspiracies or racism, and automatically reaching similar users with “Lookalike” targeting. Delivering messages timed to prey on us when we are most emotionally vulnerable (e.g., Facebook found depressed teens buy more makeup). Creating millions of fake accounts and bots impersonating real people with real-sounding names and photos, fooling millions with the false impression of consensus.”

What the Center identifies as the ‘monetization of attention’ is, actually, the extraction of personal data. (Curiously, they do not use the phrase ‘big data’ or ‘your personal data’ anywhere in their website text.) This attention (or personal data) is extracted from our digital and analog behavior and is then used to profile and target us: to sell us lies and misinformation, or to worsen our depression by showing us advertising for make-up. And we are targeted even when we aren’t paying attention at all, like when we are walking down a street with mobile phones in our handbags. Information about us is being extracted to identify and profile us almost all the time because it is profitable. How will the harmful effects of attention be arrested without a challenge to the monetization itself, and the values that sustain it?

It isn’t just about attention, however, and this is a fallacy I think is important to address. Your attention is valuable only because it is associated with an identity that exists in multiple (psycho)geographies – financial, cartographic, intimate, socio-cultural, linguistic, religious, gendered, racialised – at the same time. These identities, and the attention that animates them, pop up across different devices, platforms, services and networks, making them identifiable and knowable, and thus easy to sell things to. Think of your identity as electric cables and wires, and attention as electricity that runs along the outside of, rather than in or through, these wires.

Trying to change your digital behaviour is difficult and complicated because our political and personal expression, relationships of care, work and intimacy, the maintenance of these relationships, and self-expression are all bound up in a narrow set of platforms and devices. Disconnecting from the attention economy is more a series of trade-offs and negotiations with yourself: a constant, personal algebra of maintenance, making digital choices, managing information flows across different activities and services, and keeping up some baseline digital hygiene.

You can never really ‘arrive’ at a place of perfect disconnection because of how perniciously deep these tools and devices go; but there is something aspirational and athletic about it all, and in this sense disconnection from the attention economy really is a practice. I know this because I’ve consciously practiced this disconnection for some years because of where I worked. (I practice less now because my work has changed, but I am still conservative about what kinds of attention I give different platforms and services.)

Because of this work I’ve been part of communities where it is entirely normal to never know some of your work friends on social media, or to refer to people by their online handles rather than their actual given names. Many of us who practice disconnection from the big data attention economy use open source tools that are usually ugly because they don’t try to grab your attention (there is little investment in UX) but deliver a service, and we compartmentalize digital practices across different devices, identities, services and platforms. We may use social media, but selectively, and we don’t necessarily connect all of them with our actual identities.

It is entirely possible to live a Google-free life, for example, as some of my ex-colleagues and friends do, but you make peace with the trade-offs and adjust your life accordingly. It’s like people who don’t drink Coca-Cola, or are vegetarian but not on the weekends, or would rather cycle than take transatlantic flights. An interesting point about Coca-Cola: in Berlin we have Afri-Cola and Fritz Cola (caffeinated and not; with and without sugar) as alternatives to Coca-Cola, which is also available in its many flavors. In some places there are structurally-afforded opportunities to be more flexible and make a wider range of choices.

Yet the Center for Humane Technology constructs the problem as one of individual attention. And while they acknowledge the importance of lobbying Congress and hardware companies (Apple and Microsoft will set us free, as if they don’t lock us into digital ecosystems and vie for our attention?), they emphasize a focus on individual action, be it that of tech workers or of users. By invoking ‘addiction’ they frame the problem as one of individual attention and, eventually, individual salvation.

The absence of a structural critique is evident in the deterministic approach to fixing complex social problems, such as children’s well-being or democracy, by fixing technology design and UX. According to the Center, if you resist UX by turning your attention away, you can start to make a change by hitting the tech business where it hurts. And if tech businesses cease to get our attention, then democracy, social relations, mental health and children’s well-being might be salvaged. Frankly, this accrues more power to UX and Design itself, and creates a sort of hallowed epistemology flowing from Design.

The assumption is that these social conditions and relationships somehow did not exist before social media, or have changed in the past ten years because of UX and its seductions. I believe this is both untrue and true. We do engage in politics and democracy through our devices and social media, and we do see the weakening of existing values and notions of governance. But this is not uniformly, or evenly, the case around the world. There are muddied tracks around the bodies of these relationships.

Democracy as a design problem is not new. There has been considerable work over the past decade to enable citizens to use civic technology applications for transparency and accountability: to hold governments to account and to promote democratic values and practices. It might help the Center to look at some lessons from around the world where democracy was considered to be failing and technology was applied as a solution. To cherry-pick one relevant lesson (because this is a vast area of expertise and research that I cannot do justice to in this post): building a tool or a platform to foster democratic values or behavior does not necessarily scale. The lesson is that it doesn’t flow in the direction tech -> democracy.

Applying this to the case of the Center, but in inverse, the lesson is that you cannot approach technology and social change from a deterministic perspective. Technology will amplify and accentuate some things: there will be more ‘voices’, but most likely the voices of those who are already powerful in society will be heard the loudest. Networks of influence offline matter to how messages are amplified online; swarms of hate-filled hashtags, memes, and bots traverse the fluid connections between on- and offline. Fixing Facebook and Twitter is absolutely essential, but it is not the same as addressing the weakening of public institutions, xenophobia, the fragmentation of communities, the swing towards populism, the 2008 financial recession, or combinations of these. These efforts need to happen in conjunction with each other. Democracy is actually about relationships among people, movements, and longstanding practices of activism and organising in communities.

Clearly, the Center for Humane Technology has set itself a mighty challenge. How are people going to change digital practices in the face of UX that is weaponized with dark patterns intended to keep us addicted? How are they going to take down the business model built on surveillance capitalism, which they refer to as the attention economy? If social media is addictive, what sort of twelve-step program are they going to come up with? How do you sustain being clean? They might want to check out a program for how to detox from data.

Despite my concerns, I actually believe this organization may be very successful and influential because they are well placed in terms of money and influence. If the Center for Humane Technology actually worked to disarm UX, made it possible for us to move our personal networks to platforms of our choosing, and enabled regulation of the data trade and protections for users, then they might actually be disruptive. Let’s hope they succeed. In the meantime, the Center may find inspirational resources and well-informed, ground-up expertise among those who have already been building movements for users to take control of their digital lives, such as:

Article 19; Bits of Freedom; Coding Rights; Committee to Protect Journalists; Cryptoparty; Data Detox Kit; Derechos Digitales; Digital Rights Foundation; Electronic Frontier Foundation; Freedom of the Press Foundation; Frontline Human Rights Defenders; The Glass Room; Gobo.Social; Internet Freedom Festival; Me and My Shadow; Mozilla Internet Health Project; Privacy International; Responsible Data Project; Security in a Box; Share Lab; Simply Secure; Surveillance Self Defence Kit; Tactical Technology Collective; Take Back The Tech.

Maya Ganesh has been a feminist information-activist for too long, and this post was an attempt to synthesize reflections from the past decade. She lives in Berlin and is working on a PhD about the testing and standardization of machine intelligence. She does not drink Coca-Cola. She can be reached on Twitter @mayameme
