Design Will Always Hurt People

Hart Crompton
May 28, 2020


Who uses your tools after you move on?

This is an essay about hurting people with design. It is also about tech, city planning, and the KKK.

“Do no harm” is a quaint, well-meaning adage. I’ve certainly heard it bandied about in a wide variety of professions, followed by nods and the slight furrowing of brows. Outside of organisations like Lockheed Martin, Raytheon, and police departments, there are in fact few fields where the explicit aim is to do harm. It’s a vapid phrase, and I doubt anyone believes it. After all, if you truly believe it is possible to do no harm, then you are either deluded or on course for a meltdown. Instead, I offer this: understand that, at some point, your work will harm people. You may not intend it, but it is completely unavoidable. You need to understand who you are harming. You need to know how you are doing it. Most of all, you need to know why. Then you need to do everything in your power to stop it. That still won’t be enough, but you have to try. This is why it is vital to think about the tools you use.

Tools can be wonderful. In planning, zoning ordinances keep factories from being built alongside homes and schools. Zoning ordinances allow for the protection of both the physical and intangible character of a neighbourhood. Zoning ordinances are used to inflict massive harm on minority communities and are still used to enforce segregation. Tools can be dangerous.

Several years ago, I consulted with a fairly wealthy town that was weighing its options in the face of a rising population and skyrocketing housing prices. Residents were concerned: they wanted the town they remembered from their childhoods, and they wanted it to be a place where their children and their children’s children could grow up and own homes. The town was also out of undeveloped land, physically constrained on every side by geography or competing jurisdictions. Put simply, they were (metaphorically) walled in. We told them quite plainly that if they wanted to mitigate housing prices and increase the availability of homes, they would need to change the way they used land.

Home lots in the city were mandated to be at least 2 acres (0.8 hectares). That’s a bit less than two American football fields (or a bit more than a soccer pitch). It’s a lot of land, too much for most residents. To belabour the football metaphor, most residents barely tended to more than the end zone. Most of their properties were vast fields of dirt or weeds, hidden from sight by fences or trees. Almost no one in the town was willing to put in the effort to maintain so much space. They could easily have changed the zoning and constructed more homes, or even denser housing such as condominiums and apartments. They could have maintained their small-town feel while diversifying the housing stock and making the city a more sustainable entity. We told them that.

They responded: “Yes, but we don’t really want those,” and here they paused, “urban types moving here.” I remember the meeting. My coworker and I glanced at each other, immediately translating “urban” from the suburban lexicon into “black and Hispanic people”. The rest of the meeting was pointless. They knew what it would take to increase housing supply, but to them, the cost was too high. Instead, they gave up over a dozen acres to a strip mall that has yet to be built (and most likely never will be). A planner had used a tool, and that tool had allowed the town to shut itself off from anyone it deemed undesirable while still lamenting the situation its residents found themselves in.

In the early 1900s, Ambler Realty was looking to construct apartment buildings on its land in the small town of Euclid, Ohio. The town — worried about growth, and character, and minorities, and immigrants, and urban types — wanted none of this and responded by enacting zoning ordinances that made the construction of large apartment buildings impossible. Naturally, Ambler sued, and eventually the case made its way to the Supreme Court, which sided with the town of Euclid. This became a landmark case that is now the basis for zoning across the country, allowing jurisdictions to zone land for specific purposes and even control the size, shape, and character of the built environment. Every urban planner understands that much of modern planning and city design is owed to that court case. More than a footnote in history, Euclidean zoning is a powerful and flexible tool. Today, it is understood to be a tool often used by wealthy, homogenous communities to prevent any sort of change from coming to town. Get this: the possible side effects of the case weren’t unknown at the time. A lower court found that the ordinances were — in effect or intent — enforcing segregation by attempting to prevent disenfranchised minorities from being able to move to the town. In spite of this (and sometimes because of this), Euclidean zoning is still widely used. This type of zoning isn’t the only tool at a planner’s disposal, but it is quite commonly implemented without a single thought given to its ramifications.

I bring these stories up because it is important to consider the consequences of design. Replace zoning with “machine learning” or “social ratings” and similar patterns emerge. In the selection of our tools — whether creating our own or using existing ones — it is necessary to give a darn about how those tools will be used once you have moved on, and to ask if you are using the right tools in the first place. You may not be able to post a sign at the entrance of a city reading “No poor people or minorities!”, but you can certainly keep them out through more creative and, most importantly, legal means. Uber may not be able to say, “We don’t want drivers or riders with a range of mental disorders,” but it can bar anyone from its system if that person’s rating falls too low. China has already taken this a step further, using grotesquely intrusive social ratings to determine whether an individual may even travel or find work. What happens as we continue to normalise social ratings in other countries?

I don’t like to use the term “unintended consequences” because it softens the blow. “We didn’t know” is a too-common excuse, and the fault can still lie with us for not considering the consequences, regardless of our intentions. You can also be sure that these “unintended consequences” are absolutely intended by the people exploiting them. Whatever you make can and likely will be used to hurt someone at some point. The more general and powerful the system, the more harm it can cause. A graphic designer creating a new logo may not need to be so cautious, but a design firm working to overhaul a hospital’s computer systems has a great capacity to cause harm. Design can and absolutely must consider all possible uses, especially the nefarious ones, and preemptively find ways to mitigate them.

I saw an interview recently with a fairly influential designer who was working with cities and tech companies to implement “smart city” technologies. Before I go much further, I’ll note that this designer also cited Elon Musk as a good source of ethical design principles, so take their opinions with a grain of salt. They discussed sharing demographic, usage, and surveillance data with private firms to try to make cities more efficient. Who doesn’t love efficiency? We’ve all bemoaned traffic at one point or another, wondering aloud, “Why can’t someone do something about this?” I mean, we’ve got algorithms now; those ought to do something. Surely we can just pour traffic data into a machine-learning sieve and sort it all out. I’ll give you a hint: no amount of tech can fix traffic problems, because traffic itself is the problem. Major roads will almost always be used beyond their capacity; adding capacity works for a week or two before everyone adjusts and the problem reasserts itself. You can measure traffic flow and adjust light timings as much as you want, but if you only focus on cars you can never “solve” traffic. The answer is to look elsewhere, to find ways of reducing the number of drivers. Putting traffic aside, I couldn’t ignore another concern: who should we trust with such a large volume of personal data? This designer seemed to implicitly believe that the cities were looking out for their residents, but that is dubious at best.

To be fair, I will concede that cities do care about their residents: the wealthy ones, the ones who have the money and know-how to put up a real fight, the ones who can influence elections. Few cities prioritise helping the most disenfranchised people. Planning itself has a sordid history of being used by the moneyed and powerful to enact their designs on the city while quite literally bulldozing through anyone they deemed expendable (parallels to the sway held by the modern tech industry start to appear; consider the concessions governments make to court tech companies). In New York City, the notorious Robert Moses would have eventually levelled Manhattan to make room for freeways if he hadn’t been stopped by a grass-roots movement. He wasn’t acting on his own: he was backed by the city and the state. Unfortunately, his plans were only foiled after they had destroyed many minority neighbourhoods that never recovered from the planner’s designs.

This isn’t a problem exclusive to the past: it recently came to light that the Utah government was working with a tech company called Banjo to build a real-time surveillance system that would constantly monitor social media feeds, traffic cameras, surveillance cameras, police scanners, and more, with the nebulous goal of improving safety and influencing public policy. “Safety” is a remarkable rallying cry. It’s unassailable: you want people to be safe, don’t you? How could you or anyone argue against safety? It’s also deliciously vague: with a little creativity, anything can fall under the “safety” umbrella. I’ll also note that Utah is, by almost any metric, one of the safest states in the country, but let’s not have that get in the way. Why would a security firm even need social media data? There isn’t a large number of people announcing their criminal plans on Facebook, and one would hope that we’re not quite into the dystopia of predicted crimes (although, writing that out, I know there are numerous tech firms pitching such systems to governments at this very moment).

Let’s consider a potential scenario. You are a governor who has just entered into a multimillion-dollar contract with a surveillance firm. The trouble now is paying for the darn thing. You can’t just raise taxes — god forbid — you’ve staked your entire political identity on that. No, what you need is a bigger tax base. Oh, you’d love a big juicy tech company to move its headquarters to your state. An Apple or a Facebook or whatever it is the kids are transfixed by these days; you really can’t be bothered to keep up. How could you persuade one of them to move to your state, though? What do you have that other states don’t? Eureka! Data. You could offer to let them join your new public-safety data network. They are all too eager, and you walk away quite pleased with yourself. You are, of course, blissfully unconcerned that those companies are now using their stake in your network to pre-screen employees for behaviour they deem “immoral”. Oh, and they happen to be capable of tracking employees outside of work hours, but they pinky-promise they only track them at work. To prevent security leaks, naturally. You don’t even notice that they seem to be much better at preemptively cracking down on unionisation efforts and whistleblowers. By the time there’s a major breach of the system, with hackers gaining access to 2.5 million people’s data, you’ve already retired into a lobbying position.

The downsides and potential for abuse are obvious, but that isn’t the worst part (or rather, it isn’t the most bizarre part). It came to light that the CEO of Banjo had been cosy with the KKK, something that could easily have been vetted if the state had cared — simply searching for the CEO’s name would have shown that he shot up a synagogue. This is the inherent problem with any sort of public/private collaboration that leaves actual people out of the equation. The state didn’t ask people in the community if this was what they wanted, let alone try to figure out what the state truly needed. It didn’t seek out public input on the company before awarding the contract. Obviously, the state cancelled the contract, but what if it hadn’t been the CEO? What if it had been a lower-level designer or software engineer who would still hold massive sway over the project but not have to worry about public scrutiny? What, then, would the impact on the populace have been if the government had been literally using a security system designed by a member of the KKK?

I’m not often charitable when it comes to ethics in design. I believe designers must always be thinking ahead, not just considering how much good their work could achieve but how much harm it could inflict as well. The designer I mentioned earlier regarding smart cities talked briefly about this. They had worked on a project that used facial recognition and other biometrics to analyse emotion and engagement in real time. They said they jumped at the chance to work on something so exciting, so cutting edge — it was a real brain tickler for them. It was only after they got deep into the work that they even considered the negative ramifications of such a system. Could advertisers exploit and reinforce negative emotions to sell a product (I mean, more effectively than they already do)? Could social media sites change the content they served in real time to further boost engagement and keep people trapped longer, even at the cost of their emotional well-being? I don’t mean to demonise this particular designer, but I think the fact that they only considered the potential consequences after creating the system is all too common. I couldn’t help but wonder: why hadn’t they thought, from the very beginning, about ways of limiting the scope of the system so it couldn’t be used outside their intent? Building something and sending it out into the wild while wringing your hands, saying, “Oh, I hope no baddies do anything naughty with this,” is meaningless. You don’t get brownie points for being concerned about a problem you created. I’m sure a similar case could be made for Banjo. I doubt every employee had been a member of the KKK, but there were certainly enough designers willing to build a wide-reaching surveillance system with clear avenues for exploitation.

Of course, it isn’t possible to completely sanitise our work. There will always be ways of exploiting systems, and the goal should not be to succumb to despair and never work again. I simply believe that mitigation and prevention are entirely possible, and that designers can (and should!) build safeguards into their work to prevent exploitation. Much like the philosophy of open-source software, an open, accessible, and inclusive design process that seeks out and listens to the affected communities will be much more resilient to abuse and misuse. Secrecy in process and outcome only benefits the powerful and the wealthy.

This is of particular importance as people become more cognisant of how their data is used. We were once fairly comfortable with sharing our personal data under the assumption that social media entities would then serve us somewhat tailored advertising. That’s not so bad, right? Better than entirely random advertising — maybe something will at least be interesting to you. Then people learned that they weren’t being served ads; they were, in fact, the product being served up to advertisers, large companies, and even foreign governments. Companies purposefully obfuscated their terms of use, relying on the fact that most people wouldn’t or even couldn’t comprehend the language used and would just click “accept” without truly understanding what they were agreeing to. Now go further: look past the creative legalese and realise that for every scummy, exploitative, and devious system used by social media today, there is a team of designers willing to make it work. People aren’t reacting to the use of their data but to the abuse of it. This is why I get a little nervous whenever the idea of “smart cities” is brought up. The design of systems is too often both shortsighted and far-reaching, a horrifying combination. To accept the idea of smart cities or social ratings requires that we also trust the decision-makers to have our best interests at heart. On top of that, since working with private companies is almost inevitable, it requires trusting that those companies have our best interests at heart as well.

Again, please don’t believe that the answer is to be paralysed into inaction. You can and absolutely should still be a designer. Be a thoughtful designer, be proactive, think about who you are serving and why. How could your work be misused? How much harm could it cause? Is there a way to prevent or mitigate that? How can you maintain the integrity of your original vision? What happens once you’ve moved on? These are valuable questions to ask and answer in the course of your work.

P.S.

It was recently reported that Banjo was also selling government data to a healthcare company. Shocking, I know.
