“Pinpointing where bias or error exists in a human brain is difficult or impossible—there are just too many neurons and connections to narrow it down to a single malfunctioning chunk of tissue.” (Pariser 202)
I found myself frustrated by the quote above, and I think it reflects my frustration with chapter 7 in general. Here, as elsewhere in the book, Pariser does a few things: he treats human brains like machines (“malfunctioning tissue”), he assumes biases are inherently wrong (again, the use of “malfunctioning”), and he is all too willing to assign blame to the technology itself rather than to its human creators (less so in this quote than in the passages before and after it).
This is a problem, and perhaps I’m being too nitpicky, or projecting onto Pariser the frustration I have with a lot of technology writing nowadays, but I’m keen to talk about it anyway. The trouble lies in the idea that there is a single model of a functioning brain, and that divergence from that model is a malfunction. When I was a computer engineering major, I often heard the joke “it’s not a bug, it’s a feature,” and I think there’s some truth to it when it comes to brains. Divergence, which we once viewed socially (and medically, but that’s a whole other can of worms) as simple difference, now carries a lot of negative connotation: it gets treated as a malfunction, for instance.
This leaves room for things like mental illness or neurodivergence to be seen as deviant or wrong (what was once simply difference is now read as badness). I won’t go into why I think this shift happened socially (it’s definitely capitalism), but it matters when it comes to being precise in our writing. I highly doubt this is what Pariser was thinking about when he wrote this sentence, but it stuck out to me because I tend to look at the finer details when I’m writing.
A pattern I see in a lot of technology writing, and Pariser falls victim to it too, is the assumption that biases are inherently wrong and that we could ever achieve a state of not having them, eventually making possible some godlike, neutral technology that is morally superior because of that lack. Seeing biases as a malfunction, as a broken part of human thinking that we program into machines, rather than as a part of being human that is socialized into us by the world we live in, means that instead of addressing the sources of bias and working to make them less influential overall, we spend time and energy trying to get rid of them. We cannot, because biases are a product of experience.
Donna Haraway, the feminist scholar whose essay “Situated Knowledges” helped popularize standpoint epistemology within science studies, argues that by including more people we include more experiences, and thus we can minimize the impact any one bias has on science or, in this case, technology. Instead of trying to fix what we cannot, spending money and time on attempting to program something objective, we could hire more people from more walks of life, people who can work together and against one another toward a strong (but not complete) objectivity.
Finally, Pariser spends much of this chapter dancing around where to place the blame for irresponsible technology: on the tech itself, or on the user for using it. In the discussion on pages 194–199, about RFID chips and personalized technology, Pariser sets up a world in which we, the users, can be categorized and located anywhere by the multitude of cameras in our daily lives, hinting that this could be used against us in ways far more harmful than his example of a person checking on a partner’s licentious activities.
Should this happen, it would be the fault of neither the camera nor the user, much as it is the fault of neither the plastic straw nor the Starbucks customer that the oceans are so polluted. The fault lies with those in power, elected and unelected, with the corporations that would seek to use this information against us, and with the failure (or success, depending on how you view American politics) of a system that does not protect its people. The technology is not the issue here, no matter how much time we spend trying to make it so. We need legislation and standardization, not to kill the forward motion of progress or to spend time fear-mongering about the future, as so many tech writers are wont to do.
It would be easier to live in a world in which we could simply blame the technology for existing, but since we don’t, I don’t think it’s worth our time to try any more than we already have.