Technology is not a magical force beyond our control, no matter how difficult certain aspects may be to grasp. It is shaped by specific people, and, more importantly, by distinct worldviews. Sociologist Ruha Benjamin, 46, a professor in the Department of African American Studies at Princeton University, has authored four books examining the intersections of technology, diversity, inequality, and justice. A leading figure in her field, Benjamin challenges the narratives of major tech companies and advocates for the causes of the Global South on her social media platforms, with a particular focus on Palestine.
Benjamin was in Barcelona to participate in the Smart City Expo on the same day Donald Trump was elected president of the United States. While she feels it is too early to comment on the election results, she does warn about the growing impact of the reactionary wave sweeping across the world.
Question. Is a reactionary wall being erected against everything you’ve been advocating for in your books?
Answer. When people are confronted with the fact that the idealized image they have of their nation or their group is a lie, the backlash comes because people are comfortable in the lie. They don’t want to be challenged with the truth about the racist histories of our societies, about the ongoing inequalities and forms of oppression. Lies are comfortable for those who have been socialized to think of themselves as superior to others, because that superiority is being threatened. And politicians tell them that the lie is okay, that the lie is the truth. We see it in the United States, in Europe, or in India, where I was born. The rhetoric is very similar: make something great again. But it was never great to begin with; these were societies founded on slavery and genocide.
Q. What role do social media and big tech companies play in this reactionary movement?
A. They play a major role because they create bubbles that reinforce what we already think. But even more than social media, there are other technologies that are having huge consequences in people’s lives. There are algorithms deciding who will be hired or fired. There are artificial intelligence tools that are deciding what grade students get in school, or are used in healthcare and policing. Tech companies sell digital solutions that reinforce the status quo and hide that behind a veneer of neutrality and objectivity. The most powerful technologies are the ones that we’re not even aware of, but that are shaping our opportunities in life.
Q. In a recent article, you analyze how the development of artificial intelligence (AI) perpetuates inequalities. Why does this happen?
A. When we talk about AI, we have to talk about the people behind it. Because when we start putting a face to these technologies, we realize that what we’re being sold as a public good is really serving private interests and the self-interest of a small group of people who, in my opinion, are imposing their visions on the rest of us, and packaging it in a way that makes it seem like it’s going to benefit everyone. We have to demystify the technology and talk about the eugenic values that these people hold, in which some lives are valued more than others.
Q. AI is sold as an almost magical technology.
A. And that’s important for their monopolization of power. Because when we’re told something is inevitable, we don’t try to change it. That also makes it more attractive, but we have to start calling out these mythologies. Behind it are content moderators in the Philippines, digital workers in Kenya, Amazon warehouse laborers… people who are hidden from view. So we think when we’re using ChatGPT, the results magically happen. Many people are being harmed so that some of us can have more efficiency and convenience. We have to take into account the working conditions and also the environmental, energy and water costs that are needed to train a single algorithm. We have to ask ourselves if it’s worth it.
Q. How should it be regulated?
A. Pharmaceutical products have to go through many levels of testing before they reach the user. Technologies, on the other hand, are already experimenting on us; we are their clinical trial. The U.S. can look to the EU as a starting point in terms of digital rights. We cannot allow technology companies to come in and push out what was there before. In Barcelona, for example, Uber gives you the option to look at taxis and public transportation. It seems like a small difference, but it shows a change.
Q. And that came after major taxi strikes.
A. Exactly. It is the power of the people to say we’re not going to allow these companies to disrupt our lives.
Q. In your latest book, Imagination: A Manifesto, you talk about the power of imagination but also about how it is highly conditioned. Why?
A. Imagination is more important than ever. Like in this election: we’re sold two options, with different ingredients, but neither is good for our health. Imagination is saying, ‘We don’t accept these two options. We want to dream of a third, a fourth, a fifth option.’ This applies in politics and wherever we are told that something is impossible. We are told: health care for all, impossible. Free public transportation, impossible. And yet we are told that we can go to Mars or create general AI. These very far-fetched fantasies of the elites are sold to us as within our grasp. We can get this done. Just give us your money. Trust us. We shouldn’t buy into those imaginations, and we should grow our collective imagination instead.
Q. In today’s time of conflict, where there are wars in Gaza, Sudan and Ukraine, how can imagination help?
A. The first thing is to understand that these conflicts, genocides and forms of violence are connected. The problem with our imagination is that it is too narrowly focused, when everything is directly related. Seeing those connections would change our budgets, because right now it seems that we have no money to help with floods and climate, but we have an infinite amount of money for the military and wars. And then we have to listen to the people who are buried under the rubble of progress, literally and figuratively, if we want a world in which everyone thrives.
Q. Technology is also used for literal destruction.
A. Technological innovation is not the same as social progress. A lot of innovation can simply reinforce old ways of thinking and old hierarchies. Technological advancement often hides harm and violence. For example, AI systems that are supposed to make targeting more precise in Israel in practice generate many more targets than before, because the process moves faster. And it is more deadly. Intelligence conceived in this way grows out of that eugenic idea: some people are smart, others are not, and if you are not smart enough to create this technology, you are going to be bombed. But in technology, everything is hidden: at a tech conference like the Smart City Expo, there is an Israeli pavilion.
Q. Talking about this issue is a big problem in U.S. universities. How do you feel about it?
A. Some call it “the new McCarthyism.” I have colleagues who have been fired for simply speaking out about Gaza. My own students are standing trial for a peaceful sit-in. We are seeing the hypocrisy of many institutions, such as higher education, but also big companies, such as Google or Microsoft. They love to talk about freedom of speech, but now the truth is coming out about their real values: compliance.