Illustration: Dianna McDougall for Adweek

By Scott Nover, ADWEEK

In the past two months, longstanding debates about the role of social media companies in policing hate speech have come to a head. Twitter and Reddit have cracked down on hate speech and content that incites violence, while Facebook is facing an all-out advertiser boycott organized by civil rights groups critical of its lack of action on hate speech and harmful misinformation.

Platforms are also being used to organize real-world events. Such was the case with the deadly 2017 Unite the Right rally in Charlottesville, Va., a white supremacist and neo-Nazi march during which Heather Heyer was murdered by a rallygoer who purposely drove his car into a crowd of counterprotesters.

The civil rights group Integrity First for America (IFA) is funding a lawsuit against the rally’s organizers, including Jason Kessler, Andrew Anglin (founder of the neo-Nazi site The Daily Stormer) and the alt-right leader Richard Spencer.

Ahead of her appearance at Adweek’s NexTech 2020 summit, we caught up with IFA executive director Amy Spitalnick about the role of social media in spreading hate, both online and off.

This interview has been edited for clarity and length.

You’re suing the neo-Nazis and white supremacists responsible for the 2017 Unite the Right rally. What role did social media play in allowing hate groups to organize the physical rally online?
Social media was central to the Unite the Right violence. What’s important to understand is that the violence was no accident; rather, it was planned for months in advance via private Discord chats and other communications. In those chats, these neo-Nazis and white supremacists discussed everything from what to wear and what to bring for lunch to which weapons to carry and whether they could hit protesters with cars—which is of course exactly what happened. “Next stop: Charlottesville. Final stop: Auschwitz!” they wrote while meticulously planning the details of that violent weekend.

Those chats are the basis of our lawsuit, which details a racist, violent conspiracy to target people based on their race, religion and willingness to stand up for the rights of others. It’s important to note that this isn’t a [free] speech case—in fact, the court underscored that point in rejecting the defendants’ motion to dismiss, noting that the First Amendment does not protect violence. Rather, it is a conspiracy case, rooted in a number of federal and state statutes including the Ku Klux Klan Act of 1871.

Social media has provided a way for these extremists to connect across the country and around the globe—not just ahead of Charlottesville, but also when it comes to the broader cycle of white supremacist violence. We see firsthand how each attack is celebrated online and used to inspire the next one, from Charlottesville to Pittsburgh to Christchurch to El Paso and beyond.

In many ways, social media has become the Klan den of the 21st century. These extremists are no longer meeting in the forest wearing white hoods; rather, they’re meeting in Discord chats and other online forums.

Which social media platforms were responsible, what policies allowed hate to spread, and have any of them changed for the better since Charlottesville?
In the aftermath of the Charlottesville violence, there was a wave of companies that claimed they’d start taking action against extremism. Some are trying to walk the walk; others less so. Some of the companies that took action are highlighted in the Communities Overcoming Extremism report, which came out of a yearlong initiative (in which IFA was proud to participate).

But as has been the case with the broader crisis of white supremacy and violent extremism, the news cycle is such that—until the next attack or other awful reminder—these topics tend to fall off people’s radar. We can’t keep waiting until it’s too late (again and again) to recognize the urgency of this issue and demand real change.

We are seeing a reckoning online right now around hate speech, and a lot of it has to do with President Donald Trump: Twitter is restricting and fact-checking the president’s tweets, Snapchat stopped promoting him, Twitch suspended him, Reddit banned hate speech and r/The_Donald, and companies are boycotting Facebook for not doing enough. Has Trump made policing platforms harder by legitimizing and spreading hate—“very fine people on both sides,” he said of Charlottesville?
There’s no question that extremists are uniquely emboldened. That’s in no small part due to the dog whistles, winks and nods, and—increasingly—explicit support from certain leaders.

But just because some are trying to legitimize white nationalism, racism, anti-Semitism and other forms of extremism doesn’t mean that a social media platform—or any entity for that matter—needs to allow it in their space.

The excuse that it’s too hard to do something serious about hate on these platforms is just that—an excuse. These companies have built some of the most sophisticated tools in existence for leveraging consumer data and other information. They have the resources and ability to act here. They just need the will.

What tangible steps do you think social media companies could take to ensure their communities don’t foster another hate-fueled attack?
We should be clear: No social media company is obligated to give a platform to hate or violence, period. To start, Facebook groups and other social media forums dedicated to white supremacy, Holocaust denial and other forms of extremism have no place on these sites. And it’s a lot easier to put in place policies that catch hate and violence when you’re ensuring civil rights [leaders] and other experts are part of your company’s core leadership. (The Stop Hate for Profit campaign has a good set of recommendations for Facebook that I would encourage people to check out.)

What do you think of de-platforming hate groups, which, some argue, could push hateful speakers to fringe areas of the internet where they cannot be monitored as well?
While it’s not a perfect solution, we know that de-platforming works. Don’t just take it from me: neo-Nazi Richard Spencer said it himself in court the other week, complaining that he’s “financially crippled” because of our case and can’t raise money because he’s been de-platformed from a number of payment processors and other sites. Significantly limiting these extremists’ online reach has a tangible impact on their ability to organize and orchestrate violence, raise money and otherwise further their hate.

You’re right that we can’t simply play a game of whack-a-mole. That’s why this effort also needs to include domain registration and web hosting services, so that the sites whose entire business models are built on platforming extremism are also held accountable.

Finally, all of this must be part of a broader effort to treat violent extremism as the urgent threat to our national (and global) security and to our democracy that it is. That includes not just the private sector but also policymakers and law enforcement—in addition to brave private plaintiffs like ours who are taking these violent neo-Nazis and white supremacists to court and holding them to account.

