The spread of so-called “fake news” via social media has been one of the biggest real news stories of 2016. Wellesley faculty members in computer science are researching this phenomenon, investigating everything from “Twitter bots” to Facebook algorithms. Professor P. Takis Metaxas and Assistant Professor Eni Mustafaraj have spoken to the media about the trend and are also using the topic as a springboard for discussing biases, the dangers of echo chambers, and the need for critical thinking.

Most recently, faculty research was mentioned by the PBS NewsHour, BuzzFeed, and Tech News World. The PBS story, for example, discussed the faculty’s findings related to the 2010 special election in Massachusetts held after the death of Sen. Ted Kennedy: “Researchers at Wellesley College found that, in the hours before the election, a Republican group from Iowa used...Twitter bots to spread misinformation about the Democratic candidate Martha Coakley.” Their messages reached 60,000 people, Tech News World reported.

In an email interview, Metaxas said the lessons to be learned from fake news go well beyond simply understanding data. Because the ethical issues it raises are clear, he said, in the classroom “what we spend more time discussing is how to determine whether something is true or not. We discuss the fact that people prefer not to challenge their previously held beliefs and [instead] focus on the details that reinforce [their beliefs], rather than those that challenge them.” He offered the example of what has been dubbed “pizzagate,” a fake news story about John Podesta, the campaign chairman for Hillary Rodham Clinton ’69. Metaxas and his colleagues have been following the spread of the story through their TwitterTrails project, an interactive, web-based tool that allows users to investigate rumors on Twitter. The graphs on the TwitterTrails website show that “the ‘dialogue’ on Twitter was almost exclusively between those who already believed [the story] and kept reinforcing their beliefs through the echo chamber that they created,” according to Metaxas.

Following the presidential election, Mustafaraj spoke at a teach-in on campus about the way platform algorithms reinforce bias in social media, and how social media can be manipulated as a tool for propaganda. “Only a few days ago, Facebook introduced several new changes to its interface that will make it more difficult to spread fake news,” said Mustafaraj, adding that the social media giant will label some stories as “disputed.” She welcomed the move, but said it can’t solve every problem. She pointed to the social news aggregator Reddit: “[It] was taken over by [the] ‘The_Donald’ subreddit, [a] group composed of people who deliberately game the platform to push their view on the main page and effectively silence everyone else. They also are not shy [about using] intimidation and other tactics,” she said.

Metaxas and Mustafaraj say researchers are currently focusing on how to prevent the spread of fake news. Stopping it is complicated, Metaxas said, because the stories are sophisticated, “designed to influence those who are susceptible to conspiracy theories, and those people have serious trouble recognizing fake news.” Mustafaraj added, “What complicates the fight against fake news is the lack of digital and specifically web literacy skills in the general population.” Another issue, she said, is that financial incentives make fake news lucrative. Metaxas proposed “a comprehensive four-sided solution: experts that have the time and experience to investigate important news; technology that makes it easy for people to be informed…; restricting the financial incentives for ‘fake news’ producers; and education on critical thinking.”