Researchers have spent years refining methods to detect fake accounts on social media. But techniques built to sniff out individual bots can miss more sophisticated forms of manipulation—such as state-sponsored disinformation or harassment campaigns spanning thousands of accounts over many years.
Camille François, the chief innovation officer at Graphika, says the public needs better data and models to address online manipulation without inadvertently silencing genuine voices.
François and her team use machine learning to map out online communities and the ways information flows through networks. They apply data science and investigative methods to these maps to find the telltale signatures of coordinated disinformation campaigns. Last year, François and colleagues at Oxford used this approach to help the US Senate Select Committee on Intelligence better understand Russian activities during and after the 2016 presidential election.
François says that some of her biggest breakthroughs have come from interviewing defectors and victims to understand the inner workings of troll farms. “This work is two parts technology, one part sociology,” she says. “The techniques are always evolving, and we have to stay one step ahead.”