Spotting terrorist behaviour online is harder than finding child abuse images

http://www.theguardian.com/technology/2014/dec/04/terrorist-communications-internet-companies-isc-child-abuse-images-different-technical-undertakings

Why are we outraged by the suggestion that Facebook users’ messages should be screened for potential terrorist threats, yet we accept that airline passengers are screened for terrorist threats before boarding a plane? What is the difference between moving people and moving information around the world? This is the question raised by the UK parliament’s intelligence and security committee when it suggests Facebook and other internet platforms should “take responsibility” for detecting terrorist activity online.

There are a number of reasons why requiring Facebook and other websites to become partners in state surveillance threatens free expression and privacy, but before considering this radical step, let us examine whether it makes technical sense.

We might like to believe that internet powerhouses possess the technological wizardry to pinpoint terrorist behaviour hidden in the hundreds of millions of messages generated each hour with the same accuracy as an airport metal detector, which can spot a revolver in a traveller’s pocket. Implicit in the ISC report is the suggestion that these Silicon Valley geniuses could make the world a safer place but simply refuse to do so. But the reality is more complex.

The committee suggests online services can spot terrorist behaviour in much the same way as ISPs and search engines currently detect and remove child abuse images. Yet these are very different technical undertakings. Most child abuse images are detected by computer programs designed to notice patterns recurring from one image to another. Law enforcement experts identify an initial set of illegal images and the pattern recognition software flags online picture files that are similar.
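
To make the comparison concrete, here is a minimal sketch, in Python, of the kind of similarity matching involved. It is only an illustration of the general idea, not the proprietary systems actually used for this work, and the file names are hypothetical: an image is reduced to a short fingerprint that survives resizing and small edits, and a new upload is flagged when its fingerprint sits close to one derived from an expert-confirmed illegal image.

    # Illustrative only: a toy "average hash", not any deployed detection system.
    # File names in the usage example are hypothetical.
    from PIL import Image

    def average_hash(path, size=8):
        """Shrink, greyscale and threshold at the mean: a 64-bit fingerprint
        that survives resizing and small edits."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming_distance(a, b):
        """Number of fingerprint bits that differ between two images."""
        return bin(a ^ b).count("1")

    def looks_like_known_image(path, known_hashes, threshold=10):
        """Flag an upload whose fingerprint is close to any reference hash
        derived from an expert-confirmed illegal image."""
        h = average_hash(path)
        return any(hamming_distance(h, ref) <= threshold for ref in known_hashes)

    # Hypothetical usage:
    # known = {average_hash(p) for p in ["confirmed_1.jpg", "confirmed_2.jpg"]}
    # looks_like_known_image("new_upload.jpg", known)

The crucial point is that the target is a fixed, known set of files, identified in advance by human experts.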

YouTube uses a similar approach to identifying videos that may infringe copyright. Copyright holders upload samples of their works and then a very clever YouTube system flags any videos on the site that contain similar images or audio. These techniques all depend on being able to train systems to know what kind of material to look for. If you give these systems enough examples of what they are looking for and provide feedback on their success, they tend to work pretty well.
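
The same fingerprint-and-match pattern applies to audio and video. YouTube’s actual system is proprietary, so the following is only a toy sketch of the idea, assuming numpy and a mono audio track supplied as a one-dimensional array of samples: each frame of an upload is reduced to a coarse feature, and the sequence of features is compared against fingerprints built from recordings supplied by rights holders.

    # Toy sketch only; the real system's design is proprietary. Assumes numpy
    # and mono audio samples as a one-dimensional array.
    import numpy as np

    def fingerprint(samples, frame=4096):
        """One coarse feature per frame: the index of the strongest frequency."""
        peaks = []
        for start in range(0, len(samples) - frame, frame):
            spectrum = np.abs(np.fft.rfft(samples[start:start + frame]))
            peaks.append(int(spectrum.argmax()))
        return peaks

    def similarity(reference, candidate):
        """Fraction of frames whose dominant frequency matches the reference."""
        n = min(len(reference), len(candidate))
        if n == 0:
            return 0.0
        return sum(a == b for a, b in zip(reference[:n], candidate[:n])) / n

    # Rights holders supply reference recordings; an upload is flagged for
    # review when similarity(reference_print, upload_print) crosses a threshold.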

Finding terrorist communications is much harder than finding copyright-infringing videos or child abuse images. First, there just aren’t that many terrorists in the world (luckily), so there is little data with which to train automated alerts. More importantly, according to the ISC report, terrorist behaviour is adaptive. A video rip-off of a copyrighted movie can’t change its characteristics to avoid detection. Nor can a child abuse image morph into something else. Terrorists, however, know they are being watched and take steps to avoid detection.

In fact, the ISC report is a compelling explication of just how hard it is for expert human investigators to interpret the behaviour of potential terrorists. The perpetrators of the terrible murder of Fusilier Lee Rigby were under regular surveillance by British authorities, yet the authorities were still unable to pinpoint the threat. Similarly, while automated terrorist alerts sound appealing, it is hard to design systems that can discern intent, so any such approach would carry a real risk of misidentification.
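
A rough, purely hypothetical calculation shows why. Suppose a screening system correctly flags 99% of genuine threats and wrongly flags just 0.1% of innocent users; applied to a large user base containing a vanishingly small number of real threats, almost every alarm it raises is a false one. All of the figures below are invented for illustration.

    # Hypothetical numbers, only to illustrate the base-rate problem.
    users = 100_000_000          # accounts screened
    true_threats = 100           # genuine threats among them
    sensitivity = 0.99           # chance a real threat is flagged
    false_positive_rate = 0.001  # chance an innocent user is flagged

    true_alarms = true_threats * sensitivity                     # ~99
    false_alarms = (users - true_threats) * false_positive_rate  # ~100,000
    precision = true_alarms / (true_alarms + false_alarms)       # ~0.1%

    print(f"{false_alarms:,.0f} innocent users flagged")
    print(f"{true_alarms:,.0f} genuine threats flagged")
    print(f"{precision:.2%} of alarms point at a real threat")

On these illustrative figures, roughly 100,000 innocent users would be flagged for every 99 genuine threats, so barely one alarm in a thousand points at a real suspect.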

Of course, the internet ought not to be a free-fire zone for terrorists and criminals. So just what kind of help ought websites to offer in law enforcement and intelligence investigations? Rather than being expected to identify terrorist needles in the giant online haystack proactively, they ought to respond to reasonable and judicially supervised law enforcement requests for information about specific individual suspects. That way courts can assess the reliability of the data being sought and provide appropriate protections for individuals. Asking websites to monitor and remove speech from their services on their own poses a grave risk to freedom of expression.

Despite the protestations of the committee, US internet platforms do actually respond to UK surveillance requests, returning data roughly three-quarters of the time, only a slightly lower rate than for requests from US authorities, according to transparency reports produced by the companies. I’m not troubled by the difference. As a user I want the websites I use to scrutinise these requests carefully and resist the ones that seem beyond the bounds of the law. Different countries have varying degrees of privacy protection in their surveillance laws, and I want internet companies to stand up for their users’ rights.

Online surveillance will continue to be a toxic issue until we have a reset in the relationship between governments (both law enforcement and intelligence agencies) and the online community (both providers and users). This reset requires substantive agreement on human rights norms for global surveillance, and some real accountability mechanism that users can trust.

Computer scientists in my lab and around the world are designing a new class of accountable systems that can help restore trust. These systems enable both governments and internet platforms to provide transparent proof to the world that they are actually following the rules as required by law.
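
This article does not spell out how such systems work, but one common building block for tamper-evident accountability is a hash-chained audit log, sketched below in Python purely as an illustration of the flavour of mechanism involved, not as a description of any particular research system: each logged action is bound to the hash of the entry before it, so any later rewriting of the record is detectable.

    # One building block for tamper-evident accountability: a hash-chained
    # audit log. An illustrative assumption about the general flavour of such
    # mechanisms, not a description of any particular system.
    import hashlib, json

    def append_entry(log, record):
        """Bind each entry to the hash of the previous one, so any later
        rewriting of the history is detectable."""
        prev = log[-1]["hash"] if log else "0" * 64
        body = json.dumps({"record": record, "prev": prev}, sort_keys=True)
        log.append({"record": record, "prev": prev,
                    "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(log):
        """Recompute the whole chain; any tampering breaks it."""
        prev = "0" * 64
        for entry in log:
            body = json.dumps({"record": entry["record"], "prev": prev}, sort_keys=True)
            if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev = entry["hash"]
        return True

    log = []
    append_entry(log, {"request": "judicially approved order", "scope": "one named account"})
    assert verify(log)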

Restoring trust online begins with a binding commitment to global human rights principles by those who would conduct surveillance, and then legal and technical systems that assure those rules are being followed.

Daniel J Weitzner is principal research scientist at the MIT Computer Science and Artificial Intelligence Lab, former White House deputy chief technology officer for internet policy, and co-founder of TrustLayers, a new accountable systems company
