In the United States, Australia, and Europe, police continue to arrest local Muslims (most of them recent immigrants) for trying to organize terror attacks. The police invariably have compelling proof, in the form of emails, phone recordings, or videos. Many of these terrorists would not have been caught before September 11, 2001. That's because intelligence gathering tools, and attitudes toward them, have changed a lot in the last decade.
It's not just that a lot more people have been hired to seek out terrorists. The big change is the technology. More and more, it's robots that are looking for the terrorists. This approach has raised some interesting legal questions. For example, are privacy rights violated if only a robot is looking at the information? Many people aren't concerned with robots watching what they do, or have done. But American law, and the courts that interpret it, still give privacy rights primacy, even if no humans are involved in the surveillance. It wasn't always that way.
Privacy rights have been a growing issue since World War II. But, since September 11, 2001, it's become obvious that protecting those rights can get people killed. The investigation of the September 11 attacks revealed that a terrorist suspect captured before the attacks had information on his laptop that could have exposed the preparations for the attack. The FBI did not examine the laptop's hard drive because of concerns over violating the suspect's privacy rights.
Privacy in the modern world is a misunderstood concept. While the law keeps the government from using many forms of information, or information searching, for law enforcement or national security, there are far fewer restrictions on commercial use of similar data and tools. The difference is that, without commercial access to credit card, real estate, and other transaction records, the cost of those transactions would go up because of increased fraud. Thus the public tolerates this degree of surveillance to reduce fraud, and what they pay for things. And then there's data mining, an old technique that, as long ago as the 1970s, was used to identify and arrest terrorists in Germany. Yet the same techniques today are seen by the law as an assault on privacy rights. Meanwhile, data mining has been used by commercial firms for decades to sort out whom to sell to.
What it comes down to is people not trusting their government, or at least trusting banks and credit card companies more than politicians. There's probably some wisdom in that, but it constantly puts intelligence officers up against a choice: track down terrorists and break the law, or ignore the problem and make sure all the paperwork is in order when the post-attack investigators come looking for reasons "why this happened."
The distrust of politicians and government officials rests on attitudes more than facts. There's far more abuse of databases by private individuals than by government officials (who are more likely to get caught and prosecuted). As a result, there are very few cases of government data searches actually being abused. But the fear is great, much like the irrational fear of nuclear power plants alongside a tolerance for far more dangerous coal- and oil-fired plants. It's why people feel safer driving to an airport than flying off on an aircraft. The car trip is the more dangerous of the two, but we're not talking about logic and truth here, but emotion and fears that can be exploited.
But now robots are doing the searching, and suddenly the fears are going away. Take video surveillance. For a long time this was seen as yet another intrusion on privacy, even though almost all the surveillance was of public spaces. But then comes the "aha" moment, when people realize that the cameras are recording, and nearly all those videos are never seen by human eyes unless a crime has been committed. At that point, you can more easily identify the criminal, and prosecute with little muss or fuss. The criminals, at least the ones with half a brain, now avoid areas where there are cameras, and crime rates in those areas go down.
But trying to make the same case for mining databases in search of terrorists, even when nearly all the work is done by robots, still raises the hackles of civil libertarians, who see it as an infringement on privacy. The government can't be trusted, even though there is no track record of government abuse in this area. It's not just an American problem. In the 1970s, after German police used data mining to shut down a lethal bunch of leftist terrorists, the data mining program was dismantled, lest some bureaucrat do some unnamed, but really terrible, mischief. The terrorists are back, and the police have had to carefully sneak the data mining tools back in.
The same thing is happening in the United States. With paranoid lawyers at their sides for protection, intelligence agencies are using data mining in innovative ways that catch terrorists while keeping the data miners out of jail. So far. Members of Congress who have been briefed have let these roundabout methods pass, for now. Members of Congress have also been known to suddenly develop amnesia if something they let pass later becomes a "war crime" in the struggle to protect privacy.