Danish Police Incident Calls for a Security Assessment
A department responsible for IT security should, obviously, follow the practices it expects others to follow. Sometimes, however, tasks that should fall under an IT security umbrella are handled by an external department that may not even be familiar with security practices. These situations can cause serious problems, so when such a need arises it is essential to make sure the proper knowledge and security procedures are made available to that team. A recent story highlights this problem.
Techdirt.com reported that the Danish police accidentally blocked access to thousands of legitimate internet sites due to human error. Denmark ‘censors’ the Internet through a DNS blacklist system. The list is provided by The National High Tech Crime Centre of the Danish National Police and is periodically distributed to, and applied by, all Danish ISPs.
I won’t delve into issues of Internet censorship, especially through the use of a DNS blacklist; instead I’ll focus on the operational failings that led to this mistake. There are so many bad practices in this incident that I don’t know where to start. The most important thing to keep in mind is that if you’re performing a task as important as blacklisting web sites nationwide, you must make sure that the environment where the operation takes place is secured: no one except authorized personnel should have access to the system, and you need both physical and logical checks in place. Secondly, if your task is critical (and blocking access to a website for everyone definitely is), you need to employ division of labour to ensure there is no abuse. No one person should have total access and the ability to perform the task from start to finish. The last thing you want is to give a single person the power to decide and implement which sites citizens can and cannot access.
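The division-of-labour point can be sketched as a simple "two-person rule": a proposed block entry only takes effect once a second, distinct operator approves it. The sketch below is illustrative only; the class and method names are invented and do not describe the actual Danish system.

```python
# Illustrative "two-person rule" for a DNS blocklist: no single
# operator can both propose and activate a block. All names here
# are hypothetical, not taken from any real system.

class Blocklist:
    def __init__(self):
        self.pending = {}    # domain -> proposing operator
        self.active = set()  # domains actually blocked

    def propose(self, domain, operator):
        """An operator proposes a domain for blocking."""
        self.pending[domain] = operator

    def approve(self, domain, reviewer):
        """A *different* operator must approve before the block goes live."""
        proposer = self.pending.get(domain)
        if proposer is None:
            raise KeyError(f"{domain} was never proposed")
        if reviewer == proposer:
            raise PermissionError("proposer cannot approve their own entry")
        self.active.add(domain)
        del self.pending[domain]

bl = Blocklist()
bl.propose("badsite.example", operator="alice")
try:
    bl.approve("badsite.example", reviewer="alice")  # rejected: same person
except PermissionError:
    pass
bl.approve("badsite.example", reviewer="bob")        # accepted: second pair of eyes
print("badsite.example" in bl.active)  # True
```

With this kind of gate in place, a lone mistake (or a lone rogue operator) cannot push a block out to every ISP on its own.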
There also needs to be a secure path between whoever is pushing the updates and whoever is pulling them. Public key cryptography can be used to ensure that updates were really issued by the person responsible for that role. This ensures that even if someone were to gain access to the issuer system, or hijack the connection on the receiver side, the updating system would not implicitly trust any input, but would first verify that the source is genuine.
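The verification idea can be illustrated with textbook RSA signatures. The toy key below uses tiny primes and is wildly insecure; a real deployment would use a vetted cryptographic library and a modern signature scheme (e.g. Ed25519). This is a sketch of the principle only, with invented names throughout.

```python
import hashlib

# Toy textbook-RSA signatures over blacklist updates. The tiny primes
# make this trivially breakable -- it only illustrates the principle:
# the receiver verifies each update against the issuer's *public* key.
p, q = 61, 53
n = p * q                      # modulus, part of both keys
e = 17                         # public exponent (known to all ISPs)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (issuer keeps secret)

def digest(update: bytes) -> int:
    """Hash the update and reduce it into the RSA modulus range."""
    return int(hashlib.sha256(update).hexdigest(), 16) % n

def sign(update: bytes) -> int:
    """Issuer signs the update digest with the private key d."""
    return pow(digest(update), d, n)

def verify(update: bytes, sig: int) -> bool:
    """Any ISP can check authenticity using only the public key (n, e)."""
    return pow(sig, e, n) == digest(update)

update = b"block: badsite.example"
sig = sign(update)
print(verify(update, sig))  # True: genuine, untampered update
```

An attacker who alters the update in transit, or injects their own list, cannot produce a valid signature without the issuer's private key, so the receiver simply rejects the input.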
These stories trouble me. It is obvious that only a basic security design was in place for events to have unfolded this way. To a certain extent it was fortunate that such high-profile sites were accidentally blocked; had less prominent sites been affected, the error might never have been noticed, let alone reversed. In any case, I hope this story turns out to be an eye opener for those involved in such important exercises to do things properly, or we’ll hear of other cases where innocent sites are blocked – accidentally or intentionally.

Troubling, indeed. The thing about a process like this is that you really want to dehumanize it as much as possible. If a task of national censorship is laid out in front of you, you want to eliminate as much of the interpretive process as possible, and try to automate whatever you can. And if something falls into a grey area, it’s usually best to let it go. When you let people decide what is and isn’t acceptable on a site-by-site basis, you run a huge risk of human error.
I agree that something like censorship should be dehumanized to a certain degree, but another thing that concerns me is itself a grey area: some sites legitimately carry content that might otherwise be censored, such as search engines, news sites and news aggregators. At the end of the day, the risk of human error might be preferable to the false positives an automated system will inevitably generate.
Of course, a possibility would be a hybrid system where humans decide what is going to be censored and what isn’t, entered in an automated process where other people validate what is going to be blocked, and then automatically implemented.
This is a blessing in disguise for the Danish police department. Although it is a hard lesson, it will serve as an excellent learning experience for them. Web security also involves its share of human error on the protectors’ part; you can’t take that out of the equation. As I stated earlier, it’s a learning process.
Even in countries with more advanced Internet security systems, such as the United States, United Kingdom and Canada, human errors do occur, mainly because of bureaucracy: no single office or department is exclusively in charge of handling cyber threats.
That’s ridiculous. It seems that freedom on the internet is going to continue to diminish, though. First, there is corporate pressure, e.g. companies like Facebook and Google recording every second of our online engagement. Second, there is governmental pressure, which we’ve seen transpire in the form of PIPA and SOPA.
I am pessimistic about the future of the internet.