Best Practices for Patch Management
The first item on an administrator’s to-do list is generally patch management. Done incorrectly, however, patch management can become a risk for the organization instead of a risk mitigator. A few simple best practices greatly reduce these risks and help ensure that the process is completed quickly and efficiently.
Monitor the patch status of all your applications
The first step in patch management is knowing when new patches are needed. The easiest way to accomplish this is to employ a solution that monitors the patch status of your network and notifies you automatically when patches become available. If budget is an issue, another option is to keep an inventory of the applications you use and periodically check the respective vendors’ websites for newly issued updates.
Caution: A common mistake is that administrators sometimes focus exclusively on Microsoft patching. This might be out of convenience, because Microsoft provides efficient patch management solutions, or out of a genuine belief that this is enough. It is a misconception: unpatched vulnerabilities in applications from other vendors pose just as much of a security risk as any Microsoft application. Administrators should ensure that all of their applications are kept up to date.
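For illustration only, the following is a minimal sketch of such a periodic check, assuming Debian/Ubuntu servers reachable over SSH and a hypothetical hosts.txt inventory file (neither is prescribed here). Because it asks the system’s package manager, it reports pending updates for every managed application, not just one vendor’s products.

```python
#!/usr/bin/env python3
"""Sketch: report hosts with pending package updates.

Assumes Debian/Ubuntu hosts reachable over SSH with key-based auth and a
plain-text inventory file (one hostname per line); both are assumptions
made for this example.
"""
import subprocess

INVENTORY = "hosts.txt"  # hypothetical inventory file


def pending_updates(host: str) -> list[str]:
    # 'apt list --upgradable' lists packages that have newer versions available.
    result = subprocess.run(
        ["ssh", host, "apt", "list", "--upgradable"],
        capture_output=True, text=True, timeout=60,
    )
    lines = result.stdout.splitlines()
    # The first line is the "Listing..." banner; the rest are upgradable packages.
    return [line for line in lines[1:] if line.strip()]


def main() -> None:
    with open(INVENTORY) as f:
        hosts = [line.strip() for line in f if line.strip()]
    for host in hosts:
        updates = pending_updates(host)
        if updates:
            print(f"{host}: {len(updates)} package(s) need patching")
        else:
            print(f"{host}: up to date")


if __name__ == "__main__":
    main()
```

A scheduled job running a script like this (or, better, a dedicated monitoring solution) turns patch awareness from an ad hoc task into a routine report.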
Test patches before deploying
This step is often skipped entirely. Deploying patches without properly testing them first risks that one of the patches conflicts with existing software and causes issues across the organization’s infrastructure.
Keep in mind that patches are pieces of code that change the existing code of the application they apply to. The changes can be numerous, including changes in the behaviour of that application and/or its underlying libraries. In some cases these changes can conflict with other applications on the system that rely on that functionality, causing malfunctions which, in some cases, can be system wide.
There are reported cases in which a patch caused systems to blue-screen on boot-up, requiring a complete system reinstallation to recover.
Suggestion: To ensure effective patch testing before deployment, it is good practice to keep a standardized installation on company-wide workstations and servers, and to maintain test machines that exactly mirror your production network, running the same software at the same versions. Once the patches test successfully on this setup, you can be reasonably confident that deploying them to the production network will not cause problems.
Caution: For testing to be meaningful in this context, it is essential that users are restricted from installing software indiscriminately. If a user installs a piece of software that the administrator is not aware of, it might conflict with the patch about to be deployed.
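One way to keep test and production machines honest, and to spot software installed without the administrator’s knowledge, is to compare their installed-package lists before testing a patch. The sketch below assumes Debian/Ubuntu hosts reachable over SSH; the hostnames test01 and prod01 are placeholders.

```python
#!/usr/bin/env python3
"""Sketch: compare installed packages on a test machine vs. a production host.

Assumes Debian/Ubuntu hosts reachable over SSH; 'test01' and 'prod01' are
placeholder hostnames used purely for illustration.
"""
import subprocess


def installed_packages(host: str) -> dict[str, str]:
    # dpkg-query prints "package version" pairs for everything installed.
    out = subprocess.run(
        ["ssh", host, "dpkg-query -W -f='${Package} ${Version}\\n'"],
        capture_output=True, text=True, check=True,
    ).stdout
    packages = {}
    for line in out.splitlines():
        name, _, version = line.partition(" ")
        if name:
            packages[name] = version
    return packages


def main() -> None:
    test, prod = installed_packages("test01"), installed_packages("prod01")
    only_on_prod = sorted(set(prod) - set(test))
    version_drift = sorted(p for p in set(test) & set(prod) if test[p] != prod[p])
    if only_on_prod:
        print("Installed on production but missing from test:", *only_on_prod, sep="\n  ")
    if version_drift:
        print("Version mismatch between test and production:", *version_drift, sep="\n  ")
    if not only_on_prod and not version_drift:
        print("Test machine mirrors production.")


if __name__ == "__main__":
    main()
```

Any package that appears only on production, or at a different version, is a sign that the test environment no longer mirrors the machines the patch will actually land on.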
Automate Deployment
Patch management can be a time-consuming operation. There are plenty of patch management solutions that can automate the deployment process for both Microsoft and non-Microsoft patches, minimising administrator interaction.
If budget is an issue, there are free solutions from Microsoft (such as Windows Server Update Services) that can help automate patch management for Microsoft products; however, as mentioned earlier, it is still essential to patch non-Microsoft products even if this has to be done manually.
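As a rough illustration of what hand-rolled automation looks like, the sketch below pushes pending updates to each host in the same hypothetical hosts.txt inventory over SSH. It assumes Debian/Ubuntu hosts with passwordless sudo; in practice a dedicated patch management product would replace a script like this.

```python
#!/usr/bin/env python3
"""Sketch: push pending patches to an inventory of hosts.

Purely illustrative; assumes Debian/Ubuntu hosts reachable over SSH with
passwordless sudo, and the same hypothetical hosts.txt inventory as above.
"""
import subprocess

INVENTORY = "hosts.txt"  # hypothetical inventory file


def patch_host(host: str) -> bool:
    # Refresh the package lists, then apply available upgrades non-interactively.
    cmd = "sudo apt-get update && sudo apt-get -y upgrade"
    result = subprocess.run(["ssh", host, cmd], capture_output=True, text=True)
    return result.returncode == 0


def main() -> None:
    with open(INVENTORY) as f:
        hosts = [line.strip() for line in f if line.strip()]
    failed = [h for h in hosts if not patch_host(h)]
    print(f"Patched {len(hosts) - len(failed)} of {len(hosts)} hosts.")
    if failed:
        print("Failed:", ", ".join(failed))


if __name__ == "__main__":
    main()
```

Even a simple loop like this should only ever run against hosts whose patches have already passed the testing stage described above.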
Disaster Recovery
Another important, yet often overlooked, best practice is to have a disaster recovery plan in case a patch deployment fails and causes problems. Backups are the easiest option, and they also mitigate other risks such as a virus infection or an intrusion.
Caution: When testing before deployment, it might be tempting to think that disaster recovery from a botched patch is redundant because that risk has already been mitigated. In practice, subtle differences between environments (perhaps in hardware, or a user installing an application without the administrator’s knowledge) can cause a patch to fail in production even though it caused no issues in testing.
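As a minimal illustration of the backup step, the sketch below archives a few critical directories immediately before patching. The paths and destination are placeholders, and a full disaster recovery plan would also cover system images or virtual machine snapshots.

```python
#!/usr/bin/env python3
"""Sketch: take a timestamped backup of critical directories before patching.

The directory list and backup destination are placeholders chosen for this
example; adapt them to the systems actually being patched.
"""
import subprocess
from datetime import datetime
from pathlib import Path

BACKUP_DIR = Path("/var/backups/pre-patch")        # placeholder destination
TARGETS = ["/etc", "/var/www", "/opt/app/config"]  # placeholder paths


def main() -> None:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = BACKUP_DIR / f"pre-patch-{stamp}.tar.gz"
    # tar exits non-zero if a target is missing, so check=True surfaces that early.
    subprocess.run(["tar", "-czf", str(archive), *TARGETS], check=True)
    print(f"Backup written to {archive}; keep it until the patch has proven stable.")


if __name__ == "__main__":
    main()
```

Running a step like this as part of the deployment routine means that, if a patch does misbehave in production, rolling back is a restore rather than a rebuild.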

I think a disaster recovery plan is not only a best practice for addressing patch management issues, it’s downright essential. You can test the patch in controlled conditions all you want, but who’s to say it won’t choke up all your systems once the patch goes live? Having fail-safes is a must with systems reaching a level of complexity far beyond what we could have imagined less than ten years ago.
It’s not that I think companies don’t test their patches before deploying. It’s more that I don’t think companies test them enough. There are a million things that can go wrong with a single patch, and although not all of them can realistically be pinpointed under most project timetables, these risks should at least be minimized. The advantage gained by sticking to unrealistic deadlines is immediately lost when the system goes down due to bad patch management.
Hi Nathan, you’re right that a disaster recovery plan is definitely essential. However, I would still recommend going through the testing cycle. It’s true that, no matter how attentive one is with testing, one can never be completely sure that a patch won’t cause issues once deployed in the live environment; but even an effective disaster recovery plan takes time to execute, and during that time you might be losing precious productivity. I think they’re both important and should both be employed.
I do agree, though: you’re 100% right that a disaster recovery plan is an essential part of patch management!
@nathan
I’ve actually worked at a company where the lack of an effective disaster recovery plan ended up costing them a fortune. In an effort to meet a deadline and lower costs, the company gambled on releasing a patch far sooner than it could be amply tested. The theory behind the decision was that the patch could be fixed in future, less sizable, updates. When the patch proved buggier than we first imagined, a recovery plan would have been able to limit the areas affected by the release. In the end, what was supposedly cheap and risk-free ended up being costly and damaging.
I’d like to think that all IT development companies (at least the ones I know of) test patches to some degree before they are rolled out. Unfortunately, from what I gather from the discussions sparked by this article, companies pressured by time and resources aren’t able to test patches as extensively as they should. It seems, though, that under-tested patches cause more trouble than they’re worth. Might as well be safe and test them extensively.