
Patch Mismanagement: Recipe for Disaster

on February 25, 2013

You know how important it is to keep all the systems on your network updated – you’ve certainly been told often enough. And if you operate in a regulated industry, it’s not just a good idea – it’s a requirement. Part of the compliance process includes being able to show that systems have all necessary security updates applied. But actually doing it is more of a challenge than it should be. Despite repeated evidence that not applying updates in a timely manner can expose a corporate network to serious risk, a surprising number of business computers are missing some vital updates.

The sheer volume of security updates that come down the pike, not just from Microsoft and other OS vendors but from the makers of third party software as well, makes it a scramble to stay ahead of the game. But there are other factors that contribute to the “failure to patch” syndrome, too. Let’s look at some of those, er, excuses – and how the job of patching can be made a little less onerous.

If we compare computer security to physical security, we could say firewalls function similarly to fences and guard dogs, access controls are a little like locks, and applying patches is somewhat analogous to installing steel reinforced doors or unbreakable windows. In the latter cases, we look at vulnerabilities in the original design of the software or building, and we update to something that is less vulnerable.

Patching is an important part of your overall security strategy, because a secure OS and applications create the foundation upon which other security measures rest. However, in order to be effective, patching has to be done right. When approached in a haphazard manner, it can be worse than useless; it can result in downtime, loss of productivity and even create a less secure environment. That’s why you need written patch management policies and procedures. It’s also important to designate a person or persons responsible for the various steps in the patch management process. For a small network, one person may do it all. In a large datacenter, you might need to break it into different areas of responsibility and assign them to different people.

They say “no man is an island,” and no computer program is, either. The key factor to consider is that while patches are generally created by vendors focused on a particular operating system or application, in the real world our OS and apps run within an entire ecosystem where programs interact with one another and every piece of software has the potential to affect many others. The particular hardware configuration, the settings in the OS and apps, what other programs are running and even the order in which they start up can all determine whether the patching process goes smoothly and does what it was intended to do – or brings a vital function to a screeching halt.

That’s why testing is such an important part of the patching process. But before you get that far, a vulnerability assessment can save you a lot of time and grief. Where are the vulnerabilities in your environment? Is a particular patch even relevant for your systems, in your network environment? Answering those questions means having someone in the org who stays on top of current security issues as they become known. That person must be familiar with the software in use and the vendors’ patch release practices and schedules, but you should look beyond “official” info from the vendors.
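The relevance check described above can be sketched in a few lines of code. This is purely illustrative – the inventory and advisory data are hypothetical, not pulled from any real vendor feed – but it shows the basic logic: a patch only matters if the affected product is actually installed, at a vulnerable version.

```python
# Illustrative sketch: deciding whether a security advisory is relevant
# to your environment. Product names, versions and the advisory itself
# are made up for the example.

installed = {
    "ExampleApp": "2.1.0",   # hypothetical software inventory for one host
    "OtherTool": "5.4.2",
}

advisory = {
    "product": "ExampleApp",
    "affected_below": "2.2.0",  # versions older than this are vulnerable
}

def version_tuple(version):
    """Turn '2.1.0' into (2, 1, 0) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def is_relevant(advisory, installed):
    """An advisory matters only if the product is deployed and in the
    vulnerable version range."""
    current = installed.get(advisory["product"])
    if current is None:
        return False  # product isn't deployed here, so the patch is irrelevant
    return version_tuple(current) < version_tuple(advisory["affected_below"])

print(is_relevant(advisory, installed))  # True: 2.1.0 is below 2.2.0
```

A real assessment would draw the inventory from an automated scan rather than a hand-maintained dictionary, but the decision it feeds is the same.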

Once it’s determined that a vulnerability impacts your org and there is a patch available, it pays to research the experiences others are having with that patch. Are there known conflicts with particular programs or configurations? Are those conflicts likely to affect your own environment? Only after these questions are answered do you proceed to the testing stage.

Your test environment should obviously match your production environment as closely as possible, both in terms of hardware and software. Once the patch has been applied to the test machines without problems, it’s time to roll it out to your production systems. You’ll want to prioritize so that critical or most exposed systems are patched first, and so that patches addressing the most severe vulnerabilities get top priority. Even though you’ve tested the patches, something can still go wrong – so you need a backup plan (such as redundant systems that can take over if a critical server doesn’t reboot after patching).

If all this seems like a daunting job, it can be. However, it can be made much easier through automation. That can mean writing your own scripts, or it can mean the use of commercial patch management and deployment tools that have been developed and tested extensively in a variety of environments and for which vendors provide support. A full service patch management solution should be able to scan systems for missing patches, download the patches, test them and deploy them to the production network. It should make the process as automatic as possible while still giving administrators ultimate control.

Time is of the essence in getting serious vulnerabilities patched before an attacker exploits them. Yet the patching process must proceed through all the steps to avoid the disasters that can be created by too-hasty patching. The big advantage of automation software is that it gets you through the steps of the process much more quickly than you can do it manually. In today’s tough economic times, it’s tempting to try to DIY as much as possible to save money, but in my opinion, a good patch management system will soon pay for itself – not just in dollars but in administrative overhead and even reduced stress for everyone who has a stake in ensuring that systems are secure, right up to and including the CEO.

If you’re looking for a good patch management solution, check out what GFI LanGuard can do for you or download a FREE trial and give it a spin!

 

About the Author:

Debra Littlejohn Shinder has been working and writing in the field of IT security since 1998. She’s an author of and contributor to over 25 books on computer technology, including “Scene of the Cybercrime,” based on her previous experience as a police officer and police academy instructor. Deb is owner and CEO of TACteam and has contracted with Microsoft, Intel, HP, Prowess Consulting, Sunbelt Software, GFI Software, ConfigureSoft, 2X Software and other software and hardware companies. She currently writes articles and blogs for Windowsecurity.com, WindowsNetworking.com and ISAserver.org and has published more than 1500 articles for web sites and print magazines. Deb has been a Microsoft MVP in the area of enterprise security for the past ten years.