I came across a story by Nick Farrell of the Inquirer about a case that happened at an Australian power company.

The crux of the story is that a virus infected the company’s infrastructure and was spreading through the network. The engineers quickly replaced the infected Windows machines with their Linux development stations, and this ensured that the plant didn’t have to be shut down because of the infection.

This got me thinking: could a cross-platform environment be used effectively as a redundancy system? Granted, there are a lot of challenges here, and the Australian story also had an important element in its favor – the Windows machines in that case were merely running X servers to connect to an underlying Solaris system, so the changeover was trivial to accomplish.

If an organization wants a cross-platform infrastructure, it would need to build that infrastructure on multiple platforms with no benefit other than a more robust failover. There is also the risk that any third-party software in use (which arguably will be the case in most environments) might not be cross-platform, making a cross-platform redundant system impossible for some parts of the infrastructure.

Training might also be a prohibitive factor, though this too depends on the environment and the work involved. It is one thing if you’re simply running a business that requires users to do word processing, quite another if you have a development environment. Users might need to be trained on both platforms, which will be costly in most cases.

However, the benefits in such a case can be great as well. If your business requires maximum uptime, this approach can help achieve that goal in some cases.

This approach might also be an effective second-level strategy against viruses and malware. If your virus scanner fails and the network gets infected, the recovery procedure might simply involve rebooting into your secondary OS. Viruses are rarely cross-platform, so this should generally be safe enough.
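As a rough sketch of what such a recovery step could look like on a GRUB-based dual-boot machine: `grub-reboot` schedules a one-time boot into a named menu entry (it requires `GRUB_DEFAULT=saved` in the GRUB configuration), so an administrator could push all workstations into the clean secondary OS with a single script. The entry name below is a hypothetical example, and the script defaults to a dry run that only prints what it would do.

```shell
#!/bin/sh
# Hypothetical one-time failover into a secondary OS after an infection.
# Assumes GRUB with GRUB_DEFAULT=saved and a menu entry named "Linux (failover)".
FAILOVER_ENTRY="Linux (failover)"
DRY_RUN=1   # set to 0 on a real machine (needs root)

boot_secondary() {
    if [ "$DRY_RUN" -eq 1 ]; then
        # Show the commands instead of running them.
        echo "would run: grub-reboot \"$FAILOVER_ENTRY\" && reboot"
    else
        # Schedule a single boot into the failover entry, then restart.
        grub-reboot "$FAILOVER_ENTRY" && reboot
    fi
}

boot_secondary
```

Because `grub-reboot` only affects the next boot, the machine returns to its normal default platform once the primary environment has been cleaned and rebooted.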

Another advantage presents itself in the update lifecycle. Sometimes updates break the infrastructure, or part of it, so having an alternative platform ensures a quick failover should that happen.

Additionally, this can protect against data corruption. If a workstation’s environment gets corrupted or damaged to the point of being unusable, all the user needs to do is reboot into the other environment; the primary platform can then be repaired after hours, avoiding any downtime on that system.

A final advantage applies when a critical vulnerability is discovered in one of the services your infrastructure currently runs. If a server vulnerability is disclosed to the underground community before a patch is available, you would have no choice but to either disable that service or, if it is critical, live with the risk and remain exposed. With a multi-platform failover, however – provided the issue is not cross-platform, which fortunately isn’t that common – all you need to do is switch to the alternative platform until the problem is resolved, eliminating the exposure.

The one thing this will obviously not protect you against is hardware failure, but for everything else it can also be cost-effective: you would not require additional hardware for a failover environment, since both platforms can be installed on the same machine in a dual-boot configuration (to a certain degree – it would obviously still be recommended to keep some spare infrastructure to switch over to, should some of the hardware fail).
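For the dual-boot setup itself, a GRUB-based machine can be configured so that the default platform is switchable from the command line rather than by editing menu entries by hand. This is a minimal configuration sketch, assuming GRUB 2; the values are illustrative, not a definitive setup.

```shell
# /etc/default/grub (fragment) – hypothetical dual-boot failover machine
GRUB_DEFAULT=saved        # boot whichever entry grub-set-default/grub-reboot last chose
GRUB_SAVEDEFAULT=false    # a manual menu choice should not silently become the default
GRUB_TIMEOUT=5            # leave a short window to pick the other platform by hand
```

After editing, regenerate the configuration (e.g. `update-grub` on Debian/Ubuntu); `grub-set-default` can then switch the default platform persistently, while `grub-reboot` switches it for a single boot only.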

All this is just an idea – I am not aware of this strategy actually being put into use anywhere. What do you readers think? Would you consider it feasible, or is it too difficult to implement logistics-wise? Maybe some of you already employ something like this, in which case it would be nice if you could share your experiences with such an approach!