<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Talk Tech To Me - GFI Blog &#187; Miro Stauder</title>
	<atom:link href="http://www.gfi.com/blog/author/miro-stauder/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.gfi.com/blog</link>
	<description>Brought to you by GFI Software</description>
	<lastBuildDate>Fri, 13 Sep 2013 16:51:58 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
		<item>
		<title>Wild Wild West (WWW)</title>
		<link>http://www.gfi.com/blog/wild-wild-west-www/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=wild-wild-west-www</link>
		<comments>http://www.gfi.com/blog/wild-wild-west-www/#comments</comments>
		<pubDate>Mon, 05 Jul 2010 13:15:36 +0000</pubDate>
		<dc:creator>Miro Stauder</dc:creator>
				<category><![CDATA[Tech Zone]]></category>
		<category><![CDATA[encryption]]></category>
		<category><![CDATA[Security]]></category>

		<guid isPermaLink="false">http://www.gfi.com/blog/?p=2575</guid>
		<description><![CDATA[In the past decade the internet has surpassed all expectations and changed the lives of us all. The World Wide Web holds little or no safety for the end user. Very much like the Wild West in the 1800s, the &#8230;]]></description>
				<content:encoded><![CDATA[<p><a class="lightbox" title="Gunfighters 6607" href="http://www.gfi.com/blog/wp-content/uploads/2010/06/Wild-Wild-West.jpg"><img class="alignright size-medium wp-image-2576" style="border: 0pt none; margin: 10px;" title="Gunfighters 6607" src="http://www.gfi.com/blog/wp-content/uploads/2010/06/Wild-Wild-West-300x214.jpg" alt="" width="300" height="214" /></a>In the past decade the internet has surpassed all expectations and changed all of our lives. Yet the World Wide Web offers little or no safety for the end user. Very much like the Wild West in the 1800s, the opportunities and possibilities are endless; however, so are the dangers. Everyone has to watch their back, because unscrupulous gangs of identity thieves and scammers are just waiting for you to walk into a trap. Online self-defense is a necessity.</p>
<p><span id="more-2575"></span></p>
<p>There is an arms race going on between the dark and the white forces; a Sisyphean labor of building defenses that are defeated in a seemingly endless cycle. How can we ever break out of this cycle to finally feel, and be, safe?</p>
<p>Trust, together with encryption, is the key to this goal. As long as most internet traffic is unencrypted and of untrusted origin, it remains vulnerable to attacks. Obviously encryption by itself is not a silver bullet; it has to be done right, together with trust management and without exceptions.</p>
<p>This can&#8217;t be done overnight. Wherever possible, encryption should be used with proper key management. This would close many holes in the system, no longer exposing end user data to attackers. The end user needs to be educated, and where necessary forced, to use more secure encrypted storage and protocols, whether it&#8217;s HTTPS, SFTP, DNSSEC or IPsec. Email encryption and digital signing have also been available for decades, but are rarely used by the general public.</p>
<p>It&#8217;s up to us, the IT pros, to set the standards, to configure secure defaults on our systems and in our products. We have to insist on using the most secure options, no compromises.</p>
<p>Many of us use VPNs, which are encrypted by default, but many other services are not! We need to fix this. The best start would be:</p>
<ul>
<li>use encrypted storage, internal and external</li>
<li>use IPsec on your intranet</li>
<li>force HTTPS/SFTP on your website/webmail</li>
<li>force SMTPS/IMAPS/POPS on your email server</li>
<li>introduce email signing/encrypting</li>
<li>enforce proper key management</li>
</ul>
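<p>One way to act on the &#8220;force HTTPS&#8221; item above is at the web server itself. A minimal sketch, assuming Apache 2.x with the site already served over SSL on port 443; the hostname is a placeholder:</p>

```
# Hypothetical Apache vhost: answer plain HTTP only to redirect it to HTTPS.
<VirtualHost *:80>
    ServerName www.example.com
    Redirect permanent / https://www.example.com/
</VirtualHost>
```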
<p>More advanced hardening can be achieved by employing DNSSEC and NTP over SSL. It is also a good idea to pass proprietary, custom or third-party protocols through SSL/TLS/IPsec tunnels.</p>
<p>When the majority of IT pros start following these basic rules, the situation will improve. It’s going to take time, but I am optimistic that we will get there.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.gfi.com/blog/wild-wild-west-www/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>compiled from source = bad security practice</title>
		<link>http://www.gfi.com/blog/compiled-source-bad-security-practice/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=compiled-source-bad-security-practice</link>
		<comments>http://www.gfi.com/blog/compiled-source-bad-security-practice/#comments</comments>
		<pubDate>Fri, 06 Nov 2009 10:44:42 +0000</pubDate>
		<dc:creator>Miro Stauder</dc:creator>
				<category><![CDATA[Tech Zone]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[package management]]></category>

		<guid isPermaLink="false">http://www.gfi.com/blog/?p=1567</guid>
		<description><![CDATA[Today I saw a ‘how-to’ of what is supposed to be the &#8216;perfect server&#8216; setup. Well, the &#8216;perfect&#8217; was not meant literally, but the setup is in fact very nice &#8211; from a functional point of view. Open source is &#8230;]]></description>
				<content:encoded><![CDATA[<p><a class="lightbox" title="server setup" href="http://www.gfi.com/blog/wp-content/uploads/2009/11/server-setup.jpg"><img class="alignright size-medium wp-image-1568" style="margin: 10px;" title="server setup" src="http://www.gfi.com/blog/wp-content/uploads/2009/11/server-setup-300x300.jpg" alt="" width="240" height="240" /></a>Today I saw a ‘how-to’ of what is supposed to be <a href="http://www.howtoforge.com/perfect-server-centos-5.4-x86_64-ispconfig-3" target="_blank">the &#8216;</a><strong><a href="http://www.howtoforge.com/perfect-server-centos-5.4-x86_64-ispconfig-3" target="_blank">perfect server</a></strong><a href="http://www.howtoforge.com/perfect-server-centos-5.4-x86_64-ispconfig-3" target="_blank">&#8216; setup</a>. Well, the &#8216;perfect&#8217; was not meant literally, but the setup is in fact very nice &#8211; from a functional point of view.</p>
<p>Open source is great, you can learn a lot from looking at the source code of an application, you can even fix a bug here and there, or code in a feature you always wanted. And all for free&#8230;</p>
<p>What bothered me with this setup was the excessive amount of <strong>custom compiled</strong> subsystems to make them all perform in the desired way. To get the system working is a nice achievement, but to keep it running in production would be a nightmare. This is a <strong>bad security practice</strong> on a binary package based distro, let me explain why.</p>
<p><span id="more-1567"></span></p>
<p>Applications compiled from source do not integrate with the package manager, and even when they do (<strong>rpmbuild</strong>), it&#8217;s just a dirty trick: compile the sources, build a package, install it. Usually the package is just included in the inventory; versioning is broken, dependencies are broken, updates are broken&#8230;</p>
<p>The administrator would have to track changes to the custom compiled subsystems, pick out the worthwhile updates, and watch for <strong>security fixes</strong>, patch, compile, reconfigure and test the system while keeping good uptime. That&#8217;s not good and you don&#8217;t want to do that, unless you are some kind of masochist!</p>
<p>Instead, let&#8217;s use the resources of the respective distro&#8217;s packaging team. That&#8217;s what we have <strong>package management</strong> for. Use it! Each of the top distros has a dedicated team keeping the packages up to date.</p>
<p>If your distro does not natively provide the package you desire, look for optional or 3rd party repositories. Usually your requirements are not that unique, and the application is already prepackaged in one of the optional repositories. There is a good chance that the repositories are maintained well enough, and you&#8217;ll have updates available when needed.</p>
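<p>On an RPM-based distro, the package manager itself can tell you whether a subsystem is maintainable. A hypothetical session (package and file names are examples only):</p>

```
# Will this binary receive updates, i.e. does the package manager own it?
# rpm -qf /usr/sbin/httpd

# Is the application already prepackaged in an optional repository (e.g. EPEL)?
# yum --enablerepo=epel list available 'nginx*'
```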
<p>Next time when you decide to install something, think &#8211; is it also maintainable?</p>
]]></content:encoded>
			<wfw:commentRss>http://www.gfi.com/blog/compiled-source-bad-security-practice/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Into uncharted territory</title>
		<link>http://www.gfi.com/blog/uncharted-territory/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=uncharted-territory</link>
		<comments>http://www.gfi.com/blog/uncharted-territory/#comments</comments>
		<pubDate>Tue, 20 Oct 2009 00:45:48 +0000</pubDate>
		<dc:creator>Miro Stauder</dc:creator>
				<category><![CDATA[Tech Zone]]></category>
		<category><![CDATA[Funtoo]]></category>
		<category><![CDATA[Grub2]]></category>

		<guid isPermaLink="false">http://www.gfi.com/blog/?p=1234</guid>
		<description><![CDATA[I decided to go a bit further than usual, and push the limits of the unknown. I decided to install Funtoo, a Gentoo based distro or rather fork, by the Gentoo founder Daniel Robbins. Daniel is no longer active in &#8230;]]></description>
				<content:encoded><![CDATA[<p><a class="lightbox" title="Into Unchartered Territory" href="http://www.gfi.com/blog/wp-content/uploads/2009/09/Into-Unchartered-Territory.jpg"><img class="alignright size-medium wp-image-1235" style="margin: 10px; border: 0px;" title="Into Unchartered Territory" src="http://www.gfi.com/blog/wp-content/uploads/2009/09/Into-Unchartered-Territory-300x223.jpg" alt="" width="300" height="223" /></a>I decided to go a bit further than usual, and push the limits of the unknown.</p>
<p>I decided to install <strong><a href="http://www.funtoo.org/" target="_blank">Funtoo</a></strong>, a Gentoo-based distro, or rather a fork, by Gentoo founder Daniel Robbins. Daniel is no longer active in Gentoo, and Funtoo is his new pet project, introducing some radical new ideas into the Gentoo landscape. Not stopping there, I went for <strong>64bit</strong>, <strong>core2 optimized</strong> compiler flags, <strong>Ext4</strong>, the latest <strong>kernel</strong> (<strong>2.6.30-gentoo-r5</strong>), <strong>grub2</strong> and <strong>git based portage</strong>. Oh boy, am I asking for trouble or what!?!</p>
<p><span id="more-1234"></span>To start off, I needed a 64-bit boot CD. Just about any Linux distro would do, but it must be 64-bit &#8211; the bootstrap environment must be compatible with the installed environment, otherwise the chroot from one to the other won&#8217;t work. The choice fell on a <a href="http://gentoo.ynet.sk/pub/releases/amd64/current-iso/" target="_blank">Gentoo-MinimalInstallCD-64bit</a>, a fresh weekly build, circa 120 MB in size.</p>
<p>The <a href="http://www.funtoo.org/en/articles/funtoo/quick-install-howto/" target="_blank">install</a> follows mostly the usual <a href="http://www.gentoo.org/doc/en/handbook/handbook-x86.xml?part=1&amp;chap=6" target="_blank">Gentoo install routine</a> &#8211; fdisk, mkfs, mount, mount proc, mount dev, chroot, env-update, source profile. To jumpstart the network a &#8216;<strong>dhcpcd eth0</strong>&#8216; will do &#8211; for now.</p>
<p>The MinimalInstallCD supports ext4 out of the box, so formatting was a non-issue. I downloaded the latest preconfigured and optimized stage3 &amp; portage tarballs for the <a href="http://dev.funtoo.org/linux/funtoo/core2/" target="_blank">core2 architecture</a> from funtoo.org and extracted them into the new ext4 partition. Some customization is needed in <strong>fstab</strong> to match the partition layout, and in <strong>make.conf</strong> to make use of more compilation threads on my multicore CPU. The compiler flags and <strong>make.profile</strong> were already preset by my choice of the corresponding stage3.</p>
<p>Here comes the tricky part: grub2 &amp; the kernel. I won&#8217;t drag you through the whole kernel build, just make sure you switch on ext4 support in the new kernel. My mistake was assuming that ext4 was enabled by default; well, we all learn from mistakes&#8230; Btw, I used <strong><a href="http://www.gentoo.org/doc/en/genkernel.xml" target="_blank">genkernel</a></strong> to build the kernel.</p>
<p><a href="http://en.gentoo-wiki.com/wiki/Grub2" target="_blank">Grub2</a> is a different cup of coffee.</p>
<p>Grub2 is unstable, and won&#8217;t be stable any time soon. To further complicate things, the only version with usable ext4 support is from SVN, and my firewall does not allow svn: connections. I tested the 1.96 ebuild; it failed miserably. Time for hacking the 9999, aka SVN, ebuild.</p>
<p>Grub2 needs a patch to recognize Gentoo-style kernels, which is included with the 9999 ebuild, unlike 1.96; but the path for accessing SVN via http: is different from the native svn: protocol. A bit of searching, a quick fix, and after rebuilding the digest the SVN ebuild was working via http. There is also a weird dependency on ruby, which I find to be somewhat overkill.</p>
<p>Now the ebuild worked, <strong>grub2-install</strong> worked, but still no luck booting. A few sources recommend being extremely conservative with the build flags, so I chose to set &#8216;<strong>sys-boot/grub static custom-cflags multislot</strong>&#8216; in package.use.</p>
<p>No harm done, but still no boot. Finally I figured out that <strong>grub-mkconfig</strong> assumed I used a /boot partition (I didn&#8217;t), and generated a wrong path to the kernel in <strong>grub.conf</strong>. Once fixed, the kernel started booting and I was hit by a problem once more: the boot stalled at mounting the / partition. As mentioned before, don&#8217;t forget to build ext4 support into your kernel!</p>
<p>Rebuilt kernel with Ext4, booting now&#8230; login: Damn! Forgot to set the root password. Where is that boot CD again?</p>
<p>While the system now booted and kind of worked, the / partition stayed read-only. I had to remount it manually with &#8216;<strong>mount -o remount,rw /</strong>&#8216; to be able to fix anything. After some pondering and unsuccessful attempts to fix the issue, I remembered that I had encountered this problem before while installing Gentoo on my SGI O2. It turned out to be the same problem: the kernel parameter <strong>root=/dev/hda1</strong> won&#8217;t cut it, <strong>real_root=/dev/hda1</strong> to the rescue! It has something to do with the temporary root in RAM during early boot and unmatched filesystem types, who knows&#8230;</p>
<p>Next came the screen resolution, where the familiar <strong>vga=791</strong> kernel parameter did nothing. This time some Googling led me to a solution from some Arch Linux users: including &#8216;<strong>insmod vbe</strong>&#8216; in grub.conf did the trick, and <strong>vga=791</strong> works now!</p>
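<p>Putting the boot fixes together, the relevant part of the generated config would look roughly like this (the kernel and initramfs file names are hypothetical genkernel-style paths; adjust to your own build):</p>

```
# Load the VBE module so vga=791 takes effect
insmod vbe

menuentry "Funtoo Linux" {
    # No separate /boot partition, so the paths include /boot on the root fs
    linux  /boot/kernel-genkernel-x86_64-2.6.30-gentoo-r5 real_root=/dev/hda1 vga=791
    initrd /boot/initramfs-genkernel-x86_64-2.6.30-gentoo-r5
}
```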
<p>Well, I hope that this was as much Fun for you as it was for me; more on Funtoo next time.</p>
<pre><em>All product and company names herein may be trademarks of their respective owners.</em></pre>
]]></content:encoded>
			<wfw:commentRss>http://www.gfi.com/blog/uncharted-territory/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The SVN recovery story</title>
		<link>http://www.gfi.com/blog/svn-recovery-story/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=svn-recovery-story</link>
		<comments>http://www.gfi.com/blog/svn-recovery-story/#comments</comments>
		<pubDate>Fri, 09 Oct 2009 01:00:29 +0000</pubDate>
		<dc:creator>Miro Stauder</dc:creator>
				<category><![CDATA[Tech Zone]]></category>
		<category><![CDATA[SVN recovery]]></category>

		<guid isPermaLink="false">http://www.gfi.com/blog/?p=1237</guid>
		<description><![CDATA[So it happened that the disk hosting our code repository developed some bad blocks. Must be contagious or something, this is already the third system within a month&#8230; I noticed this, when I received some strange messages from cron. Basically &#8230;]]></description>
				<content:encoded><![CDATA[<p><a class="lightbox" title="SVN recovery" href="http://www.gfi.com/blog/wp-content/uploads/2009/09/SVN-recovery.jpg"><img class="alignright size-medium wp-image-1238" style="margin: 10px; border: 0px;" title="SVN recovery" src="http://www.gfi.com/blog/wp-content/uploads/2009/09/SVN-recovery-300x200.jpg" alt="" width="300" height="200" /></a>So it happened that the disk hosting our code repository developed some bad blocks. Must be contagious or something, this is already the third system within a month&#8230;</p>
<p>I noticed this when I received some strange messages from <strong>cron</strong>. Basically, my backup script started failing when <strong>svnadmin</strong> dump could not read a file. On closer examination, one of the DIFF files in the SVN repository was damaged. This was bad: as the content is stored as a series of diffs, a particular branch of the repository became unavailable.</p>
<p><span id="more-1237"></span>The <strong><a href="http://subversion.tigris.org/" target="_blank">Subversion</a></strong> aka <strong>SVN</strong> repository is located on <strong>/var</strong>, a separate partition. Since /var receives a lot of writes, I decided to migrate the whole of /var to a different partition to avoid further bad blocks.</p>
<p>Armed with <strong><a href="http://www.samba.org/rsync/" target="_blank">rsync</a></strong> and <strong><a href="http://www.gnu.org/software/ddrescue/ddrescue.html" target="_blank">ddrescue</a></strong> I started the recovery process. You know the drill; format the new partition, rsync the content and ddrescue the damaged file. But this time ddrescue let me down. Now it was time for backups to step in.</p>
<p>Satisfied with a quick solution &#8211; restore from backup, I stopped all services which were using /var, unmounted it, mounted the new partition as /var and restarted all previously stopped services.</p>
<p>To my surprise the SVN still did not want to cooperate. I was getting a bad feeling about this, which was confirmed after examining all eleven weekly backups. In all of them the file was corrupted.</p>
<p>How did I not notice this earlier? Well, it was time to think about a new strategy.</p>
<p>I tried to run <a href="http://svnbook.red-bean.com/en/1.0/ch05s03.html#svn-ch-5-sect-3.1.2" target="_blank"><strong>svnadmin</strong> recover mode</a>, but that didn&#8217;t yield anything.</p>
<p>I don&#8217;t know the internal workings of SVN, but what I figured out is:</p>
<ul>
<li>content is stored as diffs</li>
<li>each commit is one big file containing all changes</li>
<li>there is a file containing the number of the current revision</li>
</ul>
<p>Luckily I knew exactly what files were committed in that particular diff, even better for me, it was a commit of only a few new big files! My best chance would be to recreate the corrupted commit.</p>
<p>When I manipulated the revision counter, the SVN server was tricked into thinking that the &#8216;current&#8217; version was whatever I set it to be.</p>
<p>So I rolled back the revision counter to the revision before the corruption, checked out the branch with <strong><a href="http://tortoisesvn.tigris.org/" target="_blank">TortoiseSVN</a></strong> into a new location, included the &#8216;new&#8217; files and committed them to the repository. I reset the revision counter to the original and voila, everything started working!</p>
<p>I guess this time I had more luck than brains. What would you do?</p>
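<p>One lesson worth drawing: verify the repository as part of the backup job itself, so corruption is caught the week it happens rather than after all the backups have rotated. A sketch, assuming the repository lives at a hypothetical /var/svn/repo:</p>

```
# 'svnadmin verify' walks every revision and exits non-zero on corruption,
# so the dump only runs (and the backup only rotates) when the repo is sound
# svnadmin verify /var/svn/repo && svnadmin dump /var/svn/repo > /backup/repo.dump
```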
<pre><em>All product and company names herein may be trademarks of their respective owners.</em></pre>
]]></content:encoded>
			<wfw:commentRss>http://www.gfi.com/blog/svn-recovery-story/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Disk failing, what now?</title>
		<link>http://www.gfi.com/blog/disk-failing/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=disk-failing</link>
		<comments>http://www.gfi.com/blog/disk-failing/#comments</comments>
		<pubDate>Wed, 07 Oct 2009 13:03:47 +0000</pubDate>
		<dc:creator>Miro Stauder</dc:creator>
				<category><![CDATA[Tech Zone]]></category>
		<category><![CDATA[ddrescue]]></category>
		<category><![CDATA[disk fail]]></category>
		<category><![CDATA[failing blocks]]></category>

		<guid isPermaLink="false">http://www.gfi.com/blog/?p=1231</guid>
		<description><![CDATA[A disk can, and will, develop bad blocks during its lifetime. Usually the disk firmware is good enough at recognizing failing blocks and remapping them before they become unrecoverable, but nothing is perfect and problems happen. In that case, where &#8230;]]></description>
				<content:encoded><![CDATA[<p><a class="lightbox" title="Disk failing, what next" href="http://www.gfi.com/blog/wp-content/uploads/2009/09/Disk-failing-what-next.jpg"><img class="alignright size-medium wp-image-1232" style="margin: 10px; border: 0px;" title="Disk failing, what next" src="http://www.gfi.com/blog/wp-content/uploads/2009/09/Disk-failing-what-next-300x201.jpg" alt="" width="300" height="201" /></a>A disk can, and will, develop <strong>bad blocks</strong> during its lifetime. Usually the disk firmware is good enough at recognizing failing blocks and remapping them before they become unrecoverable, but nothing is perfect and problems happen.</p>
<p>If you end up with unrecoverable I/O errors and your OS refuses to cooperate, don&#8217;t give up. Here comes ddrescue, well, to the rescue!</p>
<p><span id="more-1231"></span><strong><a href="http://www.gnu.org/software/ddrescue/ddrescue.html" target="_blank">ddrescue</a></strong> is a GNU tool, similar to Unix dd but much better at handling I/O errors. It is multi-platform, but native to Linux. On Windows it&#8217;s available via Cygwin.</p>
<p><a href="http://www.garloff.de/kurt/linux/ddrescue/" target="_blank">GNU ddrescue</a> is often confused with dd_rescue, an unrelated and currently unmaintained project with a similar aim.</p>
<p>ddrescue can be used on any block device, including raw disk devices, partitions, or single files. It automatically uses smart access methods like direct I/O, different block sizes, reverse reads and repeated retries to maximize the chances of data recovery.</p>
<p>If you have just a few unreadable files (as happened to me), run ddrescue only on the damaged files.</p>
<p># ddrescue -d -R -r 100 /damageddisk/somedir/damagedfile /rescuedir/recoveredfile</p>
<ul>
<li><strong>-d</strong> instructs ddrescue to use direct I/O</li>
<li><strong>-R</strong> retrims the error area on each retry</li>
<li><strong>-r 100</strong> sets the retry limit to 100</li>
</ul>
<p>An optional third argument names a log/map file which, if given, lets an interrupted run resume where it left off.</p>
<p>In my experience, if it does not succeed in 10 retries, it won&#8217;t ever&#8230; but you are free to try and hope for miracles <img src='http://www.gfi.com/blog/wp-includes/images/smilies/icon_smile.gif' alt=':)' class='wp-smiley' /> </p>
<p>&lt;/tips&amp;tricks&gt;</p>
]]></content:encoded>
			<wfw:commentRss>http://www.gfi.com/blog/disk-failing/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Playing with processor affinity and Python</title>
		<link>http://www.gfi.com/blog/playing-processor-affinity-python/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=playing-processor-affinity-python</link>
		<comments>http://www.gfi.com/blog/playing-processor-affinity-python/#comments</comments>
		<pubDate>Mon, 20 Jul 2009 10:16:32 +0000</pubDate>
		<dc:creator>Miro Stauder</dc:creator>
				<category><![CDATA[Tech Zone]]></category>
		<category><![CDATA[processor affinity]]></category>
		<category><![CDATA[python]]></category>
		<category><![CDATA[scripting language]]></category>

		<guid isPermaLink="false">http://www.gfi.com/blog/?p=410</guid>
		<description><![CDATA[Have you ever wondered as to how processor affinity influences a single threaded process on a multiprocessor machine? Well, I have. Today nearly all new machines come with 2 or 4 cores. If you’re lucky, you have an 8 core &#8230;]]></description>
				<content:encoded><![CDATA[<p><a class="lightbox" title="Playing with processor affinity and Python" href="http://www.gfi.com/blog/wp-content/uploads/2009/07/Playing-with-processor-affinity-and-Python.jpg"><img class="alignright size-medium wp-image-413" style="margin: 10px;" title="Playing with processor affinity and Python" src="http://www.gfi.com/blog/wp-content/uploads/2009/07/Playing-with-processor-affinity-and-Python-300x225.jpg" alt="" width="240" height="180" /></a>Have you ever wondered how processor affinity influences a single-threaded process on a multiprocessor machine? Well, I have. Today nearly all new machines come with 2 or 4 cores. If you&#8217;re lucky, you have an 8 core machine, and if you are very lucky, you get 16 or more to play with. And no, virtual cores don&#8217;t count <img src='http://www.gfi.com/blog/wp-includes/images/smilies/icon_smile.gif' alt=':)' class='wp-smiley' /> </p>
<p>So what does processor affinity do to a process on a multiprocessor architecture? This depends on the system architecture. The two common system architectures today are SMP and NUMA. SMP is the classic multiprocessing used by Intel up to the Core2 generation, while NUMA has long been used by AMD Opterons and their descendants, and lately also by the Intel Core i7 family. On SMP it does not matter which data is processed on which core; on NUMA, however, it does, since memory can be local or remote relative to the core used.</p>
<p><span id="more-410"></span>While playing with my workstation (Server 2008 x64 on a quad core Core2 Q6600 @ 3GHz with 8GB of RAM), I noticed a funny thing. The process I was running, a Python script, was jumping from core to core without any recognizable pattern. No other CPU intensive processes were running, only the usual system stuff. The script does some data analysis on quite large datasets – tens of gigabytes, which have to be processed as a stream, because of Python’s incredible memory hunger.</p>
<p>Before you start complaining that the script should have been parallelized in the first place, yes, it is. Parts of it at least are. But certain parts simply must be run sequentially.</p>
<p>So my question was: wouldn&#8217;t it be more efficient if the single-threaded process ran on one dedicated processor? Well, unless the process scheduler is über smart, I think it would.</p>
<p>See, the jumps from core to core probably cause some serious cache thrashing, so if the process runs on one dedicated core, the cache can be utilized much better. Unless (and here is where my doubts come in) Python being an interpreted scripting language changes the picture. The interpreter is quite complex, with its own infrastructure, taking a lot of CPU power for itself. What if the scheduler is smart enough to split the different parts of the interpreter and the running byte code across multiple cores, dedicating each piece of code to a core to avoid cache thrashing? That would be nice! Not as nice as an auto-parallelizing Python, but still a smart feature to have.</p>
<p>I looked at some documentation about the Server 2008 process scheduler, but could not find anything that would confirm or disprove any of my theories. Apparently there are optimizations related to the NUMA architecture, to minimize the process core/CPU to memory distance, but nothing about maximizing cache utilization by jumping cores.</p>
<p>Therefore I ran my own tests. I tested the 32-bit and 64-bit versions of Python 2.6.2, as well as Psyco &#8211; the Python accelerator. Unfortunately, Psyco supports only 32-bit Python <img src='http://www.gfi.com/blog/wp-includes/images/smilies/icon_sad.gif' alt=':(' class='wp-smiley' /> . All tests were run on my workstation as mentioned earlier. Times are in minutes and seconds.</p>
<p><span style="text-decoration: underline;">Script1:</span></p>
<pre>Processor Affinity            All Cores    Dedicated Core
Python 2.6.2 32bit                10:03             10:03
Python 2.6.2 64bit                10:54             10:42
Python 2.6.2 32bit + Psyco        10:14              9:38</pre>
<p><span style="text-decoration: underline;">Script2:</span></p>
<pre>Processor Affinity            All Cores    Dedicated Core
Python 2.6.2 32bit                41:58             39:46
Python 2.6.2 64bit                40:09             40:00
Python 2.6.2 32bit + Psyco        30:12             29:38</pre>
<p>I ran each test only once because of time constraints, but I made sure that all tests ran under equal conditions. As you can see, the affinity gain is minimal but measurable, and specific to the executed code.</p>
<p>So generally I can say: yes, it makes sense to pin a process to a dedicated core, but the gain is dependent on the code being run, and probably negligible. Ultimately, there are better ways to speed up your code.</p>
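<p>For the record, the pinning itself needs no code changes. On Windows, process affinity can be set from Task Manager or with <strong>start /affinity</strong>; on Linux, the util-linux <strong>taskset</strong> tool does the same job. A quick sketch (the script name is a placeholder):</p>

```shell
# Run a command pinned to CPU core 0 only (Linux, util-linux taskset)
taskset -c 0 echo "pinned to core 0"
# Windows rough equivalent (hex affinity mask; 1 = core 0):
#   start /affinity 1 python script.py
```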
]]></content:encoded>
			<wfw:commentRss>http://www.gfi.com/blog/playing-processor-affinity-python/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Conficker, at the airbase and hospital near you…</title>
		<link>http://www.gfi.com/blog/conficker-airbase-hospital-near-you/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=conficker-airbase-hospital-near-you</link>
		<comments>http://www.gfi.com/blog/conficker-airbase-hospital-near-you/#comments</comments>
		<pubDate>Mon, 15 Jun 2009 08:45:32 +0000</pubDate>
		<dc:creator>Miro Stauder</dc:creator>
				<category><![CDATA[Tech Zone]]></category>
		<category><![CDATA[conficker]]></category>
		<category><![CDATA[malware]]></category>
		<category><![CDATA[worm]]></category>

		<guid isPermaLink="false">http://www.gfi.com/blog/?p=4</guid>
		<description><![CDATA[Old news: Fighter jets grounded, base infected with Conficker! Recent news: Hospital equipment infected by Conficker worm! What? How can a supposedly secure environment like a military installation or a hospital catch a worm? Panic everywhere. Why is everyone so &#8230;]]></description>
				<content:encoded><![CDATA[<p>Old news: <a href="http://www.telegraph.co.uk/news/worldnews/europe/france/4547649/French-fighter-planes-grounded-by-computer-virus.html" target="_blank">Fighter jets grounded, base infected with Conficker!</a><br />
Recent news: <a href="http://news.cnet.com/8301-1009_3-10226448-83.html" target="_blank">Hospital equipment infected by Conficker worm!</a></p>
<p><strong>What?</strong> How can a supposedly secure environment like a military installation or a hospital catch a worm? <a href="http://www.wired.com/threatlevel/2009/03/will-conficker/" target="_blank">Panic everywhere</a>.</p>
<p><a class="lightbox" title="worm" href="http://www.gfi.com/blog/wp-content/uploads/2009/05/worm.png"><img class="alignright size-medium wp-image-98" style="margin: 10px;" title="Conficker worm" src="http://www.gfi.com/blog/wp-content/uploads/2009/05/worm-300x300.png" alt="" width="240" height="240" /></a>Why is everyone so scared of Conficker? The worm basically does nothing! It only tries to dig in and wait. According to a built-in timer, something should have happened on April 1st 2009. April 1st came and went, and nothing happened. Everyone was expecting a doomsday scenario where the worm would do something horrific, but apart from an update to a newer version, and more waiting for commands, nothing did. So far five versions of the worm have been observed, labelled A, B, C, D and E. They seem to be modifications of the original, as the author tries to get things right and build a bigger bot army.</p>
<p><span id="more-4"></span>Only the latest version, E, has been observed actually doing something: sending spam and installing scareware. Technically, it&#8217;s not Conficker itself that does this; it&#8217;s the payload that it downloads and executes on demand.</p>
<h2>So why is it so dangerous?</h2>
<p>The <strong>payload</strong> is the keyword. The infected machine is at the attacker&#8217;s disposal to do anything he wants. An unknown payload executed on demand could be used for anything from DoS attacks to extortion, from spamming to spying. The worm also updates itself to enhance its own capabilities, and plugs the entry points it used so that competing worms cannot infect the same machine. Very flexible, isn&#8217;t it? All of this is secured by hash signatures and encryption.</p>
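<p>The signature check is worth dwelling on, because it is what stops defenders from simply feeding the botnet a fake "self-destruct" update. As a rough illustration only (this is not Conficker&#8217;s actual code; the payload bytes, digest and function names below are made up), accepting a downloaded blob only when it matches a known cryptographic digest looks like this:</p>

```python
import hashlib

# Hypothetical digest of the "genuine" payload (illustrative only).
EXPECTED_DIGEST = hashlib.sha256(b"trusted payload bytes").hexdigest()

def verify_payload(blob: bytes, expected_digest: str) -> bool:
    """Accept a downloaded blob only if its SHA-256 digest matches."""
    return hashlib.sha256(blob).hexdigest() == expected_digest

# The genuine blob passes; anything substituted by a third party fails.
print(verify_payload(b"trusted payload bytes", EXPECTED_DIGEST))       # True
print(verify_payload(b"attacker-substituted bytes", EXPECTED_DIGEST))  # False
```

<p>Conficker additionally signs its payloads with a public-key scheme rather than a bare hash, but the effect is the same: only updates from whoever holds the author&#8217;s key will ever run.</p>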
<h2>How does the thing actually spread?</h2>
<p>It spreads in two ways:</p>
<ul>
<li>Network</li>
<li>Removable Storage</li>
</ul>
<p>The worm attacks a known vulnerability in the DCE-RPC service running on port 445, which is also used by various services needed for Windows file sharing. A patch for this hole, MS08-067, was released in October 2008; regardless, the worm still succeeded in spreading. The other way of spreading is old-fashioned copying: the worm places a copy of itself on removable storage and uses the autorun feature for infection. Also worth mentioning is a dictionary attack on administrative network shares, but this may not have been a very successful infection vector, because it seems to be missing from the latest versions of the worm.</p>
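<p>To give an idea of how the autorun vector works: a removable drive can carry an autorun.inf file telling Windows what to launch when the drive is inserted. A simplified, benign example of the mechanism (not Conficker&#8217;s heavily obfuscated version, which disguised its entry to look like the standard &#8220;Open folder to view files&#8221; option) would look like this:</p>

```ini
[autorun]
; Program launched automatically (or offered as the default AutoPlay action)
open=setup.exe
; Text shown in the AutoPlay dialog
action=Open folder to view files
; Borrow a stock folder icon so the entry looks harmless
icon=%SystemRoot%\system32\shell32.dll,4
```

<p>Disabling autorun on removable media, as Microsoft later did by default, closes this vector entirely.</p>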
<h2>How to spot the infection?</h2>
<p>Conficker uses a number of self-defense mechanisms, which are a giveaway. It disables several services: Automatic Updates, Security Center, Windows Defender and Error Reporting. It also creates a service of its own to stay resident, with a name constructed from two random words taken from the names of other services. Another type of self-defense it employs is the <a href="http://www.confickerworkinggroup.org/infection_test/cfeyechart.html" target="_blank">redirection of domain names</a> related to AV products and Windows Update.</p>
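<p>That DNS blocking is exactly what the Conficker Working Group&#8217;s &#8220;eye chart&#8221; test exploits: if security vendors&#8217; sites fail to load while everything else works, something is filtering your lookups. A minimal sketch of the same idea in Python (the host list and the heuristic are my illustrative assumptions, not the Working Group&#8217;s actual test):</p>

```python
import socket

# Example security-related hosts the worm was known to block,
# plus a neutral control host that should always resolve.
SECURITY_HOSTS = ["www.microsoft.com", "www.symantec.com", "www.mcafee.com"]
CONTROL_HOST = "example.com"

def resolves(host: str) -> bool:
    """Return True if the hostname can be resolved via DNS."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

def looks_infected() -> bool:
    """Crude heuristic: the control host resolves, security hosts do not."""
    return resolves(CONTROL_HOST) and not any(resolves(h) for h in SECURITY_HOSTS)
```

<p>This is only a hint, not proof of infection: a corporate proxy or filtered DNS can trigger the same symptom, which is why the follow-up step should always be a proper scan.</p>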
<p>An infection can also be spotted using a professional product. Most AV vendors offer a stand-alone tool to detect a Conficker infection. <a href="http://www.gfi.com/lannetscan">GFI LANguard 9</a> can detect infected machines remotely, as well as detect missing patches regardless of infection. Once an infection is detected, a removal tool should be employed and the systems should be patched to avoid reinfection.</p>
<p>Back to the question: how can this worm spread into supposedly secure institutions? The problem mostly lies in people &#8211; people who are naïve or who do not follow the security guidelines set by their organization.</p>
<ul>
<li><strong>Developers:</strong> Questionable environment choices; running a full Windows system on an embedded black box such as an MRI machine or a heart monitor might not be the best idea.</li>
<li><strong>Administrators:</strong> Choosing not to update systems as required, or in some cases being unable to do so because of the black-box nature of a system that only the vendor is allowed to update, leading to substantial delays that leave the system vulnerable.</li>
<li><strong>End users:</strong> Not following rules, and trying everything to work around restrictions.</li>
</ul>
<p>So, what can we expect to see of Conficker in the future? According to <a href="http://www.viruslist.com/en/weblog?weblogid=208187675" target="_blank">some research</a>, there are now only around 200,000 infected machines. However, this might be just the tip of the iceberg, because the figure covers only the latest version of the worm, version E.</p>
<p>Version E is set to <a href="http://www.securecomputing.net.au/News/142643,conficker-e-set-to-become-dormant-on-may-3.aspx" target="_blank">expire on 3rd May 2009</a>, but the previous versions are not. Are all versions destined to eventually upgrade to E and retire? Has the worm served its purpose? Is it going to be replaced by something else?</p>
<p>More questions than answers. The future will tell. The moral of the story so far – <strong>keep your systems updated</strong>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.gfi.com/blog/conficker-airbase-hospital-near-you/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
