<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Search Engines, Friend or Foe?</title>
	<atom:link href="http://www.gfi.com/blog/search-engines-friend-foe/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.gfi.com/blog/search-engines-friend-foe/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=search-engines-friend-foe</link>
	<description>Brought to you by GFI Software</description>
	<lastBuildDate>Fri, 13 Sep 2013 13:27:20 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
	<item>
		<title>By: Emmanuel Carabott</title>
		<link>http://www.gfi.com/blog/search-engines-friend-foe/comment-page-1/#comment-435</link>
		<dc:creator>Emmanuel Carabott</dc:creator>
		<pubDate>Thu, 26 Nov 2009 09:16:59 +0000</pubDate>
		<guid isPermaLink="false">http://www.gfi.com/blog/?p=1372#comment-435</guid>
		<description><![CDATA[Hi Leandro, thanks for posting. I think you misunderstood me, or maybe I was a bit unclear; I cannot rule that out either :) Anyhow, what I meant to say in the article was not that we should remove search engines or try to stop their evil reign. I know they are essential to everyday life and that they ultimately do more good than bad. The article was intended more as a warning and a question, really. The warning was that when designing systems and software, one should consider search engines in the security strategy.

If we lived in a world without search engines, then putting a printer on a direct Internet connection, say to save money by having all peripheral offices print to it instead of sending faxes, might be safe enough. Even if the printer had a nasty vulnerability that allowed anyone administrative access to it, without search engines a malicious person would need to scan numerous IPs, identify the likely printers, discard the false positives, and then filter the results again to find the vulnerable ones. That would take a lot of time, which lowers the risk quite a bit. With search engines, however, a simple search taking less than five seconds will give the attacker a comprehensive list of vulnerable printers, which makes the risk a lot higher.

My question, on the other hand, was whether people already consider this threat vector, or whether they did consider it but decided the risk was not worth the investment needed to tackle it.

Bottom line, I guess we're really in agreement here, Leandro :) Thanks again for sharing your ideas with us.]]></description>
		<content:encoded><![CDATA[<p>Hi Leandro, thanks for posting. I think you misunderstood me, or maybe I was a bit unclear; I cannot rule that out either <img src='http://www.gfi.com/blog/wp-includes/images/smilies/icon_smile.gif' alt=':)' class='wp-smiley' /> Anyhow, what I meant to say in the article was not that we should remove search engines or try to stop their evil reign. I know they are essential to everyday life and that they ultimately do more good than bad. The article was intended more as a warning and a question, really. The warning was that when designing systems and software, one should consider search engines in the security strategy.</p>
<p>If we lived in a world without search engines, then putting a printer on a direct Internet connection, say to save money by having all peripheral offices print to it instead of sending faxes, might be safe enough. Even if the printer had a nasty vulnerability that allowed anyone administrative access to it, without search engines a malicious person would need to scan numerous IPs, identify the likely printers, discard the false positives, and then filter the results again to find the vulnerable ones. That would take a lot of time, which lowers the risk quite a bit. With search engines, however, a simple search taking less than five seconds will give the attacker a comprehensive list of vulnerable printers, which makes the risk a lot higher.</p>
<p>My question, on the other hand, was whether people already consider this threat vector, or whether they did consider it but decided the risk was not worth the investment needed to tackle it.</p>
<p>Bottom line, I guess we’re really in agreement here, Leandro <img src='http://www.gfi.com/blog/wp-includes/images/smilies/icon_smile.gif' alt=':)' class='wp-smiley' /> Thanks again for sharing your ideas with us.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Leandro Amore</title>
		<link>http://www.gfi.com/blog/search-engines-friend-foe/comment-page-1/#comment-416</link>
		<dc:creator>Leandro Amore</dc:creator>
		<pubDate>Fri, 20 Nov 2009 21:15:49 +0000</pubDate>
		<guid isPermaLink="false">http://www.gfi.com/blog/?p=1372#comment-416</guid>
		<description><![CDATA[John, Emmanuel:
You are right to address search engines as dangerous tools, but the Internet is a dangerous place. We can't live without search engines, and from my point of view they really do more good than evil.
Imagine your everyday browsing without the power of search engines. Can we go back to the times when you had to register your site with the search engines manually, or when the only things these crawlers looked at were predefined tags?
This is an old discussion; there is even a book on exploiting this kind of “vulnerability”, Google Hacks, released in 2003 (http://oreilly.com/catalog/9780596004477).
I think the battle between the white hats and the black hats will never end, but it will help make the Internet a more secure place if you play by the rules and keep your systems safe. This is a big task, and perhaps for IT people like us it is not that difficult, but we should do our best to train ordinary users to keep their data safe.
Every year we will find new threats, such as misused social networks or search engines, but we have to keep ourselves trained and put some energy into teaching others how to use these powerful resources without getting into trouble.
Best regards,
Leandro]]></description>
		<content:encoded><![CDATA[<p>John, Emmanuel:<br />
You are right to address search engines as dangerous tools, but the Internet is a dangerous place. We can’t live without search engines, and from my point of view they really do more good than evil.<br />
Imagine your everyday browsing without the power of search engines. Can we go back to the times when you had to register your site with the search engines manually, or when the only things these crawlers looked at were predefined tags?<br />
This is an old discussion; there is even a book on exploiting this kind of “vulnerability”, Google Hacks, released in 2003 (<a href="http://oreilly.com/catalog/9780596004477" rel="nofollow">http://oreilly.com/catalog/9780596004477</a>).<br />
I think the battle between the white hats and the black hats will never end, but it will help make the Internet a more secure place if you play by the rules and keep your systems safe. This is a big task, and perhaps for IT people like us it is not that difficult, but we should do our best to train ordinary users to keep their data safe.<br />
Every year we will find new threats, such as misused social networks or search engines, but we have to keep ourselves trained and put some energy into teaching others how to use these powerful resources without getting into trouble.<br />
Best regards,<br />
Leandro</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Emmanuel Carabott</title>
		<link>http://www.gfi.com/blog/search-engines-friend-foe/comment-page-1/#comment-338</link>
		<dc:creator>Emmanuel Carabott</dc:creator>
		<pubDate>Wed, 04 Nov 2009 09:34:26 +0000</pubDate>
		<guid isPermaLink="false">http://www.gfi.com/blog/?p=1372#comment-338</guid>
		<description><![CDATA[Hi John, thanks a lot for your contribution. You're most certainly right about what you mention: searching for code that could be exploitable if not implemented correctly is definitely an important point that I missed!]]></description>
		<content:encoded><![CDATA[<p>Hi John, thanks a lot for your contribution. You’re most certainly right about what you mention: searching for code that could be exploitable if not implemented correctly is definitely an important point that I missed!</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: John Mello</title>
		<link>http://www.gfi.com/blog/search-engines-friend-foe/comment-page-1/#comment-337</link>
		<dc:creator>John Mello</dc:creator>
		<pubDate>Tue, 03 Nov 2009 23:08:28 +0000</pubDate>
		<guid isPermaLink="false">http://www.gfi.com/blog/?p=1372#comment-337</guid>
		<description><![CDATA[This is a very enlightening item. Some of the concerns you mention were raised when Google introduced its code search engine (http://www.google.com/codesearch). One of the complaints about the engine, which searches public source code for snippets, was that it gives attackers an efficient tool for finding programming errors that can be exploited by malware. For example, an attacker might search for functions such as strcpy and gets, knowing that if those functions aren't used properly they can cause buffer overflows, which can be leveraged to execute malicious code.

By the way, I tried searching for intitle:index.of “Apache/1.3.34 Server at” in some of the other search engines and turned up some interesting results. Yahoo found 13,600,000 occurrences of the search term; Bing, 1,270,000; and Ask, 0.]]></description>
		<content:encoded><![CDATA[<p>This is a very enlightening item. Some of the concerns you mention were raised when Google introduced its code search engine (<a href="http://www.google.com/codesearch" rel="nofollow">http://www.google.com/codesearch</a>). One of the complaints about the engine, which searches public source code for snippets, was that it gives attackers an efficient tool for finding programming errors that can be exploited by malware. For example, an attacker might search for functions such as strcpy and gets, knowing that if those functions aren’t used properly they can cause buffer overflows, which can be leveraged to execute malicious code.</p>
<p>By the way, I tried searching for intitle:index.of “Apache/1.3.34 Server at” in some of the other search engines and turned up some interesting results. Yahoo found 13,600,000 occurrences of the search term; Bing, 1,270,000; and Ask, 0.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
