A few days ago, I came across an interesting article on Phishing detection tools. It analyzes the features offered by different tools: some rely on blacklists, while others go further and use heuristic methods to detect fraudulent websites.
According to the study, using heuristic methods to determine whether a webpage is actually a Phishing site makes it possible to uncover it the moment the page is first visited, without waiting for it to be added to the URL blacklists provided by security vendors. By contrast, adding a new entry to a blacklist requires confirmation from the blacklist providers or their partners, which lengthens the window of risk.
The report also points out the short life of Phishing campaigns: 66% are over within 24 hours, so anti-Phishing toolbars have to be updated very quickly to prevent attacks. Vendors are trying to counter this with heuristic techniques.
Why, then, do only two of the eight blacklist-based products use heuristic procedures? According to the study, one of the analyzed products detected 70% of fraudulent websites from the start of the campaign, which helped prevent the majority of attacks.
For the moment, vendors are reluctant to adopt heuristic techniques because of the high rate of false positives they produce (although these false positives are attributed to the eight competitors). On top of that, fraudsters are aware of these heuristic methods, which are based on web content, URLs, HTML signatures, and so on. They therefore already know how to counter them and render these measures useless.
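To make the idea of URL- and content-based heuristics concrete, here is a minimal toy sketch in Python. All the weights, keywords, and the threshold below are hypothetical choices for illustration only; they are not taken from the report or from any real product.

```python
import re
from urllib.parse import urlparse

# Hypothetical bait words often seen in Phishing URLs (illustrative only).
SUSPICIOUS_KEYWORDS = ("login", "verify", "secure", "account", "update")

def phishing_score(url: str) -> int:
    """Return a toy suspicion score for a URL; higher means more suspicious."""
    score = 0
    host = urlparse(url).hostname or ""

    # A raw IP address instead of a domain name is a classic red flag.
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        score += 3
    # An '@' in a URL hides the real destination after the userinfo part.
    if "@" in url:
        score += 3
    # Deeply nested subdomains often mimic a legitimate brand name.
    if host.count(".") >= 4:
        score += 2
    # Bait keywords in the hostname or path.
    if any(kw in url.lower() for kw in SUSPICIOUS_KEYWORDS):
        score += 1
    # Unusually long URLs tend to hide obfuscation.
    if len(url) > 75:
        score += 1
    return score

def looks_like_phishing(url: str, threshold: int = 4) -> bool:
    """Flag the URL as suspicious when its score reaches the toy threshold."""
    return phishing_score(url) >= threshold
```

Precisely because rules like these are simple and visible, an attacker who knows them can craft URLs that stay just under the threshold, which is the counterattack problem the article describes.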
However, given the data set presented in the report, I wonder whether there are other reasons for the limited use of rule correlation in Phishing detection. Perhaps the vendors who do use it know something new, since if fraudsters really knew the rules behind the heuristic methods, such quick Phishing detection would not be possible.
Here is a link to the article: