Moz continues to provide interesting tools and site measures. I only follow these things because I find them interesting (not as a profession). I am not an SEO person, and paying the $100 a month (or much more) they charge for their tools isn’t worth it for my curiosity. But they make some things available for free and publish interesting blog posts on what they find and about their tools.
This new Spam Score analysis by Moz seems very interesting: Spam Score: Moz’s New Metric to Measure Penalization Risk. The idea is sensible: they try to determine the spam riskiness of a site based on correlations they can draw between their web crawl data and Google search results. Moz can see where sites are not ranking well when many factors indicate they should rank, and then conclude that Google has penalized certain sites (and withheld credit from sites with links from those sites, or worse, penalized sites with links from those sites).
This seems like a really good idea. They found 17 flags that are correlated with spam hits to a site. As sites trip more and more of those flags, the likelihood of Google classifying them as spam rises. When a site has 0 spam flags, Moz calculates a 0.5% chance of the site showing up in Google search results (or, more likely, not showing up) in a way that indicates Google sees the site as spam. 4 spam flags equals a 7.5% chance of being a “spam site.” A site with 6 spam flags has a 16% chance of being spam, 7 flags means a 31% chance, 8 a 57% chance, 9 a 72% chance and 14 a 100% chance.
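Those published correlations amount to a simple lookup table. A minimal sketch, using only the data points Moz gave in their post (intermediate flag counts would need interpolation, which I won't guess at):

```python
# Chance (per Moz's post) that a site with a given number of tripped
# spam flags shows signs of being treated as spam in Google results.
# Only the counts Moz actually published are included.
SPAM_LIKELIHOOD = {
    0: 0.005,   # 0.5%
    4: 0.075,   # 7.5%
    6: 0.16,
    7: 0.31,
    8: 0.57,
    9: 0.72,
    14: 1.00,
}

def spam_likelihood(flags_tripped):
    """Return Moz's published likelihood for this flag count,
    or None where Moz didn't publish a figure."""
    return SPAM_LIKELIHOOD.get(flags_tripped)
```

Note how steeply the curve bends: going from 7 to 8 flags nearly doubles the published likelihood (31% to 57%).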
In their post Moz says that tripped spam flags are not meant to be an indication of something that needs to be fixed (after all, the flags are just correlations, not causes – “fixing them” may do nothing for search results). That may be true, but if sites are showing a 5-yellow for spaminess it is highly likely lots of people are going to want to reduce that scary-looking feedback about their site.
It may well be that changing things to avoid a flag (adding Twitter buttons, say) and making whatever tweaks get rid of several more flags is exactly what ends up happening.
My guess is that a spaminess rating would be more useful if it weren’t just x/17 but instead factored in how many of the 17 flags were tripped plus an understanding of how important each one was (I would imagine including which interactions of spam flags were more critical…).
I would be surprised if there isn’t a big difference between a certain 3 flags being tripped versus 3 other flags being tripped (plus, say, 4 other random flags). That is to say, even with Moz’s limited ability to know what Google is directly reacting to versus the correlations they can observe, I would imagine this could be improved into a 100-point (or whatever) system that gave much more valuable spam-site insight than treating each flag as equally important (and ignoring especially deadly interactions between flags – which flags, when tripped together, cause the likely spam hit to be seen in Google results).
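The kind of weighted rating I'm imagining might look something like this sketch. The flag names, weights, and interaction penalties here are entirely made up for illustration; Moz's crawl data would have to supply real values:

```python
# Hypothetical weighted spam score: each flag contributes its own
# weight, and certain flag combinations add an extra interaction
# penalty on top. All names and numbers are invented for illustration.
FLAG_WEIGHTS = {
    "no_contact_info": 4,
    "thin_content": 8,
    "high_external_link_ratio": 6,
    "no_social_buttons": 2,
}

# Pairs of flags that, tripped together, are worth more than their sum.
INTERACTION_PENALTIES = {
    frozenset(["thin_content", "high_external_link_ratio"]): 10,
}

def weighted_spam_score(tripped_flags):
    """Score out of 100: sum of per-flag weights plus penalties for
    especially risky flag combinations (unknown flags score 0)."""
    tripped = set(tripped_flags)
    score = sum(FLAG_WEIGHTS.get(f, 0) for f in tripped)
    for pair, penalty in INTERACTION_PENALTIES.items():
        if pair <= tripped:  # both flags of the pair are tripped
            score += penalty
    return min(score, 100)
```

Under a scheme like this, two “deadly” flags tripped together could outscore four harmless ones – exactly the distinction a flat x/17 count throws away.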