Web Spam
Yonatan Ariel
SDBI 2005
Based on the work of
Gyongyi, Zoltan; Berkhin, Pavel; Garcia-Molina, Hector; Pedersen, Jan Stanford University
The Hebrew University of Jerusalem
Contents
• What is web spam
• Combating web spam – TrustRank
• Combating web spam – Mass Estimation
• Conclusion
Web Spam
• Actions intended to mislead search engines into ranking some pages higher than they deserve.
• Search engines are the entryways to the web
Financial gains
Consequences
• Decreased quality of search results: the query “Kaiser pharmacy” returns techdictionary.com
• Increased cost of each processed query: search engine indexes are inflated with useless pages
The first step in combating spam is understanding it
Search Engines
• Search engines aim for high-quality results, i.e. pages that are:
  Relevant for a specific query (textual similarity)
  Important (popularity)
• Search engines combine relevance and importance in order to compute a ranking
Definition Revised
• Any deliberate human action that is meant to trigger an unjustifiably favorable relevance or importance for some web page, considering the page’s true value
Search Engine Optimizers
• Engage in spamming (according to our definition)
• Ethical methods:
  Finding relevant directories to which a site can be submitted
  Using a reasonably sized description meta tag
  Using a short and relevant page title to name each page
Spamming Techniques
• Boosting techniques: achieving high relevance / importance
• Hiding techniques: hiding the boosting techniques
We’ll cover them both
Techniques
• Boosting Techniques
Term Spamming
Link Spamming
• Hiding Techniques
TF
• TF (term frequency): a measure of the importance of the term in a specific page

  TF(t) = n_t / Σ_k n_k

  (the number of occurrences of the considered term, divided by the number of occurrences of all terms in the page)

IDF
• IDF (inverse document frequency): a measure of the general importance of the term in a collection of pages

  IDF(t) = log( |D| / |{d_j : t ∈ d_j}| )

  (the total number of documents in the corpus, divided by the total number of documents where t appears)

TF-IDF
• A high weight in tf-idf is reached by a high term frequency (in the given document) and a low document frequency of the term in the whole collection of documents.
• Spammers:
  Make a page relevant for a large number of queries, and
  Make a page very relevant for a specific query

  TFIDF(p, q) = Σ_{t ∈ p ∩ q} TF(t) · IDF(t)
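A minimal Python sketch of the three scores above (the toy corpus, query, and function names are illustrative, not from the slides):

```python
import math
from collections import Counter

def tf(term, doc):
    # occurrences of the term, divided by occurrences of all terms in the page
    return Counter(doc)[term] / len(doc)

def idf(term, corpus):
    # log of: total documents in the corpus, over documents containing the term
    df = sum(1 for doc in corpus if term in doc)
    return math.log(len(corpus) / df)

def tfidf_score(page, query, corpus):
    # sum TF * IDF over the terms shared by the page and the query
    return sum(tf(t, page) * idf(t, corpus)
               for t in set(query) if t in page)

corpus = [
    ["cheap", "camera", "lens"],
    ["camera", "review"],
    ["travel", "deals", "cheap", "cheap"],
]
query = ["cheap", "camera"]
scores = [tfidf_score(doc, query, corpus) for doc in corpus]
```

Note how a spammer can raise a page's score either by repeating one query term (raising TF) or by dumping many rare terms (each with a high IDF).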
Term Spamming Techniques
• Body spam: simplest, oldest, most popular
• Title spam: higher weights
• Meta tag spam: low priority
  <META NAME="keywords" CONTENT="jew,jews,jew watch,jews and communism,jews and banking,jews and banks,jews in government..history,diversity,Red Revolution,USSR,jews in government , holocaust, atrocities, defamation, diversity, civil rights, plurali, bible, Bible, murder, crime, Trotsky, genocide, NKVD, Russia, New York, mafia, spy, spies,Rosenberg">
Term Spamming Techniques (cont’d)
• Anchor text spam:
  <a href="target.html"> free, great deals, cheap, cheap, free </a>
• URL spam: buy-canon-rebel-20d-lens-case.camerasx.com
Grouping Term Spamming Techniques
• Repetition: increased relevance for a few specific queries
• Dumping of a large number of unrelated terms: effective for rare, obscure query terms
• Weaving of spam terms into copied contents (on a rare, original topic); dilution conceals the spam terms within the text:
  “Remember not only airfare to say the right planetickets thing in the right place, but far cheap travelmore difficult still, to leave hotel rooms unsaid the wrong thing at vacation the tempting moment.”
• Phrase stitching: create content quickly
Techniques
• Boosting Techniques
Term Spamming
Link Spamming
• Hiding Techniques
Three Types Of Pages On The Web
• Inaccessible: spammers cannot modify them
• Accessible: can be modified in a limited way
• Own pages: we call a group of own pages a spam farm
First Algorithm - HITS
• Assigns global hub and authority scores to each page
• Circular definition:
  Important hub pages are those that point to many important authority pages
  Important authority pages are those pointed to by many hubs
• Hub scores can be easily spammed: add outgoing links to a large number of well-known, reputable pages
• Authority scores are more complicated to spam: the more incoming hub links, the better
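The circular hub/authority definition resolves into a simple power iteration. A self-contained sketch (the toy graph and function name are mine, not from the slides):

```python
def hits(edges, n, iters=50):
    """HITS power iteration: hubs point to good authorities;
    authorities are pointed to by good hubs."""
    hub = [1.0] * n
    auth = [1.0] * n
    for _ in range(iters):
        # authority score: sum of hub scores of the pages linking to it
        auth = [sum(hub[q] for q, p in edges if p == page) for page in range(n)]
        # hub score: sum of authority scores of the pages it links to
        hub = [sum(auth[p] for q, p in edges if q == page) for page in range(n)]
        # normalize so the scores stay bounded
        na, nh = sum(auth) or 1.0, sum(hub) or 1.0
        auth = [a / na for a in auth]
        hub = [h / nh for h in hub]
    return hub, auth

# Toy graph: pages 0 and 1 both endorse authority 2; page 0 also links to 3.
edges = [(0, 2), (1, 2), (0, 3)]
hub, auth = hits(edges, 4)
```

This also illustrates the spamming weakness: a page can inflate its own hub score simply by adding out-links to reputable pages, with no one's cooperation.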
Second Algorithm - PageRank
• A family of algorithms for assigning numerical weightings to hyperlinked documents
• The PageRank value of a page reflects the frequency of hits on that page by a random surfer: it is the probability of being at that page after many clicks
• From a sink page (one with no outgoing links), the surfer continues at a random page
PageRank
The total PageRank of a spam farm M decomposes as

  PR(M) = PR_static(M) + PR_in(M) − PR_out(M) − PR_loss(M)

To maximize it, for a target page t:
• All n own pages are part of the farm
• All m accessible pages point to the spam farm
• Links pointing outside the spam farm are suppressed
• No vote gets lost (each page has an outgoing link)
• All accessible and own pages point to t
• All pages within the farm are reachable
(Figure: inaccessible, accessible, and own pages around the target page t)
Techniques – Outgoing Links
• Manually adding outgoing links to well-known hosts increases the hub score
  Directory sites:
  • dmoz.org
  • Yahoo! Directory
• Directories allow creating a massive outgoing link structure quickly
Techniques – Incoming Links
• Honey pot – a useful resource that attracts links
• Infiltrate a web directory
• Links on blogs, guest books, wikis
  Google’s nofollow tag: <a href="http://www.example.com/" rel="nofollow">discount</a>
• Link exchange
• Buy expired domains
• Create your own spam farm
Techniques
• Boosting Techniques
Term Spamming
Link Spamming
• Hiding Techniques
Content Hiding
• Color scheme: font color the same as the background color
• Tiny anchor images as links (1x1 pixel)
• Using scripts: setting the visible HTML style attribute to FALSE
Cloaking
• Spam web servers can return a different document to a web crawler
• Identification of web crawlers:
  A list of IP addresses
  The ‘user-agent’ field in the HTTP request
• Also allows web masters to block some content
• Legitimate optimizations (e.g. removing ads)
• Delivering content that search engines can’t read (such as Flash)
Redirection
• Automatically redirecting the browser to another URL
• Refresh meta tag in the header of an HTML document:
  <meta http-equiv="refresh" content="0;url=target.html">
  Simple to identify
• Scripts:
  <script language="javascript"> location.replace("target.html") </script>
How Can We Fight It?
• IDENTIFY instances of spam: stop crawling / indexing such pages
• PREVENT spamming: defeat cloaking by identifying crawlers as regular web browsers
• COUNTERBALANCE the effect of spamming: use variations of the ranking methods
Some Statistics
Two data sets:
• The results of a single breadth-first search from the Yahoo! home page
• A complete set of pages crawled and indexed by AltaVista
Some More Statistics
(Figure: spamming techniques used by sophisticated spammers vs. average spammers)
Contents
• What is web spam
• Combating web spam – TrustRank
• Combating web spam – Mass Estimation
• Conclusion
Motivation
• The spam detection process is very expensive and slow, but is critical to the success of search engines
• We’d like to assist the human experts who detect web spam
Getting dirty
• G = (V, E)
  V = set of N pages (vertices)
  E = set of directed links (edges) that connect pages
• We collapse multiple hyperlinks into a single link
• We remove self hyperlinks
• i(p) – number of in-links to a page p
• w(p) – number of out-links from a page p
Our Example
• V = { 1, 2, 3, 4 }
• E = { (1,2), (2,3), (3,2), (3,4) }
• N = 4
• i(2) = 2; w(2) = 1
(Figure: the four-page example graph)
A Transition Matrix

  T(p, q) = 1/w(q) if (q, p) ∈ E; 0 otherwise
In our example

  T = [ 0    0    0    0
        1    0    1/2  0
        0    1    0    0
        0    0    1/2  0 ]

  (column 3 holds the out-edges of page 3; row 4 holds the in-edges of page 4)
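Building T from the example's edge list can be sketched as follows (the function name is mine; pages are 1-indexed as in the slide, and exact fractions avoid float noise):

```python
from fractions import Fraction

def transition_matrix(edges, n):
    """T[p][q] = 1/w(q) if q links to p, else 0 (pages numbered 1..n)."""
    # w(q): out-degree of each page
    w = {q: sum(1 for a, _ in edges if a == q) for q in range(1, n + 1)}
    T = [[Fraction(0)] * n for _ in range(n)]
    for q, p in edges:
        # an edge q -> p puts 1/w(q) in row p, column q
        T[p - 1][q - 1] = Fraction(1, w[q])
    return T

# The slide's graph: 1->2, 2->3, 3->2, 3->4
edges = [(1, 2), (2, 3), (3, 2), (3, 4)]
T = transition_matrix(edges, 4)
```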
An Inverse Transition Matrix

  U(p, q) = 1/i(q) if (p, q) ∈ E; 0 otherwise
In Our Example

  U = [ 0    1/2  0    0
        0    0    1    0
        0    1/2  0    1
        0    0    0    0 ]

  (column 2 holds the in-edges of page 2; row 2 holds the out-edges of page 2)
PageRank
• Mutual reinforcement between pages: the importance of a certain page influences, and is influenced by, the importance of some other pages

  r(p) = α · Σ_{q : (q,p) ∈ E} r(q) / w(q) + (1 − α) · 1/N

  (the sum collects the in-link votes; α is the decay factor; the (1 − α)/N term is the static start-off authority)
Equivalent Matrix Equation

  r = α · T · r + (1 − α) · (1/N) · 1_N

  (α and 1/N are scalars; r and 1_N are N-vectors; the first term is the dynamic part, the second the static part)
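The matrix equation can be iterated directly. A minimal power-iteration sketch on the slides' four-page graph (note that this T leaks rank through the sink page 4, exactly as in the slide's matrix; no sink handling is added):

```python
def pagerank(T, alpha=0.85, iters=100):
    """Iterate r = alpha * T * r + (1 - alpha) * (1/N) * 1_N."""
    n = len(T)
    r = [1.0 / n] * n
    for _ in range(iters):
        r = [alpha * sum(T[p][q] * r[q] for q in range(n)) + (1 - alpha) / n
             for p in range(n)]
    return r

# The slides' graph: 1->2, 2->3, 3->2, 3->4
T = [[0, 0,   0, 0],
     [1, 0, 0.5, 0],
     [0, 1,   0, 0],
     [0, 0, 0.5, 0]]
r = pagerank(T)
```

Page 1 has no in-links, so it keeps only the static (1 − α)/N share; pages 2 and 3 reinforce each other and score highest.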
A Biased PageRank

  r = α · T · r + (1 − α) · d

• d is a static score distribution (summing up to one)
• Only pages that are reachable from some page with d[i] > 0 will have a positive PageRank
Oracle Function
• A binary oracle function O over all pages p in V:

  O(p) = 0 if p is spam; 1 otherwise

(Figure: a seven-page example graph with good pages 1–4 and bad pages 5–7)
  O(3) = 1
  O(6) = 0
Oracle Functions
• Oracle invocations are expensive and time consuming: we CAN’T call the function for all pages
• Approximate isolation of the good set:
  Good pages seldom point to bad ones (although, as we’ve seen, good pages *can* point to bad ones)
  Bad pages often point to bad ones
Trust Function
• We need to evaluate pages without relying on O
• We define, for any page p, a trust function T
• Ideal Trust Property (for any page p):

  T(p) = Pr[ O(p) = 1 ]

  Very hard to come up with such a function
  Useful in ordering search results
Ordered Trust Property
• T(p) = T(q) ⟹ Pr[O(p) = 1] = Pr[O(q) = 1]
• T(p) < T(q) ⟹ Pr[O(p) = 1] < Pr[O(q) = 1]
First Evaluation Metric - Pairwise Orderedness
• For a trust function T, an oracle function O, and pages p, q:

  I(T, O, p, q) = 1 if T(p) ≥ T(q) and O(p) < O(q)
                  1 if T(p) ≤ T(q) and O(p) > O(q)
                  0 otherwise

  (I indicates a violation of the ordered trust property)

• Over a sample set P of ordered pairs:

  pairord(T, O, P) = ( |P| − Σ_{(p,q) ∈ P} I(T, O, p, q) ) / |P|

  (the fraction of the pairs for which T did not make a mistake)
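The metric is easy to compute directly; a sketch checked against the slides' later seven-page example (the oracle vector, the ignorant trust values, and the 34/42 result all appear a few slides ahead):

```python
from itertools import permutations

def pairord(T, O, pages):
    """Fraction of ordered pairs on which the trust ranking T does not
    violate the oracle ordering O (ties in T with unequal O count as
    violations in both directions)."""
    pairs = list(permutations(pages, 2))
    violations = sum(
        1 for p, q in pairs
        if (T[p] >= T[q] and O[p] < O[q]) or (T[p] <= T[q] and O[p] > O[q])
    )
    return (len(pairs) - violations) / len(pairs)

# The slides' example, 0-indexed: pages 1-4 are good, 5-7 are bad
O  = [1, 1,   1, 1,   0,   0, 0]
t0 = [1, 0.5, 1, 0.5, 0.5, 0, 0.5]
score = pairord(t0, O, range(7))  # 34/42
```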
Threshold Trust Property
• For a threshold δ: T(p) > δ ⟹ O(p) = 1
• Doesn’t necessarily provide an ordering of pages based on their likelihood of being good
• We’ll describe two evaluation metrics:
  Precision
  Recall
Threshold Evaluation Metrics

  prec(T, O) = |{p | T(p) > δ and O(p) = 1}| / |{p | T(p) > δ}|

  (the number of correct ‘good’ estimations, divided by the total number of ‘good’ estimations)

  rec(T, O) = |{p | T(p) > δ and O(p) = 1}| / |{p | O(p) = 1}|

  (the number of correct ‘good’ estimations, divided by the total number of good pages in the sample X)
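Both metrics in a few lines, checked against the slides' example (for δ = 1/2 the example reports precision 1 and recall 0.5; function name is mine):

```python
def prec_rec(T, O, pages, delta):
    """Precision and recall of the rule 'T(p) > delta => p is good'."""
    predicted_good = [p for p in pages if T[p] > delta]
    correct = [p for p in predicted_good if O[p] == 1]
    actually_good = [p for p in pages if O[p] == 1]
    return len(correct) / len(predicted_good), len(correct) / len(actually_good)

# The slides' example, 0-indexed
O  = [1, 1,   1, 1,   0,   0, 0]
t0 = [1, 0.5, 1, 0.5, 0.5, 0, 0.5]
prec, rec = prec_rec(t0, O, range(7), delta=0.5)
```

Only pages 1 and 3 (trust 1) clear the threshold; both are good, hence precision 1 but recall only 2 of the 4 good pages.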
Computing Trust
• Limited budget L of O-invocations
• We select at random a seed set S of L pages and call the oracle on its elements
• Ignorant Trust Function:

  T_0(p) = O(p) if p ∈ S; 1/2 otherwise

  (1/2 marks pages not checked by human experts)
For Example
• L = 3; S = { 1, 3, 6 }
• Oracle actual values: O = [1, 1, 1, 1, 0, 0, 0]
• Ignorant function values: t_0 = [1, 1/2, 1, 1/2, 1/2, 0, 1/2]
• We choose X = {1, …, 7}: pairwise orderedness = 34/42
• For δ = 1/2: precision = 1; recall = 0.5
(Figure: the seven-page example graph)
Trust Propagation
• Remember approximate isolation?
• We generalize the ignorant function
• M-Step Trust Function:

  T_M(p) = O(p) if p ∈ S;
           1 if p ∉ S and there exists a path of maximum length M from some good seed page to p that doesn’t include bad seed pages;
           1/2 otherwise
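The M-step function is a bounded-depth graph search from the good seeds. A sketch with a hypothetical four-page chain (the graph, seed choice, and function name are mine, not the slides' seven-page example):

```python
def m_step_trust(edges, seeds, oracle, n, M):
    """M-step trust: oracle value on seed pages; trust 1 for pages reachable
    from a good seed in at most M steps without passing through bad seeds;
    1/2 (unknown) for everything else."""
    out = {}
    for a, b in edges:
        out.setdefault(a, []).append(b)
    good = {s for s in seeds if oracle[s] == 1}
    bad = {s for s in seeds if oracle[s] == 0}
    reached = set(good)
    frontier = set(good)
    for _ in range(M):  # BFS, at most M steps deep
        frontier = {q for p in frontier for q in out.get(p, [])
                    if q not in reached and q not in bad}
        reached |= frontier
    t = [0.5] * n
    for s in bad:
        t[s] = 0.0
    for p in reached:
        t[p] = 1.0
    return t

# Hypothetical chain 0 -> 1 -> 2 -> 3, good seed 0, bad seed 3
edges = [(0, 1), (1, 2), (2, 3)]
oracle = {0: 1, 3: 0}
t1 = m_step_trust(edges, [0, 3], oracle, 4, M=1)
t2 = m_step_trust(edges, [0, 3], oracle, 4, M=2)
```

Each extra step extends trust one link further down the chain, mirroring the t_1, t_2, t_3 progression on the following slides.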
Example
(Figure: the seven-page example graph; S = {1, 3, 6})
  t_0 = [1, 1/2, 1, 1/2, 1/2, 0, 1/2]

Example
  t_1 = [1, 1, 1, 1/2, 1/2, 0, 1/2]

Example
  t_2 = [1, 1, 1, 1, 1/2, 0, 1/2]

Example
  t_3 = [1, 1, 1, 1, 1, 0, 1/2]
  (t_3 assigns trust 1 to the bad page 5 – a mistake)
Results
• A drop in performance: the further away we are from good seed pages, the less certain we are that a page is good!
Trust Attenuation
• Trust Dampening: a page receives dampened trust β < 1 from a trusted page; a page two links away receives β·β
  For pages trusted along multiple paths we could assign maximum(β, β·β) or average(β, β·β)
Trust Attenuation
• Trust Splitting: the care with which people add links to their pages is often inversely proportional to the number of links on the page, so a page’s trust is split among the pages it points to
TrustRank Algorithm
1. (Partially) evaluate the seed-desirability of pages
2. Invoke the oracle function on the L most desirable seed pages; normalize the result (a vector d)
3. Evaluate TrustRank scores using a biased PageRank computation, with d replacing the uniform distribution
For Example
• Desirability vector:
  [0.08, 0.13, 0.08, 0.10, 0.09, 0.06, 0.02]
• Order the vertices accordingly:
  [2, 4, 5, 1, 3, 6, 7]
(Figure: the seven-page example graph)
For Example (cont’d)
• Compute the good seeds vector (other seeds are considered bad):
  [0, 1, 0, 1, 0, 0, 0]
• Normalize the result:
  d = [0, 1/2, 0, 1/2, 0, 0, 0]
  (will be used as the biased PageRank vector)
For Example (cont’d)
• Compute TrustRank scores:
  [0, 0.18, 0.12, 0.15, 0.13, 0.05, 0.05]
• Notes from the figure: p2 has the highest score (higher than seed p4, due to p3); p5 is high due to a direct link from p4; p1 is unreferenced and gets no trust
t = d
for i = 1 to M do
  t = α · T · t + (1 − α) · d
return t
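The loop above, written out in Python. The graph is the earlier four-page example with page 2 as the only good seed (a hypothetical seed choice for illustration; this is not the slides' seven-page computation):

```python
def trustrank(T, d, alpha=0.85, M=50):
    """Biased PageRank: start from d, iterate t = alpha*T*t + (1-alpha)*d."""
    n = len(T)
    t = d[:]
    for _ in range(M):
        t = [alpha * sum(T[p][q] * t[q] for q in range(n)) + (1 - alpha) * d[p]
             for p in range(n)]
    return t

# Four-page graph (1->2, 2->3, 3->2, 3->4); the seed vector puts all
# trust mass on page 2
T = [[0, 0,   0, 0],
     [1, 0, 0.5, 0],
     [0, 1,   0, 0],
     [0, 0, 0.5, 0]]
d = [0, 1.0, 0, 0]
t = trustrank(T, d)
```

Page 1 is not reachable from the seed, so it ends with exactly zero trust; the seed itself scores highest, and trust flows on to pages 3 and 4.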
Selecting Seeds
• We want to choose pages that are useful in identifying additional good pages
• We want to keep the seed set small
• Two strategies:
  Inverse PageRank
  High PageRank
I. Inverse PageRank
• Preference to pages from which we can reach many other pages
  We could select seed pages based on the number of out-links
• Better: we’ll choose the pages that point to many pages that point to many pages that point to many pages …
• This is actually PageRank, where the importance of a page depends on its out-links
• Perform PageRank on the graph G = (V, E′), where E′ reverses the edges of E
• i.e., use the inverse transition matrix U instead of T
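Seed selection by inverse PageRank can be sketched as a PageRank run over the reversed edge set (which is what using U instead of T amounts to); the toy graph and function name are illustrative:

```python
def inverse_pagerank_seeds(edges, n, L, alpha=0.85, iters=50):
    """Run PageRank on the reversed graph and return the L highest-scoring
    pages: those from which many other pages can be reached."""
    rev = [(b, a) for a, b in edges]   # reverse every edge
    out = [0] * n                      # out-degrees in the reversed graph
    for a, _ in rev:
        out[a] += 1
    r = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - alpha) / n] * n
        for a, b in rev:
            new[b] += alpha * r[a] / out[a]
        r = new
    return sorted(range(n), key=lambda p: -r[p])[:L]

# Page 0 points to three pages, page 4 to only one: 0 is the better seed
edges = [(0, 1), (0, 2), (0, 3), (4, 1)]
seeds = inverse_pagerank_seeds(edges, 5, 2)
```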
II. High PageRank
• We’re interested in high-PageRank pages
• Obtain accurate trust scores for high-PageRank pages
• Preference to pages with high PageRank
  Likely to point to other high-PageRank pages
  May identify the goodness of fewer pages, but these may be more important pages
Statistics
• |Seed set S| = 1250 (given by inverse PageRank)
• Only 178 sites were selected to be used as good seeds (due to extremely rigorous selection criteria)
Statistics (cont’d)
(Figure: bad sites in PageRank and TrustRank buckets)
Statistics (cont’d)
(Figure: bucket-level demotion in TrustRank – a site from a higher PageRank bucket appears in a lower TrustRank bucket)
• Spam sites in PageRank bucket 2 got demoted 7 buckets on average
Contents
• What is web spam
• Combating web spam – TrustRank
• Combating web spam – Mass Estimation: turn the spammers’ ingenuity against themselves
• Conclusion
Spam Mass – Naïve Approach
• Given a page x, we’d like to know if it got most of its PageRank from spam pages or from reputable pages
• Suppose that we have a partition of the web into 2 sets:
  V(S) = spam pages
  V(R) = reputable pages
First Labeling Scheme
• Look at the number of direct in-links: if most of them come from spam pages, declare that x is a spam page
(Figure: x has in-links from good nodes G-0, G-1 and from k spam nodes S-1, …, S-k. The slide computes x’s PageRank p_x as a multiple of (1 − c)/n, out of which a k-proportional part is due to spamming; for c = 0.85, as long as k ≥ 2, this is the majority.)
Second Labeling Scheme
• If the largest part of x’s PageRank comes from spam nodes, we label x as spam
(Figure: x has in-links from good nodes G-0, …, G-3 and from spam nodes S-0, …, S-6; the slide compares the (1 − c)/n-scaled PageRank contributions that x receives from the good side and from the spam side)
• For any set of pages {x_1, …, x_m} we can compute q_x^{x_1,…,x_m} – x’s PageRank due to x_1, …, x_m
Improved Labeling Scheme
(Figure: the same graph; the slide computes q_x over the good nodes {g_0, …, g_3} and over the spam nodes, both as multiples of (1 − c)/n, and labels x by whichever contribution is larger)
Spam Mass Definition
• The absolute spam mass of x, denoted by M_x, is the PageRank contribution that x receives from spam nodes
• The relative spam mass of x, denoted by m_x, is the fraction of x’s PageRank due to contributing spam nodes
Estimating
• We assumed that we have a priori knowledge of whether nodes are good or bad – not realistic!
• What we’ll have is a subset of the good nodes, the good core
  Not hard to construct
  Bad pages are often abandoned
Estimating (cont’d)
• We compute 2 sets of PageRank scores:
• p = PR(v) – based on the uniform random jump distribution v (v[i] = 1/n, for i = 1..n)
• p′ = PR(v′) – based on the random jump distribution v′:

  v′[i] = 1/n if i is in the good core; 0 otherwise
Spam Mass Definition (cont’d)
• Given PageRank scores p and p′, the estimated absolute spam mass of node x is

  M̃_x = p_x − p′_x

  and the estimated relative spam mass of x is

  m̃_x = (p_x − p′_x) / p_x
Spam Detection Algorithm
• Compute the PageRank scores p
• Compute the (biased) PageRank scores p′
• Compute the relative spam mass vector m̃
• For each node x whose PageRank is high enough: if its relative spam mass is bigger than a (given) threshold, declare that x is spam
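The whole pipeline fits in a short sketch: two PageRank runs, a subtraction, and a threshold. The toy graph, good core, and threshold value are hypothetical (and the "PageRank high enough" filter is omitted for brevity):

```python
def biased_pagerank(T, v, alpha=0.85, iters=100):
    """p = alpha * T * p + (1 - alpha) * v for a jump distribution v."""
    n = len(T)
    p = v[:]
    for _ in range(iters):
        p = [alpha * sum(T[i][j] * p[j] for j in range(n)) + (1 - alpha) * v[i]
             for i in range(n)]
    return p

def detect_spam(T, good_core, threshold):
    """Estimate relative spam mass m = (p - p') / p and flag nodes above
    the threshold."""
    n = len(T)
    v = [1.0 / n] * n                                          # uniform jump
    v_good = [1.0 / n if i in good_core else 0.0 for i in range(n)]
    p = biased_pagerank(T, v)
    p_good = biased_pagerank(T, v_good)                        # p' (good-core biased)
    m = [(p[i] - p_good[i]) / p[i] for i in range(n)]
    return [i for i in range(n) if m[i] > threshold], m

# Hypothetical toy web: good page 0 links to 1; spam pages 3 and 4 link to 2
T = [[0, 0, 0, 0, 0],
     [1, 0, 0, 0, 0],
     [0, 0, 0, 1, 1],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0]]
flagged, m = detect_spam(T, good_core={0}, threshold=0.7)
```

Page 2, whose PageRank comes entirely from the spam pages, gets relative spam mass 1 and is flagged; page 1, supported by the good core, stays below the threshold.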
Statistics
Contents
• What is web spam
• Combating web spam – TrustRank
• Combating web spam – Mass Estimation
• Conclusion
Conclusion
• We introduced ‘web spam’
• We presented two ways to combat spammers:
  TrustRank (spam demotion)
  Spam mass estimation (spam detection)
Questions?
Thank you
Bibliography
• Web Spam Taxonomy (2004) – Gyongyi, Zoltan; Garcia-Molina, Hector, Stanford University
• Combating Web Spam with TrustRank (2005) – Gyongyi, Zoltan; Garcia-Molina, Hector; Pedersen, Jan
• Link Spam Detection Based on Mass Estimation (2005) – Gyongyi, Zoltan; Berkhin, Pavel; Garcia-Molina, Hector; Pedersen, Jan
• http://www.firstmonday.org/issues/issue10_10/tatum/
• http://en.wikipedia.org/wiki/TFIDF
• http://en.wikipedia.org/wiki/Pagerank