
Spam Filter


This is a development page. To stop spam now, see Manual:Combating spam.

Project Introduction

Spam Filter Project

Purpose

Create a new software solution to the wiki spam problem. Address deficiencies in existing anti-spam features ($wgSpamRegex filter and the SpamBlacklist extension).

Scope

Add a new extension that enhances spam filtering with multiple regex patterns and gives maximum control to the local wiki.

Benefits

Empower small wikis. Leverage the spam patrol resources of large wikis. Unbalance spammers by presenting them with multiple anti-spam strategies instead of just one central strategy.

Current status

Proposal phase - reviewing requirements and seeking developers.

Project Activity Log

See m:Spam Filter/Archive for archives

Wikitech-l Anti-spam Initiatives

  • Read [1] - Subscribe
  • -- 2006-01-28 - Aerik Sylvan - url logging script User:Aerik/Urllogger
  • -- 2006-01-28 - Tim Starling - two new spam cleanup scripts
  • -- 2006-01-28 - Brion Vibber - captcha framework - testing [2]

Survey of ways to manage wiki spam


Spam Blocking Tactics

  • Block known textual patterns
    • Block known URLs, domains (or sub-domains) matched with or without patterns
    • Block signature CSS/HTML markup matched with patterns
    • Block keywords or keyword combinations matched with patterns
      • Block page titles that match patterns
      • Block user account creations that match patterns
  • Block banned users
  • Block automated posts with Captcha
  • Block automated posts with an image-map "Save page" button
    • Directions designate a target area on the button image to click
    • Directions for clicking are part of the button image
  • Block automated posts with a "write a phrase" textbox, that is, the user has to type an exact phrase in an input field to complete an edit.
  • Block bad behavior based on preset time periods and
    • Frequency of posts (throttling; see the sketch after this list)
    • Frequency of keywords (threshold)
    • Frequency of repeating or non-repeating URLs (threshold)
    • Take action against any user if a keyword or URL threshold is exceeded
      • Block further posts by anonymous users
      • Challenge registered users with Captcha or email confirmation
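
A minimal sketch (in PHP) of the throttling item above, assuming a hypothetical function and a per-IP list of recent post timestamps; a real implementation would read these from a cache or database rather than an in-memory array.

 <?php
 // Sketch: reject an edit when one IP has posted more than $limit times within
 // the preset window. The timestamp array is a stand-in for real storage.
 function isThrottled( array $recentPostTimestamps, $limit = 5, $windowSeconds = 300 ) {
     $cutoff = time() - $windowSeconds;
     $recent = array_filter( $recentPostTimestamps, function ( $ts ) use ( $cutoff ) {
         return $ts >= $cutoff;
     } );
     return count( $recent ) >= $limit;
 }

 // Example: 6 posts two minutes ago trips the default limit of 5 per 5 minutes.
 var_dump( isThrottled( array_fill( 0, 6, time() - 120 ) ) );  // bool(true)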

Spam Matching Tools

  • Matching with regex
    • See PCRE (Perl Compatible Regular Expressions)
    • PCRE's default limit on a compiled regex is 64KB
    • The default limit can be raised, at a performance cost
    • The 64KB limit can also be bypassed by splitting the filter into multiple regex strings and applying them with program logic, again at a performance cost (see the sketch below)
  • Matching with program logic
  • Data retrieval via
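
A minimal sketch of the multi-regex workaround noted above: keep several smaller patterns and test them in a loop, so no single pattern approaches PCRE's 64KB compiled-size limit. The pattern list is a made-up example; a real filter would load it from a file or database.

 <?php
 // Apply several blacklist patterns in sequence instead of one huge alternation.
 function matchesAnySpamPattern( $text, array $patterns ) {
     foreach ( $patterns as $pattern ) {
         if ( preg_match( $pattern, $text ) ) {
             return $pattern;   // report which chunk matched, handy for logging
         }
     }
     return false;
 }

 $patterns = array(
     '/poker.?tips\.(cn|com|net|org|ru)/i',
     '/cialis|levitra/i',
 );
 if ( matchesAnySpamPattern( 'Great poker tips at poker-tips.ru', $patterns ) !== false ) {
     echo "Edit rejected as spam\n";
 }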

Spam Blocking & Cleanup Methods


Spam Report Management

  1. Manual spam reports via central wiki pages
    1. Major spam attacks
      1. Example: WikiMedia Extension_talk:SpamBlacklist
    2. Sporadic spam attacks which may be escalated
  2. Automated spam reports
    1. Import external BlackLists and/or use with internal BlackList
      1. Examples:
        1. Wikimedia SpamBlacklist extension | Spam blacklist (URLs only)
        2. Chonged.org [3] (URLs only)
        3. EmacsWiki [4] (URLs and spam text) --> CommunityWiki [5] [6]
    2. Import decentralized BlackLists from trusted sites; merge lists; remove expired or whitelisted items.
      1. Examples:
        1. CommunityWiki [7]
        2. Meatball PeerToPeerBanList
        3. Proposed RDF Vocabulary French

Problem and Solutions


Currently spambots are overwhelming small wikis and becoming more disruptive to large wikis. Large wikis have the resources to keep up with the spam flood but small wikis do not. By leveraging existing MediaWiki features, harnessing community spam patrol resources, and designing a user friendly interface, small wikis can be given a fighting chance.

Filtering spam with URLs is volatile. Filtering with non-URL spam terms is an order of magnitude more stable, as experienced on one production website:

  • I am using a custom $wgSpamRegex filter at KatrinaHelp.info and I am not using a URL Spam blacklist at that site. The bot spam has tapered off to none for the past 2 weeks. (Jan 20)
  • I am using a custom URL Spam blacklist and the BadBehavior extension on my personal wiki, as well as a custom $wgSpamRegex filter, so I am familiar with how those spam blockers work.

Small wikis can be more tolerant of regex filter terms that would be unacceptable on large wikis. For example, blocking "porn" would be insignificant to a small petcare wiki but a non-starter on Wikipedia.

One size does not fit all. This proposal gives the small wiki owner some control over how much spam filtering is needed or used without burdening them with the complexities of regular expressions. The wiki community harvests spam word information in the wild and shares it with the small wiki owners who want it.

This proposal is divided into two sets of requirements:

  1. An interim solution which builds on existing spam blocking features in MediaWiki initiated by $wgSpamRegex located in LocalSettings.php; and
  2. A more robust solution which builds on expanded filter logic.

Requirements


Using existing filter logic


I am proposing the development of a $wgSpamRegex User Interface extension. This UI would allow small wiki owners to adjust the $wgSpamRegex value by selecting from a list:

    Filter status (on/off)
 0) Custom filter  (default = "/block-this-regex/i";)
 1) Level 1 filter (e.g. "/sseexx.wsienb-seric123.com/i";, least sensitive)
 2) Level 2 filter (e.g. "/a_few_choice_spam_terms/i";)
 3) Level 3 filter (e.g. "/common_spam_terms|tricky_URLs/i";)
 4) Level 4 filter (e.g. "/every_known_spam_term/i";)
 5) Level 5 filter (e.g. "/http|the|a|you|me/i";, most sensitive)
  • a) $wgSpamRegex filter status (on/off) would be selected from 2 radio list items. (default=off)
  • b) Filter 0-5 could be selected from 6 radio 'button' items. (default=0)
  • c) or, Filter 0-5 could be selected with 6 checkbox items and concatenated into one regex filter, subject to the 64KB size limit (see the sketch below).
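
A minimal sketch of how the selections above might be applied, assuming hypothetical setting names ($wgSpamFilterEnabled, $wgSpamFilterLevel) and placeholder patterns; the real UI extension would persist the choices itself rather than make the owner edit LocalSettings.php.

 <?php
 // Hypothetical glue code mapping the UI choices (a/b/c above) onto $wgSpamRegex.
 $wgSpamFilterEnabled = true;   // a) filter status radio items
 $wgSpamFilterLevel   = 2;      // b) level 0-5 radio items (0 = custom)

 $spamFilterLevels = array(
     0 => '/block-this-regex/i',               // custom filter, edited locally
     1 => '/sseexx\.wsienb-seric123\.com/i',   // least sensitive: very specific strings
     2 => '/a_few_choice_spam_terms/i',
     3 => '/common_spam_terms|tricky_URLs/i',
     4 => '/every_known_spam_term/i',
     5 => '/http|the|a|you|me/i',              // most sensitive: blocks almost anything
 );

 if ( $wgSpamFilterEnabled ) {
     $wgSpamRegex = $spamFilterLevels[$wgSpamFilterLevel];
 }

 // c) variant: concatenate several checked levels into one pattern instead,
 //    keeping the result under PCRE's default 64KB compiled-size limit.
 function combineSpamFilters( array $levels, array $checked ) {
     $parts = array();
     foreach ( $checked as $level ) {
         $parts[] = substr( $levels[$level], 1, -2 );  // strip the /.../i delimiters
     }
     return '/' . implode( '|', $parts ) . '/i';
 }
 // $wgSpamRegex = combineSpamFilters( $spamFilterLevels, array( 2, 3 ) );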

Least sensitive corresponds to most specific, and most sensitive corresponds to least specific. Dialing up the filter level would increase the probability of false positive spam blocks.

The Level 1-5 regex filters would be derived from wikis on all topics, and maintained by a network of volunteer spam patrollers at an authorized meta location. Volunteers would follow a peer preview/review process similar to the process at m:Talk:Spam blacklist

The UI would help the wiki owner to install regex filter updates from the authorized meta location. With the UI, the wiki owner could copy any level regex filter to the custom filter and make changes. Each regex filter would have a release date that is visible in the UI.

Small wiki owners could sign up at the meta wiki to be patrolled and be notified of regex filter updates.

Higher level regex filters would be more restrictive. If the small wiki owner found the last regex filter selection to be too restrictive, she could dial down the sensitivity to a lower level. If she found any level's regex filter to be a problem, she could report it to the volunteer patrol team.

If a wiki owner found a new regex filter prevented page edits, she could turn off the filter, make page changes, and then turn the filter back on.

Once the UI extension has been thoroughly tested, it could be integrated into the main distribution.

Managing false positives & negatives

  1. Allow updated filters to be tested before installation
  2. Allow updated filters to be tested against published content

With logging

  1. Log rejected spam automatically
  2. Allow rejected spam to be flagged as false positives
  3. Log reverts automatically
  4. Allow reverts to be flagged as false negatives
  5. Allow updated filters to be tested against false positives and false negatives (see the sketch below)
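
A rough sketch of how these requirements could fit together: dry-run a candidate filter against logged samples before installing it. The sample arrays are hypothetical stand-ins for the spam and revert logs described above.

 <?php
 // Test an updated filter against known-good text (published content, logged false
 // positives) and known-bad text (logged spam, reverted edits) before installing it.
 function testCandidateFilter( $candidateRegex, array $hamSamples, array $spamSamples ) {
     $report = array( 'false_positives' => array(), 'false_negatives' => array() );
     foreach ( $hamSamples as $id => $text ) {
         if ( preg_match( $candidateRegex, $text ) ) {
             $report['false_positives'][] = $id;   // legitimate text the update would block
         }
     }
     foreach ( $spamSamples as $id => $text ) {
         if ( !preg_match( $candidateRegex, $text ) ) {
             $report['false_negatives'][] = $id;   // known spam the update would still allow
         }
     }
     return $report;   // an empty report suggests the update is safe to install
 }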

Using expanded filter logic

New filter logic capabilities:
  • Allow multiple /regex pattern/ matches
  • Allow thresholds for each /regex pattern/ filter and a total for all filters
  • Allow thresholds to be adjustable for keyword counts
    • Allow a threshold for unique keyword matches
      • e.g. pattern="cialis|levitra" matches "cialis,cialis,levitra" 2 times
    • Allow a threshold for non-unique keyword matches
      • e.g. pattern="cialis|levitra" matches "cialis,cialis,levitra" 3 times
Example filter selection list
  • Filter status (on/off) would be selected from 2 radio 'button' items. (default=off)
  • Custom & Topic filters would be selected with checkbox items. (default=none)
    • Multiple topic filters would be processed with OR logic
  • Each filter would have a threshold count setting [integer] (default=1, can be preset by filter source)
    • Each threshold count would have a Unique? (yes/no) setting (default=no)
  • Need to have a total threshold count in case no single filter reaches a threshold.
    Filter status (on/off)
 0) Custom filter (default = "/block-this-regex/i";)
 1) Topic filter (e.g. specific URLs)
 2) Topic filter (e.g. spammy URL blocks like /poker.?tips.(cn|com|net|org|ru)/)
 3) Topic filter (e.g. spammy HTML tag sequences)
 4) Topic filter (e.g. drug/medical keywords)
 5) Topic filter (e.g. porn keywords)
 6) Topic filter (e.g. SEO keywords)
 7) Topic filter (e.g. mortgage/insurance keywords)
 8) Topic filter (e.g. travel/hotel keywords)
 9) Topic filter (e.g. misc. English spam keywords)
10) Topic filter (e.g. Spanish keywords)
11) Topic filter (e.g. French keywords)
12) Topic filter (e.g. German keywords)
13) Topic filter (e.g. Italian keywords)
14) Topic filter (e.g. Chinese keywords) Need to investigate this 
15) etc.
  • a) Need to import external filters/blacklists via URL with known formats.
  • b) Import updates need to be scheduled locally to control bandwidth usage.
  • c) Imports should not proceed if the local version is up-to-date.
  • d) Filters could be processed sequentially or concatenated if they did not exceed the 64KB limit. Best practice should be based on performance comparisons (see the sketch below).
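
A minimal sketch of sequential filter processing with per-filter thresholds, unique/non-unique counting, and a grand total, following the settings above; the configuration array shape is an assumption made for illustration.

 <?php
 // Run each topic filter in turn; trip on any per-filter threshold, or on the total.
 function editLooksLikeSpam( $text, array $filters, $totalThreshold = 5 ) {
     $totalHits = 0;
     foreach ( $filters as $filter ) {
         if ( !preg_match_all( $filter['pattern'], $text, $m ) ) {
             continue;
         }
         $hits = $filter['unique']
             ? array_unique( array_map( 'strtolower', $m[0] ) )
             : $m[0];
         if ( count( $hits ) >= $filter['threshold'] ) {
             return true;                      // a single topic filter reached its threshold
         }
         $totalHits += count( $hits );
     }
     return $totalHits >= $totalThreshold;     // no single filter tripped, but the total did
 }

 // From the example above: "cialis,cialis,levitra" gives 3 non-unique / 2 unique matches.
 $filters = array(
     array( 'pattern' => '/cialis|levitra/i', 'threshold' => 3, 'unique' => false ),
 );
 var_dump( editLooksLikeSpam( 'cialis,cialis,levitra', $filters ) );  // bool(true)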

Comment: This topical filter is based on my idea. I am not sure how well it goes with the original sensitivity vs. specificity rule categorization, but I think it is easier to maintain. Organizing by specificity is rather subjective and I think it would be hard to be consistent between list maintainers. The reason for topical categorization is that it avoids that problem and allows the wiki owner more flexibility in what is blocked on his wiki. For example, a wiki on travel would certainly not want to block travel/hotel based keywords; a wiki on web design wouldn't want to block SEO terms. I worry about language-based topic filters; most spam does seem to be in English or Chinese, but if a Chinese user wants spam protection, would they be able to use the Chinese topic? It doesn't give them as much control as the English topics. Would that be considered discrimination by some? --JoeChongq 08:32, 21 January 2006 (UTC)

Project Management


Design


Wiki User Interface

  • Administrative functions accessible via a restricted namespace.

PHP/MySQL

Replace $wgSpamRegex with require_once( "extension_path/wgSpamRegex.php" );
  • wgSpamRegex.php retrieves the compound regex filter from a database or text file (sketched below).
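
A minimal sketch of what such a wgSpamRegex.php might contain, assuming the compound filter is cached in a plain text file with one pattern fragment per line; the file name and storage format are assumptions for this sketch.

 <?php
 // Hypothetical extension_path/wgSpamRegex.php: build $wgSpamRegex from a locally
 // cached filter file instead of hard-coding the pattern in LocalSettings.php.
 $spamFilterFile = __DIR__ . '/spam-filter.txt';

 if ( is_readable( $spamFilterFile ) ) {
     $fragments = file( $spamFilterFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES );
     if ( $fragments ) {
         // Combine the fragments into one case-insensitive alternation, staying
         // mindful of PCRE's 64KB compiled-pattern limit.
         $wgSpamRegex = '/' . implode( '|', array_map( 'trim', $fragments ) ) . '/i';
     }
 }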

InterWiki


Meta-Wiki


People Interested in Participating

I initiated this project. I am familiar with WikiMedia/PHP/MySQL at an administrative level and I do some regex. I am willing to get involved at any level of design or development.
  • Aerik Sylvan - I bounced a few ideas off Brion last summer - I'm interested in working on some advanced yet flexible spam filters. Qualifications? I have a modified MediaWiki wiki with several custom anti-spam enhancements :-) --Aerik 05:08, 22 January 2006 (UTC)
  • Platonides Well, from my point of view, that's an interesting project ;) This is a topic which will have more and more importance as wiki[pedias|books|etc.] grow. Sadly, spam is everywhere. I know PHP although I have no experience with MediaWiki itself - 16:13, 24 January 2006 (UTC)
  • Ian Drysdale, sysop of Chelsea Wiki and Mohala. Having two small to medium wikis constantly spammed prompted us to take action. Linking to Wikipedia's blacklist hasn't been enough, and temporarily I've enforced logged-in edits only. Can code in PHP and SQL, and have hacked a little MediaWiki code. Experience in Perl and have developed a bot to interface with MediaWiki from IRC. Please keep me informed of progress or how I can help.
  • alxndr, admin of Oberwiki. I'm annoyed by cleaning up after spambots and would be more than happy to lend a hand. I am comfortable with PHP and SQL, and have hacked at the MediaWiki code a bit (just enough to get it to do what I want). I'd love to help.
  • I proposed an experiment here. - jw
  • name/link/interest/skills
  • name/link/interest/skills

Proposal Comments


Make part of the core mediawiki 1.14 or 1.15 build, or 2.x


Public wikis desperately need ConfirmEdit to be installed by default and they will need advanced spam filter capabilities from the very first moment they're set up. Tikiwiki has a solution that isn't ideal but it works: any version of any page can be totally deleted from the database permanently, as if it never existed, leaving only a record that a revision is missing (you will see revision 34 and then revision 51, meaning everything from 35 to 50 was deleted). This is at least a form of accountability. Without making it part of the core build, you can forget about users with less than guru-class MediaWiki skills ever setting up a public wiki, and inferior wiki software that does spam control well will take over. That would be a terrible shame.

Garbage collecting the database


The ideal solution would deal post-facto but accountably with spam - after all you do not discover that your spam solution is imperfect until spam is polluting your DB and ruining the usefulness of your diffs. Here is a way to do that which should be considered:

  1. Enable a bureaucrat (or even WikiSysop-only) function to mark large numbers of recent revisions specifically as spam, exactly the way large public email services do it. This can be added to Special:Recentchanges for anyone to help with. Any revision that's marked spam can be marked not spam by someone else prior to a scheduled cleanup that removes only the revisions marked spam more than 72 hours ago. That gives 72 hours to unmark non-spam. Anything marked more recently will be updated in the next week's garbage collection. The same approach could be extended to vandalism but *not* obviously to anything that is just controversial, libellous, stupid or a copyvio... the features required to retain accountability in those situations would slow down the solution to the spam problem.
  2. Garbage collection removes revisions marked spam entirely from the database and writes new tables as if those revisions never occurred. Other tables are written that contain the spam and may or may not contain the real revisions, to enable analysis of the spam bot's strategy. Some spam versions will have bits and pieces of retained real text in them. (Note that when spammers retain text, that is actually worse, because someone may simply remove the spam and leave the partial or damaged page behind, thinking they've fixed the problem. Instead they should always revert to an older version.) The spammed table is automatically archived, then the clean one replaces it as the mediawiki table - now there is no evidence of any spam whatsoever and no chance of any propagation of spam codes or links. (A sketch of such a scheduled cleanup follows this list.)
  3. The spammed databases, for public wikis, can be sent to a central archive so that all of the spam IPs and strategies can be added to centralized blacklists and so on - this is more or less how it works in the email world. Wikimedia is the logical entity to keep this list current, and can charge commercial clients perhaps for consulting on spam protection.
  4. For full accountability page history can include a link at the end of an entry that indicates that this version was spammed and did not accordingly appear visible for as long as its page history date suggests - the date it was spammed should be indicated and it should be possible to have the dates that the article existed in a spam state listed in page history, for those who really need to know what was visible and when (for legal cases and such).
  5. It is far more important to block spam IPs than open proxies; most spammers do not bother with proxies, at least not yet. This feature should also be in the core build.
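
A hedged sketch of the scheduled cleanup in items 1-2, assuming a hypothetical spam_mark table (rev_id, sm_timestamp) written by the "mark as spam" interface; apart from revision.rev_id, every table and column name here is invented for illustration.

 <?php
 // Weekly cleanup job: archive, then purge, revisions that were marked as spam more
 // than 72 hours ago, leaving the grace period for unmarking false positives.
 $db = new PDO( 'mysql:host=localhost;dbname=wikidb', 'wikiuser', 'secret' );
 $db->exec( "CREATE TABLE IF NOT EXISTS spam_archive LIKE revision" );

 $cutoff = gmdate( 'YmdHis', time() - 72 * 3600 );   // MediaWiki-style timestamp

 $db->beginTransaction();
 $copy = $db->prepare(
     "INSERT INTO spam_archive
      SELECT r.* FROM revision r JOIN spam_mark m ON m.rev_id = r.rev_id
      WHERE m.sm_timestamp < :cutoff"
 );
 $copy->execute( array( ':cutoff' => $cutoff ) );

 $purge = $db->prepare(
     "DELETE r FROM revision r JOIN spam_mark m ON m.rev_id = r.rev_id
      WHERE m.sm_timestamp < :cutoff"
 );
 $purge->execute( array( ':cutoff' => $cutoff ) );
 $db->commit();

The archived rows could then be shipped to the central analysis archive described in item 3.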

Work with Chongqing?


Chongqing has some good ideas about this.

Startup


Please leave your comments on this proposal. If you are interested in collaboration, please leave contact information on my talk page. --jwalling 01:53, 20 January 2006 (UTC)

I've been thinking about something like this for a while. A cleaner version of the spam blacklist extension which doesn't force Wikimedia's blacklist upon people who don't want it. Also, editable through the MediaWiki namespace (assuming appropriate configuration) and handling more than URLs. I'd also consider having the namespaces affected customisable. Why haven't I done it? Because I haven't got round to it yet. ;-) Perhaps now's the time to think about that. Rob Church Talk 19:16, 20 January 2006 (UTC)
Should add that I also would have it be able to handle external blacklists in a known format, from URLs, and also parse files containing multiple regexes too. Just for maximum flexibility. Rob Church Talk 19:18, 20 January 2006 (UTC)
I think it is a good idea not to force the WikiMedia blacklist, but it should be turned on by default when a user turns on this (or the SpamBlacklist) extension. No point in having spam protection if it requires much setup. Advanced options like those listed above and the ability to specify different blacklists would be great, but there needs to be a good level of protection right out of the box.
This is a bit off topic, but related. From what the readme says of the SpamBlacklist Extension, it updates every 10-15 minutes. That seems an unnecessary waste of bandwidth. Look at how often the blacklist is actually updated; a quick look at the history shows at most 12 times a day and usually far fewer. For ease of maintenance this new version of the blacklist would probably not be stored in one file/page. If each of these requires separate downloads that would be a lot of updating.
--JoeChongq 08:15, 21 January 2006 (UTC)
I'd go for some sort of local caching of remote lists, too. Rob Church Talk 19:07, 21 January 2006 (UTC)

Adjustable threshold

I received a suggestion to allow an adjustable threshold for a regex with multiple key words. This would help to reduce the number of false positives. For example, if the threshold was set to 3, a spammer with 50 links for cialis would be blocked. Another refinement would be to require the key words to be unique. In that case, if the spammer had links for cialis, levitra, and viagra the post would be blocked. If multiple regex strings were evaluated, each one could have its own minimum match threshold. The more specific key words would need a lower threshold. The more generic key words would need higher thresholds. Keep the ideas coming!
--jwalling 21:12, 20 January 2006 (UTC)[reply]
I really like this idea. It would prevent a lot of false positives (which keyword blocking will tend to cause). Spammers will adapt to it if this extension becomes widely used (as in part of MediaWiki by default), but along with URL based rules I think most spam will still be blocked. If spam was only checked for on the text the user edited, keyword blocking would be even safer. A minor concern I have about this is that if existing spam somehow made it to the page it would not be detected. Possibly there should be an admin option to run all pages through the spam filter to see if there are any hits after a new blacklist update. This would allow checking for false positives as well as identifying spam that had gone unnoticed. --JoeChongq 08:49, 21 January 2006 (UTC)

Proposal submitted to MediaZilla

This is an extension request for a user interface to manage $wgSpamRegex. The user would be able to select from a list of regex patterns prepared by spam patrol volunteers on the Meta wiki. Up to 10 regex patterns would be selectable. The lowest level regex filter would be most specific/least sensitive and the highest level regex filter would be least specific/most sensitive.
By enhancing the program logic for filtering with $wgSpamRegex, the user could use selectable thresholds for unique or non-unique pattern matches.
--jwalling 22:32, 20 January 2006 (UTC)[reply]

Block domains not regexes


Regex blocks are slow and should be minimised. We should mainly block by domain and path. I've been thinking about implementing an external link table in MediaWiki, indexed by inverted domain (e.g. org.wikimedia.meta/Spam_Filter). Then we can implement fast retroactive spam identification and cleanup. Searches for all pages containing links to a given second-level domain (e.g. org.wikimedia) would be indexed. We could display a list to users on request and even provide a sysop-only interface for single-click cleanup.

By default, the PCRE library limits the size of compiled regexes to 64KB. We're not too far away from that already. With a proper data structure, we could block an unlimited number of domains and paths. -- Tim Starling 06:12, 23 January 2006 (UTC)

Does the hypothetical external link table work with patterns or only with exact matches? Please define inverted domain. --jwalling 09:45, 23 January 2006 (UTC)
I told you what an inverted domain is: it is the domain name with the labels reversed, e.g. meta.wikimedia.org becomes org.wikimedia.meta. On the UI side, it could be implemented with glob-style patterns, e.g. *.wikimedia.org. This could be misleading, however, since we could only efficiently support the case where the asterisk is at the start of the string. It might be better to just have an "also block subdomains" checkbox.
I'm not suggesting we do away with regular expressions, just that we minimise their use, by providing a more efficient special case. -- Tim Starling 06:25, 24 January 2006 (UTC)
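
To make the inverted-domain form concrete, a small PHP sketch; the table and column names in the trailing comment are hypothetical, not MediaWiki's actual schema.

 <?php
 // Store links keyed by the reversed domain so that "all links under org.wikimedia"
 // becomes a fast, index-friendly prefix query.
 function invertDomain( $url ) {
     $parts = parse_url( $url );
     if ( !isset( $parts['host'] ) ) {
         return false;
     }
     $inverted = implode( '.', array_reverse( explode( '.', strtolower( $parts['host'] ) ) ) );
     return $inverted . ( isset( $parts['path'] ) ? $parts['path'] : '/' );
 }

 echo invertDomain( 'http://meta.wikimedia.org/Spam_Filter' );
 // org.wikimedia.meta/Spam_Filter

 // An "also block subdomains" lookup then becomes a prefix match, e.g.:
 //   SELECT eli_page FROM ext_link_index WHERE eli_index LIKE 'org.wikimedia.%';
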
Has anyone considered using a w:Bloom filter for blocked domains? Rather than compiling a large regular expression, just have a general domain regexp, and then check the results through the w:Bloom filter. -- Jared Williams 23:56, 7 November 2006 (UTC)
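
For what it's worth, a toy version of the Bloom filter idea might look like the sketch below; the bit-array size, the number of hashes, and the salted crc32 hashing are arbitrary choices for illustration, and any "might contain" hit would still need to be confirmed against the authoritative list.

 <?php
 // Toy Bloom filter for blocked domains: constant-size membership test with false
 // positives but no false negatives.
 class DomainBloomFilter {
     private $bits;
     private $size;
     private $hashes;

     public function __construct( $size = 65536, $hashes = 4 ) {
         $this->size = $size;
         $this->hashes = $hashes;
         $this->bits = str_repeat( "\0", (int)ceil( $size / 8 ) );
     }

     private function positions( $domain ) {
         $positions = array();
         for ( $i = 0; $i < $this->hashes; $i++ ) {
             // Derive several hash values by salting; crc32 is enough for a sketch.
             $positions[] = abs( crc32( $i . ':' . $domain ) ) % $this->size;
         }
         return $positions;
     }

     public function add( $domain ) {
         foreach ( $this->positions( $domain ) as $p ) {
             $byte = (int)( $p / 8 );
             $this->bits[$byte] = chr( ord( $this->bits[$byte] ) | ( 1 << ( $p % 8 ) ) );
         }
     }

     public function mightContain( $domain ) {
         foreach ( $this->positions( $domain ) as $p ) {
             $byte = (int)( $p / 8 );
             if ( !( ord( $this->bits[$byte] ) & ( 1 << ( $p % 8 ) ) ) ) {
                 return false;   // definitely not on the list
             }
         }
         return true;            // possibly on the list; confirm before blocking
     }
 }

 $filter = new DomainBloomFilter();
 $filter->add( 'example-spam-domain.com' );
 var_dump( $filter->mightContain( 'example-spam-domain.com' ) );  // bool(true)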

Sensitivity vs. Specificity

  • I am looking for ways to balance sensitivity vs. specificity and ways to put the control of balance into the hands of wiki administrators. Most anti-spam strategies make assumptions about the balance and then impose those assumptions on others. A small wiki administrator with controls in hand may find increased sensitivity does not cause false positives and it will block more unidentified link spam. Large wiki administrators may not mind dealing with unidentified link spam if it means less chance of false positives.
  • Reverse domain lookup provides an efficient way to filter with high specificity. The reverse domain filter is easily defeated by spammers who use subdomains that can be adjusted at will, for example 123.x.host.com, 124.x.host.com, 124.y.host.com, etc.[8] If you block host.com, you have increased sensitivity and may produce false positives which have to be offset with a whitelist. You can't escape the conundrum. Spammers use other strategies, such as redirection from multiple freehosts to a more permanent URL which is invisible to the anti-spam effort. If those spammers are promoting levitra, viagra, and cialis, all those subdomains and temporary domains can be blocked with one regex pattern.
  • I am agnostic. I am not saying regex is a panacea. I am not saying high specificity is not a good way to block spam. I am saying one size does not fit all. Local control is crucial.
  • By coordinating efforts, the regex can be kept smaller and pressures to add more and more domains to the domain blacklist will be reduced.
  • I described the sensitivity vs. specificity conundrum in more detail here: [9]
--jwalling 00:07, 25 January 2006 (UTC)

Bayesian Filter

See also User:Anubhav iitr/Bayesan spam filter
There are already great tools available for spam discrimination (see http://crm114.sourceforge.net/ ). Why not pipe diffs (along with appropriate metadata, e.g. user, IP) for each edit into such a filter... it could be trained quickly in a distributed fashion (e.g. with a 'mark this edit as spam' button) and would result in a pretty kickass tool to flag/block spam. Ivar 12:59, 27 February 2007 (UTC)
Agreed. When cleaning up SPAM, there should be a simple way to train a Bayesian DB that the new content is SPAM. This should be accessible for any user with the permissions to "undo" or "rollback" those changes or to delete the new page/file. Recent Changes should show the probability that new content is SPAM and should be filterable by that field. Patrolling a change should train the Bayesian DB that the new content is HAM, if the user has that authority. Extensions like ConfirmEdit should be given read-only access to the probability that new content is SPAM so they can interpose further user verification or reject the edit outright. A separate Bayesian DB per namespace might be desirable. --76.220.103.20 18:54, 19 January 2012 (UTC)
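
As a rough illustration of the scoring step only (this is not CRM114's interface), a naive-Bayes-style spam probability could be computed from per-token counts kept in such a training DB; the function and array shapes below are assumptions.

 <?php
 // Score a diff's added text against spam/ham token counts from the training DB.
 // Returns a value near 1 for likely spam and near 0 for likely ham.
 function spamProbability( $text, array $spamCounts, array $hamCounts, $spamTotal, $hamTotal ) {
     preg_match_all( '/[a-z0-9]{3,}/', strtolower( $text ), $m );
     $logRatio = 0.0;
     foreach ( array_unique( $m[0] ) as $token ) {
         // Laplace smoothing so tokens unseen in training do not dominate the score.
         $pSpam = ( ( $spamCounts[$token] ?? 0 ) + 1 ) / ( $spamTotal + 2 );
         $pHam  = ( ( $hamCounts[$token] ?? 0 ) + 1 ) / ( $hamTotal + 2 );
         $logRatio += log( $pSpam / $pHam );
     }
     return 1.0 / ( 1.0 + exp( -$logRatio ) );
 }

 // Undo/rollback/delete would add the text's tokens to the spam counts; patrolling
 // would add them to the ham counts, optionally with a separate pair per namespace.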

Next Comment


Project Resources


Tech Forums

geeklog-spam
  • This list is for discussions on new strategies to fight comment spam.
Wikimedia
Wikipedia mailing lists
  • MediaWiki-l -- MediaWiki announcements and site admin list
MediaZilla (BugZilla)
SourceForge - download
Snapshot Jan 23, 2005:
 FILE                     REV    AGE       AUTHOR        LAST LOG ENTRY
 README                   1.2    2 months  timstarling   load_lists is no longer required
 SpamBlacklist.php        1.3    4 months  avar          * Support for $wgExtensionCredits
 SpamBlacklist_body.php   1.17   24 hours  timstarling   Updated DB: for the 1.5 schema, fixed a few bugs
 cleanup.php              1.2    2 days    timstarling   some tweaks

Wiki Resources

  • OddMuse Perl wiki used on chongqed wiki, EmacsWiki, and CommunityWiki

Regex Resources

Regex
http://www.phpfreaks.com/tutorials/52/0.php Introduction to regular expressions
PCRE
PCRE - Perl Compatible Regular Expressions
http://www.pcre.org/
http://www.pcre.org/pcre.txt
https://secure.php.net/ref.pcre
https://secure.php.net/reference.pcre.pattern.syntax
http://www.phpguru.org/...PHP PCRE cheat sheet - download PDF
PHP
http://www.regular-expressions.info/php.html
https://secure.php.net/function.preg-match
https://secure.php.net/function.preg-match-all
https://secure.php.net/function.preg-quote
https://secure.php.net/function.preg-replace
https://secure.php.net/function.preg-replace-callback
https://secure.php.net/function.preg-split