User:OrenBochman/Search/NGSpec



  • The ultimate goal is to make searching simple and satisfactory.

Secondary goals are:

  • Improve precision and recall.
  • Evaluate components by the knowledge and intelligence they can expose.
  • Use infrastructure effectively.
  • Low edit-to-index time.


Features



Indexing


Brainstorm Some Search Problems


LSDEAMON vs Apache Solr


As search evolves it might be prudent to migrate to Apache Solr[3] as a stand-alone search server instead of the LSDEAMON.



Query Expansion


Indexing Source as opposed to HTML


Problem: Lucene search processes the wiki source text and not the output HTML.


Solution:

  1. Index the output HTML (as placed into the cache).
  2. Strip unwanted tags, while boosting elements such as the following (see the sketch below):
  • Headers
  • Interwikis
  • External links
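
A minimal sketch of this pipeline, written against a Lucene 3.6/4.x-style API: the parser-cache HTML is wrapped in the stock HTMLStripCharFilter so the tokenizer never sees tags, and headers and external links go into their own fields with illustrative boosts. The field names and boost values are assumptions, not part of this spec.

  // Sketch only: field names and boosts are illustrative assumptions.
  import java.io.Reader;
  import java.io.StringReader;

  import org.apache.lucene.analysis.charfilter.HTMLStripCharFilter;
  import org.apache.lucene.document.Document;
  import org.apache.lucene.document.Field;

  public class HtmlPageIndexer {

      /** Wrap the raw parser-cache HTML so the tokenizer never sees tags. */
      static Reader stripMarkup(String html) {
          return new HTMLStripCharFilter(new StringReader(html));
      }

      /** Build a document with separately boosted header / link fields. */
      static Document toDocument(String title, String bodyText,
                                 String headerText, String externalLinks) {
          Document doc = new Document();
          doc.add(new Field("title", title, Field.Store.YES, Field.Index.ANALYZED));
          doc.add(new Field("body", bodyText, Field.Store.NO, Field.Index.ANALYZED));

          // Headers and external links get their own fields so they can be
          // boosted at index time (or weighted at query time instead).
          Field headers = new Field("headers", headerText, Field.Store.NO, Field.Index.ANALYZED);
          headers.setBoost(2.0f);   // illustrative boost value
          doc.add(headers);

          Field links = new Field("external_links", externalLinks,
                                  Field.Store.YES, Field.Index.NOT_ANALYZED);
          links.setBoost(1.5f);     // illustrative boost value
          doc.add(links);
          return doc;
      }
  }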

Problem: The HTML output also contains CSS, scripts, and comments

  1. Solution: either index these too, or run a filter to remove them. Some strategies are:
    1. Discard all markup.
      1. A markup_filter/tokenizer could be used to discard markup.
      2. The Tika project can do this.
      3. Other ready-made solutions exist.
    2. Keep all markup.
      1. Write a markup analyzer that would be used to compress the page to reduce storage requirements
        (interesting if one also wants to compress output for integration into the DB or cache).
    3. Selective processing (see the sketch below).
      1. A table_template_map extension could be used in a strategy to identify structured information for deeper indexing.
      2. This is the most promising: it can detect and filter out unapproved markup (JavaScript, CSS, broken XHTML).
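
A hedged illustration of strategy 3 (selective processing) using the jsoup HTML parser. jsoup is only one possible ready-made parser, and the tag selectors below are assumptions about what counts as "unapproved" or "structured" markup.

  // Sketch of selective processing with jsoup (an assumed parser choice).
  import org.jsoup.Jsoup;
  import org.jsoup.nodes.Document;
  import org.jsoup.nodes.Element;
  import org.jsoup.select.Elements;

  public class SelectiveMarkupFilter {

      /** True if the page contains markup we do not want to index or serve. */
      static boolean hasUnapprovedMarkup(String html) {
          Document doc = Jsoup.parse(html);
          return !doc.select("script, style").isEmpty();
      }

      /** Pull out the structured parts that are worth deeper indexing. */
      static void extractStructure(String html) {
          Document doc = Jsoup.parse(html);
          Elements headers = doc.select("h1, h2, h3, h4");
          Elements tables = doc.select("table");
          Elements extLinks = doc.select("a[href^=http]");
          for (Element h : headers) {
              System.out.println("HEADER: " + h.text());
          }
          System.out.println(tables.size() + " tables, "
                  + extLinks.size() + " external links");
      }
  }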

Problem: Indexing offline and online

  1. Solr can access the DB directly...?
  2. Real-time "only" - slowly build the index in the background.
  3. Offline "only" - use a dedicated machine/cloud to dump and index offline.
  4. Dual - each time the linguistic component becomes significantly better (or there is a bug fix) it would be desirable to upgrade the index. How this would be done depends largely on the architecture of the analyzer. One possible approach (sketched below) would be:
    1. Production of new linguistic/entity data or a new software milestone.
    2. Offline analysis from a dump (XML or HTML).
    3. Online processing of updates, newest to oldest, with a Poisson wait-time prediction model.
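
A hedged sketch of the Poisson wait-time idea in step 3: estimate a per-page edit rate lambda from the edit history and re-index first the pages most likely to have changed since they were last indexed. The rate estimate and the priority formula are assumptions, not part of the spec.

  // Poisson-based re-index priority, assuming edits arrive as a Poisson
  // process with a per-page rate lambda (edits per day).
  public class ReindexPriority {

      /** Estimated edits per day, e.g. editCount / pageAgeDays. */
      static double editRate(long editCount, double pageAgeDays) {
          return pageAgeDays > 0 ? editCount / pageAgeDays : 0.0;
      }

      /**
       * Probability of at least one edit since the last indexing run:
       * P(N >= 1) = 1 - exp(-lambda * t). Pages with the highest value
       * are re-indexed first, which approximates "newest to oldest".
       */
      static double staleProbability(double lambda, double daysSinceIndexed) {
          return 1.0 - Math.exp(-lambda * daysSinceIndexed);
      }

      public static void main(String[] args) {
          double lambda = editRate(120, 365.0);   // ~0.33 edits/day
          System.out.printf("P(stale after 3 days) = %.3f%n",
                  staleProbability(lambda, 3.0));
      }
  }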

NG Search Features


Solution 2 - Specialized Language Support


Integrate new language-analysis resources as they become available.

  1. Contrib locations for:
    1. Lucene
    2. Solr
  2. External resources
language | resource | status / comments
Arabic | stemmer (algorithmic) | TestArabicNormalizationFilter.java at https://issues.apache.org/jira/secure/attachment/12391029/LUCENE-1406.patch
Arabic | stemmer (data-based) | http://savannah.nongnu.org/projects/aramorph
Chinese | tokenizer/filter | SmartChineseSentenceTokenizerFactory.java and SmartChineseWordTokenFilterFactory.java
Hungarian | morphology | identified
Finnish | morphology | http://gna.org/projects/omorfi
Hebrew | morphology | identified
Japanese | morphology | identified
Polish | stemmer | StempelPolishStemFilterFactory.java
  1. Benchmarking
  2. Test suite (check each resource against an N-gram baseline)
  3. Acceptance test
    1. Ranking suite based on "did you know..." glosses and their articles
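
A sketch of how such resources might be wired in on the Lucene side, written against a recent Lucene release (older releases need a Version argument in the analyzer constructors). Which analyzer backs which wiki, and the fallback choice, are assumptions.

  // Per-language analyzer registry (sketch; mappings are assumptions).
  import java.util.HashMap;
  import java.util.Map;

  import org.apache.lucene.analysis.Analyzer;
  import org.apache.lucene.analysis.ar.ArabicAnalyzer;
  import org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer;
  import org.apache.lucene.analysis.pl.PolishAnalyzer;
  import org.apache.lucene.analysis.standard.StandardAnalyzer;

  public class LanguageAnalyzers {

      private static final Map<String, Analyzer> BY_LANG = new HashMap<>();
      static {
          BY_LANG.put("ar", new ArabicAnalyzer());        // algorithmic stemmer
          BY_LANG.put("zh", new SmartChineseAnalyzer());  // statistical segmentation
          BY_LANG.put("pl", new PolishAnalyzer());        // Stempel stemmer
          // Hebrew, Hungarian, Finnish, Japanese: plug in the external
          // morphology resources here once they are packaged.
      }

      /** Fall back to the standard analyzer when no language resource exists. */
      static Analyzer forWiki(String langCode) {
          return BY_LANG.getOrDefault(langCode, new StandardAnalyzer());
      }
  }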

How can search be made more interactive via Facets?

  1. Solr, instead of Lucene, could provide faceted search based on categories (see the query sketch below).
  2. The single most impressive change to search could come via facets.
  3. Facets can be generated from categories (though they work best as multiple shallow hierarchies).
  4. Facets can be generated via template analysis.
  5. Facets can be generated via semantic extensions (explore).
  6. Facets focused on culture (local, wiki), sentiment, importance, and popularity (edit, view, revert) may be refreshing.
  7. Facets can also be generated using named-entity and relational analysis.
  8. Facets may have a substantial processing cost if done wrong.
  9. A cluster-map interface might be popular.
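
A minimal SolrJ sketch of item 1: a query with category facets. The core URL and the "category" field name are assumptions about the schema, not part of this spec.

  // Faceted query via SolrJ (recent SolrJ API; URL and field are assumed).
  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.HttpSolrClient;
  import org.apache.solr.client.solrj.response.FacetField;
  import org.apache.solr.client.solrj.response.QueryResponse;

  public class FacetedSearchDemo {
      public static void main(String[] args) throws Exception {
          HttpSolrClient solr =
                  new HttpSolrClient.Builder("http://localhost:8983/solr/wiki").build();

          SolrQuery query = new SolrQuery("text:jaguar");
          query.setRows(10);
          query.setFacet(true);
          query.addFacetField("category");   // categories exposed as a facet
          query.setFacetMinCount(1);

          QueryResponse response = solr.query(query);
          FacetField categories = response.getFacetField("category");
          for (FacetField.Count c : categories.getValues()) {
              System.out.println(c.getName() + " (" + c.getCount() + ")");
          }
          solr.close();
      }
  }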

How Can Search Resolve Unexpected Title Ambiguity

  • The Art of War prescribes the following advice: "know the enemy and know yourself and you shall emerge victorious in 1000 searches" (italics are mine).
  • Google called it "I'm feeling lucky".

Ambiguity can come from:

  • The lexical form of the query (e.g. "bank": river or money).
  • The result domain - the top search result is an exact match for a disambiguation page.

In either case the search engine should be able to make a good (measured) guess as to what the user meant and give them the desired result.

The following data is available:

  • Squid cache access is sampled at 1 in 1000.
  • All edits are logged too.
  • If we wanted to collect intelligence we could instrument all links to jump to a redirect page which logs <source, target, user/ip-cookie, timestamp> and then fetches the required page.

  • It would be interesting to have these stats for all pages.
  • It would be really interesting to have these stats for disambiguation/redirect pages.
  • Some of this may be available from the site logs (are there any?).

Use case 1. General browsing history stats available for disambiguation pages


Here is a resolution heuristic:

  1. Use the intelligence vector <target, frequency> to jump to the most popular target (the 80% solution) - call it the "I hate disambiguation" preference.
  2. Use the intelligence vector <source, target, frequency> to produce document term-vector projections of source vs. target and match the most related source and target pages (this requires indexing the source; sketched below).
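
A hedged sketch of both heuristics, assuming the intelligence vectors are available as plain in-memory maps; how they are collected and stored is left open.

  // Sketch of the two resolution heuristics (data structures are assumed).
  import java.util.Map;

  public class DisambiguationResolver {

      /** Heuristic 1: jump to the most frequently chosen target
          (the "I hate disambiguation" preference). */
      static String mostPopularTarget(Map<String, Integer> targetFrequency) {
          String best = null;
          int bestCount = -1;
          for (Map.Entry<String, Integer> e : targetFrequency.entrySet()) {
              if (e.getValue() > bestCount) {
                  best = e.getKey();
                  bestCount = e.getValue();
              }
          }
          return best;
      }

      /** Heuristic 2: cosine similarity between the source page's term
          vector and a candidate target's term vector. */
      static double cosine(Map<String, Double> source, Map<String, Double> target) {
          double dot = 0, ns = 0, nt = 0;
          for (Map.Entry<String, Double> e : source.entrySet()) {
              Double w = target.get(e.getKey());
              if (w != null) dot += e.getValue() * w;
              ns += e.getValue() * e.getValue();
          }
          for (double w : target.values()) nt += w * w;
          return (ns == 0 || nt == 0) ? 0 : dot / (Math.sqrt(ns) * Math.sqrt(nt));
      }
  }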

Use case 2. Crowd-source local interest


Search patterns are often affected by television etc. This calls for analyzing search data and producing the following intelligence vector: <top memes, geo-location>. This would be produced every N <= 15 minutes.

  1. Use the intelligence vector <source, target, target freshness, frequency> together with <top memes, geo-location>, if significant for the search term, to steer towards the current interest.

Use case 3. User-specific browsing history also available

  1. Use <source, target, frequency> as above, but with a memory <my top memes + edit history> weighted by time, to fetch personalised search results.

How can search be made more relevant via Intelligence?

  1. Use current page (AKA referer)
  2. Use browsing history
  3. Use search history
  4. Use Profile
  5. API for serving ads/fundraising

How Can Search Be Made More Relevant via Metadata Extraction?


While a semantic wiki is one approach to metadata collection, Apache UIMA offers the possibility of extracting metadata from free text (as well as from templates), for example:

  • Entity detection (see the sketch below).
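
A minimal UIMA sketch: run an entity-detection analysis engine over the page text and read back its annotations. The descriptor file name is hypothetical, and the annotator behind it is assumed to exist.

  // Sketch of metadata extraction with Apache UIMA (descriptor is hypothetical).
  import org.apache.uima.UIMAFramework;
  import org.apache.uima.analysis_engine.AnalysisEngine;
  import org.apache.uima.jcas.JCas;
  import org.apache.uima.jcas.tcas.Annotation;
  import org.apache.uima.resource.ResourceSpecifier;
  import org.apache.uima.util.XMLInputSource;

  public class EntityExtractor {
      public static void main(String[] args) throws Exception {
          // "EntityDetector.xml" is a hypothetical analysis engine descriptor.
          XMLInputSource in = new XMLInputSource("EntityDetector.xml");
          ResourceSpecifier spec = UIMAFramework.getXMLParser().parseResourceSpecifier(in);
          AnalysisEngine ae = UIMAFramework.produceAnalysisEngine(spec);

          JCas jcas = ae.newJCas();
          jcas.setDocumentText("Wikipedia was launched by Jimmy Wales and Larry Sanger in 2001.");
          ae.process(jcas);

          // Print whatever annotations the engine produced (entities, etc.).
          for (Annotation ann : jcas.getAnnotationIndex()) {
              System.out.println(ann.getType().getShortName() + ": " + ann.getCoveredText());
          }
          ae.destroy();
      }
  }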

How To Test the Quality of Search Results?


Ideally one would like to have a list of queries plus their expected top result, highlight, etc. for different wikis, and to test the various algorithms against it. Since the data can change, one would like to use something that is stable over time.

  1. A generated Q&A corpus.
  2. A snapshot corpus.
  3. Real-world Q&A (less robust, since results on a live wiki will change over time).
  4. Some queries are easy targets (a unique article) while others are harder to satisfy (many results).
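
A small harness sketch for such a corpus: the mean reciprocal rank of the expected top article over a frozen set of (query, expected title) pairs. The SearchEngine interface is a stand-in for whichever implementation is being compared.

  // Sketch of a ranking test harness over a frozen test corpus.
  import java.util.List;
  import java.util.Map;

  public class RankingEvaluator {

      /** Minimal interface so different algorithms can be swapped in. */
      interface SearchEngine {
          List<String> search(String query);
      }

      /** Mean reciprocal rank of the expected title over all test queries. */
      static double meanReciprocalRank(Map<String, String> expectedByQuery,
                                       SearchEngine engine) {
          double sum = 0;
          for (Map.Entry<String, String> e : expectedByQuery.entrySet()) {
              List<String> results = engine.search(e.getKey());
              int rank = results.indexOf(e.getValue());
              if (rank >= 0) sum += 1.0 / (rank + 1);   // rank is 0-based
          }
          return sum / expectedByQuery.size();
      }
  }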

Personalised Results via ResponseTrackingFilter

  • Users' post-search actions should be tracked anonymously to test and evaluate how well the ranking meets their needs.
  • Users should be able to opt in to personalised tracking based on their view/edit history.
  • This information should be integrated into the ranking algorithm as a component that can filter search results.
External Link Scanning

External links should be scanned once they are added. This will facilitate:

  • testing if a link is alive.
  • testing if the content has changed.

The links should also be tallied for frequency counts.
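
A hedged sketch of the scanning step: a HEAD request tests liveness and a hash of the body detects content changes. The timeouts and the choice of MD5 are assumptions.

  // External-link scanning sketch (timeouts and hash choice are assumptions).
  import java.io.InputStream;
  import java.net.HttpURLConnection;
  import java.net.URL;
  import java.security.MessageDigest;

  public class ExternalLinkScanner {

      /** A link is considered alive if the server answers with 2xx or 3xx. */
      static boolean isAlive(String url) {
          try {
              HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
              conn.setRequestMethod("HEAD");
              conn.setConnectTimeout(5000);
              conn.setReadTimeout(5000);
              int code = conn.getResponseCode();
              return code >= 200 && code < 400;
          } catch (Exception e) {
              return false;
          }
      }

      /** Hash of the page body; compare with a stored hash to detect changes. */
      static String contentHash(String url) throws Exception {
          MessageDigest md5 = MessageDigest.getInstance("MD5");
          try (InputStream in = new URL(url).openStream()) {
              byte[] buf = new byte[8192];
              int n;
              while ((n = in.read(buf)) != -1) md5.update(buf, 0, n);
          }
          StringBuilder hex = new StringBuilder();
          for (byte b : md5.digest()) hex.append(String.format("%02x", b));
          return hex.toString();
      }
  }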

Cross-Language Search
  • Index a cross-language field with N=200 words from each language version of Wikipedia in it.
  • Then run the PLSI algorithm on it (see the model below).
  • This will produce a matrix that associates phrases with cross-language meanings.
  • It should then be possible to use the output of this index to do cross-language search.
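
For reference, the standard PLSI/PLSA model and its EM updates (Hofmann's formulation, nothing specific to this spec); the learned P(w|z) matrix is what associates phrases with cross-language topics.

  P(w,d) = \sum_{z} P(z)\,P(w \mid z)\,P(d \mid z)

  \text{E-step:}\quad P(z \mid d,w) = \frac{P(z)\,P(w \mid z)\,P(d \mid z)}{\sum_{z'} P(z')\,P(w \mid z')\,P(d \mid z')}

  \text{M-step:}\quad P(w \mid z) \propto \sum_{d} n(d,w)\,P(z \mid d,w), \qquad
  P(d \mid z) \propto \sum_{w} n(d,w)\,P(z \mid d,w), \qquad
  P(z) \propto \sum_{d,w} n(d,w)\,P(z \mid d,w)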

Payloads

  • Payloads allow storing and retrieving arbitrary metadata bytes for each token.
  • Payloads can be used to boost at the term level (using function queries). A filter sketch follows at the end of this section.

What might go into payloads?

  1. HTML (logical) markup info that is stripped, e.g.:
    1. isHeader
    2. isEmphasized
    3. isCode
  2. Wiki markup
    1. isLinkText
    2. isImageDesc
    3. TemplateNestingLevel
  3. Linguistic data
    1. LangId
    2. LemmaId - id of the base form
    3. MorphState - the lemma's morphological state
    4. ProbPosNN - probability it is a noun
    5. ProbPosVB - probability it is a verb
    6. ProbPosADJ - probability it is an adjective
    7. ProbPosADV - probability it is an adverb
    8. ProbPosPROP - probability it is a proper noun
    9. ProbPosUNKNOWN - probability it is other/unknown
  4. Semantic data
    1. ContextBasedSeme (if disambiguated)
  5. LanguageIndependentSemeId
  6. isWikiTitle
  7. Reputation
    1. Owner(ID,Rank)
    2. TokenReputation
  • some can be used for ranking.
  • some can be used for cross language search.
  • some can be used to improve precision.
  • some can be used to increase recall.
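
A sketch of a token filter that attaches such flags as payloads, written against a Lucene 4.x-style API where a payload is a BytesRef (older lucene-search releases used a Payload object instead). The one-byte flag layout is an assumption.

  // Token filter that tags every token with a one-byte payload (sketch).
  import java.io.IOException;

  import org.apache.lucene.analysis.TokenFilter;
  import org.apache.lucene.analysis.TokenStream;
  import org.apache.lucene.analysis.tokenattributes.PayloadAttribute;
  import org.apache.lucene.util.BytesRef;

  public final class WikiPayloadFilter extends TokenFilter {

      // Hypothetical flag bits; which bit means what is an assumption.
      public static final byte IS_HEADER = 1;
      public static final byte IS_LINK_TEXT = 2;

      private final PayloadAttribute payloadAtt = addAttribute(PayloadAttribute.class);
      private final byte flags;

      /** flags describe the markup context of this stream (e.g. a header field). */
      public WikiPayloadFilter(TokenStream input, byte flags) {
          super(input);
          this.flags = flags;
      }

      @Override
      public boolean incrementToken() throws IOException {
          if (!input.incrementToken()) {
              return false;
          }
          // Attach the flag byte to every token; ranking code can read it
          // back later (e.g. in a payload-aware query) to boost headers.
          payloadAtt.setPayload(new BytesRef(new byte[] { flags }));
          return true;
      }
  }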

References
