SearchLand:
search quality for beginners




Valeria de Paiva
Santa Clara University
Nov 2010
Check http://www.parc.com/event/934/adventures-in-searchland.html
Searchland: Search quality for Beginners
Outline
    SearchLand
    Search engine basics
    Measuring search quality
    Conclusions...
    and Opportunities
SearchLand?
“Up until now most search
  engine development has
  gone on at companies with
  little publication of technical
  details. This causes search
  engine technology to remain
  largely a black art and to be
  advertising oriented.’’ Brin
  and Page, “The Anatomy of a Large-Scale
  Hypertextual Web Search Engine”, 1998
  Disclaimer: This talk presents the guesswork of the author.
  It does not reflect the views of my previous employers or practices at work.

                                                          Thanks kxdc2007!
SearchLand
    Twelve years later the
     complaint remains…
    Gap between research and
     practice widened
    Measuring SE quality is
     `adversarial computing’
    Many dimensions of quality:
     pictures, snippets, categories,
     suggestions, timeliness, speed,
     etc…
SearchLand: Draft Map
Based on slides for Croft, Metzler and Strohman's
 “Search Engines: Information Retrieval in Practice” (2009),
 the tutorial `Web Search Engine Metrics' by Dasdan,
 Tsioutsiouliklis and Velipasaoglu for WWW09/10,
 and Hugh Williams' slides for the ACM SIG on Data Mining

          THANKS GUYS!!…
Search Engine Basics...
Web search engines don’t
 search the web:
  They search a copy of the web
  They crawl documents from the
   web
  They index the documents, and
   provide a search interface
   based on that index
  They present short snippets that
   allow users to judge relevance
  Users click on links to visit the
   actual web document
Search Engine Basics
       Basic architecture




From “Web Search Engine Metrics for Measuring User Satisfaction”,
tutorial at the WWW conference, Dasdan et al., 2009/2010
Crawling
“you don’t need a lot of thinking to do crawling; you need bandwidth”




         CRAWLERS
         Fetch new resources from new domains or pages
         Fetch new resources from existing pages
         Re-fetch existing resources that have changed

     Prioritization is essential:
        There are far more URLs than available fetching bandwidth
        For large sites, it’s difficult to fetch all resources
        Essential to balance re-fetch and discovery
        Essential to balance new site exploration with old site
           exploration

         Snapshot or incremental? How broad? How seeded?
Writing/running a crawler
      isn’t straightforward….


                Crawler Challenges
Crawlers shouldn’t overload or overvisit sites
Must respect robots.txt exclusion standard

Many URLs exist for the same resource
URLs redirect to other resources (often)
Dynamic pages can generate loops, unending lists, and
 other traps
Not Found pages often return 200 (OK) HTTP status codes
DNS failures
Pages can look different to end-user browsers and crawlers
Pages can require JavaScript processing
Pages can require cookies
Pages can be built in non-HTML environments
To index or not to index…
After crawling, documents need to be converted into
  index terms… to create an index.




You need to decide how much of the page you
 index and which `features or signals' you care
 about.
Also which kinds of file to index? Text, sure.
Pdfs, jpegs, audio, torrents, videos, Flash???
The obvious, the ugly and the
                  clever…



Some obvious:
hundreds of billions of web pages…
Neither practical nor desirable to index all. Must remove:
spam pages, illegal pages, malware, repetitive or duplicate pages,
crawler traps, pages that no longer exist, pages that have
substantially changed, etc…
Most search engines index in the range of 20 to 50 billion
documents (Williams)
How many pages each engine indexes, and how many pages are
on the web are hard research problems…
Indexing: the obvious…
   which are the right pages?
   pages that users want (duh…)
   Pages: popular in the web link graph
         match queries
         from popular sites
         clicked on in search results
         shown by competitors
         in the language or market of the users
         distinct from other pages
         change at a moderate rate, etc…
The head is stable; the tail consists of billions of
 candidate pages with similar scores
Indexing: more obvious…features!
How to index pages so that we
 can find them?
A feature (signal) is an attribute of a
  document that a computer can
  detect, and that we think may
  indicate relevance.

Some features are obvious, some are
 secret sauce and how to use them
 is definitely a trade secret for each
 search engine.
Features: the obvious…
 Term matching: the system should prefer documents that contain the query terms.
 Term frequency: the system should prefer documents that contain the query terms many times.

 Inverse document frequency: rare words are more important than frequent words.

TF/IDF (Karen Spärck Jones); see the sketch below.
Indexing: some ugly…
Proximity: words that are close together in the query
  should be close together in relevant documents.
Term location: prefer documents that contain query
  words in the title or headings.




      Prefer documents that contain query words in
       the URL.
      www.whitehouse.gov
      www.linkedin.com/in/valeriadepaiva
Indexing: some cleverness?


Prefer documents that are
 authoritative and popular.

 HOW?
PageRank
 in 3 easy bullets:
1.  The random surfer is a
    hypothetical user that clicks
    on web links at random.
2. Popular pages connect…
3. Leverage connections:
  Think of a gigantic graph (connected pages) and its transition matrix
  Make the matrix probabilistic; its entries correspond to the random surfer's choices
  Find its principal eigenvector = PageRank
PageRank is the proportion of time that a random
surfer would jump to a particular page.
Indexing: summing up
Store (multiple copies of?) the web in a
   document store
Iterate over the document store to choose
   documents
Create an index, and ship it to the index
   serving nodes
Repeat…
Sounds easy? It isn’t!

Three words: scale-up, parallelism, time
Selection and Ranking



Quality of Search: what do we want to do?
Optimize for user satisfaction in each
 component of the pipeline
Need to measure user satisfaction
Evaluating Relevance is HARD!
  Effectiveness, efficiency and
cost are related - difficult trade-off
  Two main kinds of approach:


     IR traditional evaluations
     click data evaluation & log
     analysis
    Many books on IR, many
     patents/trade secrets from
     search engines…
    Another talk
IR evaluation vs Search Engine
    TREC competitions (NIST and
     U.S. Department of Defense),
     since 1992
    Goal: provide the infrastructure
     necessary for large-scale
     evaluation of text retrieval
     methodologies
    several tracks, including a Web
     track, using ClueWeb09, one
     billion webpages
     TREC results are a baseline; do
     they work for search engines?
Evaluating Relevance in IR…
If the universe of documents is small, you can easily compute precision and recall
IR vs Search Engines
Evaluating the whole system…
        Relevance metrics not based on users'
         experiences or tasks
        Some attempts:

Coverage metrics
Latency and Discovery metrics
Diversity metrics
Freshness metrics
Freshness of snippets?
Measuring change across the whole internet?
Presentation metrics:
suggestions, spelling corrections, snippets, tabs, categories, definitions, images,
videos, timelines, maplines, streaming results, social results, …
Conclusions
    Relevance is elusive, but
     essential.
    Improvement requires
     metrics and analysis,
     continuously
    Gone over a rough map of
     the issues and some
     proposed solutions
    Many thanks to Dasdan, Tsioutsiouliklis and
     Velipasaoglu, Croft, Metzler and Strohman,
     (especially Strohman!) and Hugh Williams
     for slides/pictures.
Coda
    There are many search engines.
     Their results tend to be very
     similar. (how similar?)
    Are we seeing everything?
     Reports estimate we can see
     only 15% of the existing web.
    Probing the web is mostly
     popularity based. You're likely to
     see what others have seen
     before. But your seeing
     increases the popularity of what
     you saw, thereby reducing the
     pool of available stuff. Vicious or
     virtuous circle? How to measure?
Searchland: Search quality for Beginners
References
Croft, Metzler and Strohman, “Search Engines:
 Information Retrieval in Practice”, Addison-Wesley, 2009
The tutorial `Web Search Engine Metrics' by Dasdan,
 Tsioutsiouliklis and Velipasaoglu for WWW09/10,
 available from http://www2010.org/www/program/tutorials/
Hugh Williams' slides for ACM Data Mining, May 2010
Anna Patterson, “Why Writing Your Own Search Engine Is Hard”,
 ACM Queue, 2004
