IR evaluation


The evaluation of an information retrieval system is the process of assessing how well the system meets the information needs of its users. Traditional evaluation metrics, designed for Boolean retrieval or top-k retrieval, include precision and recall.

  • Precision is the fraction of retrieved documents that are relevant to the query:

<math>\mbox{precision}=\frac{|\{\text{relevant documents}\}\cap\{\text{retrieved documents}\}|}{|\{\text{retrieved documents}\}|}</math>

  • Recall is the fraction of the documents relevant to the query that are successfully retrieved:

<math>\mbox{recall}=\frac{|\{\text{relevant documents}\}\cap\{\text{retrieved documents}\}|}{|\{\text{relevant documents}\}|}</math>
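The two definitions above can be sketched directly as set operations. This is a minimal illustration; the document IDs are made up for the example.

```python
def precision(retrieved, relevant):
    """Fraction of retrieved documents that are relevant."""
    if not retrieved:
        return 0.0
    return len(set(retrieved) & set(relevant)) / len(retrieved)


def recall(retrieved, relevant):
    """Fraction of relevant documents that were successfully retrieved."""
    if not relevant:
        return 0.0
    return len(set(retrieved) & set(relevant)) / len(relevant)


# Hypothetical result list and relevance judgments:
retrieved = ["d1", "d2", "d3", "d4"]
relevant = ["d1", "d3", "d5"]

print(precision(retrieved, relevant))  # 2 of 4 retrieved are relevant -> 0.5
print(recall(retrieved, relevant))     # 2 of 3 relevant were retrieved -> 0.666...
```

Note that the two metrics trade off against each other: retrieving every document in the collection gives perfect recall but poor precision.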

For modern (Web-scale) information retrieval, recall is no longer a meaningful metric, as many queries have thousands of relevant documents, and few users will be interested in reading all of them. Precision at k documents (P@k) is still a useful metric (e.g., P@10 measures the fraction of relevant results among the top ten, roughly the first search results page), but it fails to take into account the positions of the relevant documents among the top k.
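P@k is precision computed over only the top k positions of a ranked list. A minimal sketch, with invented document IDs:

```python
def precision_at_k(ranked, relevant, k):
    """Precision over the top-k documents of a ranked result list."""
    top_k = ranked[:k]
    return len(set(top_k) & set(relevant)) / k


# Hypothetical ranking and relevance judgments:
ranked = ["d7", "d3", "d9", "d1", "d8"]
relevant = {"d1", "d3", "d5"}

print(precision_at_k(ranked, relevant, 5))  # 2 relevant in the top 5 -> 0.4
```

As the text notes, P@5 here is 0.4 whether the relevant documents sit at ranks 1–2 or at ranks 4–5; the metric is blind to position within the cutoff.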

Virtually all modern evaluation metrics (e.g., mean average precision, discounted cumulative gain) are designed for ranked retrieval without any explicit rank cutoff, taking into account the relative order of the documents retrieved by the search engines and giving more weight to documents returned at higher ranks.
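Discounted cumulative gain is one such rank-sensitive metric: each document's relevance grade is discounted by the logarithm of its rank, so relevant documents returned earlier contribute more. A sketch of DCG and its normalized form (nDCG), using the common log2(rank + 1) discount:

```python
import math


def dcg_at_k(gains, k):
    """Discounted cumulative gain over the top-k relevance grades.

    `gains` lists the relevance grade of each result in rank order;
    the grade at rank i (1-based) is discounted by log2(i + 1).
    """
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))


def ndcg_at_k(gains, k):
    """DCG normalized by the DCG of the ideal (best possible) ordering."""
    ideal = dcg_at_k(sorted(gains, reverse=True), k)
    return dcg_at_k(gains, k) / ideal if ideal > 0 else 0.0


# Hypothetical graded judgments for a ranked list (0 = not relevant):
gains = [3, 2, 3, 0, 1]
print(dcg_at_k(gains, 5))   # raw discounted gain
print(ndcg_at_k(gains, 5))  # between 0 and 1; 1 means the ideal ordering
```

Swapping a high-grade document to a lower rank lowers DCG, which is exactly the position sensitivity that P@k lacks.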

