METEOR

METEOR (Metric for Evaluation of Translation with Explicit ORdering) is a metric for the evaluation of machine translation output. The metric is based on the harmonic mean of unigram precision and recall, with recall weighted higher than precision. It also has several features not found in other metrics, such as stemming and synonymy matching, along with the standard exact word matching. The metric was designed to fix some of the problems found in the more popular BLEU metric, and also to produce good correlation with human judgement at the sentence or segment level. This differs from BLEU, which seeks correlation at the corpus level. Results have been presented which give correlation of up to 0.964 with human judgement at the corpus level, compared to BLEU's 0.817 on the same data set. At the sentence level, the maximum correlation with human judgement achieved was 0.403.
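
To make the scoring concrete, the following Python sketch computes unigram precision, unigram recall, and their weighted harmonic mean, with recall weighted nine times more heavily than precision, which is the weighting METEOR uses. It is a minimal sketch assuming exact token matching only; the full metric adds stemming, synonymy matching, and a fragmentation penalty on top of this value.

    from collections import Counter

    def meteor_fmean(candidate: str, reference: str) -> float:
        """Weighted harmonic mean of unigram precision and recall,
        with recall weighted 9 times more than precision (as in METEOR).
        Exact matching only; stemming/synonymy modules are omitted."""
        cand = candidate.split()
        ref = reference.split()
        # Clipped unigram matches: each reference token matches at most once.
        m = sum((Counter(cand) & Counter(ref)).values())
        if m == 0:
            return 0.0
        precision = m / len(cand)  # matched unigrams / candidate length
        recall = m / len(ref)      # matched unigrams / reference length
        # Weighted harmonic mean: F = 10PR / (R + 9P)
        return 10 * precision * recall / (recall + 9 * precision)

    print(meteor_fmean("the cat sat on the mat", "the cat was on the mat"))  # 0.833...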

Algorithm

As with BLEU, the basic unit of evaluation is the sentence. The algorithm first creates an alignment between two sentences: the candidate translation string and the reference translation string. The alignment is a set of mappings between unigrams. A mapping can be thought of as a line between a unigram in one string and a unigram in the other string. The constraints are as follows: every unigram in the candidate translation must map to zero or one unigram in the reference translation.
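
A minimal Python sketch of the exact-match stage of such an alignment is given below: each candidate unigram is greedily paired with the first still-unmatched identical unigram in the reference, so no unigram on either side is used more than once. This is illustrative only; the full metric also runs stemming and synonymy modules and, among alignments of equal size, prefers the one with the fewest crossing mappings.

    def exact_alignment(candidate: str, reference: str) -> list:
        """Greedy exact-match alignment: pair each candidate unigram with
        the first still-unmatched identical unigram in the reference.
        Returns a list of (candidate_index, reference_index) mappings."""
        cand = candidate.split()
        ref = reference.split()
        used = [False] * len(ref)        # each reference unigram maps at most once
        mappings = []
        for i, word in enumerate(cand):  # each candidate unigram maps at most once
            for j, ref_word in enumerate(ref):
                if not used[j] and word == ref_word:
                    used[j] = True
                    mappings.append((i, j))
                    break
        return mappings

    print(exact_alignment("the cat sat on the mat", "the cat was on the mat"))
    # [(0, 0), (1, 1), (3, 3), (4, 4), (5, 5)]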