From 2bee099c6c6f8e9b63528eb385c7cf4642142834 Mon Sep 17 00:00:00 2001
From: Pavan Mandava
Date: Sun, 26 Apr 2020 23:49:16 +0200
Subject: [PATCH 1/5] Update README.md

---
 README.md | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/README.md b/README.md
index 86a265c..6341154 100644
--- a/README.md
+++ b/README.md
@@ -1,2 +1,19 @@
 # citation-analysis
 Project repo for Computational Linguistics Team Laboratory at the University of Stuttgart
+
+
+### Evaluation
+we plan to implement and use ***f1_score*** metric for evaluation
+
+> F1 score is a weighted average of Precision and Recall(or Harmonic Mean between Precision and Recall)
+> The formula for F1 Score is:
+> F1 = 2 * (precision * recall) / (precision + recall)
+
+```python
+eval.metrics.f1_score(y_true, y_pred, labels, average)
+```
+#### Parameters:
+**y_true** : 1-d array or list of gold class values
+**y_pred** : 1-d array or list of estimated values returned by a classifier
+**labels** : list of labels/classes
+**average**: string - [None, 'micro', 'macro']

From dea631ed458680f05cec0e2455d2f7b0b99f1021 Mon Sep 17 00:00:00 2001
From: Pavan Mandava
Date: Sun, 26 Apr 2020 23:49:42 +0200
Subject: [PATCH 2/5] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 6341154..e44605d 100644
--- a/README.md
+++ b/README.md
@@ -5,7 +5,7 @@ Project repo for Computational Linguistics Team Laboratory at the University of
 ### Evaluation
 we plan to implement and use ***f1_score*** metric for evaluation
 
-> F1 score is a weighted average of Precision and Recall(or Harmonic Mean between Precision and Recall)
+> F1 score is a weighted average of Precision and Recall(or Harmonic Mean between Precision and Recall).
 > The formula for F1 Score is:
 > F1 = 2 * (precision * recall) / (precision + recall)

From 5946c40038817693163030e48820e36a8fb71865 Mon Sep 17 00:00:00 2001
From: Pavan Mandava
Date: Sun, 26 Apr 2020 23:50:11 +0200
Subject: [PATCH 3/5] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index e44605d..0084c54 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@ Project repo for Computational Linguistics Team Laboratory at the University of
 
 
 ### Evaluation
-we plan to implement and use ***f1_score*** metric for evaluation
+we plan to implement and use ***f1_score*** metric for evaluation of our classifier
 
 > F1 score is a weighted average of Precision and Recall(or Harmonic Mean between Precision and Recall).
 > The formula for F1 Score is:

From 3fd9f6489857521877868285cfe164617818d150 Mon Sep 17 00:00:00 2001
From: Pavan Mandava
Date: Sun, 26 Apr 2020 23:50:33 +0200
Subject: [PATCH 4/5] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 0084c54..a914518 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 Project repo for Computational Linguistics Team Laboratory at the University of Stuttgart
 
 
-### Evaluation
+### **Evaluation**
 we plan to implement and use ***f1_score*** metric for evaluation of our classifier
 
 > F1 score is a weighted average of Precision and Recall(or Harmonic Mean between Precision and Recall).
From c44692816b25c659d7972d30c4c390b4b553876e Mon Sep 17 00:00:00 2001
From: Pavan Mandava
Date: Mon, 27 Apr 2020 00:03:35 +0200
Subject: [PATCH 5/5] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index a914518..f1140af 100644
--- a/README.md
+++ b/README.md
@@ -16,4 +16,4 @@ eval.metrics.f1_score(y_true, y_pred, labels, average)
 **y_true** : 1-d array or list of gold class values
 **y_pred** : 1-d array or list of estimated values returned by a classifier
 **labels** : list of labels/classes
-**average**: string - [None, 'micro', 'macro']
+**average**: string - [None, 'micro', 'macro'] If None, the scores for each class are returned.
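The `eval.metrics.f1_score` signature described in the patches above (gold labels, predicted labels, a label list, and `average` in [None, 'micro', 'macro']) could be realized roughly as follows. This is a minimal sketch, not the project's actual implementation; the counting logic and function body are assumptions based only on the parameter descriptions and the formula F1 = 2 * (precision * recall) / (precision + recall):

```python
from collections import Counter

def f1_score(y_true, y_pred, labels, average=None):
    """Sketch of the README's f1_score: per-class F1, optionally averaged.

    average: None returns one score per entry in `labels`;
    'macro' averages the per-class scores; 'micro' pools all
    true/false positives and negatives before computing F1.
    """
    tp, fp, fn = Counter(), Counter(), Counter()
    for gold, pred in zip(y_true, y_pred):
        if gold == pred:
            tp[gold] += 1
        else:
            fp[pred] += 1  # predicted this class, gold was another
            fn[gold] += 1  # gold was this class, predicted another

    def f1(t, p, n):
        precision = t / (t + p) if t + p else 0.0
        recall = t / (t + n) if t + n else 0.0
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    per_class = [f1(tp[lab], fp[lab], fn[lab]) for lab in labels]
    if average is None:
        return per_class
    if average == 'macro':
        return sum(per_class) / len(labels)
    if average == 'micro':
        return f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    raise ValueError("average must be None, 'micro' or 'macro'")
```

With `average=None` the return value matches the behavior added in PATCH 5/5 ("the scores for each class are returned"), one score per entry in `labels`.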