From 2bee099c6c6f8e9b63528eb385c7cf4642142834 Mon Sep 17 00:00:00 2001
From: Pavan Mandava
Date: Sun, 26 Apr 2020 23:49:16 +0200
Subject: [PATCH] Update README.md

---
 README.md | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/README.md b/README.md
index 86a265c..6341154 100644
--- a/README.md
+++ b/README.md
@@ -1,2 +1,19 @@
 # citation-analysis
 Project repo for Computational Linguistics Team Laboratory at the University of Stuttgart
+
+
+### Evaluation
+We plan to implement and use the ***f1_score*** metric for evaluation.
+
+> The F1 score is a weighted average of precision and recall (specifically, the harmonic mean of the two).
+> The formula for the F1 score is:
+> F1 = 2 * (precision * recall) / (precision + recall)
+
+```python
+eval.metrics.f1_score(y_true, y_pred, labels, average)
+```
+#### Parameters:
+**y_true** : 1-d array or list of gold class values
+**y_pred** : 1-d array or list of predicted values returned by a classifier
+**labels** : list of labels/classes
+**average**: string - [None, 'micro', 'macro']
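The planned `eval.metrics.f1_score` call in the patch above could be sketched as follows. This is a hypothetical implementation, not the repo's actual code: it assumes scikit-learn-style semantics for `average` (per-label scores for `None`, an unweighted mean for `'macro'`, globally pooled counts for `'micro'`), and the label names in the usage example are illustrative only.

```python
# Hypothetical sketch of the planned eval.metrics.f1_score helper.
# Assumes sklearn-style `average` semantics; the real project API may differ.

def f1_score(y_true, y_pred, labels, average=None):
    """Compute F1 = 2 * (precision * recall) / (precision + recall)."""
    # Per-label confusion counts: (true positives, false positives, false negatives).
    per_label = []
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        per_label.append((tp, fp, fn))

    def f1(tp, fp, fn):
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

    if average is None:
        return [f1(*counts) for counts in per_label]            # one score per label
    if average == 'macro':
        return sum(f1(*counts) for counts in per_label) / len(labels)  # unweighted mean
    if average == 'micro':
        tp, fp, fn = (sum(col) for col in zip(*per_label))      # pool counts globally
        return f1(tp, fp, fn)
    raise ValueError("average must be None, 'micro' or 'macro'")


# Usage with illustrative citation-class labels (hypothetical, not the project's label set):
y_true = ['background', 'method', 'result', 'method']
y_pred = ['background', 'method', 'method', 'method']
print(f1_score(y_true, y_pred, ['background', 'method', 'result'], average='macro'))
```

Note the design choice behind `average`: `'macro'` weights every class equally regardless of frequency, while `'micro'` pools the counts first and therefore favors frequent classes; reporting both is common when the label distribution is skewed.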