Project repo for CL Team Laboratory at the University of Stuttgart.
Mirror of the GitHub repo: https://github.com/pavan245/citation-analysis
# citation-analysis
Project repo for the Computational Linguistics Team Laboratory at the University of Stuttgart.
## Evaluation
We plan to implement the F1 score metric to evaluate our classifier.
The F1 score is the harmonic mean of precision and recall:

F1 = 2 * (precision * recall) / (precision + recall)
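As a concrete illustration, the sketch below computes per-class F1 directly from gold and predicted label lists. It is a minimal reference implementation, not the project's actual `eval.metrics` code, and the label names are only examples.

```python
def f1_per_class(y_true, y_pred, labels):
    """Compute the F1 score for each label from gold and predicted label lists."""
    scores = {}
    for label in labels:
        # Count true positives, false positives, and false negatives for this label.
        tp = sum(1 for t, p in zip(y_true, y_pred) if p == label and t == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if p == label and t != label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if p != label and t == label)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        # Harmonic mean of precision and recall (0.0 when both are zero).
        scores[label] = (2 * precision * recall / (precision + recall)
                         if (precision + recall) else 0.0)
    return scores

# Example with illustrative citation-intent labels:
gold = ["background", "method", "method", "result"]
pred = ["background", "method", "result", "result"]
print(f1_per_class(gold, pred, ["background", "method", "result"]))
```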
`eval.metrics.f1_score(y_true, y_pred, labels, average)`

Parameters:
- `y_true`: 1-d array or list of gold class values
- `y_pred`: 1-d array or list of predicted values returned by the classifier
- `labels`: list of labels/classes
- `average`: string, one of `None`, `'micro'`, or `'macro'`. If `None`, the score for each class is returned.
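A possible call, assuming the module layout above. The return values shown here (a single float for `'micro'`/`'macro'`, a per-class mapping for `None`) are an assumption rather than something fixed by the signature alone:

```python
# Hypothetical usage of eval.metrics.f1_score, based only on the signature above.
from eval.metrics import f1_score

gold = ["background", "method", "method", "result"]
pred = ["background", "method", "result", "result"]
labels = ["background", "method", "result"]

macro_f1 = f1_score(gold, pred, labels, average="macro")  # assumed: single averaged score
per_class = f1_score(gold, pred, labels, average=None)    # assumed: one score per label
print(macro_f1, per_class)
```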