Project repo for CL Team Laboratory at the University of Stuttgart. Mirror from GitHub repo => https://github.com/pavan245/citation-analysis

README.md

citation-analysis

Project repo for the Computational Linguistics Team Laboratory at the University of Stuttgart.

Evaluation

We plan to implement and use the F1 score metric to evaluate our classifier.

The F1 score is the harmonic mean of precision and recall.
The formula for F1 Score is: F1 = 2 * (precision * recall) / (precision + recall)
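For example, with a precision of 0.8 and a recall of 0.5, F1 = 2 * (0.8 * 0.5) / (0.8 + 0.5) ≈ 0.615.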

eval.metrics.f1_score(y_true, y_pred, labels, average)

Parameters:

y_true : 1-d array or list of gold class values
y_pred : 1-d array or list of predicted class values returned by the classifier
labels : list of labels/classes
average : string, one of [None, 'micro', 'macro']. If None, the score for each class is returned.
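
As a rough illustration, here is a minimal Python sketch of such a function, assuming it follows the parameters documented above; the actual implementation in the eval module may differ, for example in how zero division or unseen labels are handled.

```python
# Illustrative sketch only -- not the repo's actual eval.metrics.f1_score.
# Assumes every value in y_true and y_pred appears in `labels`.

def f1_score(y_true, y_pred, labels, average=None):
    """Compute per-class F1, or micro-/macro-averaged F1."""
    # Count true positives, false positives, and false negatives per label.
    tp = {label: 0 for label in labels}
    fp = {label: 0 for label in labels}
    fn = {label: 0 for label in labels}
    for gold, pred in zip(y_true, y_pred):
        if gold == pred:
            tp[gold] += 1
        else:
            fp[pred] += 1
            fn[gold] += 1

    def f1(tp_, fp_, fn_):
        precision = tp_ / (tp_ + fp_) if (tp_ + fp_) else 0.0
        recall = tp_ / (tp_ + fn_) if (tp_ + fn_) else 0.0
        if precision + recall == 0.0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    if average == 'micro':
        # Micro-averaging pools the counts over all classes first.
        return f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    per_class = [f1(tp[l], fp[l], fn[l]) for l in labels]
    if average == 'macro':
        # Macro-averaging is the unweighted mean of per-class F1 scores.
        return sum(per_class) / len(labels)
    # average=None: return the score for each class.
    return per_class


# Toy usage example (hypothetical citation-intent labels):
if __name__ == '__main__':
    y_true = ['background', 'method', 'result', 'method']
    y_pred = ['background', 'method', 'method', 'method']
    print(f1_score(y_true, y_pred,
                   labels=['background', 'method', 'result'],
                   average='macro'))
```

Note that for single-label multi-class classification, micro-averaged F1 pools the counts over all classes and therefore equals accuracy, while macro-averaged F1 weights every class equally regardless of its frequency.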