Project repo for the CL Team Laboratory at the University of Stuttgart. Mirrored from the GitHub repo: https://github.com/pavan245/citation-analysis


# citation-analysis

Project repo for the Computational Linguistics Team Laboratory at the University of Stuttgart.

## Evaluation

We plan to implement and use the F1 score metric to evaluate our classifier.

The F1 score is the harmonic mean of precision and recall.
The formula for the F1 score is: F1 = 2 * (precision * recall) / (precision + recall)
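
For example, with precision = 0.8 and recall = 0.5: F1 = 2 * (0.8 * 0.5) / (0.8 + 0.5) ≈ 0.615.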

`eval.metrics.f1_score(y_true, y_pred, labels, average)`

Parameters:

- `y_true` : 1-d array or list of gold class values
- `y_pred` : 1-d array or list of predicted values returned by the classifier
- `labels` : list of labels/classes
- `average` : string, one of `[None, 'micro', 'macro']`; if `None`, the score for each class is returned
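
As a rough sketch of what this function could look like: the signature and averaging options come from this README, but the implementation below (plain-Python counting) and the citation-intent labels in the demo are assumptions, not the project's actual code.

```python
from collections import Counter

def f1_score(y_true, y_pred, labels, average=None):
    """F1 per class (average=None), or micro-/macro-averaged over `labels`."""
    # Count per-class true positives, false positives, and false negatives.
    tp, fp, fn = Counter(), Counter(), Counter()
    for gold, pred in zip(y_true, y_pred):
        if gold == pred:
            tp[gold] += 1
        else:
            fp[pred] += 1
            fn[gold] += 1

    def f1(p, r):
        return 2 * p * r / (p + r) if p + r else 0.0

    def precision(label):
        denom = tp[label] + fp[label]
        return tp[label] / denom if denom else 0.0

    def recall(label):
        denom = tp[label] + fn[label]
        return tp[label] / denom if denom else 0.0

    if average is None:
        # One F1 score per class, in the order given by `labels`.
        return [f1(precision(l), recall(l)) for l in labels]
    if average == 'micro':
        # Pool the counts across all classes, then compute a single F1.
        tp_all = sum(tp[l] for l in labels)
        fp_all = sum(fp[l] for l in labels)
        fn_all = sum(fn[l] for l in labels)
        p = tp_all / (tp_all + fp_all) if tp_all + fp_all else 0.0
        r = tp_all / (tp_all + fn_all) if tp_all + fn_all else 0.0
        return f1(p, r)
    if average == 'macro':
        # Unweighted mean of the per-class F1 scores.
        return sum(f1(precision(l), recall(l)) for l in labels) / len(labels)
    raise ValueError("average must be None, 'micro' or 'macro'")


# Example with made-up citation-intent labels:
y_true = ['background', 'method', 'result', 'method']
y_pred = ['background', 'method', 'method', 'method']
print(f1_score(y_true, y_pred, ['background', 'method', 'result'], average='macro'))
# -> 0.6  (per-class F1 scores: 1.0, 0.8, 0.0)
```

These conventions mirror `sklearn.metrics.f1_score`, which makes it easy to cross-check our implementation against sklearn on the same inputs.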