# citation-analysis

Project repo for the Computational Linguistics Team Laboratory at the University of Stuttgart. Mirrored from the GitHub repo: https://github.com/pavan245/citation-analysis

## Evaluation

We plan to implement and use the F1 score metric to evaluate our classifier.

The F1 score is the harmonic mean of precision and recall:

F1 = 2 * (precision * recall) / (precision + recall)
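For example, a classifier with precision = 0.75 and recall = 0.60 gets F1 = 2 * (0.75 * 0.60) / (0.75 + 0.60) = 0.90 / 1.35 ≈ 0.667.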

`eval.metrics.f1_score(y_true, y_pred, labels, average)`

Parameters:

- `y_true` : 1-d array or list of gold class values
- `y_pred` : 1-d array or list of predicted values returned by the classifier
- `labels` : list of labels/classes to score
- `average` : string, one of `[None, 'micro', 'macro']`. If `None`, the score for each class is returned; `'micro'` computes a single score from counts pooled over all classes; `'macro'` returns the unweighted mean of the per-class scores.
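To make the behavior concrete, here is a minimal sketch of how a function with the signature above could compute the metric from the formula. This is an illustration only, not the repository's actual `eval.metrics.f1_score`; the per-class counting and the `'micro'`/`'macro'` averaging shown here are assumptions based on the parameter descriptions.

```python
from collections import Counter

def f1_score(y_true, y_pred, labels, average=None):
    """Sketch of an F1 metric with the signature described above.

    NOTE: illustrative only -- not the repository's actual implementation.
    """
    # Per-class true-positive / false-positive / false-negative counts.
    tp, fp, fn = Counter(), Counter(), Counter()
    for gold, pred in zip(y_true, y_pred):
        if gold == pred:
            tp[gold] += 1
        else:
            fp[pred] += 1
            fn[gold] += 1

    def f1(t, p, n):
        precision = t / (t + p) if t + p else 0.0
        recall = t / (t + n) if t + n else 0.0
        # F1 = 2 * (precision * recall) / (precision + recall)
        return (2 * precision * recall / (precision + recall)
                if precision + recall else 0.0)

    if average == 'micro':
        # Pool counts over all classes, then compute a single F1.
        return f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))

    per_class = [f1(tp[c], fp[c], fn[c]) for c in labels]
    if average == 'macro':
        # Unweighted mean of the per-class scores.
        return sum(per_class) / len(per_class)
    return per_class  # average=None: one score per label
```

For instance, `f1_score(['a', 'b', 'a'], ['a', 'b', 'b'], labels=['a', 'b'], average='macro')` returns ≈ 0.667, the mean of the per-class F1 scores for `'a'` and `'b'`.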