From 3d6f24bf5bb831ba985577499e6f5aac4ef36f16 Mon Sep 17 00:00:00 2001
From: yelircaasi
Date: Mon, 3 Aug 2020 11:21:34 +0200
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index e78c8a7..ef77880 100644
--- a/README.md
+++ b/README.md
@@ -143,10 +143,10 @@ Check the source [code](/classifier/nn_ff.py) for more details on the implementa
 [Link](/testing/ff_model_testing.py) to the test source code. All the Hyperparameters can be modified to experiment with.
 
 ### Evaluation
-As in theperceptron, we used ***f1_score*** metric for evaluation of our baseline classifier.
+As in the perceptron classifier, we used the ***f1_score*** metric for evaluation of our baseline classifier.
 
 ### Results
-Confusion Matrix Plot
+Confusion Matrix Plot
 
 ### 3) BiLSTM + Attention with ELMo (AllenNLP Model)
 The Bi-directional Long Short Term Memory (BiLSTM) model built using the [AllenNLP](https://allennlp.org/) library. For word representations, we used 100-dimensional [GloVe](https://nlp.stanford.edu/projects/glove/) vectors trained on a corpus of 6B tokens from Wikipedia. For contextual representations, we used [ELMo](https://allennlp.org/elmo) Embeddings which have been trained on a dataset of 5.5B tokens. This model uses the entire input text, as opposed to selected features in the text, as in the first two models. It has a single-layer BiLSTM with a hidden dimension size of 50 for each direction.
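
The README text in the hunk above describes a single-layer BiLSTM with a hidden size of 50 per direction (so 100-dimensional concatenated states) followed by attention. The following is a minimal numpy sketch of that attention-pooling step only, not the project's AllenNLP implementation: the sequence length `T`, the random stand-in states, and the attention vector `w` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions taken from the model description: hidden size 50 per
# direction -> 100-dim concatenated BiLSTM states. T is illustrative.
T, H = 12, 100
states = rng.normal(size=(T, H))  # stand-in for BiLSTM outputs (one row per token)
w = rng.normal(size=(H,))         # attention vector (learned in practice, random here)

def attention_pool(states, w):
    """Score each timestep, softmax-normalise, return weights and weighted sum."""
    scores = states @ w                              # (T,) one score per token
    scores = scores - scores.max()                   # shift for numerical stability
    alphas = np.exp(scores) / np.exp(scores).sum()   # attention weights, sum to 1
    return alphas, alphas @ states                   # (T,), (H,) pooled sentence vector

alphas, sentence_vec = attention_pool(states, w)
```

The pooled `sentence_vec` is what a classifier head would consume in place of hand-selected features, matching the README's point that this model reads the entire input text.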