From 1da0fa1dad063276caa45b9e6a6e91ccb5dbfcff Mon Sep 17 00:00:00 2001
From: Pavan Mandava
Date: Mon, 5 Dec 2022 13:30:41 +0000
Subject: [PATCH] Updated README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 4ebe702..22ad7c9 100644
--- a/README.md
+++ b/README.md
@@ -189,7 +189,7 @@ cd prompt-learning
 pip install -r requirements.txt
 ```
 
-### Data
+### Training Data
 
 The data for training the prompt learning model is available under [data/prompt-learning](data/prompt-learning) directory. `create_dataset.py` ([link](utils/create_dataset.py)) has the scripts for converting/creating the data for training the prompt-based model.
 
@@ -205,7 +205,7 @@ Value candidates are extracted from the user dialog history and are utilized in
 - Stop words and repeated candidate values are filtered out
 
 > **Note:**
-> Running `create_dataset.py` can take some time as it needs to download, install and run Stanford CoreNLP `stanza` package. This script also downloads coreNLP files of size about `~1GB` and requires significant amount of RAM and processor capabilities to run this efficiently.
+> Running `create_dataset.py` can take some time as it needs to download, install and run Stanford CoreNLP `stanza` package. This script also downloads coreNLP files of size about `~1GB` and requires significant amount of RAM and processor capabilities to run it efficiently.
 
 > All the data required for training the prompt-based model is already available under the [data](data) directory of this repo. For reproducing the results, it's not required to run this script.