From f0446b8cdc13e06d025692858df3d7bd9e8d6608 Mon Sep 17 00:00:00 2001
From: Pavan Mandava
Date: Sun, 27 Nov 2022 10:59:58 +0100
Subject: [PATCH] Updated README.md

---
 README.md | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index d13da7e..4b51966 100644
--- a/README.md
+++ b/README.md
@@ -70,19 +70,19 @@ tar -xvf /path/to/folder/gtg_pretrained.tar.gz
 ```
 
 #### Clone the repository
-Clone the repository for source code
+Clone the repository to get the source code
 ```shell
 git clone https://git.pavanmandava.com/pavan/master-thesis.git
-```
-
-Pull the changes from remote (if local is behind the remote)
-```shell
-git pull
-```
-
+```
+
 Change directory
 ```shell
 cd master-thesis
+```
+
+Pull the changes from the remote (if your local copy is behind)
+```shell
+git pull
 ```
 
 #### Set Environment variables
@@ -99,6 +99,8 @@ Edit the [set_env.sh](set_env.sh) file and set the paths (as required) for the f
 `SAVED_MODELS_PROMPT` - Path for saving the trained prompt-based models (after each epoch)
 `OUTPUTS_DIR_PROMPT` - Path for storing the prompt model outputs (generations)
+
+> :information_source: **Note**: Set the path of each environment variable so that it matches your local system. Invalid paths may lead to errors in the training/testing scripts.
 
 ```shell
 nano set_env.sh
 ```
@@ -179,7 +181,7 @@ The data for training the prompt learning model is available under [data/prompt-
 
 `create_dataset.py` ([link](utils/create_dataset.py)) has the scripts for converting/creating the data for training the prompt-based model.
 > **Note:**
-> Running `create_dataset.py` can take some time as it has to download, install and run Stanford CoreNLP `stanza` package.
+> Running `create_dataset.py` can take some time, as it needs to download, install and run the Stanford CoreNLP `stanza` package.
 > All the data required for training the prompt-based model is available under [data](data) directory of this repo.
 
 ### Install the requirements
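
For reference, the environment-variable setup that the second hunk documents could look roughly like the sketch below. The paths are placeholders, not the repository's actual defaults, and the directory-creation guard is an added suggestion rather than part of the repo's `set_env.sh`:

```shell
# Hypothetical set_env.sh sketch -- replace the placeholder paths
# with locations that exist on your machine.
export SAVED_MODELS_PROMPT="$HOME/master-thesis/saved_models_prompt"
export OUTPUTS_DIR_PROMPT="$HOME/master-thesis/outputs_prompt"

# Guard against the invalid-path errors the note warns about:
# create each directory up front and stop early if that fails.
for d in "$SAVED_MODELS_PROMPT" "$OUTPUTS_DIR_PROMPT"; do
  mkdir -p "$d" || { echo "cannot create $d" >&2; exit 1; }
done
```

Assuming the training/testing scripts read these variables from the environment, the file would be sourced before running them (e.g. `source set_env.sh`) rather than executed as a subprocess.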