\section{Introduction}\label{sec:intro}
\paragraph{} Dialog State Tracking (DST) is an essential module in dialog systems, responsible for tracking the user's goals in the form of dialog states based on the entire dialog history. In dialog systems, a \textquote{dialog state}, also known as a \textquote{belief state}, contains a set of \textit{(slot, value)} pairs for each turn of the dialog history. The \textit{(slot, value)} pairs hold the specific pieces of information the dialog system requires to perform its task and to generate its responses. The value of a slot can change when the user provides more information or accepts a system recommendation. Existing data-driven methods and neural models for individual dialog modules (NLU, DST, NLG) and for end-to-end dialog systems show promising results, but they need large amounts of task-specific training data, which is rarely available for new tasks. These neural DST models do not generalize well to new domains with limited data \citep{li2021coco}. For task-specific DST, collecting dialog state labels can be costly and time-consuming, as it requires domain experts to annotate all possible (\textit{slot, value}) pairs for each turn of the dialog history. A typical task-oriented dialog system contains an ontology for each domain, with a pre-defined set of slots and all of their possible values. In real-world applications, defining all possible slots and values for DST is difficult due to the increasing number of new domains and the evolving needs of the users.
\paragraph{} Prompt-based learning \textit{(\textquote{pre-train, prompt, and predict})} is a new paradigm in NLP in which downstream tasks are reformulated as text whose probability is predicted directly by a pre-trained language model (LM). This framework is powerful because the LM can be \textit{pre-trained} on massive amounts of raw text, and, by defining a new prompting function, the model can perform \textit{few-shot} or even \textit{zero-shot} learning \citep{liu2021ppp}. Large pre-trained language models (PLMs) are expected to be useful in few-shot scenarios where task-related training data is limited, as a prompt allows them to be probed efficiently for task-related knowledge. A prominent example of such a PLM is GPT-3 \citep{brown2020gpt3}, introduced in \textit{\textquote{Language Models are Few-Shot Learners}}. \citet{madotto2021fsb} created an end-to-end chatbot (Few-Shot Bot) using \textit{prompt-based few-shot learning} and achieved results comparable to the state of the art. Prompting methods are thus particularly helpful in few-shot learning, where domain-related data is limited. \textit{Fixed-prompt LM tuning} is a fine-tuning strategy for downstream tasks in which the LM parameters are tuned with fixed prompts that help the LM understand the task. This can be achieved by applying a discrete textual prompt template to the data used for fine-tuning the PLM.
\paragraph{} Prompt-based learning for few-shot DST with limited labeled domains is still under-explored. Recently, \citet{yang2022prompt} proposed a new prompt learning framework for few-shot DST. This work designed a \textit{value-based prompt} and an \textit{inverse prompt} mechanism to efficiently train a DST model for domains with limited training data. The approach does not depend on an ontology of slots, and the results show that it can generate slots by prompting the tuned PLM, outperforming the existing state-of-the-art methods under few-shot settings. The sketches below illustrate these concepts.
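\paragraph{} To make the notion of a dialog state concrete, the following minimal Python sketch (with hypothetical slot names, used purely for illustration) shows how the tracked \textit{(slot, value)} pairs accumulate and change over the turns of a dialog:
\begin{verbatim}
# Minimal sketch of a dialog state; the slot names are hypothetical.
state = {}

# Turn 1: "I need a cheap restaurant in the centre."
state.update({"restaurant-pricerange": "cheap",
              "restaurant-area": "centre"})

# Turn 2: the user accepts the system's recommendation, so the
# state is extended (an existing value would be overwritten).
state.update({"restaurant-name": "golden house"})

print(state)
# {'restaurant-pricerange': 'cheap', 'restaurant-area': 'centre',
#  'restaurant-name': 'golden house'}
\end{verbatim}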
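\paragraph{} Fixed-prompt LM tuning can likewise be illustrated with a short sketch. The template wording below is an assumption made for illustration; it is not the exact prompt used in this thesis:
\begin{verbatim}
# Sketch of fixed-prompt LM tuning for DST: a discrete, hand-written
# template is applied to every training example, and the PLM is then
# fine-tuned to generate the target given the templated input.
# The template wording here is assumed for illustration.
TEMPLATE = "{history} The value of slot {slot} is"

def build_example(history, slot, value):
    # Input: dialog history wrapped in the fixed textual prompt.
    # Target: the gold value the tuned LM should generate.
    return TEMPLATE.format(history=history, slot=slot), value

x, y = build_example("User: I need a cheap restaurant.",
                     "restaurant-pricerange", "cheap")
# x: "User: I need a cheap restaurant. The value of slot
#     restaurant-pricerange is"
# y: "cheap"
\end{verbatim}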
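\paragraph{} The value-based prompt and the inverse prompt can be sketched in the same spirit. The template strings below are illustrative assumptions and are not taken verbatim from \citet{yang2022prompt}:
\begin{verbatim}
# Illustrative sketch of the value-based prompt and the inverse
# prompt; both template strings are assumptions, not the exact
# prompts from Yang et al. (2022).

def value_based_prompt(history, value):
    # Given a candidate value from the dialog, prompt the tuned PLM
    # to generate the matching slot name, so no pre-defined slot
    # ontology is needed.
    return f"{history} Here, {value} is the value of slot"

def inverse_prompt(history, slot):
    # Inverse direction: prompt the PLM to regenerate the value from
    # the slot, providing a complementary training signal.
    return f"{history} Here, the value of slot {slot} is"
\end{verbatim}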
In this thesis, prompt-based few-shot methods for DST are explored through the following three tasks:
\begin{enumerate}
  \item Prompt-based few-shot DST: reproduce the results of \citet{yang2022prompt}
  \begin{itemize}
    \item[--] Implement prompt-based methods for the DST task under few-shot settings
    \item[--] Implement a baseline model for comparison with the prompt-based methods
  \end{itemize}
  \item Evaluation and analysis of belief state predictions
  \begin{itemize}
    \item[--] Evaluate the DST task using the Joint Goal Accuracy (JGA) metric (a minimal sketch of the metric follows this list)
    \item[--] Analyze the improvements obtained by the prompt-based methods
    \item[--] Analyze the drawbacks of the prompt-based methods
  \end{itemize}
  \item Extend the prompt-based methods to utilize various \textit{multi-prompt} techniques
  \begin{itemize}
    \item[--] Can different multi-prompt techniques help the PLM better understand the DST task?
    \item[--] Evaluate the multi-prompt methods: what is the influence of the various multi-prompt techniques?
  \end{itemize}
\end{enumerate}
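\paragraph{} As a reference for the evaluation task above, the following minimal sketch shows how Joint Goal Accuracy (JGA) is computed: a turn counts as correct only if the predicted dialog state matches the gold state exactly, and JGA is the fraction of such turns.
\begin{verbatim}
# Minimal sketch of Joint Goal Accuracy (JGA). A turn is correct
# only if the full predicted state equals the gold state.

def joint_goal_accuracy(predicted_states, gold_states):
    correct = sum(1 for pred, gold in zip(predicted_states, gold_states)
                  if pred == gold)
    return correct / len(gold_states)

pred = [{"restaurant-area": "centre"},
        {"restaurant-area": "centre", "restaurant-pricerange": "cheap"}]
gold = [{"restaurant-area": "centre"},
        {"restaurant-area": "centre",
         "restaurant-pricerange": "moderate"}]
print(joint_goal_accuracy(pred, gold))  # 0.5
\end{verbatim}
\clearpage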