\section{Introduction}\label{sec:intro}
\paragraph{} Dialog State Tracking (DST) is an essential module in dialog systems; it is responsible for tracking the user's goals, in the form of dialog states, based on the entire dialog history. In dialog systems, \textquote{dialog states}, also known as \textquote{belief states}, contain a set of \textit{(slot, value)} pairs for each turn of the dialog history. The \textit{(slot, value)} pairs hold the specific pieces of information required for the dialog system to perform the task and to generate responses. The values of the slots can change as the user provides more information or accepts system recommendations. Existing data-driven methods and neural models for individual dialog modules (NLU, DST, NLG) and for end-to-end dialog systems show promising results, but they need large amounts of task-specific training data, which is rarely available for new tasks. These neural DST models do not generalize well to new domains with limited data \citep{li2021coco}. For task-specific DST, collecting dialog state labels can be costly and time-consuming, as it requires domain experts to annotate all \textit{(slot, value)} pairs for each turn of the dialog history. A typical task-oriented dialog system contains an ontology for each domain, i.e., a pre-defined set of slots together with all possible values for each slot. In real-world applications, defining all possible slots and values for DST is difficult due to the increasing number of new domains and the evolving needs of users.
\paragraph{} Prompt-based learning \textit{(\textquote{pre-train, prompt, and predict})} is a new paradigm in NLP that models the probability of text directly with a pre-trained language model (LM). This framework is powerful because the LM can be \textit{pre-trained} on massive amounts of raw text, and, by defining a new prompting function, the model can perform \textit{few-shot} or even \textit{zero-shot} learning \citep{liu2021ppp}. Large pre-trained language models (PLMs) are therefore particularly useful in few-shot scenarios where task-related training data is limited, as they can be probed efficiently for task-related knowledge using a prompt. A prominent example of such a model is GPT-3 \citep{brown2020gpt3}, introduced in \textit{\textquote{Language Models are Few-Shot Learners}}. \citet{madotto2021fsb} built an end-to-end chatbot (Few-Shot Bot) using \textit{prompt-based few-shot learning} and achieved results comparable to the state of the art. \textit{Fixed-prompt LM tuning} is a fine-tuning strategy for downstream tasks in which the LM parameters are tuned with fixed prompts that help the LM understand the task. This can be achieved by applying a discrete textual prompt template to the data used for fine-tuning the PLM.
\paragraph{} Prompt-based learning for few-shot DST in domains with limited labeled data is still under-explored. Recently, \citet{yang2022prompt} proposed a new prompt learning framework for few-shot DST. Their work designs a \textit{value-based prompt} and an \textit{inverse prompt} mechanism to efficiently train a DST model for domains with limited training data. The approach does not depend on a slot ontology, and the results show that it can generate slots by prompting the tuned PLM and that it outperforms existing state-of-the-art methods under few-shot settings.
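\paragraph{} To make these notions concrete, the following minimal Python sketch illustrates how a turn-level belief state can be represented as \textit{(slot, value)} pairs, and how a discrete textual prompt template could be applied to a training example for \textit{fixed-prompt LM tuning}. The slot names, the template wording, and the helper functions (\texttt{update\_state}, \texttt{build\_prompt}) are illustrative assumptions for this sketch, not the exact format used by any of the cited works.
\begin{verbatim}
# A belief state is a set of (slot, value) pairs, updated turn by turn.
# Slot names below follow a common "domain-slot" convention, purely
# for illustration.
def update_state(state, new_pairs):
    """Return a new belief state with changed slots overwritten."""
    updated = dict(state)
    updated.update(new_pairs)
    return updated

belief_state = {}

# Turn 1: "I need a cheap hotel in the north."
belief_state = update_state(belief_state,
                            {"hotel-pricerange": "cheap",
                             "hotel-area": "north"})

# Turn 2: "Actually, make that the city centre."
belief_state = update_state(belief_state, {"hotel-area": "centre"})
# belief_state == {"hotel-pricerange": "cheap", "hotel-area": "centre"}

# A discrete textual prompt template for fixed-prompt LM tuning: the
# dialog context and a candidate value are inserted into fixed wording,
# and a generative PLM is fine-tuned to continue with the slot name.
PROMPT_TEMPLATE = '{context} Here, the value "{value}" refers to slot'

def build_prompt(context, value):
    return PROMPT_TEMPLATE.format(context=context, value=value)

prompt = build_prompt("User: I need a cheap hotel in the north.", "cheap")
# The tuned PLM is expected to generate "hotel-pricerange" as the
# continuation of this prompt.
\end{verbatim}
In such a value-based formulation, the model recovers a slot from a value appearing in the dialog context, so no enumeration of ontology values is required when constructing the prompts.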
\paragraph{} The main research objective of this thesis is to investigate the effectiveness of prompt-based methods for DST and to understand the limitations of this approach. Prompt-based methods are adopted for the DST task to answer the following research questions:
\begin{enumerate}
\item Can dialog belief states be extracted directly from a PLM using prompt-based methods?
\item Can prompt-based methods learn the DST task under low-resource settings without depending on a domain ontology?
\item How does the prompt-based approach perform overall compared to a baseline model?
\item What are the drawbacks and limitations of prompt-based methods?
\item Can different multi-prompt techniques help the PLM understand the DST task better?
\item What impact do various multi-prompt methods have on the performance of the DST task?
\end{enumerate}
\paragraph{} To accomplish these research objectives, the prompt learning framework designed by \citet{yang2022prompt}, which includes a \textit{value-based prompt} and an \textit{inverse prompt}, is used to generate belief states by prompting the PLM. Few-shot experiments are performed on different proportions of the data to evaluate the prompt-based methods under low-resource settings. A baseline model, which likewise does not depend on a dialog domain ontology, is trained on the DST task for comparison with the prompt-based methods. A detailed error analysis is conducted to identify the limitations of the prompt-based methods. Further, multi-prompt methods are adopted to help the PLM better understand the DST task.
\paragraph{} This section introduced the thesis topic, motivation, and research objectives. The next section presents the background and related work (Section \ref{sec:background}), covering dialog state tracking (DST), pre-trained language models (PLMs), the baseline model, prompting methods, and the dataset used. The research methods employed in the thesis experiments, including the few-shot experiments with the baseline and the prompt-based methods, the multi-prompt methods, and the evaluation metrics, are detailed in Section \ref{sec:methods}. Section \ref{sec:results} presents the few-shot experimental results of all adopted methods. Analysis and discussion of the results follow in Section \ref{sec:analysis}. Finally, the conclusion (Section \ref{sec:conclusion}) summarizes the main findings. \clearpage