\section{Conclusion}\label{sec:conclusion}
This work explored prompt-based methods for dialog state tracking (DST) in task-oriented dialogue systems. The prompt-based methods, comprising the value-based prompt and the inverse prompt, learned the DST task efficiently in low-resource few-shot settings without relying on a pre-defined set of slots and values. Experiments show that the prompt-based methods significantly outperformed the baseline \textsc{Soloist} model under low-resource settings, although an analysis of the generated belief states reveals remaining limitations of the prompt-based approach. In addition, multi-prompt methods, namely prompt ensembling and prompt augmentation, were applied to the DST task: the prompt ensemble model achieved minor improvements, while the performance of prompt augmentation was limited by bias in the answered prompts. Error analysis of value extraction further highlights the limitations of the rule-based extraction methods.

In conclusion, prompt-based methods can solve the DST task directly by prompting language models, but further research is necessary to improve this prompt learning framework and the associated value extraction methods. Future work could explore automated prompt search to select effective prompts instead of manually crafting templates, and could improve value extraction by framing it as a few-shot text summarization or semantic tagging task. Another interesting direction is to investigate whether larger language models perform better on the DST task.