Prompt learning has recently become the new favorite of the NLP field and keeps appearing in the public eye. We have compiled a selection of high-quality papers on the topic; everyone is welcome to discuss!

1. Paper: P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
   Link: https://www.aminer.cn/pub/6168f1a35244ab9dcbe2ffc6
2. Paper: Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
   Link: https://www.aminer.cn/pub/61037d155244ab9dcb7a075f
3. Paper: CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models
   Link: https://www.aminer.cn/pub/6152b8a49e795ed0113b6d60
4. Paper: AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
   Link: https://www.aminer.cn/pub/5f9fcf9091e0112e85ce9025
5. Paper: Prefix-Tuning: Optimizing Continuous Prompts for Generation
   Link: https://www.aminer.cn/pub/5ff4336291e01130648dc2f4
6. Paper: The Power of Scale for Parameter-Efficient Prompt Tuning
   Link: https://www.aminer.cn/pub/607ffd8d91e011772654f712
7. Paper: P-Adapters: Robustly Extracting Factual Information from Language Models with Diverse Prompts
   Link: https://www.aminer.cn/pub/6168f19d5244ab9dcbe2fa61
8. Paper: PTR: Prompt Tuning with Rules for Text Classification
   Link: https://www.aminer.cn/pub/60acd8d391e011a837673703
9. Paper: Prompt-Learning for Fine-Grained Entity Typing
   Link: https://www.aminer.cn/pub/6125b0135244ab9dcb38b528
10. Paper: How Many Data Points is a Prompt Worth?
    Link: https://www.aminer.cn/pub/60509c4891e0111e1cd46d1c
11. Paper: Learning How to Ask: Querying LMs with Mixtures of Soft Prompts
    Link: https://www.aminer.cn/pub/60782e5091e011f5ecc9dc1d
12. Paper: Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification
    Link: https://www.aminer.cn/pub/610b82399e795e7e7539d8fa