
EMNLP 2020 | Must-Read Recent Papers on Question Answering

AMiner AI
2020-11-03
Editor's note: Question answering is the rising star of natural language processing.

The AMiner platform was developed by the Department of Computer Science at Tsinghua University and is fully domestically owned intellectual property. It hosts a science and technology knowledge graph of more than 230 million academic papers/patents and 136 million scholars, and provides professional research-intelligence services such as scholar evaluation, expert discovery, intelligent reviewer assignment, and academic maps. Online since 2006, the system has attracted more than 10 million unique IP visitors from 220 countries and regions, with 2.3 million data downloads and over 11 million annual visits, making it an important data and experimental platform for research on academic search and social network mining.


AMiner platform: https://www.aminer.cn


Introduction: EMNLP, the Conference on Empirical Methods in Natural Language Processing, is a top international conference in natural language processing organized by SIGDAT, a special interest group of the Association for Computational Linguistics (ACL), and is rated a Class-A conference for natural language processing. EMNLP 2020 reviewed 3,359 submissions and accepted 754, an acceptance rate of 22.4%.

Question answering (QA) is the rising star of natural language processing and a focus of current research. Viewed from the outside, a QA system differs from mainstream information retrieval in two respects: the query is a complete, colloquial question, and the system returns either highly precise web results or an explicit answer string. Internally, QA systems employ many natural language processing techniques beyond traditional information retrieval, such as natural language parsing, question classification, and named entity recognition.

Judging from the AMiner EMNLP 2020 word cloud and paper list, Question Answering is represented by many notable works at this conference. Below, we look at papers on the Question Answering topic.


1. Paper: Hierarchical Graph Network for Multi-hop Question Answering

Link: https://www.aminer.cn/pub/5dca89783a55ac77dcb01e28?conf=emnlp2020

Authors: Fang Yuwei, Sun Siqi, Gan Zhe, Pillai Rohit, Wang Shuohang, Liu Jingjing

Summary:

  • In contrast to one-hop question answering, where answers can be derived from a single paragraph, recent studies have increasingly focused on multi-hop reasoning across multiple documents or paragraphs.

  • The authors propose a new approach, Hierarchical Graph Network (HGN), for multi-hop question answering.

  • To capture clues from different granularity levels, the HGN model weaves heterogeneous nodes into a single unified graph.

  • Experiments with detailed analysis demonstrate the effectiveness of the proposed model, which achieves state-of-the-art performance on the HotpotQA benchmark.
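As an illustration of the hierarchical graph idea, the sketch below weaves question, paragraph, and sentence nodes into a single graph so that clues at different granularity levels are connected. This is a toy construction under our own assumptions, not the authors' implementation; all names and data are illustrative.

```python
# Toy sketch of a hierarchical graph for multi-hop QA: heterogeneous nodes
# (question, paragraphs, sentences) joined into one unified graph.
from collections import defaultdict

def build_hierarchical_graph(question, paragraphs):
    """paragraphs: list of paragraphs, each a list of sentences."""
    nodes, edges = [], defaultdict(set)
    q = ("question", question)
    nodes.append(q)
    for pi, para in enumerate(paragraphs):
        p = ("paragraph", pi)
        nodes.append(p)
        edges[q].add(p)              # question <-> paragraph edges
        edges[p].add(q)
        for si, _sent in enumerate(para):
            s = ("sentence", (pi, si))
            nodes.append(s)
            edges[p].add(s)          # paragraph <-> sentence edges
            edges[s].add(p)
    return nodes, edges

nodes, edges = build_hierarchical_graph(
    "Where was the author of Hamlet born?",
    [["Hamlet was written by Shakespeare."],
     ["Shakespeare was born in Stratford-upon-Avon."]])
```

A real HGN additionally includes entity nodes and runs graph attention over contextual node representations; this sketch only shows the graph-construction step.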


2. Paper: Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering

Link: https://www.aminer.cn/pub/5eb78919da5629cf244303cf?conf=emnlp2020

Authors: Feng Yanlin, Chen Xinyue, Lin Bill Yuchen, Wang Peifeng, Yan Jun, Ren Xiang

Summary:

  • Many recently proposed question answering tasks require machine comprehension of the question and context, as well as relational reasoning over entities and their relationships by referencing external knowledge.

  • The authors propose a novel graph encoding architecture, Multi-hop Graph Relation Networks (MHGRN), which combines the strengths of path-based models and graph neural networks (GNNs).

  • The proposed MHGRN generalizes and combines the advantages of GNNs and path-based reasoning models.

  • It explicitly performs multi-hop relational reasoning and is empirically shown to outperform existing methods with superior scalability and interpretability.
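The path-following behavior that MHGRN folds into its graph encoder can be sketched as a simple relational walk of up to K hops. This is a toy breadth-first traversal, not the authors' code; the knowledge-graph format is our own assumption.

```python
# Toy multi-hop relational reasoning: collect nodes reachable from a source
# within k hops, remembering the relation path taken to each node.
def multi_hop_reach(edges, source, k):
    """edges: dict node -> list of (relation, neighbor) pairs."""
    frontier = {source: []}      # node -> relation path from source
    reached = {source: []}
    for _ in range(k):
        nxt = {}
        for node, path in frontier.items():
            for rel, nb in edges.get(node, []):
                if nb not in reached:          # keep first (shortest) path
                    nxt[nb] = path + [rel]
                    reached[nb] = path + [rel]
        frontier = nxt
    return reached

kg = {"child": [("is_a", "person")],
      "person": [("capable_of", "learn")]}
paths = multi_hop_reach(kg, "child", 2)
```

MHGRN replaces this explicit enumeration with learned, attention-weighted message passing so it scales to large graphs, but the relation paths it aggregates are conceptually these.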


3. Paper: Training Question Answering Models From Synthetic Data

Link: https://www.aminer.cn/pub/5e54f17a3a55acae32a25cd9?conf=emnlp2020

Authors: Puri Raul, Spring Ryan, Patwary Mostofa, Shoeybi Mohammad, Catanzaro Bryan

Summary:

  • One of the limitations of developing models for question answering, or any Deep Learning application for that matter, is the availability and cost of labeled training data.

  • The authors build upon existing work in large scale language modeling and question generation to push the quality of synthetic question generation.

  • Finetuning the resulting model on real SQuAD 1.1 data further boosts the EM score to 89.4.

  • The authors generate synthetic text from a Wikipedia-finetuned GPT-2 model, generate answer candidates and synthetic questions based on those answers, and train a BERT-Large model to achieve similar question answering accuracy without directly using any real data at all.
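The three-stage loop described above (extract answer candidates, generate a question per answer, keep only pairs a QA model answers correctly) can be sketched with stand-in functions. Everything below is illustrative; the trivial functions replace the GPT-2 generator and BERT QA model.

```python
# Toy synthetic-QA pipeline with roundtrip consistency filtering.
def answer_candidates(text):
    # Stand-in answer extractor: take capitalized tokens as candidates.
    return [w.strip(".") for w in text.split() if w[0].isupper()]

def generate_question(text, answer):
    # Stand-in question generator (a real system uses a finetuned LM).
    return f"Which entity is mentioned in: '{text}'? ({answer})"

def qa_model(question, text):
    # Stand-in QA model that "answers" the generated question.
    return question.split("(")[-1].rstrip(")")

def synthesize(text):
    pairs = []
    for ans in answer_candidates(text):
        q = generate_question(text, ans)
        if qa_model(q, text) == ans:   # roundtrip filter: keep consistent pairs
            pairs.append((q, ans))
    return pairs

pairs = synthesize("Shakespeare wrote Hamlet.")
```

The roundtrip filter is the key quality control: a synthetic question only enters the training set if a QA model recovers the intended answer from it.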


4. Paper: Look at the First Sentence: Position Bias in Question Answering

Link: https://www.aminer.cn/pub/5eabf34391e011664ffd289c?conf=emnlp2020

Authors: Ko Miyoung, Lee Jinhyuk, Kim Hyunjae, Kim Gangwoo, Kang Jaewoo

Summary:

  • Most QA studies use the start and end positions of answers as training targets without much consideration.

  • The authors' study shows that most QA models fail to generalize over different positions when trained on datasets having answers in a specific position.

  • The authors introduce several de-biasing methods to make models ignore spurious positional cues, and find that the sentence-level answer prior is very useful.

  • The authors' findings generalize to different positions and different datasets.
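The positional skew the authors study can be made visible with a simple histogram over the sentence index of each answer in the training set. This is a toy diagnostic of our own, not the paper's de-biasing method.

```python
# Toy diagnostic: histogram of which sentence contains each training answer.
# A spike at index 0 means answers cluster in the first sentence, the
# spurious cue a position-biased model can latch onto.
def answer_sentence_index(context_sentences, answer):
    for i, sent in enumerate(context_sentences):
        if answer in sent:
            return i
    return -1  # answer not found in any sentence

def position_histogram(examples):
    """examples: list of (list-of-sentences, answer) pairs."""
    hist = {}
    for sents, ans in examples:
        idx = answer_sentence_index(sents, ans)
        hist[idx] = hist.get(idx, 0) + 1
    return hist

biased = [(["Paris is in France.", "It is large."], "Paris"),
          (["Rome is old.", "It is in Italy."], "Rome")]
hist = position_histogram(biased)
```

On a dataset like the toy one above, every answer falls in sentence 0, which is exactly the kind of skew that makes a trained model fail on answers located elsewhere.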


5. Paper: Infusing Disease Knowledge into BERT for Health Question Answering, Medical Inference, Disease Name Recognition

Link: https://www.aminer.cn/pub/5f7fe6d80205f07f68973229?conf=emnlp2020

Authors: Yun He, Ziwei Zhu, Yin Zhang, Qin Chen, James Caverlee

Summary:

  • Human disease is “a disorder of structure or function in a human that produces specific signs or symptoms”.

  • The authors propose a new disease infusion training procedure to augment BERT-like pre-trained language models with disease knowledge.

  • The authors conduct this training procedure on a suite of BERT models and evaluate them over disease related tasks.

  • Experimental results show that these models can be enhanced by this disease infusion method in most cases.


6. Paper: Self-supervised Knowledge Triplet Learning for Zero-shot Question Answering

Link: https://www.aminer.cn/pub/5eafe7e091e01198d398664a?conf=emnlp2020

Authors: Banerjee Pratyay, Baral Chitta

Summary:

  • The ability to understand natural language and answer questions is a core focus of the field of natural language processing.

  • The authors achieve state-of-the-art results in the zero-shot setting and propose a strong baseline for the few-shot question answering task.

  • The Transformer encoder trained with KTL performs significantly better than the baseline models in this setting.

  • The authors' framework achieves state-of-the-art in the zero-shot question answering task and sets a strong baseline in the few-shot question answering task.
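A self-supervised knowledge-triplet objective can be sketched as masking one slot of each (head, relation, tail) triple and predicting it from the other two. The text formatting below is our own illustration, not the paper's exact training setup.

```python
# Toy knowledge-triplet self-supervision: from one triple, derive three
# training examples, each hiding a different slot for the model to recover.
MASK = "[MASK]"

def triplet_examples(head, relation, tail):
    examples = []
    for hidden in range(3):          # hide head, then relation, then tail
        slots = [head, relation, tail]
        target = slots[hidden]
        slots[hidden] = MASK
        examples.append((" ".join(slots), target))
    return examples

ex = triplet_examples("aspirin", "treats", "headache")
```

Because the supervision comes entirely from the knowledge triples themselves, no QA-specific labels are needed, which is what enables the zero-shot setting.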


To learn more about EMNLP 2020 papers, follow the official account or follow the link to the EMNLP 2020 special topic, where the latest research directions and the most comprehensive paper data await.


[Disclaimer] Content sourced from the Internet.