**Research Papers Presented at the ACL Conference**
The Association for Computational Linguistics (ACL) conference is an important event in the field of natural language processing (NLP). Amazon researchers have made significant contributions to this year’s conference, with over 65 papers covering a wide range of topics. In this article, we will highlight some of the key research papers presented at the ACL conference.
**Automatic Speech Recognition**
One area of research presented at the ACL conference is automatic speech recognition. Amazon researchers introduced a new approach, masked audio text encoders, which they show to be effective multi-modal rescorers. The researchers involved in this study are Jason Cai, Monica Sunkara, Xilai Li, Anshu Bhatia, Xiao Pan, and Sravan Bodapati.
**Code Generation**
Another interesting topic discussed at the ACL conference is code generation. A group of researchers from Amazon conducted a static evaluation of code completion by large language models. The study was conducted by Hantian Ding, Varun Kumar, Yuchen Tian, Zijian Wang, Rob Kwiatkowski, Xiaopeng Li, Murali Krishna Ramanathan, Baishakhi Ray, Sudipta Sengupta, Dan Roth, and Bing Xiang.
**Code Switching**
Code switching, the practice of mixing two or more languages in a single conversation, is an important area of research in NLP. At the ACL conference, Amazon researchers presented two papers on the topic. The first, on code-switched text synthesis in unseen language pairs, is by I-Hung Hsu, Avik Ray, Shubham Garg, Nanyun Peng, and Jing Huang. The second, “CoMix: Guide transformers to code-mix using POS structure and phonetics”, is by Gaurav Arora, Srujana Merugu, and Vivek Sembium.
**Continual Learning**
Continual learning, the ability of a model to adapt and learn from new data while retaining previously learned knowledge, is a challenging problem in NLP. At the ACL conference, researchers from Amazon presented a related paper titled “Characterizing and measuring linguistic dataset drift”. The authors of this study are Tyler A. Chang, Kishaloy Halder, Neha Anna John, Yogarshi Vyas, Yassine Benajiba, Miguel Ballesteros, and Dan Roth.
**Data-to-Text Generation and Table Question Answering**
Transforming data or tables into human-readable text, and answering questions over tables, are essential tasks in various domains. Amazon researchers presented four papers in this area at the ACL conference:

- “An inner table retriever for robust table question answering”, by Weizhe Lin, Rexhina Blloshmi, Bill Byrne, Adrià de Gispert, and Gonzalo Iglesias.
- “Few-shot data-to-text generation via unified representation and multi-source learning”, by Alexander Hanbo Li, Mingyue Shang, Evangelia Spiliopoulou, Jie Ma, Patrick Ng, Zhiguo Wang, Bonan Min, William Wang, Kathleen McKeown, Vittorio Castelli, Dan Roth, and Bing Xiang.
- “Improving cross-task generalization of unified table-to-text models with compositional task configurations”, by Jifan Chen, Yuhao Zhang, Lan Liu, Rui Dong, Xinchi Chen, Patrick Ng, William Wang, and Zhiheng Huang.
- “LI-RAGE: Late interaction retrieval augmented generation with explicit signals for open-domain table question answering”, by Weizhe Lin, Rexhina Blloshmi, Bill Byrne, Adrià de Gispert, and Gonzalo Iglesias.
**Dialogue Systems**
Dialogue systems have gained significant attention in recent years, and several research papers on this topic were presented at the ACL conference:

- “Diable: Efficient dialogue state tracking as operations on tables”, by Pietro Lesci, Yoshinari Fujinuma, Momchil Hardalov, Chao Shang, and Lluis Marquez.
- “NatCS: Eliciting natural customer support dialogues”, by James Gung, Emily Moeng, Wesley Rose, Arshit Gupta, Yi Zhang, and Saab Mansour.
- “Schema-guided user satisfaction modeling for task-oriented dialogues”, by Yue Feng, Yunlong Jiao, Animesh Prasad, Nikolaos Aletras, Emine Yilmaz, and Gabriella Kazai.
- “Toward more accurate and generalizable evaluation metrics for task-oriented dialogs”, by Abi Komma, Nagesh Panyam, Timothy Leffel, Anuj Goyal, Angeliki Metallinou, Spyros Matsoukas, and Aram Galstyan.
Explainable AI, the