Science Behind Neural Language Models
A Special Issue for Information Processing & Management (IP&M), Elsevier
Note: This special issue is a Thematic Track at IP&MC2022.
For more information about IP&MC2022, please visit:
https://www.elsevier.com/events/conferences/information-processing-and-management-conference
The last several years have seen an explosive growth in the popularity of neural language models, especially large pretrained language models based on the transformer architecture [1]. The fields of Natural Language Processing (NLP) and Computational Linguistics (CL) have shifted from static language representations, such as Bag-of-Words [2], word2vec [3], or GloVe [4], to contextually aware language models, such as ELMo [5] and, more recently, BERT [6] and GPT [7], including their improvements and derivatives [8,9]. The consistently high performance of BERT-based models across a variety of tasks even convinced Google to adopt one as the backbone of the query expansion module in its search engine [10], making BERT-based models mainstream and a strong baseline in NLP/CL research. The popularity of large pretrained language models has also fueled the major growth of companies providing freely available repositories of such models [11] and, more recently, the founding of Stanford University's Center for Research on Foundation Models (CRFM) [12].
However, despite the overwhelming popularity and undeniable performance of large pretrained language models, or "foundation models", the inner workings of these models remain notoriously difficult to analyze, and the causes of the errors they make, which are often unexpected and unreasonable, are difficult to untangle and mitigate [13]. As pretrained language models continue to gain popularity and expand into multimodality by incorporating visual [14] and speech [15] information, it becomes ever more important to thoroughly analyze, fully explain, and understand their inner workings. In other words, the science behind neural language models needs to be developed.
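As a concrete illustration of the kind of model introspection this call has in mind, consider the minimal sketch below, which extracts per-layer attention weights from a pretrained BERT model and computes the entropy of each attention distribution, a simple diagnostic of how focused or diffuse the model's attention heads are. The choice of the Hugging Face transformers library and the bert-base-uncased checkpoint is ours, for illustration only, and is not prescribed by this call.

# Illustrative sketch only (our assumptions: the `transformers` library
# and the "bert-base-uncased" checkpoint): inspecting per-layer attention
# in a pretrained BERT model.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

inputs = tokenizer("Neural language models are hard to interpret.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# `outputs.attentions` is a tuple with one tensor per layer, each of shape
# (batch, num_heads, seq_len, seq_len). Entropy over the last dimension
# summarizes how sharply each head attends.
for layer, att in enumerate(outputs.attentions):
    entropy = -(att * att.clamp_min(1e-12).log()).sum(-1).mean()
    print(f"layer {layer:2d}: mean attention entropy {entropy.item():.3f}")

Analyses along these lines, e.g., asking which heads track syntax and which are redundant, are representative of the "science behind" questions the TT/SI seeks to advance.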
With the above background in mind, we propose the following Information Processing & Management Conference 2022 (IP&MC2022) Thematic Track and Information Processing & Management Journal Special Issue on Science Behind Neural Language Models.
The TT/SI will focus on topics that deepen our knowledge of how neural language models work. Rather than taking up basic topics from the fields of CL and NLP, such as improving part-of-speech tagging or standard sentiment analysis, regardless of whether they apply neural language models in practice, we will promote research that specifically aims at analyzing and understanding the "nuts and bolts" of neural language models, for which a generally accepted science has not yet been established.
The TT/SI is aimed at scientists, researchers, scholars, and students working on the analysis of neural language models in general, and pretrained language models in particular, with a specific focus on explainable approaches to language models, analysis of the errors such models make, and methods for debiasing, detoxifying, and otherwise improving neural language models.
The TT/SI will not accept research on basic NLP/CL topics for which the field is already well established, such as improvements to part-of-speech tagging or sentiment analysis, even if they apply foundation models, unless the work directly contributes to understanding and explaining the inner workings of large-scale pretrained language models.
The Thematic Track / Special Issue invites papers on topics including, but not limited to, the following:
Submit your manuscript to the Special Issue category (VSI: IPMC2022 HCICTS) through the online submission system of Information Processing & Management.
https://www.editorialmanager.com/ipm/
Authors should prepare their submissions following the IP&M Guide for Authors (https://www.elsevier.com/journals/information-processing-and-management/0306-4573/guide-for-authors). All papers will be peer-reviewed following the IP&MC2022 reviewing procedures.
The authors of accepted papers are expected to participate in IP&MC2022 and present their work to the community to receive feedback. Following the feedback received at the IP&MC2022 conference, the papers will be invited for revision. Submissions will be given premium handling at IP&M, following its peer-review procedure, and, if accepted, published in IP&M as full journal articles, with an option to present a short conference version at IP&MC2022.
Please see this infographic for the manuscript flow:
https://www.elsevier.com/__data/assets/pdf_file/0003/1211934/IPMC2022Timeline10Oct2022.pdf
The initial list of the Organizing Committee / EBM is presented below.
Managing Guest Editor:
Guest Editors:
Editorial Board Members List for the TT/SI:
Michal Ptaszynski received a master's degree from Adam Mickiewicz University, Poznan, Poland, in 2006, and a PhD in information science and technology from Hokkaido University, Japan, in 2010. From 2010 to 2012 he was a JSPS postdoctoral research fellow at the High-Tech Research Center, Hokkai-Gakuen University, Japan. Currently, he is an associate professor at the Kitami Institute of Technology. His research interests include natural language processing, affect analysis, sentiment analysis, HCI, and information retrieval. He is a senior member of IEEE and a member of AAAI, ACL, AAR, ANLP, JSAI, and IPSJ.
Rafal Rzepka received a master's degree from Adam Mickiewicz University, Poznan, Poland, in 1999, and a PhD from Hokkaido University, Japan, in 2004. Currently, he is an assistant professor in the Graduate School of Information Science and Technology at Hokkaido University. His research interests include natural language processing, common-sense knowledge retrieval, dialogue processing, artificial general intelligence, affect and sentiment analysis, and machine ethics. He is a member of AAAI, JSAI, JCSS, and ANLP.
Anna Rogers received her PhD in computational linguistics from the University of Tokyo (Japan) in 2017. She was then a postdoctoral associate at the University of Massachusetts (USA) and, since 2020, at the University of Copenhagen (Denmark), as well as a visiting researcher with the RIKEN Center for Computational Science (Japan). Her main research area is NLP, in particular model analysis and the evaluation of natural language understanding systems.
Karol Nowakowski received his master's degree from Adam Mickiewicz University, Poznan, Poland, in 2012, and a PhD in Engineering from the Kitami Institute of Technology, Japan, in 2020. Currently, he is a lecturer at the Tohoku University of Community Service and Science. His main research area is natural language processing, particularly NLP for low-resource languages such as Ainu, a critically endangered language spoken in northern Japan.