Abstracts,Class "Camera-based text entry using American Sign Language (ASL) fingerspelling has become more feasible due to recent advancements in recognition technology. However, there are numerous situations where camera-based text entry may not be ideal or acceptable. To address this, we present FingerSpeller, a solution that enables camera-free text entry using smart rings. FingerSpeller utilizes accelerometers embedded in five smart rings from TapStrap, a commercially available wearable keyboard, to track finger motion and recognize fingerspelling. A Hidden Markov Model (HMM) based backend with continuous Gaussian modeling facilitates accurate recognition as evaluated in a real-world deployment. In offline isolated word recognition experiments conducted on a 1,164-word dictionary, FingerSpeller achieves an average character accuracy of 91% and word accuracy of 87% across three participants. Furthermore, we demonstrate that the system can be downsized to only two rings while maintaining an accuracy level of approximately 90% compared to the original configuration. This reduction in form factor enhances user comfort and significantly improves the overall usability of the system.",Sign Language and Fingerspelling Recognition "Sign language is designed as a natural communication method for the deaf community to convey messages and connect with society. In American sign language, twenty-six special sign gestures from the alphabet are used for the fingerspelling of proper words. The purpose of this research is to classify the hand gestures in the alphabet and recognize a sequence of gestures in the fingerspelling using an inertial hand motion capture system. In this work, time and time-frequency domain features and angle-based features are extracted from the raw data for classification with convolutional neural network-based classifiers. In fingerspelling recognition, we explore two kinds of models: connectionist temporal classification and encoder-decoder structured sequence recognition model. The study reveals that the classification model achieves an average accuracy of 74.8% for dynamic ASL gestures considering user independence. Moreover, the proposed two sequence recognition models achieve 55.1%, 93.4% accuracy in word-level evaluation, and 86.5%, 97.9% in the letter-level evaluation of fingerspelling. The proposed method has the potential to recognize more hand gestures of sign language with highly reliable inertial data from the device.",Sign Language and Fingerspelling Recognition "India has the largest deaf population in the world and sign language is the principal medium for such persons to share information with normal people and among themselves. Yet, normal people do not have any knowledge of such language. As a result, there is a huge communication barrier between normal and deaf-dumb persons. Again, sign language interpreters are not easily available and it is a very costly solution for a long period. The sign language recognition system reduces the communication gaps between normal and deaf-dumb persons. The methodologies to recognize Indian sign language are recently in the developing stage and there is no approach to recognize signs in real-time. Here, we have proposed a fingerspelling recognition system of static signs for the Indian sign language alphabet using convolutional neural networks combined with data augmentation, batch normalization, dropout, stochastic pooling, and diffGrad optimizer. 
To continue the research, a total of 62,400 images of 26 static signs have been taken from various users. The proposed method achieves the highest training and validation accuracy of 99.76% and 99.64%, respectively , that outperforms other examined systems.",Sign Language and Fingerspelling Recognition "The goal of this work is to detect and recognize sequences of letters signed using fingerspelling in British Sign Language (BSL). Previous fingerspelling recognition methods have not focused on BSL, which has a very different signing alphabet (e.g., two-handed instead of one-handed) to American Sign Language (ASL). They also use manual annotations for training. In contrast to previous methods, our method only uses weak annotations from subtitles for training. We localize potential instances of fingerspelling using a simple feature similarity method, then automatically annotate these instances by querying subtitle words and searching for corresponding mouthing cues from the signer. We propose a Transformer architecture adapted to this task, with a multiple-hypothesis CTC loss function to learn from alternative annotation possibilities. We employ a multi-stage training approach, where we make use of an initial version of our trained model to extend and enhance our training data before re-training again to achieve better performance. Through extensive evaluations, we verify our method for automatic annotation and our model architecture. Moreover, we provide a human expert annotated test set of 5K video clips for evaluating BSL fingerspelling recognition methods to support sign language research.",Sign Language and Fingerspelling Recognition "Sign language has always been a major tool for communication among people with disabilities. In this paper, a sign language fingerspelling alphabet identification system would be developed by using image processing technique, supervised machine learning and deep learning. In particular, 24 alphabetical symbols are presented by several combinations of static gestures (excluding 2 motion gestures J and Z). Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) features of each gesture will be extracted from training images. Then Multiclass Support Vector Machines (SVMs) will be applied to train these extracted data. Also, an end-to-end Convolutional Neural Network (CNN) architecture will be applied to the training dataset for comparison. After that, a further combination of CNN as feature descriptor and SVM produces an acceptable result. The Massey Dataset is implemented in the training and testing phases of the whole system.",Sign Language and Fingerspelling Recognition This paper presents a new method in fingerspelling recognition in highly dynamic video sequences. Sign language videos are labeled only in a video sequence level. A deep learning network extracts spatial features of video frames with the AlexNet and uses them to derive a language model with the Long-Short Term Memory (LSTM) network. The results of this deep learning network are the predicted fingerspelling gestures at a frame level. The recognition results of testing video sequences with 100 percent accuracy are used to improve spatial features of video frames. We construct a Siamese network from the recognition results in the first recognition pass. A network deployed in the Siamese network is the ResNet-50. We employ the Siamese network to derive the efficient representation of each fingerspelling gesture. 
The derived features corresponding to each video frame are fed to the LSTM network to predict fingerspelling gestures. Our proposed method can outperform state-of-the-art fingerspelling recognition algorithms by almost four percent in recognition accuracy from our experimental results.,Sign Language and Fingerspelling Recognition "We describe the development and initial validation of the ASL Fingerspelling and Number Comprehension Test (ASL FaN-CT), a test of recognition proficiency for fingerspelled words in American Sign Language (ASL). Despite the relative frequency of fingerspelling in ASL discourse, learners commonly struggle to produce and perceive fingerspelling more than they do other facets of ASL. However, assessments of fingerspelling knowledge are highly underrepresented in the testing literature for signed languages. After first describing the construct, we describe test development, piloting, revisions, and evaluate the strength of the test's validity argument vis-à-vis its intended interpretation and use as a screening instrument for current and future employees. The results of a pilot on 79 ASL learners provide strong evidence that the revised test is performing as intended and can be used to make accurate decisions about ASL learners' proficiency in fingerspelling recognition. We conclude by describing the item properties observed in our current test, and our plans for continued validation and analysis with respect to a battery of tests of ASL proficiency currently in development.",Sign Language and Fingerspelling Recognition "This paper proposes a novel method to improve the accuracy of the American Sign Language fingerspelling recognition. Video sequences from the training set of the ChicagoFSWild dataset are first utilized for training a deep neural network of weakly supervised learning to generate frame labels from a sequence label automatically. The network of weakly supervised learning contains the AlexNet and the LSTM. This trained network generates a collection of frame-labeled images from the training video sequences that have Levenshtein distance between the predicted sequence and the sequence label equal to zero. The negative and positive pairs of all fingerspelling gestures are randomly formed from the collected image set. These pairs are adopted to train the Siamese network of the ResNet-50 and the projection function to produce efficient feature representations. The trained ResNet-50 and the projection function are concatenated with the bidirectional LSTM, a fully connected layer, and a softmax layer to form a deep neural network for the American Sign Language fingerspelling recognition. With the training video sequences, video frames corresponding to the video sequences that have Levenshtein distance between the predicted sequence and the sequence label equal to zero are added to the collected image set. The updated collected image set is used to train the Siamese network. The training process, from training the Siamese network to the update of the collected image set, is iterated until the image recognition performance is not further enhanced. The experimental results from the ChicagoFSWild dataset show that the proposed method surpasses the existing works in terms of the character error rate.",Sign Language and Fingerspelling Recognition "As the size of the population of sign language users increased, the importance of breaking the barrier between those who can use sign language and those who can not in the Arabic community increased.
In this paper, we present ESMAANI, a computational solution that enables sign language recognition while utilizing machine learning and deep learning techniques. The proposed system aims to contribute to the study of the challenges and complexities associated with sign language recognition, specifically Arabic sign language. The proposed models present a non-intrusive computer vision approach to building a system specialized in Arabic sign language recognition, translating the input sign gestures from a camera stream or video input into text output and supporting both static sign language input, which is common in fingerspelling and alphabet representation, and dynamic sign language input, which is employed for signing at the word level. The paper also presents a person- and environment-independent dataset that's capable of generalizing to further include the various versions of ArSL. The proposed static sign recognition system achieved an overall accuracy of 99.7%, and the proposed dynamic sign recognition system achieved a maximum recognition validation accuracy of 97%, suggesting strong generalization.",Sign Language and Fingerspelling Recognition "In this work, we are proposing a new technique for visual recognition of fingerspelling of a sign language by fusing multiple spatial and spectral representations of manual gesture images using a convolutional neural network. This problem is gaining prominence in communication between hearing-impaired people and human-machine interaction. The proposed technique computes Gabor spectral representations of spatial images of hand sign gestures and uses an optimized convolutional neural network to classify the gestures in the joint space into corresponding classes. Various ways to combine both types of modalities are explored to identify the model that improves the robustness and recognition accuracy. The proposed system is evaluated using three databases (MNIST-ASL, ArSL, and MUASL) under different conditions and the attained results outperformed the state-of-the-art techniques.",Sign Language and Fingerspelling Recognition "Sign language is a need for deaf pupils to communicate with one another. People who are not deaf often do not learn sign language to interact with deaf people. It is also necessary to have an interpreter to explain the sign's meaning to others who are unfamiliar with it. Several unresolved issues, such as uncontrolled signing situations, various types of light, and varying degrees of partial occlusion, have adversely impacted hand gesture recognition efficacy. The suggested technique is unusual because it employs integrated features created by combining features obtained using conventional handcrafted feature extraction methods with deep learning models. Understandably, the combined characteristics will include some repetitive and unnecessary characteristics, increasing computation time and wasting resources. We prevent this by using feature selection (FS) before providing the classifier with the merged features. We present the improved version of the newly developed Battle Royale Optimisation, IBROA, for feature selection. The characteristics are fed into a classifier for classification.
Experiments were carried out, and the findings show that the proposed IBROA, which utilises integrated features and feature selection, outperforms classifiers and shows novel and efficient techniques for feature selection in sign language classification.",Sign Language and Fingerspelling Recognition "With the enrichment and improvement of gesture language, sign language becomes more and more important for communication between hearing-impaired and ordinary people, and American Sign Language (ASL) fingerspelling recognition using an uncalibrated visual camera is a meaningful attempt. This paper uses the low-cost Kinect depth camera to obtain RGBD images of sign language. Through the proposed image segmentation algorithm SD-Segment, the pixels of RGB and depth images are aligned when the camera's internal parameters are unknown, and the image is accurately segmented at the same time. A dual-path feature blending attention network (DFANet) is developed to obtain the fine-grained distinguishing features related to gestures. In order to take advantage of the complementarity between depth and RGB image, a depth-pixel aware module (DPAM) is developed to utilize the pixel relationship in the RGBD feature maps. According to the experimental results, compared with the state-of-the-art methods on the publicly available ASL fingerspelling dataset, the accuracy rate (+2.40%) of the network is greatly improved, and the highest accuracy rate of 98.16% is obtained on the self-built 8-person dataset. DPAM helps the network highlight distinctive hand regions in RGBD images, and DPAM does not significantly increase the number of parameters and computational overhead of the network.",Sign Language and Fingerspelling Recognition "Computer vision based sign language translation is usually based on using thousands of images or video sequences for model training. This is not an issue in the case of widely used languages such as American Sign Language. However, in the case of languages with low resources such as Sinhala Sign Language, it's challenging to use similar methods for developing translators since there are no known data sets available for such studies. In this study we have contributed a new dataset and developed a sign language translation method for the Sinhala Fingerspelling Alphabet. Our approach for recognizing fingerspelling signs involves decoupling pose classification from pose estimation and using postural synergies to reduce the dimensionality of features. As shown by our experiments, our method can achieve an average accuracy of over 87%. The size of the data set used is less than 12% of the size of data sets used in methods which have comparable accuracy. We have made the source code and the dataset publicly available.",Sign Language and Fingerspelling Recognition "Deaf people communicate naturally using sign languages, and because of this, they have difficulty in communicating using oral or written languages. To minimize this problem, one alternative is to use machine translation systems from sign languages into oral languages, especially for scenarios where human interpreters are not viable or unavailable. In this work, we address this problem by proposing a solution for fingerspelling recognition in Brazilian Sign Language (Libras) using Convolutional Neural Networks. The system uses a 224,000-image dataset created by our team, which represents the letters of the Libras alphabet signed by 12 people in different backgrounds, body arm, hand positions, and lighting patterns.
The results show that the solution had an average accuracy of approximately 99% in a dependent person scenario, and had an average accuracy of 71% in an independent person scenario. This type of solution can be used together with machine translators from Brazilian Portuguese to Libras, to assist in the communication of Brazilian deaf people.",Sign Language and Fingerspelling Recognition "The reliance of deep learning algorithms on large scale datasets is a significant challenge for sign language recognition (SLR). The shortage of data resources for training SLR models inevitably leads to poor generalisation, especially for low-resource languages. We propose novel data augmentation and preprocessing techniques based on synthetic data generation to overcome these generalisation difficulties. Using these methods, our models achieved a top-1 accuracy of 86.7% and a top-2 accuracy of 95.5% when evaluated against an unseen corpus of Irish Sign Language (ISL) fingerspelling video recordings. We believe that this constitutes a state-of-the-art performance baseline for an Irish Sign Language recognition model when tested on an unseen dataset.",Sign Language and Fingerspelling Recognition "The goal of the project is to create a machine learning model that can classify the numerous hand motions used in sign language fingerspelling. Communication with deaf and dumb persons is frequently difficult. A variety of hand, finger, and arm motions assist the deaf and hard of hearing in communicating with others and vice versa. Classification machine learning algorithms are taught on a set of image data in this user-independent model, and testing is done on a completely different set of data. For some people with particular needs, sign language is their only means of communicating their thoughts and feelings. It enables individuals to understand the world around them by visual descriptions and hence contribute to society. As a result, our model aids us in solving the problem more broadly. By watching the user's hand gestures, this transforms sign language to regular words.",Sign Language and Fingerspelling Recognition "Fingerspelling recognition of Chinese sign language rendered an opportunity to smooth the communication barriers of hearing-impaired people and healthy people, which occupies an important position in sign language recognition. This study proposed an eight-layer convolutional neural network, combined with three advanced techniques: batch normalization, dropout, and stochastic pooling. The output of the stochastic pooling was obtained via sampling from a multinomial distribution formed from the activations of each pooling region. In addition, we used a data augmentation method to enhance the training set. In total 10 runs were implemented with the hold-out randomly set for each run. Our method achieved the highest accuracy of 90.91% and overall accuracy of 89.32 ± 1.07%, which was superior to three state-of-the-art approaches compared.",Sign Language and Fingerspelling Recognition "Non-verbal communication frameworks such as Sign Language and Makaton serve as a vital means of communication for millions of people with hearing impairments. The development of accurate and efficient recognition systems for non-verbal communication is of great importance towards fostering inclusion through accessible systems. In this paper, we propose a novel approach to improving fingerspelling recognition through the application of neuroevolution as a means to hyperheuristically improve deep neural networks.
We propose the use of these algorithms to optimise the classification of low-dimensional datasets given the mixed levels of computational resources in the community setting. A dataset of 1678 images comprised of seven subjects performing ASL fingerspelling is processed into normalised keypoints, and three neuroevolution simulations are executed to search the problem space for the most effective topology. The results show that the simulation finds a promising set of hyperparameters, achieving a mean 10-fold cross-validation accuracy of 97.44% by using a total of 1478 hidden units within four layers. Our neuroevolution approach demonstrates remarkable potential for the enhancement of fingerspelling recognition in non-verbal communication systems, paving the way for more inclusive technologies in the future.",Sign Language and Fingerspelling Recognition "We present a large multi-signer video corpus for the Greek Sign Language (GSL), suitable for the development and evaluation of GSL recognition algorithms. The database has been collected as part of the SL-ReDu project that focuses on the education use-case of systematic teaching of GSL as a second language (L2). The project aims to assist this process by allowing self-monitoring and objective assessment of GSL learners' productions through the use of recognition technology, thus requiring suitable data resources relevant to the aforementioned use-case. To this end, we present the SL-ReDu GSL corpus, an extensive RGB+D video collection of 21 informants with a duration of 36 hours, recorded under studio conditions, consisting of: (i) isolated signs; (ii) continuous signing (annotated at the sentence level); and (iii) fingerspelling of words. We provide a detailed description of the design and acquisition methods used to develop it, along with corpus statistics and a comparison to existing sign language datasets. The SL-ReDu GSL corpus, as well as proposed frameworks for recognition experiments on it, are publicly available at https://www.sl-redu.e-ce.uth.gr/corpus.",Sign Language and Fingerspelling Recognition "The limited global competency in sign language makes the objective of improving communication for the deaf and hard-of-hearing community through computational processing both vital and necessary. In an effort to address this problem, our research leverages the Irish Sign Language hand shape (ISL-HS) dataset and state-of-the-art deep learning architectures to recognize the Irish Sign Language alphabet. We streamline the feature extraction methodology and pave the way for the efficient use of Convolutional Neural Networks (CNNs) by using Motion History Images (MHIs) for monitoring the sign language motions. The effectiveness of numerous powerful CNN architectures in deciphering the intricate patterns of motion captured in MHIs is investigated in this research. The process includes generating MHIs from the ISL dataset and then using these images to train several CNN models and evaluate their ability to recognize the Irish Sign Language alphabet. The results demonstrate the possibility of investigating MHIs with advanced CNNs to enhance sign language recognition, with a noteworthy accuracy percentage.
By contributing to the development of language processing tools and technologies for Irish Sign Language, this research has the potential to address the lack of technological communicative accessibility and inclusion for the deaf and hard-of-hearing community in Ireland.",Sign Language and Fingerspelling Recognition "Deep learning has completely changed approaches to machine translation. The initial ways of building machine translation software were based on rules, the next stage was based on statistics and probability theory. But nowadays, with new researches in the deep learning field has created simple solutions based on machine learning that outperform the best expert systems. This paper overviews the main features of machine translation for analyzing open data in legal cases based on recurrent neural networks. The advantages of systems based on RNN using the sequence-to-sequence model against statistical translation systems are also highlighted in the article. Two machine translation systems based on the sequence-to-sequence model were constructed using Keras and PyTorch machine learning libraries. Based on the obtained results, libraries' analysis was done, and their performance comparison. ",Rule-based MT (RBMT) "Could translation be fully automated? We must first acknowledge the complexity, ambiguity, and diversity of natural languages. These aspects of natural languages, when combined with a particular dilemma known as the computational dilemma, appear to imply that the machine translator faces certain obstacles that a human translator has already managed to overcome. At the same time, science has not yet solved the problem of how human brains process natural languages and how human beings come to acquire natural language understanding. We will then distinguish between the task of translation and the responsibility of the translator. Thereafter, we will conduct a survey of the methods of machine translation (viz. RBMT, SMT, NMT, foundation models or large language models). These methods will then be critically evaluated both in general and relative to Bar-Hillel’s hypothesis about the impossibility of fully automatic, high-quality machine translation (FAHQMT). Some concluding remarks will be made about the scope, prospects, and limits of machine translation.",Rule-based MT (RBMT) "The Word Sense Disambiguation (WSD) is a process of disambiguating the sense of the text according to its context. Machine translation is one of the challenging task since it requires effective representation of the text to capture semantic relation between Hindi lyrics in English normal language behaviour. This paper focuses on WSD methods to deal with dialects that convert Hindi lyrics to English in its syntactic structure of the words. WSD is a phenomenon for disambiguating the text so that machine would be capable to deduce correct sense of individual given words. WSD is critical for solving natural language tasks such as Machine Translation (MT) and speech processing. The distinguishing proof of significant words in Hindi as the language is not as simple as that of dialects in English. The interpretations of sonnets through the machines are exceptionally essential and deliberate about mind-blowing events. The interpretation of English ballads into other local dialects can turn out to be very straightforward, however, vice-versa is troublesome. This is due to the assortment of structures, classes, and feelings of the local dialects. 
Various endeavours have been connected far and wide towards the programmed interpretation of ballads from local dialects into English. In this paper, we propose a half breed MT (HBMT) procedure driven by the standard based MT together with measurements based on statistical machine translation (SMT) and rule-based machine translation (RBMT) for WSD in natural script Hindi in English Lyrics. This proposed method improves the semantic and syntactic accuracy of a machine interpretation framework. Finally, the proposed approach result is compared with the machine translation methods such a Google and Microsoft Bing Babylonian and HMT translators provided achieves a better outcome compared to the existing standards.",Rule-based MT (RBMT) "Rule-based machine translation (RBMT) captures linguistic information about the source and target languages. This information is retrieved from (bilingual) dictionaries and grammar rules. This paper proposes an active learning (AL) method to grow structural transfer rules at the chunker level. To this end, two sets of experiments are performed based on two types of sentences extracted from Mizan English-Persian Parallel Corpus which are selected manually and randomly. The results show adding newly written chunker rules to the transformation file using pool-based AL technique improves translation system more compared to a random chunker rule selection baseline.",Rule-based MT (RBMT) "The world is united socially and technologically with means of languages. Hence there is a big requirement for transfer of information from one language to another. Sanskrit is considered as an important language in the Indo-European family. A lot of work is still required to explore the potential of this language to open vistas in the computational linguistic domain. Currently, Sanskrit-Hindi translation system uses rule-based and statistical approaches. These approaches are not adequate for extending the system to generic and huge domains. In order to remove this problem, an efficient system is required to be developed which would cover various domains. Therefore, a hybrid system combining the best of Neural Machine Translation (NMT) and Rule-Based Machine Translation (RBMT) is developed and presented in this paper. The proposed hybrid model has a BLEU score of 61.2% which is higher than other existing systems i.e 41%. This approach uses deep learning feature to overcome drawbacks of the existing systems. Experimental results show that the proposed hybrid system using deep learning model has a high accuracy of 99%. It is also evaluated that it has less response time and more speed than existing systems.",Rule-based MT (RBMT) "The Word Sense Disambiguation (WSD) is a process of disambiguating the sense of the text according to its context. Machine translation is one of the challenging task since it requires effective representation of the text to capture semantic relation between Hindi lyrics in English normal language behaviour. This paper focuses on WSD methods to deal with dialects that convert Hindi lyrics to English in its syntactic structure of the words. WSD is a phenomenon for disambiguating the text so that machine would be capable to deduce correct sense of individual given words. WSD is critical for solving natural language tasks such as Machine Translation (MT) and speech processing. The distinguishing proof of significant words in Hindi as the language is not as simple as that of dialects in English. 
The interpretations of sonnets through the machines are exceptionally essential and deliberate about mind-blowing events. The interpretation of English ballads into other local dialects can turn out to be very straightforward, however, vice-versa is troublesome. This is due to the assortment of structures, classes, and feelings of the local dialects. Various endeavours have been connected far and wide towards the programmed interpretation of ballads from local dialects into English. In this paper, we propose a half breed MT (HBMT) procedure driven by the standard based MT together with measurements based on statistical machine translation (SMT) and rule-based machine translation (RBMT) for WSD in natural script Hindi in English Lyrics. This proposed method improves the semantic and syntactic accuracy of a machine interpretation framework. Finally, the proposed approach result is compared with the machine translation methods such a Google and Microsoft Bing Babylonian and HMT translators provided achieves a better outcome compared to the existing standards.",Rule-based MT (RBMT) "Social media contains valuable information about any individual, political entity, or brand since users posted impromptu and honest opinions which benefit a better data-driven. Social media text is unstructured and contains some issues such as slang, dialect, and short form. Most work addresses the text issue using an approach called text normalization. Text normalization is a preprocessing step to transform noisy words into their standard form and improve the performance of NLP applications, such as Sentiment Analysis, Part-of-speech Tagging, and Name Entity Recognition. However, less work utilizes Neural Machine Translation (NMT) approach to address text problems, specifically for short form and slang. In this work, three different NMT architectures are used to address the text problems. We conducted our experiments by training three different models under the same synthetic dataset. We prepared the dataset by collecting a local Malay news site which contains 46k of sentences and 800k words. The sentences are further processed using rule-based to create a parallel text. We then split the parallel text into training, validation, and test sets. The best model that utilize Transformer architecture achieved 99% accuracy on the validation set which provides an advantage in taking the sentence context during the text normalization process.",Rule-based MT (RBMT) "Neural machine translation (NMT) is often heralded as the most effective approach to machine translation due to its success on language pairs with large parallel corpora. However, neural methods produce less than ideal results on low-resource languages when their performance is evaluated using accuracy metrics like the Bilingual Evaluation Understudy (BLEU) score. One alternative to NMT is rule-based machine-translation (RBMT), but it too has drawbacks. Furthermore, little research has been done to compare the two approaches on criteria beyond their respective accuracies. This thesis evaluates RBMT and NMT systems holistically based on efficacy, ethicality, and utility to low-resource language communities. Using the language Karachay-Balkar as a case-study, the latter half of this thesis investigates how two free and open-source machine translation packages, Apertium (rule-based) and JoeyNMT (neural), might support community-driven machine translation development. 
While neither platform is found to be ideal, this thesis finds that the Apertium is more conducive to a community driven machine translation development process than JoeyNMT when evaluated on the criteria of efficiency, accessibility, ease of deployment, and interpretability.",Rule-based MT (RBMT) "We consider a low-resource translation task from Finnish into Northern Sámi. Collecting all available parallel data between the languages, we obtain around 30,000 sentence pairs. However, there exists a significantly larger monolingual Northern Sámi corpus, as well as a rule-based machine translation (RBMT) system between the languages. To make the best use of the monolingual data in a neural machine translation (NMT) system, we use the backtranslation approach to create synthetic parallel data from it using both NMT and RBMT systems. Evaluating the results on an in-domain test set and a small out-of-domain set, we find that the RBMT backtranslation outperforms NMT backtranslation clearly for the out-of-domain test set, but also slightly for the in-domain data, for which the NMT backtranslation model provided clearly better BLEU scores than the RBMT. In addition, combining both backtranslated data sets improves the RBMT approach only for the in-domain test set. This suggests that the RBMT system provides general-domain knowledge that cannot be found from the relative small parallel training data.",Rule-based MT (RBMT) "The field of machine translation (MT) has advanced over the years, with three major approaches dominating the field: Rule-Based Machine Translation (RBMT), Statistical Machine Translation (SMT), and Neural Machine Translation (NMT). This research paper provides an extensive review of these approaches, including their development, advantages, and disadvantages. Initially, RBMT represented a cutting-edge technology, performing translation using dictionaries and explicit language rules. However, dealing with intricate linguistic patterns was severely hindered by its rigidity and limited scalability, which gave rise to SMT. In order to provide more flexible translations, SMT used statistical models to extract patterns from large multilingual corpora. This method became well-known since it was data-driven, but it still had problems with domain adaptation and a lack of high-quality parallel data. With the introduction of NMT, a breakthrough occurred in the field of translation. NMT uses deep learning techniques including recurrent neural networks and Transformer-based neural networks to produce more meaningful and accurate information. Since NMT is end-to-end, it increases translation efficiency and has no RBMT or SMT limitations, especially for low-resource languages and complex sentence structures. In this paper, we discuss these approaches used by the numerous researchers in this field, and highlight their respective advantages and disadvantages.",Rule-based MT (RBMT) "This paper presents a comparison of post-editing (PE) changes performed on English-to-Finnish neural (NMT), rule-based (RBMT) and statistical machine translation (SMT) output, combining a product-based and a process-based approach. A total of 33 translation students acted as participants in a PE experiment providing both post-edited texts and edit process data. Our product-based analysis of the post-edited texts shows statistically significant differences in the distribution of edit types between machine translation systems. 
Deletions were the most common edit type for the RBMT, insertions for the SMT, and word form changes as well as word substitutions for the NMT system. The results also show significant differences in the correctness and necessity of the edits, particularly in the form of a large number of unnecessary edits in the RBMT output. Problems related to certain verb forms and ambiguity were observed for NMT and SMT, while RBMT was more likely to handle them correctly. Process-based comparison of effort indicators shows a slight increase of keystrokes per word for NMT output, and a slight decrease in average pause length for NMT compared to RBMT and SMT in specific text blocks. A statistically significant difference was observed in the number of visits per sub-segment, which is lower for NMT than for RBMT and SMT. The results suggest that although different types of edits were needed to outputs from NMT, RBMT and SMT systems, the difference is not necessarily reflected in process-based effort indicators.",Rule-based MT (RBMT) "Translating words and sentences is a major challenge, let alone a whole book. Traditionally, publishers used to manually translate a whole book. As computers and the Internet came into being, they were used for translating words and sentences from one language to another using various methods. Dictionary based techniques are one of the oldest machine translation techniques. Modern-day apps use statistical machine translation techniques along with neural networks and/or natural language processing methods, where there is a high chance of error as they focus more on the language diversity. The major motivation of this research work is to improve the English learning skills of rural South Indian school students. As per a survey conducted by the authors, the rural students lack in command over English, due to the difficulty in translation services. The authors created a dataset consisting of English words from Tamil Nadu State textbooks of classes up to grade 10, to execute the translation. In this paper, we have used dictionary mapping, such as linear, binary and Trie search methods, for performing word translation from English to Tamil, and compared them in different conditions to identify the best one. It is observed that the Binary search method performs better in all cases and hence it was selected to be implemented in the translation app. It is then clubbed with natural language processing (NLP) techniques and Rule Based Machine Translation (RBMT) techniques to carry out the translation of a whole sentence. This whole technique is integrated into an app, which is intended for students who are not fluent in English and require assistance.",Rule-based MT (RBMT) "Arabic is one of the six major world languages. It originated in the area currently known as the Arabian Peninsula. Arabic is the joint official language in Middle Eastern and African states. Large communities of Arabic speakers have existed outside of the Middle East since the end of the last century, particularly in the United States and Europe. So finding a quick and efficient Arabic machine translator has become an urgent necessity, due to the differences between the languages spoken in the world's communities and the vast development that has occurred worldwide. Arabic combines many of the significant challenges of other languages like word order and ambiguity. The word ordering problem arises because Arabic has four sentence structures which allow different word orders.
Ambiguity in the Arabic language is a notorious problem because of the richness and complexity of Arabic morphology. The core problems in machine translation are reordering the words and estimating the right word translation among many options in the lexicon. The Rule-Based Machine Translation (RBMT) approach is the way to reorder words, and the statistical approach, such as Expectation Maximisation (EM), is the way to select the right word translations and count word frequencies. Combining RBMT with EM plays an important role in generating a good-quality MT. This paper presents a combination of the rule-based machine translation (RBMT) approach with the Expectation Maximisation (EM) algorithm. These two techniques have been applied successfully to word ordering and ambiguity problems in Arabic-to-English machine translation.",Rule-based MT (RBMT) "Neural machine translation (NMT) was shown to produce more fluent output than phrase-based statistical (PBMT) and rule-based machine translation (RBMT). However, improved fluency makes it more difficult for post editors to identify and correct adequacy errors, because unlike RBMT and SMT, in NMT adequacy errors are frequently not anticipated by fluency errors. Omissions and additions of content in otherwise flawlessly fluent NMT output are the most prominent types of such adequacy errors, which can only be detected with reference to source texts. This contribution explores the degree of semantic similarity between source texts, NMT output and post edited output. In this way, computational semantic similarity scores (cosine similarity) are related to human quality judgments. The analyses are based on publicly available NMT post editing data annotated for errors in three language pairs (EN-DE, EN-LV, EN-HR) with the Multidimensional Quality Metrics (MQM). Methodologically, this contribution tests whether cross-language aligned word embeddings as the sole source of semantic information mirror human error annotation.",Rule-based MT (RBMT) "Morphology is a branch of linguistics that deals with the internal structure of words in a natural language. Any word in a natural language is comprised of one or more morphemes. A morpheme is the smallest linguistic unit that forms a word. A morphological analyzer is a tool that analyses a given input word and outputs its internal structure along with its different morphemes. Conversely, a morphological generator creates the possible word(s) given the morphemes. This paper presents a design of a morphological generator for an English to Malayalam and English to Hindi rule-based machine translation system using declension rules. Declensions, also termed inflections, are the different variations or inflected forms of a particular word in a language. The morphological generator is an essential part in the machine translation process that creates inflected words from the root word according to the morphological rules of a language. Machine translation is the branch of computational linguistics that automatically translates human language to another. The language to be translated is labeled as source language (SL) and the language into which translation is done is termed as target language (TL). The declension rule-based machine translation is accomplished by using grammar rules according to the word inflections of the target language. The proposed morphological generator module is elucidated with its framework and each of its modules and their working are expatiated in detail.
The input and the output to/from the module are also illustrated using examples.",Rule-based MT (RBMT) "In this paper we present a set of experiments performing machine translation related to low-resourced Arabic dialects in addition to a zero-resourced dialect (Berber). For this, we extended the parallel PADIC corpus by adding the Berber dialect corpus and translating manually more than 6000 Arabic sentences. We applied both Rule-based Machine Translation (RBMT) and Statistical Machine Translation (SMT) with and without a transliteration process. The average overall BLEU score is 42.68% with RBMT and 61.94% with SMT.",Rule-based MT (RBMT) "Machine Translation (MT) is used for giving a translation from a source language to a target language. Machine translation simply translates text or speech from one language to another language, but this process is not sufficient to give the perfect translation of a text due to the requirement of identification of whole expressions and their direct counterparts. Neural Machine Translation (NMT) is one of the most standard machine translation methods, which has made great progress in the recent years especially in non-universal languages. However, local language translation software for other foreign languages is limited and needs improving. In this paper, the Chinese language is translated to the Urdu language with the help of Open Neural Machine Translation (OpenNMT) in Deep Learning. Firstly, a Chineseto Urdu language sentences datasets were established and supported with Seven million sentences. After that, these datasets were trained by using the Open Neural Machine Translation (OpenNMT) method. At the final stage, the translation was compared to the desired translation with the help of the Bleu Score Method.",Rule-based MT (RBMT) "The article considers the issues related to the semantic, grammatical, stylistic and technical difficulties currently present in machine translation and compares its four main approaches: Rule-based (RBMT), Corpora-based (CBMT), Neural (NMT), and Hybrid (HMT). It also examines some ""open systems"", which allow the correction or augmentation of content by the users themselves (""crowdsourced translation""). The authors of the article, native speakers presenting different countries (Russia, Greece, Malaysia, Japan and Serbia), tested the translation quality of the most representative phrases from the English, Russian, Greek, Malay and Japanese languages by using different machine translation systems: PROMT (RBMT), Yandex. Translate (HMT) and Google Translate (NMT). The test results presented by the authors show low ""comprehension level"" of semantic, linguistic and pragmatic contexts of translated texts, mistranslations of rare and culture-specific words,unnecessary translation of proper names, as well as a low rate of idiomatic phrase and metaphor recognition. It is argued that the development of machine translation requires incorporation of literal, conceptual, and content-and-contextual forms of meaning processing into text translation expansion of metaphor corpora and contextological dictionaries, and implementation of different types and styles of translation, which take into account gender peculiarities, specific dialects and idiolects of users. The problem of untranslatability ('linguistic relativity') of the concepts, unique to a particular culture, has been reviewed from the perspective of machine translation. 
It has also been shown, that the translation of booming Internet slang, where national languages merge with English, is almost impossible without human correction.",Rule-based MT (RBMT) "The Madurese language is a local Indonesian culture that needs to be preserved. The morphology of Madurese words is unique, and there are several forms of words: affixation, root word, degree modifier, and reduplication word. Each word has a pattern that Rule-Based Machine Translation (RBMT) uses to ensure accurate translation. RBMT requires a stemming process to convert each word into its root word. The Madurese language’s morphology has several unique characteristics that need careful study, including the affix. An affix attached to a word can have different meanings depending on the conditions. This paper develops a new stemming algorithm called modified ECS (mECS), i.e., a modification of ECS stemming combined with the concept of Rule-Based word morphology. The system was tested using 50 Madurese language sentences randomly selected from a Madurese language textbook for the 5th-grade elementary school Madura language learning textbook. The accuracy of proposed RBMT system that implements the newly developed mECS stemming algorithm is 85%.",Rule-based MT (RBMT) "We propose a new paradigm for machine translation that is particularly useful for no-resource languages (those without any publicly available bilingual or monolingual corpora): LLM-RBMT (LLM-Assisted Rule Based Machine Translation). Using the LLM-RBMT paradigm, we design the first language education/revitalization-oriented machine translator for Owens Valley Paiute (OVP), a critically endangered Indigenous American language for which there is virtually no publicly available data. We present a detailed evaluation of the translator's components: a rule-based sentence builder, an OVP to English translator, and an English to OVP translator. We also discuss the potential of the paradigm, its limitations, and the many avenues for future research that it opens up.",Rule-based MT (RBMT) "The powerful modeling capabilities of all-attention-based transformer architectures often cause overfitting and-for natural language processing tasks-lead to an implicitly learned internal language model in the autoregressive transformer decoder complicating the integration of external language models. In this paper, we explore relaxed attention, a simple and easy-to-implement smoothing of the attention weights, yielding a two-fold improvement to the general transformer architecture: First, relaxed attention provides regularization when applied to the self-attention layers in the encoder. Second, we show that it naturally supports the integration of an external language model as it suppresses the implicitly learned internal language model by relaxing the cross attention in the decoder. We demonstrate the benefit of relaxed attention across several tasks from different applications with clear improvement in combination with recent benchmark approaches using various transformer model variants and sizes. 
Specifically, we exceed the former state-of-the-art performance of 26.90% word error rate on the largest public lip-reading LRS3 benchmark with a word error rate of 26.31%, as well as we achieve a top-performing BLEU score of 37.67 on the IWSLT14 (DE → EN) machine translation task without external language models and virtually no additional model parameters.",Transformer Models "Increasingly larger and better Transformer models keep advancing state-of-the-art accuracy and capability for Natural Language Processing applications. These models demand more computational power, storage, and energy. Mokey reduces the footprint of state-of-the-art 32-bit or 16-bit floating-point transformer models by quantizing all values to 4-bit indexes into dictionaries of representative 16-bit fixed-point centroids. Mokey does not need fine-tuning, an essential feature as often the training resources or datasets are not available to many. Exploiting the range of values that naturally occur in transformer models, Mokey selects centroid values to also fit an exponential curve. This unique feature enables Mokey to replace the bulk of the original multiply-accumulate operations with narrow 3b fixed-point additions resulting in an area- and energy-efficient hardware accelerator design. Over a set of state-of-the-art transformer models, the Mokey accelerator delivers an order of magnitude improvements in energy efficiency over a Tensor Cores-based accelerator while improving performance by at least 4× and as much as 15× depending on the model and on-chip buffering capacity. Optionally, Mokey can be used as memory compression assist for any other accelerator transparently stashing wide floating-point or fixed-point activations or weights into narrow 4-bit indexes. Mokey proves superior to prior state-of-the-art quantization methods for Transformers.",Transformer Models "Backdoors can be injected to NLP models such that they misbehave when the trigger words or sentences appear in an input sample. Detecting such backdoors given only a subject model and a small number of benign samples is very challenging because of the unique nature of NLP applications, such as the discontinuity of pipeline and the large search space. Existing techniques work well for backdoors with simple triggers such as single character/word triggers but become less effective when triggers and models become complex (e.g., transformer models). We propose a new backdoor scanning technique. It transforms a subject model to an equivalent but differentiable form. It then uses optimization to invert a distribution of words denoting their likelihood in the trigger. It leverages a novel word discriminativity analysis to determine if the subject model is particularly discriminative for the presence of likely trigger words. Our evaluation on 3839 NLP models from the TrojAI competition and existing works with 7 state-of-art complex structures such as BERT and GPT, and 17 different attack types including two latest dynamic attacks, shows that our technique is highly effective, achieving over 0.9 detection accuracy in most scenarios and substantially outperforming two state-of-the-art scanners. Our submissions to TrojAI leaderboard achieve top performance in 2 out of the 3 rounds for NLP backdoor scanning.",Transformer Models "In the past few years we have seen the meteoric appearance of dozens of foundation models of the Transformer family, all of which have memorable and sometimes funny, but not self-explanatory, names. 
The goal of this paper is to offer a somewhat comprehensive but simple catalog and classification of the most popular Transformer models. The paper also includes an introduction to the most important aspects and innovations in Transformer models. Our catalog will include models that are trained using self-supervised learning (e.g., BERT or GPT3) as well as those that are further trained using a human-in-the-loop (e.g. the InstructGPT model used by ChatGPT).",Transformer Models "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.",Transformer Models "Punctuation restoration is a common post-processing problem for Automatic Speech Recognition (ASR) systems. It is important to improve the readability of the transcribed text for the human reader and facilitate NLP tasks. Current state-of-art address this problem using different deep learning models. Recently, transformer models have proven their success in downstream NLP tasks, and these models have been explored very little for the punctuation restoration problem. In this work, we explore different transformer based models and propose an augmentation strategy for this task, focusing on high-resource (English) and low-resource (Bangla) languages. For English, we obtain comparable state-of-the-art results, while for Bangla, it is the first reported work, which can serve as a strong baseline for future work. We have made our developed Bangla dataset publicly available for the research community.",Transformer Models "Abstract The effectiveness of deep learning methods can be largely attributed to the automated extraction of relevant features from raw data. In the field of functional genomics, this generally concerns the automatic selection of relevant nucleotide motifs from DNA sequences. To benefit from automated learning methods, new strategies are required that unveil the decision-making process of trained models. In this paper, we present a new approach that has been successful in gathering insights on the transcription process in Escherichia coli. This work builds upon a transformer-based neural network framework designed for prokaryotic genome annotation purposes. 
We find that the majority of subunits (attention heads) of the model are specialized towards identifying transcription factors and are able to successfully characterize both their binding sites and consensus sequences, uncovering both well-known and potentially novel elements involved in the initiation of the transcription process. With the specialization of the attention heads occurring automatically, we believe transformer models to be of high interest towards the creation of explainable neural networks in this field.",Transformer Models "This paper is responding to the MIA-COV19 challenge to classify COVID from non-COVID based on CT lung images. The COVID-19 virus has devastated the world in the last eighteen months by infecting more than 182 million people and causing over 3.9 million deaths. The overarching aim is to predict the diagnosis of the COVID-19 virus from chest radiographs, through the development of explainable vision transformer deep learning techniques, leading to population screening in a more rapid, accurate and transparent way. In this competition, there are 5381 three-dimensional (3D) datasets in total, including 1552 for training, 374 for evaluation and 3455 for testing. While most of the data volumes are in axial view, there are a number of subjects' data are in coronal or sagittal views with 1 or 2 slices are in axial view. Hence, while 3D data based classification is investigated, in this competition, 2D images remains the main focus. Two deep learning methods are studied, which are vision transformer (ViT) based on attention models and DenseNet that is built upon conventional convolutional neural network (CNN). Initial evaluation results based on validation datasets whereby the ground truth is known indicate that ViT performs better than DenseNet with F1 scores being 0.76 and 0.72 respectively. Codes are available at GitHub at .",Transformer Models "The dot product self-attention is known to be central and indispensable to state-of-the-art Transformer models. But is it really required? This paper investigates the true importance and contribution of the dot product-based self-attention mechanism on the performance of Transformer models. Via extensive experiments, we find that (1) random alignment matrices surprisingly perform quite competitively and (2) learning attention weights from token-token (query-key) interactions is useful but not that important after all. To this end, we propose \textsc{Synthesizer}, a model that learns synthetic attention weights without token-token interactions. In our experiments, we first show that simple Synthesizers achieve highly competitive performance when compared against vanilla Transformer models across a range of tasks, including machine translation, language modeling, text generation and GLUE/SuperGLUE benchmarks. When composed with dot product attention, we find that Synthesizers consistently outperform Transformers. Moreover, we conduct additional comparisons of Synthesizers against Dynamic Convolutions, showing that simple Random Synthesizer is not only $60\%$ faster but also improves perplexity by a relative $3.5\%$. Finally, we show that simple factorized Synthesizers can outperform Linformers on encoding only tasks.",Transformer Models "Automated essay scoring (AES) is gaining increasing attention in the education sector as it significantly reduces the burden of manual scoring and allows ad hoc feedback for learners. 
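The Synthesizer abstract above argues that attention weights need not come from query-key dot products; the Random Synthesizer variant simply learns (or fixes) an L x L mixing matrix that is independent of the input tokens. A minimal PyTorch sketch of that idea, with class and argument names of my own choosing rather than the authors' code:

import torch
import torch.nn as nn

class RandomSynthesizerAttention(nn.Module):
    # Attention weights are a learned (optionally fixed) L x L matrix;
    # only the values are token-dependent, no query-key interactions.
    def __init__(self, max_len, d_model, trainable=True):
        super().__init__()
        self.R = nn.Parameter(torch.randn(max_len, max_len), requires_grad=trainable)
        self.value = nn.Linear(d_model, d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):                                  # x: (batch, L, d_model)
        L = x.size(1)
        attn = torch.softmax(self.R[:L, :L], dim=-1)       # (L, L), input-independent
        mixed = attn @ self.value(x)                       # (batch, L, d_model)
        return self.out(mixed)

x = torch.randn(2, 16, 64)
print(RandomSynthesizerAttention(max_len=32, d_model=64)(x).shape)  # (2, 16, 64)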
Natural language processing based on machine learning has been shown to be particularly suitable for text classification and AES. While many machine-learning approaches for AES still rely on a bag of words (BOW) approach, we consider a transformer-based approach in this paper, compare its performance to a logistic regression model based on the BOW approach, and discuss their differences. The analysis is based on 2088 email responses to a problem-solving task that were manually labeled in terms of politeness. Both transformer models considered in the analysis outperformed the regression-based model without any hyperparameter tuning. We argue that, for AES tasks such as politeness classification, the transformer-based approach has significant advantages, while a BOW approach suffers from not taking word order into account and reducing the words to their stem. Further, we show how such models can help increase the accuracy of human raters, and we provide detailed instructions on how to implement transformer-based models for one’s own purposes.",Transformer Models "The dot product self-attention is known to be central and indispensable to state-of-the-art Transformer models. But is it really required? This paper investigates the true importance and contribution of the dot product-based self-attention mechanism to the performance of Transformer models. Via extensive experiments, we find that (1) random alignment matrices surprisingly perform quite competitively and (2) learning attention weights from token-token (query-key) interactions is not that important after all. To this end, we propose \textsc{Synthesizer}, a model that learns synthetic attention weights without token-token interactions. Our experimental results show that \textsc{Synthesizer} is competitive against vanilla Transformer models across a range of tasks, including MT (EnDe, EnFr), language modeling (LM1B), abstractive summarization (CNN/Dailymail), dialogue generation (PersonaChat) and Multi-task language understanding (GLUE, SuperGLUE).",Transformer Models "Arabic is a Semitic language which is widely spoken with many dialects. Given the success of pre-trained language models, many transformer models trained on Arabic and its dialects have surfaced. While these models have been compared with respect to downstream NLP tasks, no evaluation has been carried out to directly compare the internal representations. We probe how linguistic information is encoded in Arabic pretrained models, trained on different varieties of the Arabic language. We perform a layer and neuron analysis on the models using three intrinsic tasks: two morphological tagging tasks based on MSA (modern standard Arabic) and dialectal POS-tagging, and a dialectal identification task. Our analysis reveals interesting findings, such as: i) word morphology is learned at the lower and middle layers; ii) dialectal identification necessitates more knowledge and is hence preserved even in the final layers; iii) despite a large overlap in their vocabulary, the MSA-based models fail to capture the nuances of Arabic dialects; iv) we found that neurons in embedding layers are polysemous in nature, while the neurons in middle layers are exclusive to specific properties.",Transformer Models "We investigated the effect of different training scenarios on predicting the (retro)synthesis of chemical compounds using a text-like representation of chemical reactions (SMILES) and the Natural Language Processing (NLP) neural network Transformer architecture.
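For the AES study above, the bag-of-words baseline that the transformers are compared against can be expressed in a few lines of scikit-learn. The toy emails and labels below are placeholders of my own, not the 2088 manually labeled responses:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# toy data standing in for labeled email responses (0 = impolite, 1 = polite)
texts = ["could you kindly send the report",
         "send the report now",
         "thank you very much for your help",
         "this is wrong, fix it"]
labels = [1, 0, 1, 0]

# Bag-of-words counts discard word order, which the abstract identifies
# as a key weakness relative to transformer-based scoring.
bow_clf = make_pipeline(CountVectorizer(), LogisticRegression())
bow_clf.fit(texts, labels)
print(bow_clf.predict(["would you kindly help"]))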
We showed that data augmentation, which is a powerful method used in image processing, eliminated the effect of data memorization by neural networks and improved their performance for the prediction of new sequences. This effect was observed when augmentation was applied to the input and the target data simultaneously. The top-5 accuracy was 84.8% for the prediction of the largest fragment (thus identifying the principal transformation for classical retro-synthesis) for the USPTO-50k test dataset, and was achieved by a combination of SMILES augmentation and a beam search algorithm. The same approach provided significantly better results for the prediction of direct reactions from the single-step USPTO-MIT test set. Our model achieved 90.6% top-1 and 96.1% top-5 accuracy for its challenging mixed set and 97% top-5 accuracy for the USPTO-MIT separated set. It also significantly improved results for USPTO-full set single-step retrosynthesis for both top-1 and top-10 accuracies. The appearance frequency of the most abundantly generated SMILES was well correlated with the prediction outcome and can be used as a measure of the quality of reaction prediction. Development of algorithms to predict reactants and reagents given a target molecule is key to accelerating retrosynthesis approaches. Here the authors demonstrate that applying augmentation techniques to the SMILES representation of target data significantly improves the quality of the reaction predictions.",Transformer Models "Transformer-based deep learning models have become a ubiquitous vehicle to drive a variety of Natural Language Processing (NLP) related tasks beyond their accuracy ceiling. However, these models also suffer from two pronounced challenges, that is, gigantic model size and prolonged turnaround time. To this end, we introduce E.T. that rE-thinks self-attention computation for Transformer models on GPUs with the following contributions: First, we introduce a novel self-attention architecture, which encompasses two tailored self-attention operators with corresponding sequence length-aware optimizations, and operation reordering optimizations. Second, we present an attention-aware pruning design which judiciously uses various pruning algorithms to reduce more computations, hence achieving significantly shorter turnaround time. For the pruning algorithms, we not only revamp the existing pruning algorithms, but also tailor new ones for transformer models. Taken together, we evaluate E.T. across a variety of benchmarks for Transformer, BERTBASE and DistilBERT, where E.T. presents superior performance over the mainstream projects, including the popular Nvidia Enterprise solutions, i.e., TensorRT and FasterTransformer.",Transformer Models "We propose TandA, an effective technique for fine-tuning pre-trained Transformer models for natural language tasks. Specifically, we first transfer a pre-trained model into a model for a general task by fine-tuning it with a large and high-quality dataset. We then perform a second fine-tuning step to adapt the transferred model to the target domain. We demonstrate the benefits of our approach for answer sentence selection, which is a well-known inference task in Question Answering. We built a large-scale dataset to enable the transfer step, exploiting the Natural Questions dataset.
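The SMILES augmentation described above typically means enumerating alternative, chemically equivalent SMILES strings for the same molecule. A small sketch using RDKit, assuming a build whose Chem.MolToSmiles accepts the doRandom flag; this illustrates the general augmentation idea rather than the authors' exact pipeline:

from rdkit import Chem

def augment_smiles(smiles, n=5):
    # Enumerate up to n alternative (randomized) SMILES for the same molecule.
    mol = Chem.MolFromSmiles(smiles)
    variants = set()
    for _ in range(n * 10):                    # oversample, then deduplicate
        variants.add(Chem.MolToSmiles(mol, canonical=False, doRandom=True))
        if len(variants) >= n:
            break
    return sorted(variants)

print(augment_smiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin, several equivalent strings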
Our approach establishes the state of the art on two well-known benchmarks, WikiQA and TREC-QA, achieving impressive MAP scores of 92% and 94.3%, respectively, which largely outperform the highest scores of 83.4% and 87.5% from previous work. We empirically show that TandA generates more stable and robust models, reducing the effort required for selecting optimal hyper-parameters. Additionally, we show that the transfer step of TandA makes the adaptation step more robust to noise. This enables a more effective use of noisy datasets for fine-tuning. Finally, we also confirm the positive impact of TandA in an industrial setting, using domain-specific datasets subject to different types of noise.",Transformer Models "Traditional (unstructured) pruning methods for a Transformer model focus on regularizing the individual weights by penalizing them toward zero. In this work, we explore spectral-normalized identity priors (SNIP), a structured pruning approach which penalizes an entire residual module in a Transformer model toward an identity mapping. Our method identifies and discards unimportant non-linear mappings in the residual connections by applying a thresholding operator on the function norm, and is applicable to any structured module including a single attention head, an entire attention block, or a feed-forward subnetwork. Furthermore, we introduce spectral normalization to stabilize the distribution of the post-activation values of the Transformer layers, further improving the pruning effectiveness of the proposed methodology. We conduct experiments with BERT on 5 GLUE benchmark tasks to demonstrate that SNIP achieves effective pruning results while maintaining comparable performance. Specifically, we improve the performance over the state-of-the-art by 0.5 to 1.0% on average at a 50% compression ratio.",Transformer Models "Meaningful exploration of the chemical space of druglike molecules in drug design is a highly challenging task due to a combinatorial explosion of possible modifications of molecules. In this work, we address this problem with transformer models, a type of machine learning (ML) model originally developed for machine translation. By training transformer models on pairs of similar bioactive molecules from the public ChEMBL data set, we enable them to learn medicinal-chemistry-meaningful, context-dependent transformations of molecules, including those absent from the training set. By retrospective analysis on the performance of transformer models on ChEMBL subsets of ligands binding to COX2, DRD2, or HERG protein targets, we demonstrate that the models can generate structures identical or highly similar to most active ligands, despite the models not having seen any ligands active against the corresponding protein target during training. Our work demonstrates that human experts working on hit expansion in drug design can easily and quickly employ transformer models, originally developed to translate texts from one natural language to another, to ""translate"" from known molecules active against a given protein target to novel molecules active against the same target.",Transformer Models "In this work, we study the presence of expert units in pre-trained Transformer Models (TM), and how they impact a model's performance. We define expert units to be neurons that are able to classify a concept with a given average precision, where a concept is represented by a binary set of sentences containing the concept (or not).
Leveraging the OneSec dataset (Scarlini et al., 2019), we compile a dataset of 1641 concepts that allows diverse expert units in TM to be discovered. We show that expert units are important in several ways: (1) The presence of expert units is correlated ($r^2=0.833$) with the generalization power of TM, which allows ranking TM without requiring fine-tuning on suites of downstream tasks. We further propose an empirical method to decide how accurate such experts should be to evaluate generalization. (2) The overlap of top experts between concepts provides a sensible way to quantify concept co-learning, which can be used for explainability of unknown concepts. (3) We show how to self-condition off-the-shelf pre-trained language models to generate text with a given concept by forcing the top experts to be active, without requiring re-training the model or using additional parameters.",Transformer Models "Language-model-based pre-trained models such as BERT have provided significant gains across different NLP tasks. In this paper, we study different types of transformer-based pre-trained models such as auto-regressive models (GPT-2), auto-encoder models (BERT), and seq2seq models (BART) for conditional data augmentation. We show that prepending the class labels to text sequences provides a simple yet effective way to condition the pre-trained models for data augmentation. Additionally, on three classification benchmarks, the pre-trained Seq2Seq model outperforms other data augmentation methods in a low-resource setting. Further, we explore how different pre-trained-model-based data augmentation methods differ in terms of data diversity, and how well such methods preserve the class-label information.",Transformer Models "Closing the gap between measurable genetic information and observable traits is a longstanding challenge in genomics. Yet, the prediction of molecular phenotypes from DNA sequences alone remains limited and inaccurate, often driven by the scarcity of annotated data and the inability to transfer learnings between prediction tasks. Here, we present an extensive study of foundation models pre-trained on DNA sequences, named the Nucleotide Transformer, ranging from 50M up to 2.5B parameters and integrating information from 3,202 diverse human genomes, as well as 850 genomes selected across diverse phyla, including both model and non-model organisms. These transformer models yield transferable, context-specific representations of nucleotide sequences, which allow for accurate molecular phenotype prediction even in low-data settings. We show that the developed models can be fine-tuned at low cost and despite a low available data regime to solve a variety of genomics applications. Despite no supervision, the transformer models learned to focus attention on key genomic elements, including those that regulate gene expression, such as enhancers. Lastly, we demonstrate that utilizing model representations can improve the prioritization of functional genetic variants. The training and application of foundational models in genomics explored in this study provide a widely applicable stepping stone to bridge the gap of accurate molecular phenotype prediction from DNA sequence. Code and weights available at: https://github.com/instadeepai/nucleotide-transformer in Jax and https://huggingface.co/InstaDeepAI in Pytorch.
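The expert-unit analysis above scores every neuron by how well its activation alone separates sentences that contain a concept from those that do not, using average precision. A toy NumPy/scikit-learn sketch of just that scoring step, with synthetic activations and an illustrative threshold rather than the paper's full procedure:

import numpy as np
from sklearn.metrics import average_precision_score

def find_expert_units(activations, concept_labels, threshold=0.9):
    # activations: (n_sentences, n_neurons) pooled unit responses
    # concept_labels: 1 if the sentence contains the concept, else 0
    # A unit is an "expert" if its response, used directly as a score,
    # classifies the concept with average precision above the threshold.
    ap = np.array([
        max(average_precision_score(concept_labels, activations[:, j]),
            average_precision_score(concept_labels, -activations[:, j]))
        for j in range(activations.shape[1])
    ])
    return np.where(ap >= threshold)[0], ap

# toy usage: unit 7 is made to respond to the concept
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
acts = rng.normal(size=(200, 50))
acts[:, 7] += 3.0 * labels
experts, ap = find_expert_units(acts, labels)
print(experts, ap[7].round(3))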
Example notebooks to apply these models to any downstream task are available on HuggingFace.",Transformer Models "Recurrent neural networks (RNNs) are another specialized scheme of neural network architectures. RNNs are developed to solve learning problems where information about the past (i.e., past instants/events) is directly linked to making future predictions. Such sequential examples play up frequently in many real-world tasks such as language modeling where the previous words in the sentence are used to determine what the next word will be. Also in stock market prediction, the last hour/day/week stock prices define the future stock movement. RNNs are particularly tuned for time series or sequential tasks.",Recurrent Neural Networks (RNNs) "Recurrent Neural Networks (RNNs) were recently successfully used to model the way neural activity drives task-related behavior in animals, operating under the implicit assumption that the obtained solutions are universal. Observations in both neuroscience and machine learning challenge this assumption. Animals can approach a given task with a variety of strategies, and training machine learning algorithms introduces the phenomenon of underspecification. These observations imply that every task is associated with a space of solutions. To date, the structure of this space is not understood, limiting the approach of comparing RNNs with neural data. Here, we characterize the space of solutions associated with various tasks. We first study a simple two-neuron network on a task that leads to multiple solutions. We trace the nature of the final solution back to the network's initial connectivity and identify discrete dynamical regimes that underlie this diversity. We then examine three neuroscience-inspired tasks: Delayed and interval discrimination, and Time reproduction. For each task, we find a rich set of solutions. Variability can be found directly in the neural activity of the networks, and additionally by testing the trained networks' ability to extrapolate, as a perturbation to a system often reveals hidden structure. Furthermore, we relate extrapolation patterns to specific dynamical objects and effective algorithms found by the networks. We introduce a tool to derive the reduced dynamics of networks by generating a compact directed graph describing the essence of the dynamics with regards to behavioral inputs and outputs. Using this representation, we can partition the solutions to each task into a handful of types and partially predict them from neural features. Our results shed light on the concept of the space of solutions and its uses in Machine learning and in Neuroscience.",Recurrent Neural Networks (RNNs) "In this paper, an intelligent leader-following consensus formation control method using recurrent neural networks (RNNs) is presented for a team of uncertain small-size unmanned helicopters (SSUHs). After a brief description of the dynamic model of each uncertain SSUH by a set of multivariable fourth-order state equations, the leader–follower multi-SSUH system with a virtual leader is modeled by the directed graph theory. An intelligent adaptive formation control approach is proposed to fly together all the follower SSUHs in formation by using RNN to online learn the system uncertainties, consensus tracking, and the Lyapunov stability theory. 
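The recurrence sketched in the first RNN entry above, where information about the past feeds the next-step prediction (e.g., the next word), reduces to a single update of a hidden state. A minimal Elman-style step in NumPy with toy dimensions:

import numpy as np

def rnn_step(x_t, h_prev, Wxh, Whh, Why, bh, by):
    # One Elman-RNN step: the hidden state carries information about the past,
    # and the output scores the next token.
    h_t = np.tanh(Wxh @ x_t + Whh @ h_prev + bh)   # new hidden state
    y_t = Why @ h_t + by                            # logits over the vocabulary
    return h_t, y_t

# toy dimensions: vocabulary of 10 one-hot tokens, hidden size 16
V, H = 10, 16
rng = np.random.default_rng(0)
Wxh = rng.normal(0, 0.1, (H, V))
Whh = rng.normal(0, 0.1, (H, H))
Why = rng.normal(0, 0.1, (V, H))
bh, by = np.zeros(H), np.zeros(V)

h = np.zeros(H)
for token in [1, 4, 4, 2]:                          # a short "sentence"
    x = np.eye(V)[token]
    h, logits = rnn_step(x, h, Wxh, Whh, Why, bh, by)
print("next-token logits shape:", logits.shape)     # (10,)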
The four simulations on three cooperating SSUHs are conducted to exemplify the effectiveness and merits of the proposed control method.",Recurrent Neural Networks (RNNs) "The stability analysis of recurrent neural networks (RNNs) with multiple equilibria has received extensive interest since it is a prerequisite for successful applications of RNNs. With the increasing theoretical results on this topic, it is desirable to review the results for a systematical understanding of the state of the art. This article provides an overview of the stability results of RNNs with multiple equilibria including complete stability and multistability. First, preliminaries on the complete stability and multistability analysis of RNNs are introduced. Second, the complete stability results of RNNs are summarized. Third, the multistability results of various RNNs are reviewed in detail. Finally, future directions in these interesting topics are suggested.",Recurrent Neural Networks (RNNs) "Previous works have proved that recurrent neural networks (RNNs) are Turing-complete. However, in the proofs, the RNNs allow for neurons with unbounded precision, which is neither practical in implementation nor biologically plausible. To remove this assumption, we propose a dynamically growing memory module made of neurons of fixed precision. The memory module dynamically recruits new neurons when more memories are needed, and releases them when memories become irrelevant. We prove that a 54-neuron bounded-precision RNN with growing memory modules can simulate a Universal Turing Machine, with time complexity linear in the simulated machine’s time and independent of the memory size. The result is extendable to various other stack-augmented RNNs. Furthermore, we analyze the Turing completeness of both unbounded-precision and bounded-precision RNNs, revisiting and extending the theoretical foundations of RNNs.",Recurrent Neural Networks (RNNs) "Recurrent neural networks (RNNs) have achieved state-of-the-art performances on various applications. However, RNNs are prone to be memory-bandwidth limited in practical applications and need both long periods of training and inference time. The aforementioned problems are at odds with training and deploying RNNs on resource-limited devices where the memory and floating-point operations (FLOPs) budget are strictly constrained. To address this problem, conventional model compression techniques usually focus on reducing inference costs, operating on a costly pre-trained model. Recently, dynamic sparse training has been proposed to accelerate the training process by directly training sparse neural networks from scratch. However, previous sparse training techniques are mainly designed for convolutional neural networks and multi-layer perceptron. In this paper, we introduce a method to train intrinsically sparse RNN models with a fixed number of parameters and floating-point operations (FLOPs) during training. We demonstrate state-of-the-art sparse performance with long short-term memory and recurrent highway networks on widely used tasks, language modeling, and text classification. 
We simply use the results to advocate that, contrary to the general belief that training a sparse neural network from scratch leads to worse performance than dense networks, sparse training with adaptive connectivity can usually achieve better performance than dense models for RNNs.",Recurrent Neural Networks (RNNs) "As a new type of currency introduced in the new millennium, cryptocurrency has established its ecosystems and attracts many people to use and invest in it. However, cryptocurrencies are highly dynamic and volatile, making it challenging to predict their future values. In this research, we use a multivariate prediction approach and three different recurrent neural networks (RNNs), namely the long short-term memory (LSTM), the bidirectional LSTM (Bi-LSTM), and the gated recurrent unit (GRU). We also propose simple three layers deep networks architecture for the regression task in this study. From the experimental results on five major cryptocurrencies, i.e., Bitcoin (BTC), Ethereum (ETH), Cardano (ADA), Tether (USDT), and Binance Coin (BNB), we find that both Bi-LSTM and GRU have similar performance results in terms of accuracy. However, in terms of the execution time, both LSTM and GRU have similar results, where GRU is slightly better and has lower variation results on average.",Recurrent Neural Networks (RNNs) "The excellent accuracy of Recurrent Neural Networks (RNNs) for time-series and natural language processing comes at the cost of computational complexity. Therefore, the choice between edge and cloud computing for RNN inference, with the goal of minimizing response time or energy consumption, is not trivial. An edge approach must deal with the aforementioned complexity, while a cloud solution pays large time and energy costs for data transmission. Collaborative inference is a technique that tries to obtain the best of both worlds, by splitting the inference task among a network of collaborating devices. While already investigated for other types of neural networks, collaborative inference for RNNs poses completely new challenges, such as the strong influence of input length on processing time and energy, and is greatly unexplored. In this article, we introduce a Collaborative RNN Inference Mapping Engine (CRIME), which automatically selects the best inference device for each input. CRIME is flexible with respect to the connection topology among collaborating devices, and adapts to changes in the connections statuses and in the devices loads. With experiments on several RNNs and datasets, we show that CRIME can reduce the execution time (or end-node energy) by more than 25 percent compared to any single-device approach.",Recurrent Neural Networks (RNNs) "Recurrent neural networks (RNNs) are powerful models for processing time-series data, but it remains challenging to understand how they function. Improving this understanding is of substantial interest to both the machine learning and neuroscience communities. The framework of reverse engineering a trained RNN by linearizing around its fixed points has provided insight, but the approach has significant challenges. These include difficulty choosing which fixed point to expand around when studying RNN dynamics and error accumulation when reconstructing the nonlinear dynamics with the linearized dynamics. We present a new model that overcomes these limitations by co-training an RNN with a novel switching linear dynamical system (SLDS) formulation. 
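A multivariate recurrent regressor of the kind used in the cryptocurrency study above can be assembled from an off-the-shelf GRU plus a linear head. This is a generic sketch with invented layer sizes, not the authors' three-layer architecture:

import torch
import torch.nn as nn

class PriceGRU(nn.Module):
    # Generic recurrent regressor: a window of past multivariate observations
    # in, a single next-value prediction out.
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])      # regress from the last time step

model = PriceGRU(n_features=5)            # e.g. OHLC prices plus volume
x = torch.randn(32, 30, 5)                # 32 windows of 30 days
loss = nn.MSELoss()(model(x), torch.randn(32, 1))
loss.backward()

Swapping nn.GRU for nn.LSTM gives the LSTM variant compared in that study.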
A first-order Taylor series expansion of the co-trained RNN and an auxiliary function trained to pick out the RNN's fixed points govern the SLDS dynamics. The results are a trained SLDS variant that closely approximates the RNN, an auxiliary function that can produce a fixed point for each point in state-space, and a trained nonlinear RNN whose dynamics have been regularized such that its first-order terms perform the computation, if possible. This model removes the post-training fixed point optimization and allows us to unambiguously study the learned dynamics of the SLDS at any point in state-space. It also generalizes SLDS models to continuous manifolds of switching points while sharing parameters across switches. We validate the utility of the model on two synthetic tasks relevant to previous work reverse engineering RNNs. We then show that our model can be used as a drop-in in more complex architectures, such as LFADS, and apply this LFADS hybrid to analyze single-trial spiking activity from the motor system of a non-human primate.",Recurrent Neural Networks (RNNs) "Data assimilation (DA) is integrated with machine learning in order to perform entirely data‐driven online state estimation. To achieve this, recurrent neural networks (RNNs) are implemented as pretrained surrogate models to replace key components of the DA cycle in numerical weather prediction (NWP), including the conventional numerical forecast model, the forecast error covariance matrix, and the tangent linear and adjoint models. It is shown how these RNNs can be initialized using DA methods to directly update the hidden/reservoir state with observations of the target system. The results indicate that these techniques can be applied to estimate the state of a system for the repeated initialization of short‐term forecasts, even in the absence of a traditional numerical forecast model. Further, it is demonstrated how these integrated RNN‐DA methods can scale to higher dimensions by applying domain localization and parallelization, providing a path for practical applications in NWP.",Recurrent Neural Networks (RNNs) "We provide a general framework for studying recurrent neural networks (RNNs) trained by injecting noise into hidden states. Specifically, we consider RNNs that can be viewed as discretizations of stochastic differential equations driven by input data. This framework allows us to study the implicit regularization effect of general noise injection schemes by deriving an approximate explicit regularizer in the small noise regime. We find that, under reasonable assumptions, this implicit regularization promotes flatter minima; it biases towards models with more stable dynamics; and, in classification tasks, it favors models with larger classification margin. Sufficient conditions for global stability are obtained, highlighting the phenomenon of stochastic stabilization, where noise injection can improve stability during training. Our theory is supported by empirical results which demonstrate that the RNNs have improved robustness with respect to various input perturbations.",Recurrent Neural Networks (RNNs) "Recurrent Neural Networks are ubiquitous and pervasive in many artificial intelligence applications such as speech recognition, predictive healthcare, creative art, and so on. Although they provide accurate superior solutions, they pose a massive challenge “training havoc.” Current expansion of IoT demands intelligent models to be deployed at the edge. 
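The noise-injection scheme analyzed in the stochastic-RNN entry above amounts to perturbing the hidden state at every training step. A minimal PyTorch sketch of one such scheme (additive Gaussian noise with a fixed sigma; the paper's framework covers more general injection schemes):

import torch
import torch.nn as nn

class NoisyRNNCell(nn.Module):
    # A vanilla RNN cell with Gaussian noise injected into the hidden state
    # at every step during training (no noise at evaluation time).
    def __init__(self, n_in, n_hidden, sigma=0.1):
        super().__init__()
        self.cell = nn.RNNCell(n_in, n_hidden)
        self.sigma = sigma

    def forward(self, x_seq, h=None):            # x_seq: (batch, T, n_in)
        B, T, _ = x_seq.shape
        if h is None:
            h = torch.zeros(B, self.cell.hidden_size, device=x_seq.device)
        for t in range(T):
            h = self.cell(x_seq[:, t], h)
            if self.training:
                h = h + self.sigma * torch.randn_like(h)   # noise injection
        return h

h = NoisyRNNCell(8, 32)(torch.randn(4, 20, 8))
print(h.shape)   # (4, 32)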
This is precisely to handle increasing model sizes and complex network architectures. Design efforts to meet these for greater performance have had inverse effects on portability on edge devices with real-time constraints of memory, latency, and energy. This article provides a detailed insight into various compression techniques widely disseminated in the deep learning regime. They have become key in mapping powerful RNNs onto resource-constrained devices. While compression of RNNs is the main focus of the survey, it also highlights challenges encountered while training. The training procedure directly influences model performance and compression alongside. Recent advancements to overcome the training challenges with their strengths and drawbacks are discussed. In short, the survey covers the three-step process, namely, architecture selection, efficient training process, and suitable compression technique applicable to a resource-constrained environment. It is thus one of the comprehensive survey guides a developer can adapt for a time-series problem context and an RNN solution for the edge.",Recurrent Neural Networks (RNNs) "This paper presents novel reconfigurable architectures for reducing the latency of recurrent neural networks (RNNs) that are used for detecting gravitational waves. Gravitational interferometers such as the LIGO detectors capture cosmic events such as black hole mergers which happen at unknown times and of varying durations, producing time-series data. We have developed a new architecture capable of accelerating RNN inference for analyzing time-series data from LIGO detectors. This architecture is based on optimizing the initiation intervals (II) in a multi-layer LSTM (Long Short-Term Memory) network, by identifying appropriate reuse factors for each layer. A customizable template for this architecture has been designed, which enables the generation of low-latency FPGA designs with efficient resource utilization using high-level synthesis tools. The proposed approach has been evaluated based on two LSTM models, targeting a ZYNQ 7045 FPGA and a U250 FPGA. Experimental results show that with balanced II, the number of DSPs can be reduced up to 42% while achieving the same IIs. When compared to other FPGA-based LSTM designs, our design can achieve about 4.92 to 12.4 times lower latency.",Recurrent Neural Networks (RNNs) "Deblurring images captured in dynamic scenes is challenging as the motion blurs are spatially varying caused by camera shakes and object movements. In this paper, we propose a spatially varying neural network to deblur dynamic scenes. The proposed model is composed of three deep convolutional neural networks (CNNs) and a recurrent neural network (RNN). The RNN is used as a deconvolution operator on feature maps extracted from the input image by one of the CNNs. Another CNN is used to learn the spatially varying weights for the RNN. As a result, the RNN is spatial-aware and can implicitly model the deblurring process with spatially varying kernels. To better exploit properties of the spatially varying RNN, we develop both one-dimensional and two-dimensional RNNs for deblurring. The third component, based on a CNN, reconstructs the final deblurred feature maps into a restored image. In addition, the whole network is end-to-end trainable. 
Quantitative and qualitative evaluations on benchmark datasets demonstrate that the proposed method performs favorably against the state-of-the-art deblurring algorithms.",Recurrent Neural Networks (RNNs) "Recurrent Neural Networks (RNNs) offer fast inference on long sequences but are hard to optimize and slow to train. Deep state-space models (SSMs) have recently been shown to perform remarkably well on long sequence modeling tasks, and have the added benefits of fast parallelizable training and RNN-like fast inference. However, while SSMs are superficially similar to RNNs, there are important differences that make it unclear where their performance boost over RNNs comes from. In this paper, we show that careful design of deep RNNs using standard signal propagation arguments can recover the impressive performance of deep SSMs on long-range reasoning tasks, while also matching their training speed. To achieve this, we analyze and ablate a series of changes to standard RNNs including linearizing and diagonalizing the recurrence, using better parameterizations and initializations, and ensuring proper normalization of the forward pass. Our results provide new insights on the origins of the impressive performance of deep SSMs, while also introducing an RNN block called the Linear Recurrent Unit that matches both their performance on the Long Range Arena benchmark and their computational efficiency.",Recurrent Neural Networks (RNNs) "Neural networks need the right representations of input data to learn. Here we ask how gradient-based learning shapes a fundamental property of representations in recurrent neural networks (RNNs)—their dimensionality. Through simulations and mathematical analysis, we show how gradient descent can lead RNNs to compress the dimensionality of their representations in a way that matches task demands during training while supporting generalization to unseen examples. This can require an expansion of dimensionality in early timesteps and compression in later ones, and strongly chaotic RNNs appear particularly adept at learning this balance. Beyond helping to elucidate the power of appropriately initialized artificial RNNs, this fact has implications for neurobiology as well. Neural circuits in the brain reveal both high variability associated with chaos and low-dimensional dynamical structures. Taken together, our findings show how simple gradient-based learning rules lead neural networks to solve tasks with robust representations that generalize to new cases. Neural networks in the brain often exhibit chaotic dynamics that can be captured by a small number of dimensions. Farrell et al. find that recurrent neural networks trained with gradient-based learning rules exhibit similar features. This helps form robust but generalizable input representations.",Recurrent Neural Networks (RNNs) "Recurrent Neural Networks (RNNs) have demonstrated their effectiveness in learning and processing sequential data (e.g., speech and natural language). However, due to the black-box nature of neural networks, understanding the decision logic of RNNs is quite challenging. Some recent progress has been made to approximate the behavior of an RNN by weighted automata. They provide better interpretability, but still suffer from poor scalability. In this paper, we propose a novel approach to extracting weighted automata with the guidance of a target RNN's decision and context information. 
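The Linear Recurrent Unit entry above linearizes and diagonalizes the recurrence, so each hidden unit follows h_t = lambda * h_{t-1} + (B x_t)_i. A NumPy sketch of that elementwise linear recurrence with real-valued, stable lambda; the actual LRU uses complex eigenvalues and careful initialization and normalization, which are omitted here:

import numpy as np

def linear_diagonal_recurrence(x, lam, B, C):
    # h_t = lam * h_{t-1} + B x_t ;  y_t = C h_t
    # lam holds the diagonal of the state matrix; |lam| < 1 for stability.
    T, _ = x.shape
    h = np.zeros(lam.shape[0])
    ys = []
    for t in range(T):
        h = lam * h + B @ x[t]
        ys.append(C @ h)
    return np.stack(ys)

rng = np.random.default_rng(0)
T, D, H = 100, 4, 16
x = rng.normal(size=(T, D))
lam = np.exp(-rng.uniform(0.01, 0.5, H))          # stable, close to 1
B = rng.normal(size=(H, D)) * 0.1
C = rng.normal(size=(D, H)) * 0.1
print(linear_diagonal_recurrence(x, lam, B, C).shape)   # (100, 4)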
In particular, we identify the patterns of RNN's step-wise predictive decisions to instruct the formation of automata states. Further, we propose a state composition method to enhance the context-awareness of the extracted model. Our in-depth evaluations on typical RNN tasks, including language model and classification, demonstrate the effectiveness and advantage of our method over the state-of-the-arts. The evaluation results show that our method can achieve accurate approximation of an RNN even on large-scale tasks.",Recurrent Neural Networks (RNNs) "Over the last decade, the amount of Arabic content created on websites and social media has grown significantly. Opinions are shared openly and freely on social media and thus provide a rich source for trend analyses, which are accomplished by conventional methods of language interpretation, such as sentiment analysis. Due to its accuracy in studying unstructured data, deep learning has been increasingly used to test opinions. Recurrent neural networks (RNNs) are a promising approach in textual analysis and exhibit large morphological variations. In total, 193 studies used RNNs in English-language sentiment analysis, and 24 studies used RNNs in Arabic-language sentiment analysis. Those studies varied in the areas they address, the functionality and weaknesses of the models, and the number and scale of the available datasets for different dialects. Such variations are worthy of attention and monitoring; thus, this paper presents a systematic examination of the literature to label, evaluate, and identify state-of-the-art studies using RNNs for Arabic sentiment analysis.",Recurrent Neural Networks (RNNs) "Accurate and real-time forecasting of the price of oil plays an important role in the world economy. Research interest in forecasting this type of time series has increased considerably in recent decades, since, due to the characteristics of the time series, it was a complicated task with inaccurate results. Concretely, deep learning models such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have appeared in this field with promising results compared to traditional approaches. To improve the performance of existing networks in time series forecasting, in this work two types of neural networks are brought together, combining the characteristics of a Graph Convolutional Network (GCN) and a Bidirectional Long Short-Term Memory (BiLSTM) network. This is a novel evolution that improves existing results in the literature and provides new possibilities in the analysis of time series. The results confirm a better performance of the combined BiLSTM-GCN approach compared to the BiLSTM and GCN models separately, as well as to the traditional models, with a lower error in all the error metrics used: the Root Mean Squared Error (RMSE), the Mean Squared Error (MSE), the Mean Absolute Percentage Error (MAPE) and the R-squared (R2). These results represent a smaller difference between the result returned by the model and the real value and, therefore, a greater precision in the predictions of this model.",Recurrent Neural Networks (RNNs) "This paper deals with the extended dissipativity and non-fragile synchronization of delayed recurrent neural networks (RNNs) with multiple time-varying delays and sampled-data control. 
A suitable Lyapunov-Krasovskii Functional (LKF) is built up to prove the quadratic stability and extended dissipativity conditions of delayed RNNs using Jensen inequality and limited Bessel-Legendre inequality approaches. A non-fragile sampled-data approach is applied to investigate the problem of neural networks with multiple time-varying delays, which ensures that the master system synchronizes with the slave system and is designed with respect to the solutions of Linear Matrix Inequalities (LMIs). The effectiveness of the suggested approach is established by providing suitable simulations using the MATLAB LMI control toolbox. Finally, numerical examples and comparative results are provided to illustrate the adequacy of the planned control scheme.",Recurrent Neural Networks (RNNs) "Prediction has served as a crucial scientific method in modern social studies. With the recent advancement of Large Language Models (LLMs), efforts have been made to leverage LLMs to predict human features in social life, such as presidential voting. These works suggest that LLMs are capable of generating human-like responses. However, we find that the promising performance achieved by previous studies is because of the existence of shortcut features in the input that lead directly to the response. In fact, by removing these shortcuts, the performance is reduced dramatically. To further revisit the ability of LLMs, we introduce a novel social prediction task, Soc-PRF Prediction, which utilizes general features as input and simulates real-world social study settings. Through comprehensive investigations on various LLMs, we reveal that LLMs cannot work as expected on social prediction when given general input features without shortcuts. We further investigate possible reasons for this phenomenon, which suggest potential ways to enhance LLMs for social prediction.",Large Language Models (LLMs) "Empathy, a cornerstone of human interaction, is a quality unique to humans that Large Language Models (LLMs) are believed to lack. Our study aims to review the literature on the capacity of LLMs to demonstrate empathy. Methods: We conducted a literature search on MEDLINE up to July 2023. Seven publications ultimately met the inclusion criteria. Results: All studies included in this review were published in 2023. All studies but one focused on ChatGPT-3.5 by OpenAI. Only one study evaluated empathy based on objective metrics, and all others used subjective human assessment. The studies reported LLMs to exhibit elements of empathy, including emotion recognition and providing emotionally supportive responses in diverse contexts, most of which were related to healthcare. In some cases, LLMs were observed to outperform humans in empathy-related tasks. Conclusion: LLMs demonstrated some aspects of empathy in variable scenarios, mainly related to healthcare. This empathy may be considered cognitive empathy. Social skills are a fundamental aspect of intelligence; thus, further research is imperative to enhance these skills in AI.",Large Language Models (LLMs) "Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT.
To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.",Large Language Models (LLMs) "Large language models (LLMs) show excellent performance but are compute- and memory-intensive. Quantization can reduce memory and accelerate inference. However, existing methods cannot maintain accuracy and hardware efficiency at the same time. We propose SmoothQuant, a training-free, accuracy-preserving, and general-purpose post-training quantization (PTQ) solution to enable 8-bit weight, 8-bit activation (W8A8) quantization for LLMs. Based on the fact that weights are easy to quantize while activations are not, SmoothQuant smooths the activation outliers by offline migrating the quantization difficulty from activations to weights with a mathematically equivalent transformation. SmoothQuant enables an INT8 quantization of both weights and activations for all the matrix multiplications in LLMs, including OPT, BLOOM, GLM, MT-NLG, Llama-1/2, Falcon, Mistral, and Mixtral models. We demonstrate up to 1.56x speedup and 2x memory reduction for LLMs with negligible loss in accuracy. SmoothQuant enables serving 530B LLM within a single node. Our work offers a turn-key solution that reduces hardware costs and democratizes LLMs.",Large Language Models (LLMs) "Learning on Graphs has attracted immense attention due to its wide real-world applications. The most popular pipeline for learning on graphs with textual node attributes primarily relies on Graph Neural Networks (GNNs), and utilizes shallow text embedding as initial node representations, which has limitations in general knowledge and profound semantic understanding. In recent years, Large Language Models (LLMs) have been proven to possess extensive common knowledge and powerful semantic comprehension abilities that have revolutionized existing workflows to handle text data. In this paper, we aim to explore the potential of LLMs in graph machine learning, especially the node classification task, and investigate two possible pipelines: LLMs-as-Enhancers and LLMs-as-Predictors. 
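The "mathematically equivalent transformation" in the SmoothQuant abstract above rescales each input channel so that activation outliers are shifted into the weights before both are quantized; one common form of the per-channel factor is s_j = max|X_j|^alpha / max|W_j|^(1-alpha). A NumPy sketch of just that migration step, with toy calibration data and alpha:

import numpy as np

def smooth_scales(X, W, alpha=0.5):
    # Per-input-channel smoothing factors: dividing activations and multiplying
    # weights by s keeps X @ W unchanged while moving activation outliers into
    # the (easier-to-quantize) weights.
    act_max = np.abs(X).max(axis=0)          # per input channel, from calibration data
    w_max = np.abs(W).max(axis=1)            # per corresponding weight row
    return act_max ** alpha / w_max ** (1 - alpha)

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 64)); X[:, 3] *= 50.0      # one outlier channel
W = rng.normal(size=(64, 64)) * 0.02

s = smooth_scales(X, W)
X_s, W_s = X / s, W * s[:, None]
assert np.allclose(X @ W, X_s @ W_s)                 # equivalent up to rounding
print("outlier channel max before/after:",
      np.abs(X[:, 3]).max().round(1), np.abs(X_s[:, 3]).max().round(1))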
The former leverages LLMs to enhance nodes' text attributes with their massive knowledge and then generate predictions through GNNs. The latter attempts to directly employ LLMs as standalone predictors. We conduct comprehensive and systematical studies on these two pipelines under various settings. From comprehensive empirical results, we make original observations and find new insights that open new possibilities and suggest promising directions to leverage LLMs for learning on graphs. Our codes and datasets are available at: https://github.com/CurryTang/Graph-LLM .",Large Language Models (LLMs) "Large language models (LLMs) pretrained on vast source code have achieved prominent progress in code intelligence. However, existing code LLMs have two main limitations in terms of architecture and pretraining tasks. First, they often adopt a specific architecture (encoder-only or decoder-only) or rely on a unified encoder-decoder network for different downstream tasks. The former paradigm is limited by inflexibility in applications while in the latter, the model is treated as a single system for all tasks, leading to suboptimal performance on a subset of tasks. Secondly, they often employ a limited set of pretraining objectives which might not be relevant to some downstream tasks and hence result in substantial performance degrade. To address these limitations, we propose ``CodeT5+'', a family of encoder-decoder LLMs for code in which component modules can be flexibly combined to suit a wide range of downstream code tasks. Such flexibility is enabled by our proposed mixture of pretraining objectives to mitigate the pretrain-finetune discrepancy. These objectives cover span denoising, contrastive learning, text-code matching, and causal LM pretraining tasks, on both unimodal and bimodal multilingual code corpora. Furthermore, we propose to initialize CodeT5+ with frozen off-the-shelf LLMs without training from scratch to efficiently scale up our models, and explore instruction-tuning to align with natural language instructions. We extensively evaluate CodeT5+ on over 20 code-related benchmarks in different settings, including zero-shot, finetuning, and instruction-tuning. We observe state-of-the-art (SoTA) model performance on various code-related tasks, such as code generation and completion, math programming, and text-to-code retrieval tasks. Particularly, our instruction-tuned CodeT5+ 16B achieves new SoTA results on HumanEval code generation task against other open code LLMs.",Large Language Models (LLMs) "Training large language models (LLMs) with open-domain instruction following data brings colossal success. However, manually creating such instruction data is very time-consuming and labor-intensive. Moreover, humans may struggle to produce high-complexity instructions. In this paper, we show an avenue for creating large amounts of instruction data with varying levels of complexity using LLM instead of humans. Starting with an initial set of instructions, we use our proposed Evol-Instruct to rewrite them step by step into more complex instructions. Then, we mix all generated instruction data to fine-tune LLaMA. We call the resulting model WizardLM. Human evaluations on a complexity-balanced test bed and Vicuna's testset show that instructions from Evol-Instruct are superior to human-created ones. By analyzing the human evaluation results of the high complexity part, we demonstrate that outputs from our WizardLM are preferred to outputs from OpenAI ChatGPT. 
In GPT-4 automatic evaluation, WizardLM achieves more than 90\% capacity of ChatGPT on 17 out of 29 skills. Even though WizardLM still lags behind ChatGPT in some aspects, our findings suggest that fine-tuning with AI-evolved instructions is a promising direction for enhancing LLMs. Our code and data are public at https://github.com/nlpxucan/WizardLM",Large Language Models (LLMs) "Large language models (LLMs) have achieved remarkable progress in solving various natural language processing tasks due to emergent reasoning abilities. However, LLMs have inherent limitations as they are incapable of accessing up-to-date information (stored on the Web or in task-specific knowledge bases), using external tools, and performing precise mathematical and logical reasoning. In this paper, we present Chameleon, an AI system that mitigates these limitations by augmenting LLMs with plug-and-play modules for compositional reasoning. Chameleon synthesizes programs by composing various tools (e.g., LLMs, off-the-shelf vision models, web search engines, Python functions, and heuristic-based modules) for accomplishing complex reasoning tasks. At the heart of Chameleon is an LLM-based planner that assembles a sequence of tools to execute to generate the final response. We showcase the effectiveness of Chameleon on two multi-modal knowledge-intensive reasoning tasks: ScienceQA and TabMWP. Chameleon, powered by GPT-4, achieves an 86.54% overall accuracy on ScienceQA, improving the best published few-shot result by 11.37%. On TabMWP, GPT-4-powered Chameleon improves the accuracy by 17.0%, lifting the state of the art to 98.78%. Our analysis also shows that the GPT-4-powered planner exhibits more consistent and rational tool selection via inferring potential constraints from instructions, compared to a ChatGPT-powered planner.",Large Language Models (LLMs) "With the prosperity of e-commerce and web applications, Recommender Systems (RecSys) have become an important component of our daily life, providing personalized suggestions that cater to user preferences. While Deep Neural Networks (DNNs) have made significant advancements in enhancing recommender systems by modeling user-item interactions and incorporating textual side information, DNN-based methods still face limitations, such as difficulties in understanding users' interests and capturing textual side information, inabilities in generalizing to various recommendation scenarios and reasoning on their predictions, etc. Meanwhile, the emergence of Large Language Models (LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI), due to their remarkable abilities in fundamental responsibilities of language understanding and generation, as well as impressive generalization and reasoning capabilities. As a result, recent studies have attempted to harness the power of LLMs to enhance recommender systems. Given the rapid evolution of this research direction in recommender systems, there is a pressing need for a systematic overview that summarizes existing LLM-empowered recommender systems, to provide researchers in relevant fields with an in-depth understanding. Therefore, in this paper, we conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting. More specifically, we first introduce representative methods to harness the power of LLMs (as a feature encoder) for learning representations of users and items. 
Then, we review recent techniques of LLMs for enhancing recommender systems from three paradigms, namely pre-training, fine-tuning, and prompting. Finally, we comprehensively discuss future directions in this emerging field.",Large Language Models (LLMs) "Artificial intelligence (AI) has created a lot of buzz in recent years. Using machine learning and other AI techniques several intelligent initiatives have been tested. The large language model is one of them. A large language model (LLM) normally refers to a type of AI model that is trained on vast amounts of text data to understand and generate human-like language outputs. These models are designed to capture the statistical patterns and structures present in the training data, enabling them to generate coherent and contextually relevant responses. The widely known ChatGPT is one of the LLMs which can do several tasks and answer many questions. It is trained with a huge number of data sets and a large number of parameters. In addition to ChatGPT, many other LLMs such as the Google Bard, Claude v1, Bison 001, Cohere, Falcon, and Guanaco-65B have surfaced in recent times. In this paper, we study the basic principles and features of LLMs. We go through their brief history, abilities, limitations, challenges and future prospects.",Large Language Models (LLMs) "We investigate the potential implications of large language models (LLMs), such as Generative Pre-trained Transformers (GPTs), on the U.S. labor market, focusing on the increased capabilities arising from LLM-powered software compared to LLMs on their own. Using a new rubric, we assess occupations based on their alignment with LLM capabilities, integrating both human expertise and GPT-4 classifications. Our findings reveal that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted. We do not make predictions about the development or adoption timeline of such LLMs. The projected effects span all wage levels, with higher-income jobs potentially facing greater exposure to LLM capabilities and LLM-powered software. Significantly, these impacts are not restricted to industries with higher recent productivity growth. Our analysis suggests that, with access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to between 47 and 56% of all tasks. This finding implies that LLM-powered software will have a substantial effect on scaling the economic impacts of the underlying models. We conclude that LLMs such as GPTs exhibit traits of general-purpose technologies, indicating that they could have considerable economic, social, and policy implications.",Large Language Models (LLMs) "How do large language models (LLMs) develop and evolve over the course of training? How do these patterns change as models scale? To answer these questions, we introduce \textit{Pythia}, a suite of 16 LLMs all trained on public data seen in the exact same order and ranging in size from 70M to 12B parameters. We provide public access to 154 checkpoints for each one of the 16 models, alongside tools to download and reconstruct their exact training dataloaders for further study. 
We intend \textit{Pythia} to facilitate research in many areas, and we present several case studies including novel results in memorization, term frequency effects on few-shot performance, and reducing gender bias. We demonstrate that this highly controlled setup can be used to yield novel insights toward LLMs and their training dynamics. Trained models, analysis code, training code, and training data can be found at \url{https://github.com/EleutherAI/pythia}.",Large Language Models (LLMs) "Large language models (LLMs) have achieved impressive performance on code generation. However, for complex programming tasks, generating the correct solution in one go becomes challenging, thus some prior works have designed program repair approaches to improve code generation performance. In this work, we propose Self-Debugging, which teaches a large language model to debug its predicted program via few-shot demonstrations. In particular, we demonstrate that Self-Debugging can teach the large language model to perform rubber duck debugging; i.e., without any human feedback on the code correctness or error messages, the model is able to identify its mistakes by investigating the execution results and explaining the generated code in natural language. Self-Debugging achieves the state-of-the-art performance on several code generation benchmarks, including the Spider dataset for text-to-SQL generation, TransCoder for C++-to-Python translation, and MBPP for text-to-Python generation. On the Spider benchmark where there are no unit tests to verify the correctness of predictions, Self-Debugging with code explanation consistently improves the baseline by 2-3%, and improves the prediction accuracy on problems of the hardest level by 9%. On TransCoder and MBPP where unit tests are available, Self-Debugging improves the baseline accuracy by up to 12%. Meanwhile, by leveraging feedback messages and reusing failed predictions, Self-Debugging notably improves sample efficiency, and can match or outperform baseline models that generate more than 10x candidate programs.",Large Language Models (LLMs) "Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars. Notably, chain of thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved the state-of-the-art performances in arithmetics and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners by simply adding""Let's think step by step""before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performances on diverse benchmark reasoning tasks including arithmetics (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g. increasing the accuracy on MultiArith from 17.7% to 78.7% and GSM8K from 10.4% to 40.7% with large InstructGPT model (text-davinci-002), as well as similar magnitudes of improvements with another off-the-shelf large model, 540B parameter PaLM. 
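As an illustrative aside (not drawn from any of the abstracts in this collection), here is a minimal Python sketch of the two-stage zero-shot chain-of-thought prompting pattern just described: the trigger phrase elicits a reasoning trace, which is then fed back to extract a short final answer. The ask_llm callable is a hypothetical stand-in for any LLM API.

```python
# Sketch of two-stage zero-shot CoT prompting (reasoning extraction,
# then answer extraction). `ask_llm` is a hypothetical LLM callable.

def zero_shot_cot(question: str, ask_llm) -> str:
    # Stage 1: elicit step-by-step reasoning with the trigger phrase.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = ask_llm(reasoning_prompt)

    # Stage 2: feed the reasoning back and extract a short final answer.
    answer_prompt = f"{reasoning_prompt} {reasoning}\nTherefore, the answer is"
    return ask_llm(answer_prompt).strip()


if __name__ == "__main__":
    # Dummy stand-in for an LLM so the sketch runs end to end.
    def dummy_llm(prompt: str) -> str:
        return " 7" if prompt.rstrip().endswith("is") else "3 + 4 = 7, so there are 7 apples."

    print(zero_shot_cot("I have 3 apples and buy 4 more. How many apples do I have?", dummy_llm))
```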
The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting high-level, multi-task broad cognitive capabilities may be extracted by simple prompting. We hope our work not only serves as the minimal strongest zero-shot baseline for the challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.",Large Language Models (LLMs) "Large language models (LLMs) have recently experienced tremendous popularity and are widely used from casual conversations to AI-driven programming. However, despite their considerable success, LLMs are not entirely reliable and can give detailed guidance on how to conduct harmful or illegal activities. While safety measures can reduce the risk of such outputs, adversarial jailbreak attacks can still exploit LLMs to produce harmful content. These jailbreak templates are typically manually crafted, making large-scale testing challenging. In this paper, we introduce GPTFuzz, a novel black-box jailbreak fuzzing framework inspired by the AFL fuzzing framework. Instead of manual engineering, GPTFuzz automates the generation of jailbreak templates for red-teaming LLMs. At its core, GPTFuzz starts with human-written templates as initial seeds, then mutates them to produce new templates. We detail three key components of GPTFuzz: a seed selection strategy for balancing efficiency and variability, mutate operators for creating semantically equivalent or similar sentences, and a judgment model to assess the success of a jailbreak attack. We evaluate GPTFuzz against various commercial and open-source LLMs, including ChatGPT, LLaMa-2, and Vicuna, under diverse attack scenarios. Our results indicate that GPTFuzz consistently produces jailbreak templates with a high success rate, surpassing human-crafted templates. Remarkably, GPTFuzz achieves over 90% attack success rates against ChatGPT and Llama-2 models, even with suboptimal initial seed templates. We anticipate that GPTFuzz will be instrumental for researchers and practitioners in examining LLM robustness and will encourage further exploration into enhancing LLM safety.",Large Language Models (LLMs) "Large language models (LLMs) have been shown to perform well at a variety of syntactic, discourse, and reasoning tasks. While LLMs are increasingly deployed in many forms including conversational agents that interact with humans, we lack a grounded benchmark to measure how well LLMs understand \textit{social} language. Here, we introduce a new theory-driven benchmark, SocKET, that contains 58 NLP tasks testing social knowledge which we group into five categories: humor&sarcasm, offensiveness, sentiment&emotion, and trustworthiness. In tests on the benchmark, we demonstrate that current models attain only moderate performance but reveal significant potential for task transfer among different types and categories of tasks, which were predicted from theory. Through zero-shot evaluations, we show that pretrained models already possess some innate but limited capabilities of social language understanding and training on one category of tasks can improve zero-shot testing on others. Our benchmark provides a systematic way to analyze model performance on an important dimension of language and points to clear room for improvement to build more socially-aware LLMs. 
The associated resources are released at https://github.com/minjechoi/SOCKET.",Large Language Models (LLMs) "Large language models (LLMs) have demonstrated remarkable zero-shot generalization abilities: state-of-the-art chatbots can provide plausible answers to many common questions that arise in daily life. However, so far, LLMs cannot reliably solve long-horizon planning problems. By contrast, classical planners, once a problem is given in a formatted way, can use efficient search algorithms to quickly identify correct, or even optimal, plans. In an effort to get the best of both worlds, this paper introduces LLM+P, the first framework that incorporates the strengths of classical planners into LLMs. LLM+P takes in a natural language description of a planning problem, then returns a correct (or optimal) plan for solving that problem in natural language. LLM+P does so by first converting the language description into a file written in the planning domain definition language (PDDL), then leveraging classical planners to quickly find a solution, and then translating the found solution back into natural language. Along with LLM+P, we define a diverse set of different benchmark problems taken from common planning scenarios. Via a comprehensive set of experiments on these benchmark problems, we find that LLM+P is able to provide optimal solutions for most problems, while LLMs fail to provide even feasible plans for most problems.",Large Language Models (LLMs) "In this paper, we uncover a systematic bias in the evaluation paradigm of adopting large language models~(LLMs), e.g., GPT-4, as a referee to score and compare the quality of responses generated by candidate models. We find that the quality ranking of candidate responses can be easily hacked by simply altering their order of appearance in the context. This manipulation allows us to skew the evaluation result, making one model appear considerably superior to the other, e.g., Vicuna-13B could beat ChatGPT on 66 over 80 tested queries with ChatGPT as an evaluator. To address this issue, we propose a calibration framework with three simple yet effective strategies: 1) Multiple Evidence Calibration, which requires the evaluator model to generate multiple evaluation evidence before assigning ratings; 2) Balanced Position Calibration, which aggregates results across various orders to determine the final score; 3) Human-in-the-Loop Calibration, which introduces a balanced position diversity entropy to measure the difficulty of each example and seeks human assistance when needed. We also manually annotate the""win/tie/lose""outcomes of responses from ChatGPT and Vicuna-13B in the Vicuna Benchmark's question prompt, and extensive experiments demonstrate that our approach successfully mitigates evaluation bias, resulting in closer alignment with human judgments. We release our code and human annotation at \url{https://github.com/i-Eval/FairEval} to facilitate future research.",Large Language Models (LLMs) "Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. 
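As an illustrative aside (not drawn from the abstracts themselves), a minimal sketch of the Balanced Position Calibration strategy described in the evaluation-bias abstract above: the judge scores each pair of responses in both presentation orders and the scores are averaged, so position bias cancels out. The judge callable is a hypothetical stand-in for an LLM evaluator.

```python
# Minimal sketch of Balanced Position Calibration: average the judge's
# scores over both presentation orders to cancel out position bias.
# `judge(first, second)` is a hypothetical LLM evaluator returning
# (score_for_first, score_for_second).

def balanced_position_scores(response_a, response_b, judge):
    a_first, b_second = judge(response_a, response_b)   # A shown first
    b_first, a_second = judge(response_b, response_a)   # B shown first
    score_a = (a_first + a_second) / 2.0
    score_b = (b_second + b_first) / 2.0
    return score_a, score_b


def verdict(response_a, response_b, judge):
    score_a, score_b = balanced_position_scores(response_a, response_b, judge)
    if score_a > score_b:
        return "A wins"
    if score_b > score_a:
        return "B wins"
    return "tie"
```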
This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, education, natural and social sciences, agent applications, and other areas. Secondly, we answer the ‘where’ and ‘how’ questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing the performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLM evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLM evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs.",Large Language Models (LLMs) "The adoption of pre-trained large language models (LLMs), like ChatGPT, across an increasingly diverse range of tasks and domains poses significant challenges for authorial attribution and other basic knowledge organization practices. This paper examines the theoretical and practical issues introduced by LLMs and describes how their use erodes the supposedly firm boundaries separating specific works and creators. Building upon the author-as-node framework proposed by Soos and Leazer (2020), we compare works created with and without the use of LLMs; ultimately, we argue that the issues associated with these novel tools are indicative of preexisting limitations within standard entity-relationship models. As the growing popularity of generative AI raises concerns about plagiarism, academic integrity, and intellectual property, we encourage a reevaluation of reductive work/creator associations and advocate for the adoption of a more expansive approach to authorship.",Large Language Models (LLMs) "In contemporary machine learning approaches to bilingual lexicon induction (BLI), a model learns a mapping between the embedding spaces of a language pair. Recently, the retrieve-and-rank approach to BLI has achieved state-of-the-art results on the task. However, the problem remains challenging in low-resource settings, due to the paucity of data. The task is complicated by factors such as lexical variation across languages. We argue that the incorporation of additional lexical information into the recent retrieve-and-rank approach should improve lexicon induction. We demonstrate the efficacy of our proposed approach on XLING, improving over the previous state of the art by an average of 2% across all language pairs.",Bilingual Lexicon Induction (BLI) "The steady growth of globalization leads to an increasing requirement for translations among different languages and an urgent need to keep pace with the challenge of cross-lingual international exchanges in multiple fields. Bilingual lexica are important language resources that provide valuable information for semantic equivalence of words across languages, which has been a research hotspot. However, under resource-poor scenarios, it is difficult to find sufficient cross-lingual knowledge for model training, which requires the exploration of unsupervised learning methods. 
Unsupervised BLI aims at obtaining the bilingual translation dictionary from monolingual corpora of source and target languages without any cross-lingual knowledge, which can bring huge benefits to NLP tasks in resource-poor fields. Based on previous research, this thesis proposes a neural bilingual lexicon induction (NBLI) method with comparable corpora. This unsupervised method jointly learns word embeddings of source and target languages in the same vector space based on a bidirectional Long Short-Term Memory (BiLSTM) network, and then matches translation pairs through the Cross-domain Similarity Local Scaling (CSLS) retrieval to implement the induction task. It performs well in the general domain and improves the induction performance in the optoelectronic domain between English and Chinese, which breaks the limitation of the pure linear transformation model based on isomorphic assumption in resource-poor scenarios and solves the hubness problem in the bilingual vector space. Furthermore, it requires only comparable corpora in the model training process, and is applicable to various language pairs and specific fields.",Bilingual Lexicon Induction (BLI) "Recent work on bilingual lexicon induction (BLI) has frequently depended either on aligned bilingual lexicons or on distribution matching, often with an assumption about the isometry of the two spaces. We propose a technique to quantitatively estimate this assumption of the isometry between two embedding spaces and empirically show that this assumption weakens as the languages in question become increasingly etymologically distant. We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS) — a semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, as well as a novel hubness filtering technique. Our proposed method obtains state of the art results on 15 of 18 language pairs on the MUSE dataset, and does particularly well when the embedding spaces don’t appear to be isometric. In addition, we also show that adding supervision stabilizes the learning procedure, and is effective even with minimal supervision.",Bilingual Lexicon Induction (BLI) "Most of the successful and predominant methods for bilingual lexicon induction (BLI) are mapping-based, where a linear mapping function is learned with the assumption that the word embedding spaces of different languages exhibit similar geometric structures (i.e., approximately isomorphic). However, several recent studies have criticized this simplified assumption showing that it does not hold in general even for closely related languages. In this work, we propose a novel semi-supervised method to learn cross-lingual word embeddings for BLI. Our model is independent of the isomorphic assumption and uses nonlinear mapping in the latent space of two independently trained auto-encoders. Through extensive experiments on fifteen (15) different language pairs (in both directions) comprising resource-rich and low-resource languages from two different datasets, we demonstrate that our method outperforms existing models by a good margin. Ablation studies show the importance of different model components and the necessity of non-linear mapping.",Bilingual Lexicon Induction (BLI) "Abstract Benchmarks can be a useful step toward the goals of the field (when the benchmark is on the critical path), as demonstrated by the GLUE benchmark, and deep nets such as BERT and ERNIE. 
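As an illustrative aside (not drawn from the abstracts above), a NumPy sketch of the Cross-domain Similarity Local Scaling (CSLS) retrieval step mentioned in the unsupervised NBLI abstract earlier: each candidate pair's cosine similarity is penalised by the average similarity of the two words to their k nearest neighbours in the other space, which counteracts hubness. The array names and the value of k are assumptions.

```python
# Illustrative CSLS retrieval for BLI: penalise cosine similarity by each
# word's mean similarity to its k nearest neighbours in the other language.
import numpy as np

def csls_translate(src_emb: np.ndarray, tgt_emb: np.ndarray, k: int = 10) -> np.ndarray:
    # Row-normalise so dot products are cosine similarities.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sims = src @ tgt.T                                   # (n_src, n_tgt)

    # Mean similarity of each source word to its k nearest target neighbours,
    # and of each target word to its k nearest source neighbours.
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)   # (n_src,)
    r_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0)   # (n_tgt,)

    csls = 2 * sims - r_src[:, None] - r_tgt[None, :]
    return csls.argmax(axis=1)                           # best target index per source word

# Usage (placeholder variable names): indices = csls_translate(mapped_source_vectors, target_vectors)
```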
The case for other benchmarks such as MUSE and WN18RR is less well established. Hopefully, these benchmarks are on a critical path toward progress on bilingual lexicon induction (BLI) and knowledge graph completion (KGC). Many KGC algorithms have been proposed such as Trans[DEHRM], but it remains to be seen how this work improves WordNet coverage. Given how much work is based on these benchmarks, the literature should have more to say than it does about the connection between benchmarks and goals. Is optimizing P@10 on WN18RR likely to produce more complete knowledge graphs? Is MUSE likely to improve Machine Translation?",Bilingual Lexicon Induction (BLI) "Performance in cross-lingual NLP tasks is impacted by the (dis)similarity of languages at hand: e.g., previous work has suggested there is a connection between the expected success of bilingual lexicon induction (BLI) and the assumption of (approximate) isomorphism between monolingual embedding spaces. In this work, we present a large-scale study focused on the correlations between language similarity and task performance, covering thousands of language pairs and four different tasks: BLI, machine translation, parsing, and POS tagging. We propose a novel language distance measure, Eigenvalue Divergence (EVD), which quantifies the degree of isomorphism between two monolingual spaces. We empirically show that 1) language similarity scores derived from embedding-based EVD distances are strongly associated with performance observed in different cross-lingual tasks, 2) EVD outperforms other standard embedding-based language distance measures across the board, at the same time being computationally more tractable and easier to interpret. Finally, we demonstrate that EVD captures information which is complementary to typologically driven language distance measures. We report that their combination yields even higher correlations with performance levels in all cross-lingual tasks.",Bilingual Lexicon Induction (BLI) "Bilingual Lexicon Induction (BLI) is the task of translating words from corpora in two languages. Recent advances in BLI work by aligning the two word embedding spaces. Following that, a key step is to retrieve the nearest neighbor (NN) in the target space given the source word. However, a phenomenon called hubness often degrades the accuracy of NN. Hubness appears as some data points, called hubs, being extraordinarily close to many of the other data points. Reducing hubness is necessary for retrieval tasks. One successful example is Inverted Softmax (ISF), recently proposed to improve NN. This work proposes a new method, Hubless Nearest Neighbor (HNN), to mitigate hubness. HNN differs from NN by imposing an additional equal preference assumption. Moreover, the HNN formulation explains why ISF works as well as it does. Empirical results demonstrate that HNN outperforms NN, ISF, and other state-of-the-art methods. For reproducibility and follow-ups, we have published all code.",Bilingual Lexicon Induction (BLI) "Recently, unsupervised Bilingual Lexicon Induction (BLI) without any parallel corpus has attracted much research interest. One of the crucial parts in methods for the BLI task is the matching procedure. Previous works impose too strong a constraint on the matching and lead to many counterintuitive translation pairings. Thus, we propose a relaxed matching procedure to find a more precise matching between two languages. 
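Similarly illustrative (and not taken from the hubness abstract itself), a NumPy sketch of the Inverted Softmax (ISF) retrieval mentioned above: similarities are exponentiated and normalised over source words for each target, so hub targets that are close to everything are down-weighted. The temperature beta is an assumed value.

```python
# Illustrative Inverted Softmax (ISF) retrieval: normalise exp(beta * sim)
# over the *source* words for each target column, which penalises hub targets.
import numpy as np

def isf_translate(src_emb: np.ndarray, tgt_emb: np.ndarray, beta: float = 30.0) -> np.ndarray:
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sims = src @ tgt.T                               # cosine similarities (n_src, n_tgt)

    logits = beta * sims
    logits -= logits.max(axis=0, keepdims=True)      # numerical stability per target column
    inv_softmax = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)

    return inv_softmax.argmax(axis=1)                # best target index for each source word
```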
We also find that aligning the source and target language embedding spaces bidirectionally brings significant improvement. We follow the previous iterative framework to conduct experiments. Results on standard benchmarks demonstrate the effectiveness of our proposed method, which substantially outperforms previous unsupervised methods.",Bilingual Lexicon Induction (BLI) "Cross-lingual word embeddings (CLWEs) have proven indispensable for various natural language processing tasks, e.g., bilingual lexicon induction (BLI). However, the lack of data often impairs the quality of representations. Various approaches requiring only weak cross-lingual supervision were proposed, but current methods still fail to learn good CLWEs for languages with only a small monolingual corpus. We therefore claim that it is necessary to explore further datasets to improve CLWEs in low-resource setups. In this paper we propose to incorporate data of related high-resource languages. In contrast to previous approaches which leverage independently pre-trained embeddings of languages, we (i) train CLWEs for the low-resource and a related language jointly and (ii) map them to the target language to build the final multilingual space. In our experiments we focus on Occitan, a low-resource Romance language which is often neglected due to lack of resources. We leverage data from French, Spanish and Catalan for training and evaluate on the Occitan-English BLI task. By incorporating supporting languages our method outperforms previous approaches by a large margin. Furthermore, our analysis shows that the degree of relatedness between an incorporated language and the low-resource language is critically important.",Bilingual Lexicon Induction (BLI) "The bilingual dictionary is a vital data resource for machine translation and cross-language information retrieval research. The Uyghur language has rich derivative forms, in which words are formed by a stem connecting with several suffixes, thus a large number of new words can be generated. This will increase the repetition rate of intentional features in the text and affect the efficiency of bilingual dictionary extraction. Aiming at the poor alignment of Chinese-Uyghur cross-language word embeddings due to significant morphological differences, this paper proposes a multilingual morphological analyzer based on morpheme sequences, combines it with neural network cross-language word embedding mapping, and applies it to the Chinese-Uyghur bilingual dictionary extraction task. Robust morpheme segmentation and stemming of the bilingual text data are used to obtain meaningful word semantic features. Using a small number of Chinese-Uyghur parallel seed dictionaries as weakly supervised signals, the multilingual word or morpheme vectors are mapped into a unified vector space. Bilingual dictionaries are then automatically extracted by aligning the two languages through nearest-neighbor retrieval and cross-domain similarity local scaling. Experimental results show that the morpheme sequence-based method for the Chinese-Uyghur dictionary induction task significantly improves the accuracy of dictionary alignment compared to the word-based model. 
The method presented in this paper can effectively improve the accuracy of bilingual word alignment and is effective for morphologically derivative languages.",Bilingual Lexicon Induction (BLI) "Recent work on bilingual lexicon induction (BLI) has frequently depended either on aligned bilingual lexicons or on distribution matching, often with an assumption about the isometry of the two spaces. We propose a technique to quantitatively estimate this assumption of the isometry between two embedding spaces and empirically show that this assumption weakens as the languages in question become increasingly etymologically distant. We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS) — a semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, as well as a novel hubness filtering technique. Our proposed method obtains state of the art results on 15 of 18 language pairs on the MUSE dataset, and does particularly well when the embedding spaces don’t appear to be isometric. In addition, we also show that adding supervision stabilizes the learning procedure, and is effective even with minimal supervision.",Bilingual Lexicon Induction (BLI) "With numerous new methods proposed recently, the evaluation of Bilingual Lexicon Induction has been quite hazardous and inconsistent across works. Some studies proposed some guidance to sanitize this; yet, they are not necessarily followed by practitioners. In this study, we try to gather these different recommendations and add our own, with the aim of proposing a unified evaluation protocol. We further show that the easiness of a benchmark, while being correlated with the proximity of the language pairs being considered, is even more conditioned on the graphical similarities within the test word pairs.",Bilingual Lexicon Induction (BLI) "Word translation or bilingual lexicon induction (BLI) is a key cross-lingual task, aiming to bridge the lexical gap between different languages. In this work, we propose a robust and effective two-stage contrastive learning framework for the BLI task. At Stage C1, we propose to refine standard cross-lingual linear maps between static word embeddings (WEs) via a contrastive learning objective; we also show how to integrate it into the self-learning procedure for even more refined cross-lingual maps. In Stage C2, we conduct BLI-oriented contrastive fine-tuning of mBERT, unlocking its word translation capability. We also show that static WEs induced from the ‘C2-tuned’ mBERT complement static WEs from Stage C1. Comprehensive experiments on standard BLI datasets for diverse languages and different experimental setups demonstrate substantial gains achieved by our framework. While the BLI method from Stage C1 already yields substantial gains over all state-of-the-art BLI methods in our comparison, even stronger improvements are met with the full two-stage framework: e.g., we report gains for 112/112 BLI setups, spanning 28 language pairs.",Bilingual Lexicon Induction (BLI) "We propose a novel morphologically aware probability model for bilingual lexicon induction, which jointly models lexeme translation and inflectional morphology in a structured way. Our model exploits the basic linguistic intuition that the lexeme is the key lexical unit of meaning, while inflectional morphology provides additional syntactic information. 
This approach leads to substantial performance improvements: a 19% average improvement in accuracy across 6 language pairs over the state of the art in the supervised setting, and 16% in the weakly supervised setting. As another contribution, we highlight issues associated with modern BLI that stem from ignoring inflectional morphology, and propose three suggestions for improving the task.",Bilingual Lexicon Induction (BLI) "Semi-supervision is a promising paradigm for Bilingual Lexicon Induction (BLI) with limited annotations. However, previous semi-supervised methods do not fully utilize the knowledge hidden in annotated and non-annotated data, which hinders further improvement of their performance. In this paper, we propose a new semi-supervised BLI framework to encourage the interaction between the supervised signal and unsupervised alignment. We design two message-passing mechanisms to transfer knowledge between annotated and non-annotated data, named prior optimal transport and bi-directional lexicon update respectively. Then, we perform semi-supervised learning based on a cyclic or a parallel parameter feeding routine to update our models. Our framework is a general framework that can incorporate any supervised and unsupervised BLI methods based on optimal transport. Experimental results on MUSE and VecMap datasets show significant improvement of our models. The ablation study also shows that the two-way interaction between the supervised signal and unsupervised alignment accounts for the gain in overall performance. Results on distant language pairs further illustrate the advantage and robustness of our proposed method.",Bilingual Lexicon Induction (BLI) "Work on projection-based induction of cross-lingual word embedding spaces (CLWEs) predominantly focuses on the improvement of the projection (i.e., mapping) mechanisms. In this work, in contrast, we show that a simple method for post-processing monolingual embedding spaces facilitates learning of the cross-lingual alignment and, in turn, substantially improves bilingual lexicon induction (BLI). The post-processing method we examine is grounded in the generalisation of first- and second-order monolingual similarities to the nth-order similarity. By post-processing monolingual spaces before the cross-lingual alignment, the method can be coupled with any projection-based method for inducing CLWE spaces. We demonstrate the effectiveness of this simple monolingual post-processing across a set of 15 typologically diverse languages (i.e., 15*14 BLI setups), and in combination with two different projection methods.",Bilingual Lexicon Induction (BLI) "Bilingual lexicon induction (BLI) can transfer knowledge from well- to under-resourced languages, and has been widely applied to various NLP tasks. Recent work on BLI is projection-based, learning a mapping to connect the source and target embedding spaces under the isomorphism assumption. Unfortunately, the isomorphism assumption does not hold generally, especially for typologically distant language pairs. Moreover, without supervised signals to guide training, BLI becomes even more complicated, making the performance of unsupervised methods unsatisfactory. To break the restriction of isomorphism, we propose a semi-supervised method for distant BLI tasks, named A Semi-supervised Bilingual Lexicon Induction Method in Latent Space Based on a Bidirectional Adversarial Model. 
First, two latent spaces are learned by two autoencoders for the source and target domains independently to weaken the constraint of isomorphism on the embedding spaces. Then, a few dictionary pairs are added to learn an initial mapping that connects the latent spaces. Last, based on the initial mapping, cycle-consistency is combined with a distance constraint to keep the geometric structure of both embedding spaces stable while learning the bidirectional mapping with the adversarial model. Extensive experiments show that our method achieves state-of-the-art results on most language pairs, especially with significant improvements on distant language pairs.",Bilingual Lexicon Induction (BLI) "Most existing approaches for unsupervised bilingual lexicon induction (BLI) depend on good quality static or contextual embeddings requiring large monolingual corpora for both languages. However, unsupervised BLI is most likely to be useful for low-resource languages (LRLs), where large datasets are not available. Often we are interested in building bilingual resources for LRLs against related high-resource languages (HRLs), resulting in severely imbalanced data settings for BLI. We first show that state-of-the-art BLI methods in the literature exhibit near-zero performance for severely data-imbalanced language pairs, indicating that these settings require more robust techniques. We then present a new method for unsupervised BLI between a related LRL and HRL that only requires inference on a masked language model of the HRL, and demonstrate its effectiveness on truly low-resource languages Bhojpuri and Magahi (with <5M monolingual tokens each), against Hindi. We further present experiments on (mid-resource) Marathi and Nepali to compare approach performances by resource range, and release our resulting lexicons for five low-resource Indic languages: Bhojpuri, Magahi, Awadhi, Braj, and Maithili, against Hindi.",Bilingual Lexicon Induction (BLI) "Performance in cross-lingual NLP tasks is impacted by the (dis)similarity of languages at hand: e.g., previous work has suggested there is a connection between the expected success of bilingual lexicon induction (BLI) and the assumption of (approximate) isomorphism between monolingual embedding spaces. In this work we present a large-scale study focused on the correlations between monolingual embedding space similarity and task performance, covering thousands of language pairs and four different tasks: BLI, parsing, POS tagging and MT. We hypothesize that statistics of the spectrum of each monolingual embedding space indicate how well they can be aligned. We then introduce several isomorphism measures between two embedding spaces, based on the relevant statistics of their individual spectra. We empirically show that 1) language similarity scores derived from such spectral isomorphism measures are strongly associated with performance observed in different cross-lingual tasks, and 2) our spectral-based measures consistently outperform previous standard isomorphism measures, while being computationally more tractable and easier to interpret. Finally, our measures capture complementary information to typologically driven language distance measures, and the combination of measures from the two families yields even higher task performance correlations.",Bilingual Lexicon Induction (BLI) "Cross-lingual word embeddings have become ubiquitous for various NLP tasks. 
Existing literature primarily evaluates the quality of cross-lingual word embeddings on the task of Bilingual Lexicon Induction. They report very high accuracies for European languages. In this paper, we report the accuracy on the Bilingual Lexicon Induction (BLI) task for cross-lingual word embeddings generated using two mapping-based unsupervised approaches, VecMap and MUSE, for Indian languages on a dataset created using linked Indian WordNet. We also show the comparison of these approaches with a simple baseline where the embeddings for all languages are trained using fastText on the combined corpora of 11 Indian languages. Our experiments show that existing cross-lingual word embedding approaches give low accuracy on bilingual lexicon induction for cognate words. Given the high cognate overlap of several Indian languages, this is a serious limitation of existing approaches.",Bilingual Lexicon Induction (BLI) "This paper has contents which may be offensive or upsetting; however, this cannot be avoided owing to the nature of the work. Hate speech and offensive texts are examples of damaging online content that target or promote hatred toward a group or individual member based on their actual or perceived features of identification, such as ethnicity, religion, or sexual orientation. Sharing violent and offensive content has had a significant negative impact on society. Such hate speech and offensive content generally contain societal biases. With the rise of online hate speech, automatic detection of such biases as a natural language processing task is getting popular. However, not much research has been done to detect unintended social bias from these toxic language datasets. This report attempts to summarise the existing hate speech detection and offensive text detection models. Then it reasons about why hate speech models struggle to generalise and sums up existing attempts at addressing the main obstacles. Finally, this report introduces a new dataset derived from an existing toxic language dataset to detect social biases, their categories, and targeted groups in English. The dataset contains instances annotated for five different bias categories, viz., gender, race/ethnicity, religion, political, and LGBTQ. We then report baseline performances of both classification tasks on our curated dataset using transformer-based models. The input to the models is English text which is potentially hate speech or toxic. The models then classify these texts as biased or neutral, along with their bias categories. Model biases and their mitigation are also discussed in detail. Our study motivates a systematic extraction of social bias data from toxic language.",Hate and Offensive Speech Detection "In today’s world, social media plays a vital role in spreading hate towards a person or group based on their color, caste, sex, sexual orientation, political differences, etc. Most of the work is done on single tweet or comment classification, which lacks the conversation’s context. The tweet, corresponding comments, and replies often help us understand the context of the entire discussion. This paper discusses the system used and the performance of team CITK_ISI on the first available code-mixed dataset of Hindi-English and German conversations scraped from Twitter. Data augmentation is used with a baseline transfer-based BERT model, achieving a macro F1 score of 0.6653 for ICHCL Hinglish and German code-mix binary classification. 
The system also identifies hate speech and offensive language in Marathi, a binary classification that secures a macro F1 score of 0.9019.",Hate and Offensive Speech Detection "The paper introduces a very current topic in the field of natural language processing oriented to the automatic detection of hate speech and offensive language performed in the Slovak language. In this work, we describe the creation and processing database of short texts composed of posts and comments written in Slovak and published on social media. The proposed approach is based on sentiment analysis and implementing a tool for detecting hate speech using a convolutional neural network with elements of a recursive neural network, applied to a created database of comments. We achieved 61.32% detection accuracy only on a small set of training data balanced in the number of positive, neutral, and negative sentiments.",Hate and Offensive Speech Detection "This paper describes our participation in the shared task Fine-Grained Hate Speech Detection on Arabic Twitter at the 5th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT). The shared task is divided into three detection subtasks: (i) Detect whether a tweet is offensive or not; (ii) Detect whether a tweet contains hate speech or not; and (iii) Detect the fine-grained type of hate speech (race, religion, ideology, disability, social class, and gender). It is an effort toward the goal of mitigating the spread of offensive language and hate speech in Arabic-written content on social media platforms. To solve the three subtasks, we employed six different transformer versions: AraBert, AraElectra, Albert-Arabic, AraGPT2, mBert, and XLM-Roberta. We experimented with models based on encoder and decoder blocks and models exclusively trained on Arabic and also on several languages. Likewise, we applied two ensemble methods: Majority vote and Highest sum. Our approach outperformed the official baseline in all the subtasks, not only considering F1-macro results but also accuracy, recall, and precision. The results suggest that the Highest sum is an excellent approach to encompassing transformer output to create an ensemble since this method offered at least top-two F1-macro values across all the experiments performed on development and test data.",Hate and Offensive Speech Detection "Social media platforms serve as accessible outlets for individuals to express their thoughts and experiences, resulting in an influx of user-generated data spanning all age groups. While these platforms enable free expression, they also present significant challenges, including the proliferation of hate speech and offensive content. Such objectionable language disrupts objective discourse and can lead to radicalization of debates, ultimately threatening democratic values. Consequently, organizations have taken steps to monitor and curb abusive behavior, necessitating automated methods for identifying suspicious posts. This paper contributes to Hate Speech and Offensive Content Identification in English and Indo-Aryan Languages (HASOC) 2023 shared tasks track. We, team Z-AGI Labs, conduct a comprehensive comparative analysis of hate speech classification across five distinct languages: Bengali, Assamese, Bodo, Sinhala, and Gujarati. Our study encompasses a wide range of pre-trained models, including Bert variants, XLM-R, and LSTM models, to assess their performance in identifying hate speech across these languages. Results reveal intriguing variations in model performance. 
Notably, Bert Base Multilingual Cased emerges as a strong performer across languages, achieving an F1 score of 0.67027 for Bengali and 0.70525 for Assamese. At the same time, it significantly outperforms other models with an impressive F1 score of 0.83009 for Bodo. In Sinhala, XLM-R stands out with an F1 score of 0.83493, whereas for Gujarati, a custom LSTM-based model outshined with an F1 score of 0.76601. This study offers valuable insights into the suitability of various pre-trained models for hate speech detection in multilingual settings. By considering the nuances of each, our research contributes to an informed model selection for building robust hate speech detection systems.",Hate and Offensive Speech Detection "Nowadays, social media sites like Twitter and Facebook emerge as user-friendly and accessible sources for people to express their voice. Everybody, irrespective of their age group, uses these sites to share every moment of their life, making these sites flooded with data. This has led to many positive outcomes. At the same time, it has brought risks and harms as these sites set no restrictions. The volume of hate speech is not manageable by humans. As part of the HASOC-2021 shared task on information retrieval, we, Team Ignite, address the problem of hate speech identification in the Hindi corpus. Subtask A aims to identify binary hate or non-hate speech. This work was further extended with subtask B to determine the result of subtask A into three categories: profane, offensive, and hate. Hence, this paper compares the performance of three feature engineering techniques and four machine learning algorithms to evaluate their performance on a publicly available dataset with two distinct classes. With these two classes of hate and non-hate, we create a baseline model and improve model performance scores using various optimization techniques. Moreover, the output of different comparisons can be used further for text classification techniques.",Hate and Offensive Speech Detection "Deep neural networks have been adopted successfully in hate speech detection problems. Nevertheless, the effect of the word embedding models on the neural network's performance has not been appropriately examined in the literature. In our study, through different detection tasks, 2-class, 3-class, and 6-class classification, we investigate the impact of both word embedding models and neural network architectures on the predictive accuracy. Our focus is on the Arabic language. We first train several word embedding models on a large-scale unlabelled Arabic text corpus. Next, based on a dataset of Arabic hate and offensive speech, for each detection task, we train several neural network classifiers using the pre-trained word embedding models. This task yields a large number of various learned models, which allows conducting an exhaustive comparison. The empirical analysis demonstrates, on the one hand, the superiority of the skip-gram models and, on the other hand, the superiority of the CNN network across the three detection tasks.",Hate and Offensive Speech Detection "A rapid increase in users on social media has given rise to a vast amount of user-generated content, including hate speech and offensive language. Such content can have serious negative consequences, ranging from psychological harm to inciting violence and discrimination. 
Existing studies have explored different deep learning and natural language processing (NLP) methods to perform hate speech detection, and these solutions have yielded significant performance. Most existing solutions are limited to detecting hate speech only in English, with less focus on content generated in other languages, particularly in low-resource or regional languages. The goal of this paper is to address this challenge of hate speech detection for low-resource languages and propose a tool that could provide real-time predictions for social media posts. In this study, the main focus was on English, Hindi, Hinglish, Bengali, and Marathi, languages which are commonly used on social media platforms in India. A meta-learning-based model was employed to perform hate speech detection in these languages. The proposed method helps to overcome the limitation of data scarcity and provides fast adaptation to an unseen target language. Extensive experiments were conducted on datasets comprising different regional languages spoken in India. Accuracy, precision, recall, and F1-score metrics are used to evaluate the model's performance. The results show that when the dataset size is small, meta-learning-based models perform better than traditional fine-tuned language models.",Hate and Offensive Speech Detection "Even though the improper use of social media is increasing nowadays, there is also technology that brings solutions. Here, improper use means posting hate and offensive speech that might harm an individual or group. Hate speech refers to an insult toward an individual or group based on their identities. Spreading it on social media platforms is a serious problem for society. The solution, on the other hand, is the availability of natural language processing (NLP) technology that is capable of detecting and handling such problems. This paper presents the detection of social media hate and offensive speech in the code-mixed Telugu language. For this, the task and gold-standard dataset were provided to us by the shared task organizer (DravidianLangTech@EACL 2024). To this end, we employed the TF-IDF technique for numeric feature extraction and used a random forest algorithm for modeling hate speech detection. Finally, the developed model was evaluated on the test dataset and achieved a macro-F1 of 0.492.",Hate and Offensive Speech Detection "The increased number of social media users has led to many people misusing these platforms to spread offensive content and use hate speech. Manually tracking the vast number of posts is impractical, so it is necessary to devise automated methods to identify them quickly. Large language models are trained on a lot of data and also make use of contextual embeddings. We fine-tune the large language models to help in our task. The data is also quite unbalanced, so we used a modified cross-entropy loss to tackle the issue. We observed that using a model which is fine-tuned on Hindi corpora performs better. Our team (HNLP) achieved macro F1-scores of 0.808 and 0.639 in English Subtask A and English Subtask B, respectively. For Hindi Subtask A and Hindi Subtask B, our team achieved macro F1-scores of 0.737 and 0.443, respectively, in HASOC 2021.",Hate and Offensive Speech Detection "In these contemporary times, social media is omnipresent and most people use at least one of these digital platforms. Social entertainment generates an enormous amount of data and this is an unparalleled opportunity for data scientists and linguistic experts. 
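As an illustrative aside (not the actual shared-task system), a minimal scikit-learn sketch of the TF-IDF plus random forest pipeline described in the code-mixed Telugu abstract above; the texts and labels below are toy placeholders, not real shared-task data.

```python
# Minimal sketch of a TF-IDF + random forest hate speech classifier,
# in the spirit of the pipeline described above.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline

train_texts = ["example friendly post", "example hateful post"]   # placeholder data
train_labels = [0, 1]                                             # 0 = not hate, 1 = hate

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),     # numeric features from raw text
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
model.fit(train_texts, train_labels)
print(model.predict(["another example post"]))
```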
These factors have renewed the interest in Natural Language Processing techniques and as such, there is a continuous increase in the number of publications that deal with the topic of Tweet classification using machine learning models. In this paper, experiments performed by the TweetEval team from the University of Cardiff have been studied and expanded upon. These tasks include emotion detection, offensive language identification and hate speech detection. The decision was made to focus on these specific classification tasks as they directly relate to unsought behaviours such as online harassment. This research endeavour involved building and testing a transformer-based language model which is capable of matching the performance of TweetEval. The aim of this study is therefore to identify common limitations to such models and how these can be circumvented to effectively combat phenomenon such as cyberbullying and online abuse using machine learning. From the results that were obtained, the developed BERT model performed comparatively well to other similar algorithms for all tasks as the obtained results were an F1-Score of 0.51, 0.76 and 0.80 for hate speech, emotion detection and offensive language respectively.",Hate and Offensive Speech Detection "The recognition of hate speech and offensive language (HOF) is commonly formulated as a classification task to decide if a text contains HOF. We investigate whether HOF detection can profit by taking into account the relationships between HOF and similar concepts: (a) HOF is related to sentiment analysis because hate speech is typically a negative statement and expresses a negative opinion; (b) it is related to emotion analysis, as expressed hate points to the author experiencing (or pretending to experience) anger while the addressees experience (or are intended to experience) fear. (c) Finally, one constituting element of HOF is the mention of a targeted person or group. On this basis, we hypothesize that HOF detection shows improvements when being modeled jointly with these concepts, in a multi-task learning setup. We base our experiments on existing data sets for each of these concepts (sentiment, emotion, target of HOF) and evaluate our models as a participant (as team IMS-SINAI) in the HASOC FIRE 2021 English Subtask 1A. Based on model-selection experiments in which we consider multiple available resources and submissions to the shared task, we find that the combination of the CrowdFlower emotion corpus, the SemEval 2016 Sentiment Corpus, and the OffensEval 2019 target detection data leads to an F1 =.79 in a multi-head multi-task learning model based on BERT, in comparison to .7895 of plain BERT. On the HASOC 2019 test data, this result is more substantial with an increase by 2pp in F1 and a considerable increase in recall. Across both data sets (2019, 2021), the recall is particularly increased for the class of HOF (6pp for the 2019 data and 3pp for the 2021 data), showing that MTL with emotion, sentiment, and target identification is an appropriate approach for early warning systems that might be deployed in social media platforms.",Hate and Offensive Speech Detection "Sentiment analysis is the most basic NLP task to determine the polarity of text data. There has been a significant amount of work in the area of multilingual text as well. Still hate and offensive speech detection faces a challenge due to inadequate availability of data, especially for Indian languages like Hindi and Marathi. 
In this work, we consider hate and offensive speech detection in Hindi and Marathi texts. The problem is formulated as a text classification task using state-of-the-art deep learning approaches. We explore different deep learning architectures like CNN, LSTM, and variations of BERT like multilingual BERT, IndicBERT, and monolingual RoBERTa. The basic models based on CNN and LSTM are augmented with FastText word embeddings. We use the HASOC 2021 Hindi and Marathi hate speech datasets to compare these algorithms. The Marathi dataset consists of binary labels and the Hindi dataset consists of binary as well as more fine-grained labels. We show that the transformer-based models perform the best and even the basic models along with FastText embeddings give a competitive performance. Moreover, with normal hyperparameter tuning, the basic models perform better than BERT-based models on the fine-grained Hindi dataset.",Hate and Offensive Speech Detection "The spread of information through social media platforms can create environments possibly hostile to vulnerable communities and silence certain groups in society. To mitigate such instances, several models have been developed to detect hate and offensive speech. Since detecting hate and offensive speech on social media platforms could incorrectly exclude individuals, which can reduce trust, there is a need to create explainable and interpretable models. Thus, we build an explainable and interpretable high-performance model based on the XGBoost algorithm, trained on Twitter data. For unbalanced Twitter data, XGBoost outperformed the LSTM, AutoGluon, and ULMFiT models on hate speech detection with an F1 score of 0.75 compared to 0.38, 0.37, and 0.38, respectively. When we down-sampled the data to three separate classes of approximately 5000 tweets, XGBoost performed better than LSTM, AutoGluon, and ULMFiT, with F1 scores for hate speech detection of 0.79 vs 0.69, 0.77, and 0.66, respectively. In the down-sampled version for offensive speech detection, XGBoost obtained an F1 score of 0.83 vs 0.88, 0.82, and 0.79 for LSTM, AutoGluon, and ULMFiT, respectively. We use Shapley Additive Explanations (SHAP) on our XGBoost models' outputs to make them explainable and interpretable, in contrast to LSTM, AutoGluon, and ULMFiT, which are black-box models.",Hate and Offensive Speech Detection "Online social media is rife with offensive and hateful comments, prompting the need for their automatic detection given the sheer amount of posts created every second. Creating high-quality human-labelled datasets for this task is difficult and costly, especially because non-offensive posts are significantly more frequent than offensive ones. However, unlabelled data is abundant, easier, and cheaper to obtain. In this scenario, self-training methods, using weakly-labelled examples to increase the amount of training data, can be employed. Recent “noisy” self-training approaches incorporate data augmentation techniques to ensure prediction consistency and increase robustness against noisy data and adversarial attacks. In this paper, we experiment with default and noisy self-training using three different textual data augmentation techniques across five different pre-trained BERT architectures varying in size. 
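As an illustrative aside (not the pipeline from the abstract itself), a short sketch of training an XGBoost classifier and explaining its predictions with SHAP, in the spirit of the XGBoost/SHAP abstract above; X and y are placeholder arrays standing in for tweet features and labels, and the xgboost and shap packages are assumed to be installed.

```python
# Illustrative XGBoost classifier with SHAP explanations. X and y are
# random placeholders for tweet feature vectors (e.g., TF-IDF) and labels.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))            # placeholder feature matrix
y = rng.integers(0, 2, size=200)          # placeholder binary labels (hate vs. not)

model = xgb.XGBClassifier(n_estimators=100, max_depth=4)
model.fit(X, y)

# TreeExplainer yields per-sample, per-feature contribution scores,
# which is what makes the tree model interpretable.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(np.shape(shap_values))              # per-sample, per-feature contributions
```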
We evaluate our experiments on two offensive/hate-speech datasets and demonstrate that (i) self-training consistently improves performance regardless of model size, resulting in up to +1.5% F1-macro on both datasets, and (ii) noisy self-training with textual data augmentations, despite being successfully applied in similar settings, decreases performance on offensive and hate-speech domains when compared to the default method, even with state-of-the-art augmentations such as backtranslation.",Hate and Offensive Speech Detection "Automatic detection of abusive online content such as hate speech, offensive language, threats, etc. has become prevalent in social media, with multiple efforts dedicated to detecting this phenomenon in English. However, detecting hatred and abuse in low-resource languages is a non-trivial challenge. The lack of sufficient labeled data in low-resource languages and the inconsistent generalization ability of transformer-based multilingual pre-trained language models for typologically diverse languages make these models inefficient in some cases. We propose a meta learning-based approach to study the problem of few-shot hate speech and offensive language detection in low-resource languages that will allow hateful or offensive content to be predicted by only observing a few labeled data items in a specific target language. We investigate the feasibility of applying a meta-learning approach in cross-lingual few-shot hate speech detection by leveraging two meta-learning models based on optimization-based and metric-based (MAML and Proto-MAML) methods. To the best of our knowledge, this is the first effort of this kind. To evaluate the performance of our approach, we consider hate speech and offensive language detection as two separate tasks and make two diverse collections of different publicly available datasets comprising 15 datasets across 8 languages for hate speech and 6 datasets across 6 languages for offensive language. Our experiments show that meta learning-based models outperform transfer learning-based models in a majority of cases, and that Proto-MAML is the best performing model, as it can quickly generalize and adapt to new languages with only a few labeled data points (generally, 16 samples per class yield effective performance) to identify hateful or offensive content.",Hate and Offensive Speech Detection "This paper provides an overview of the shared task on detecting offensive language, hate speech, and fine-grained hate speech at the fifth workshop on Open-Source Arabic Corpora and Processing Tools (OSACT5). The shared task comprised three subtasks: Subtask A, involving the detection of offensive language, which contains socially unacceptable or impolite content including any kind of explicit or implicit insults or attacks against individuals or groups; Subtask B, involving the detection of hate speech, which contains offensive language targeting individuals or groups based on common characteristics such as race, religion, gender, etc.; and Subtask C, involving the detection of the fine-grained type of hate speech, which takes one value from the following types: (i) race/ethnicity/nationality, (ii) religion/belief, (iii) ideology, (iv) disability/disease, (v) social class, and (vi) gender. In total, 40 teams signed up to participate in Subtask A, and 17 of them submitted test runs. For Subtask B, 26 teams signed up to participate and 12 of them submitted runs. For Subtask C, 23 teams signed up to participate and 10 of them submitted runs. 
10 teams submitted papers describing their participation in one or more subtasks, and 8 papers were accepted. We present and analyze all submissions in this paper.",Hate and Offensive Speech Detection "The 2020 US Elections have been, more than ever before, characterized by social media campaigns and mutual accusations. In this paper, we investigate whether this also manifests in the online communication of supporters of the candidates Biden and Trump, in the form of hateful and offensive communication. We formulate an annotation task in which we join the tasks of hateful/offensive speech detection and stance detection, and annotate 3000 tweets from the campaign period according to whether they express a particular stance towards a candidate. Next to the established classes of favorable and against, we add mixed and neutral stances and also annotate whether a candidate is mentioned without an opinion expression. Further, we annotate whether the tweet is written in an offensive style. This enables us to analyze whether supporters of Joe Biden and the Democratic Party communicate differently than supporters of Donald Trump and the Republican Party. A BERT baseline classifier shows that detecting whether somebody is a supporter of a candidate can be performed with high quality (.89 F1 for Trump and .91 F1 for Biden), while detecting that somebody expresses opposition to a candidate is more challenging (.79 F1 and .64 F1, respectively). The automatic detection of hate/offensive speech remains challenging (with .53 F1). Our corpus is publicly available and constitutes a novel resource for computational modelling of offensive language under consideration of stances.",Hate and Offensive Speech Detection "The last decade has seen a steep rise in society's use of and dependence on social media. The need for detection and prevention of hate and offensive speech is greater than ever. The ever-changing form of natural language, including code-mixed text, makes the detection of hate speech challenging. The task becomes even more daunting in a country like India, where different languages and dialects are spoken across the country. This paper details the Code Fellas team's approaches in the context of HASOC 2023 - Task 4: Annihilate Hate, an initiative aimed at extending hate speech detection to Bengali, Bodo, and Assamese languages. Here we describe our approaches, which broadly involve Long Short Term Memory (LSTM) networks coupled with Convolutional Neural Networks (CNN) and pre-trained Bidirectional Encoder Representations from Transformers (BERT) based models like IndicBERT [1] and MuRIL [2]. Notably, our results showcase the effectiveness of these approaches, with IndicBERT achieving a remarkable F1 score of 69.726% for Assamese, MuRIL achieving 71.955% for Bengali, and a BiLSTM model enhanced with an additional Dense Layer attaining an impressive 83.513% for Bodo.",Hate and Offensive Speech Detection "The rise of social media platforms has fundamentally altered how people communicate, and among the results of these developments is an increase in the online use of abusive content. Therefore, automatically detecting this content is essential for banning inappropriate information and reducing toxicity and violence on social media platforms. The existing works on hate speech and offensive language detection produce promising results based on pre-trained transformer models; however, they consider only the analysis of abusive content features generated through annotated datasets.
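The shared-encoder multi-task design described in the entry that continues below (one transformer encoder feeding separate abusive-content and emotion heads) can be sketched roughly as follows. This is a hedged illustration, not the authors' code: the encoder checkpoint, the number of labels per head, and the unweighted sum of the two losses are assumptions.

```python
# Sketch of a multi-task model: one shared transformer encoder, two task heads.
# Assumptions: mBERT as the encoder, 2 hate labels, 6 emotion labels, and a
# simple unweighted sum of the two cross-entropy losses.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SharedEncoderMultiTask(nn.Module):
    def __init__(self, model_name="bert-base-multilingual-cased", n_hate=2, n_emotion=6):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.hate_head = nn.Linear(hidden, n_hate)
        self.emotion_head = nn.Linear(hidden, n_emotion)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]               # [CLS] representation
        return self.hate_head(cls), self.emotion_head(cls)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = SharedEncoderMultiTask()
batch = tokenizer(["example post"], return_tensors="pt", padding=True)
hate_logits, emo_logits = model(batch["input_ids"], batch["attention_mask"])

loss_fn = nn.CrossEntropyLoss()
loss = loss_fn(hate_logits, torch.tensor([1])) + loss_fn(emo_logits, torch.tensor([3]))
loss.backward()   # gradients from both heads flow into the shared encoder
```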
This paper addresses a multi-task joint learning approach which combines external emotional features extracted from other corpora to deal with the imbalance and scarcity of labeled datasets. Our analysis uses two well-known Transformer-based models, BERT and mBERT, where the latter is used to address abusive content detection in multi-lingual scenarios. Our model jointly learns abusive content detection with emotional features by sharing representations through the transformers' shared encoder. This approach increases data efficiency, reduces overfitting via shared representations, and ensures fast learning by leveraging auxiliary information. Our findings demonstrate that emotional knowledge helps to more reliably identify hate speech and offensive language across datasets. Our multi-task hate speech detection model exhibited a 3% performance improvement over baseline models, but the improvement of the multi-task models was not significant for the offensive language detection task. More interestingly, in both tasks, the multi-task models exhibit fewer false-positive errors compared to the single-task scenario.",Hate and Offensive Speech Detection "Phishing emails are emails that pretend to be from a trusted company and target users to obtain personal or financial information. Sometimes, they include links that, when clicked, may download malicious software onto users' computers. Such emails are easily detected by spam filters that classify any email with a link as a phishing email. However, emails that have no links, link-less emails, require more effort from the spam filters. Although much research has been done on this topic, spam filters still classify some benign emails as phishing and vice versa. This paper is focused on classifying link-less emails using a machine learning approach, deep neural networks. Deep neural networks differ from simple neural networks by having multiple hidden layers through which data must be processed before reaching the output layer. The data used in this research is publicly available online. Hyperparameter optimization was performed using different settings on the data. In order to demonstrate the effectiveness of the approach, precision, recall and accuracy were computed. The results show that the deep neural network performed well in many of its settings.",Email Spam and Phishing Detection "Email spamming has been recognized as one of the most dangerous cyber-attacks these days. As email has become the preferred platform for communication, it is accessible to everyone across the world with the help of the internet. Hence, it has to be protected in order to reduce cyber-attacks that involve the loss of organizational property. Previous spam-filtering technologies include the manual detection of certain keywords and the blocking of recognizable spam-sending domains. Spamming of emails is on the rise as the number of internet users grows, resulting in the leakage of personal information from users. Thus, detecting these spam emails is critical in order to reduce illegal and unethical behavior, as well as phishing and fraud. As a result, ongoing research into email spam detection has been conducted utilizing a variety of machine learning algorithms with varying levels of accuracy. Using the machine learning techniques required in this project, the regular words used in spam emails are easily identified with the help of a predefined set of stop-words.
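One possible reading of the stop-word-based word analysis described just above (and continued below) is a simple count-based ranking of the non-stop-words that recur in spam emails. The sketch below is only an illustration of that idea with placeholder emails; it is not the system described in the entry.

```python
# Sketch: rank the most frequent non-stop-words in a set of spam emails.
# The email texts are placeholders; a real corpus would be loaded instead.
from sklearn.feature_extraction.text import CountVectorizer

spam_emails = [
    "Win a free prize now, click the link to claim your free reward",
    "Limited offer: claim your prize now, free shipping included",
]

vec = CountVectorizer(stop_words="english")       # built-in English stop-word list
counts = vec.fit_transform(spam_emails).sum(axis=0).A1
vocab = vec.get_feature_names_out()

top = sorted(zip(vocab, counts), key=lambda x: -x[1])[:10]
print(top)   # e.g. [('free', 3), ('claim', 2), ('prize', 2), ...]
```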
This proposed system tries to recognize a recurrent word group which are used mostly that are classed as spam using machine learning techniques. The machine learning model that has been using is an early-trained model with feedback that can tell the difference between a correct and an ambiguous output.",Email Spam and Phishing Detection "With the influx of technological advancements and the increased simplicity in communication, especially through emails, the upsurge in the volume of unsolicited bulk emails (UBEs) has become a severe threat to global security and economy. Spam emails not only waste users’ time, but also consume a lot of network bandwidth, and may also include malware as executable files. Alternatively, phishing emails falsely claim users’ personal information to facilitate identity theft and are comparatively more dangerous. Thus, there is an intrinsic need for the development of more robust and dependable UBE filters that facilitate automatic detection of such emails. There are several countermeasures to spam and phishing, including blacklisting and content-based filtering. However, in addition to content-based features, behavior-based features are well-suited in the detection of UBEs. Machine learning models are being extensively used by leading internet service providers like Yahoo, Gmail, and Outlook, to filter and classify UBEs successfully. There are far too many options to consider, owing to the need to facilitate UBE detection and the recent advances in this domain. In this paper, we aim at elucidating on the way of extracting email content and behavior-based features, what features are appropriate in the detection of UBEs, and the selection of the most discriminating feature set. Furthermore, to accurately handle the menace of UBEs, we facilitate an exhaustive comparative study using several state-of-the-art machine learning algorithms. Our proposed models resulted in an overall accuracy of 99% in the classification of UBEs. The text is accompanied by snippets of Python code, to enable the reader to implement the approaches elucidated in this paper.",Email Spam and Phishing Detection "Spam emails have been traditionally seen as just annoying and unsolicited emails containing advertisements, but they increasingly include scams, malware or phishing. In order to ensure the security and integrity for the users, organisations and researchers aim to develop robust filters for spam email detection. Recently, most spam filters based on machine learning algorithms published in academic journals report very high performance, but users are still reporting a rising number of frauds and attacks via spam emails. Two main challenges can be found in this field: (a) it is a very dynamic environment prone to the dataset shift problem and (b) it suffers from the presence of an adversarial figure, i.e. the spammer. Unlike classical spam email reviews, this one is particularly focused on the problems that this constantly changing environment poses. Moreover, we analyse the different spammer strategies used for contaminating the emails, and we review the state-of-the-art techniques to develop filters based on machine learning. Finally, we empirically evaluate and present the consequences of ignoring the matter of dataset shift in this practical field. 
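The dataset-shift problem raised in the entry above (and quantified just below) is commonly exposed by a temporal evaluation protocol: train on older messages and test on newer ones. The following is a hedged sketch of that protocol with placeholder emails and dates, not the review's actual experimental setup.

```python
# Sketch: estimate generalisation under temporal dataset shift by training on
# older emails and testing on newer ones (placeholder data and dates).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

emails = [
    ("2015-03-01", "cheap meds online buy now", 1),
    ("2015-04-11", "meeting agenda attached", 0),
    ("2021-06-20", "your parcel is held, confirm payment via this link", 1),
    ("2021-07-02", "quarterly report draft for review", 0),
]

train = [(t, y) for d, t, y in emails if d < "2018-01-01"]    # older period
test = [(t, y) for d, t, y in emails if d >= "2018-01-01"]    # newer period

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit([t for t, _ in train], [y for _, y in train])
pred = clf.predict([t for t, _ in test])
print("accuracy on the newer period:", accuracy_score([y for _, y in test], pred))
```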
Experimental results show that this shift may lead to severe degradation in the estimated generalisation performance, with error rates reaching values of up to 48.81%.",Email Spam and Phishing Detection "Email categorization is crucial in business and academia, filtering spam emails that risk phishing, fraud, and theft. This study employs a hybrid approach, considering group and individual strengths and weaknesses in email classification. It prioritizes identifying spam, particularly phishing, using data mining and the UCI spam base dataset. The primary objective is to enhance automated learning for accurate detection and filtering, differentiating legitimate messages from spam. Radial Basis Function Neural Networks (RBFNN), a key component of artificial neural networks (ANNs) for data classification, are central to this work. The paper encompasses various algorithms: Support Vector Machine (SVM), Neural Network, Naive Bayes, K-Nearest Neighbor (KNN), Decision Tree, Random Forest, Harris Hawk's Optimizer (HHO), and Extreme Learning Machines (ELM). This research advances email classification, bolstering digital communication's security and efficiency.",Email Spam and Phishing Detection "The tremendously growing problem of phishing e-mail, also known as spam, including spear phishing or spam-borne malware, has created a need for reliable, intelligent anti-spam e-mail filters. This survey paper describes a focused literature survey of Artificial Intelligence (AI) and Machine Learning (ML) methods for intelligent spam email detection, which we believe can help in developing appropriate countermeasures. In this paper, we considered 4 parts of the email's structure that can be used for intelligent analysis: (A) the headers, which provide routing information and contain mail transfer agent (MTA) entries giving information such as the email and IP address of each sender and recipient, where the email originated, its stopovers, and its final destination; (B) the SMTP envelope, containing mail exchangers' identification and the originating source and destination domains/users; (C) the first part of the SMTP data, containing information like from, to, date, and subject, which appears in most email clients; and (D) the second part of the SMTP data, containing the email body, including text content and attachments. Based on the number and relevance of papers on each emerging intelligent method, representative papers were identified, read, and summarized. Insightful findings, challenges and research problems are disclosed in this paper. This comprehensive survey paves the way for future research endeavors addressing theoretical and empirical aspects related to intelligent spam email detection.",Email Spam and Phishing Detection "E-mail is one of the most widely recognised channels of communication, whether for personal or corporate purposes. The worst part about spam emails is that they intrude on users' privacy without their consent, and the constant bombardment of spam mails fills up the user's entire email space. Additionally, the issue of wasting network capacity and time checking and deleting spam mails makes it an even more concerning issue. Although confrontation tactics are constantly being upgraded, the results of those methods are now unsatisfactory. Furthermore, phishing emails have been on the rise in recent years. To combat the issue of phishing emails, more effective phishing detection technology is required.
We intend to create a phishing email detection tool that first analyses the email structure, using an upgraded Convolutional Neural Network model with multilayer vectors and a Long Short-Term Memory (LSTM) network to model emails at the email header, character level, email content, and word level all at the same time. To assess the efficacy, we use an unbalanced dataset with actual phishing and genuine email ratios. The experiment achieves a high overall accuracy.",Email Spam and Phishing Detection "Email spam has become a major problem nowadays; with the rapid growth of internet users, email spam is also increasing. People are using it for illegal and unethical conduct, phishing and fraud, sending malicious links through spam emails which can harm our systems and also sneak into them. Creating a fake profile and email account is easy for spammers; they pretend to be genuine persons in their spam emails and target people who are not aware of these frauds. So, it is necessary to identify those spam emails which are fraudulent. This project will identify such spam by using machine learning techniques: this paper will discuss the machine learning algorithms, apply all of these algorithms to our data sets, and select the best algorithm for email spam detection in terms of precision and accuracy.",Email Spam and Phishing Detection "In practically every industry today, from business to education, emails are used. Ham and spam are the two subcategories of emails. Email spam, often known as junk email or unwelcome email, is a kind of email that can be used to hurt any user by sapping their time and computing resources and stealing important data. Spam email volume is rising quickly day by day. Today's email and IoT service providers face massive challenges with spam identification and filtration. Email filtering is one of the most important and well-known methods among all the methods created for identifying and preventing spam. SVM, decision trees, and other machine learning and deep learning approaches have all been applied to this problem. Together with the explosive growth in internet users, email spam has increased substantially in recent years. Individuals are using spam emails for illegal and dishonest purposes, such as fraud, phishing, and distributing malicious links through unsolicited email that can harm our systems and attempt to access them. By quickly constructing phony/fake profiles and email accounts, spammers prey on those who are ignorant of these scams. They use a real name in their spam emails. As a result, it's critical to identify spam emails that involve fraud. This project will accomplish this by utilizing machine learning methods, and this article will examine the machine learning algorithms, put them to use on our data sets, and select the approach that can detect email spam with the maximum degree of precision and accuracy.",Email Spam and Phishing Detection "The large number of email users has triggered an increase in the occurrence of spam emails, which benefit some parties but harm others, including email users. Spam emails usually contain advertisements or criminal content such as phishing, which implicitly carries human emotions. It is quite difficult and time-consuming to differentiate between a large number of spam and ham emails. This problem can be overcome by using deep learning technology, one example of which is a neural network that can classify spam emails.
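The feature setup described in the entry that continues below (TF-IDF features combined with lexicon-based emotion features before classification) can be approximated as follows. This sketch is not the paper's RNN pipeline: it compares two of the baselines mentioned in the entry (Naïve Bayes and an SVM), and the linear SVM variant, the tiny emotion lexicon, and the example emails are assumptions.

```python
# Sketch: append a lexicon-based emotion feature to TF-IDF vectors and compare
# two baseline classifiers. The tiny lexicon and the emails are placeholders.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

emails = ["win money now!!!", "lunch at noon?", "urgent: claim your prize", "see attached notes"]
labels = np.array([1, 0, 1, 0])                                # 1 = spam
emotion_lexicon = {"win": 0.8, "urgent": 0.9, "prize": 0.7}    # placeholder intensities

def emotion_score(text):
    # Sum the intensities of lexicon words present in the email.
    return sum(v for w, v in emotion_lexicon.items() if w in text.lower())

tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(emails)
X_emotion = csr_matrix(np.array([[emotion_score(t)] for t in emails]))
X = hstack([X_text, X_emotion])                                # combined feature matrix

for model in (MultinomialNB(), LinearSVC()):
    model.fit(X, labels)
    print(type(model).__name__, "train accuracy:", model.score(X, labels))
```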
This paper uses the spam and ham Enron email corpus dataset. This study will add emotional features in extracting its features. The steps taken include text preprocessing, feature extraction using tf-idf, and lexicon-based emotion features, followed by classification using RNN to detect spam in emails. A comparison with other methods is also provided by comparing the proposed method to Naïve Bayes and Support-Vector Machine (SVM) algorithm based on precision and accuracy. In addition, this study also compares the effect of using affect intensities on the performance of algorithms. The results show that RNN outperforms other methods by showing the highest accuracy 99% and the precision of 99.1%. Adding effect intensities to the model would increase the model recognition results.",Email Spam and Phishing Detection "Electronic mails (emails) have been widely adapted by organizations and individuals as efficient communication means. Despite the pervasiveness of alternate means like social networks, mobile SMS, electronic messages, etc. email users are continuously growing. The higher user growth attracts more spammers who send unsolicited emails to anonymous users. These spam emails may contain malware, misleading information, phishing links, etc. that can imperil the privacy of benign users. The paper proposes a self-adaptive hybrid algorithm of big bang–big crunch (BB–BC) with ant colony optimization (ACO) for email spam detection. The BB–BC algorithm is based on the physics-inspired evolution theory of the universe, and the collective interaction behavior of ants is the inspiration for the ACO algorithm. Here, the ant miner plus (AMP) variant of the ACO algorithm is adapted, a data mining variant efficient for the classification. The proposed hybrid algorithm (HB3C-AMP) adapts the attributes of B3C (BB–BC) for local exploitation and AMP for global exploration. It evaluates the center of mass along with the consideration of pheromone value evaluated by the best ants to detect email spam efficiently. The experiments for the proposed HB3C-AMP algorithm are conducted with the Ling Spam and CSDMC2010 datasets. Different experiments are conducted to determine the significance of the pre-processing modules, iterations, and population size on the proposed algorithm. The results are also evaluated for the AM (ant miner), AM2 (ant miner2), AM3 (ant miner3), and AMP algorithms. The performance comparison demonstrates that the proposed HB3C-AMP algorithm is superior to the other techniques.",Email Spam and Phishing Detection "Phishing and spam detection is long standing challenge that has been the subject of much academic research. Large Language Models (LLM) have vast potential to transform society and provide new and innovative approaches to solve well-established challenges. Phishing and spam have caused financial hardships and lost time and resources to email users all over the world and frequently serve as an entry point for ransomware threat actors. While detection approaches exist, especially heuristic-based approaches, LLMs offer the potential to venture into a new unexplored area for understanding and solving this challenge. LLMs have rapidly altered the landscape from business, consumers, and throughout academia and demonstrate transformational potential for the potential of society. Based on this, applying these new and innovative approaches to email detection is a rational next step in academic research. 
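Fine-tuning a BERT-family classifier on spam/phishing labels, as the entry that continues below does for IPSDM, generally follows the standard Hugging Face recipe sketched here. This is not the authors' code; the base checkpoint, placeholder emails, and hyperparameters are assumptions.

```python
# Sketch: standard fine-tuning of a BERT-family classifier for spam/phishing
# detection with the Hugging Face Trainer. Data and hyperparameters are placeholders.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

texts = ["verify your account here", "team meeting moved to 3pm"]
labels = [1, 0]                                    # 1 = phishing/spam, 0 = legitimate

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer(texts, truncation=True, padding=True)

class EmailDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
args = TrainingArguments(output_dir="spam-bert", num_train_epochs=1,
                         per_device_train_batch_size=2, logging_steps=1)
Trainer(model=model, args=args, train_dataset=EmailDataset()).train()
```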
In this work, we present IPSDM, our model based on fine-tuning the BERT family of models to specifically detect phishing and spam email. We demonstrate our fine-tuned version, IPSDM, is able to better classify emails in both unbalanced and balanced datasets. This work serves as an important first step towards employing LLMs to improve the security of our information systems.",Email Spam and Phishing Detection "This study leverages Convolutional Neural Networks (CNNs); a state-of-the-art deep learning architecture primarily used in image analysis, and adapts it for the detection of phishing emails. By treating email content as multi-dimensional data, we employ CNNs to extract meaningful features and patterns from email headers, text, and attachments. Our approach not only identifies known phishing templates but also has the capability to detect emerging and zero-day phishing attacks",Email Spam and Phishing Detection "Among the problems caused by spam email are loss of productivity and increase in network resources consumption. Sometimes spam email contain malware as attachments or include links for phishing websites, leading to theft and loss of data. Many email servers are filtering spam but the process becomes increasingly difficult as spammers try to create messages that look similar to normal email. In this paper we implemented five Machine Learning Algorithms in the Python language using the scikit-learn library and we compared their performance against two publicly available spam email corpuses. The discussed algorithms are: Support Vector Machine, Random Forest, Logistic Regression, Multinomial Naive Bayes and Gaussian Naive Bayes.",Email Spam and Phishing Detection "Phishing is a Cyber Attack which the attacker sends fake emails to attract users to visit fake websites to obtain the user’s personal information. Targeted malicious emails (TME) breaching computer network have become more insidious and more widely documented in recent years. Beyond spam or phishing designed to trick users into revealing personal information, TME can exploit computer networks and gather sensitive information. They can consist of persistent and coordinated campaigns that can span years. A new email-filtering technique based on email's persistent-threat and recipient-oriented features with a Naïve Bayes classifier Algorithm. This paper, how to detect a targeted malicious packet (email) for normal network into modern network. We develop a router detection protocol that dynamically infers the precise number of congestive packet losses that will occur.",Email Spam and Phishing Detection "Cyber-attacks are critical threats for both organizations and individual users. Such threats aim to gain access to confidential information, which is often used to steal financial assets. While much work is done on a technical level to improve automated cybersecurity systems, the human user is ultimately the last line of defense. In a domain that is rapidly transforming, it is of critical importance to not only prepare human users to detect fraudulent attacks, but also understand how automated safeguards, like spam filters, influence the users’ detection. As such, the present study investigated the effect of an automated email filter on users’ classifications of legitimate and phishing emails.",Email Spam and Phishing Detection "Phishing is one of the most dangerous attacks targeting individuals, organizations, and nations. 
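A 1D-CNN front end augmented with a bidirectional recurrent layer, in the spirit of the 1D-CNNPD variants described in the entry that continues below, can be sketched in Keras as follows. The vocabulary size, sequence length, layer widths, and plain ReLU activation are illustrative assumptions, not the published configuration.

```python
# Sketch of a 1D-CNN + Bi-GRU binary classifier for tokenized email text.
# All sizes are illustrative placeholders, not the published configuration.
from tensorflow.keras import layers, models

vocab_size, seq_len = 20000, 200

model = models.Sequential([
    layers.Input(shape=(seq_len,)),
    layers.Embedding(vocab_size, 128),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Bidirectional(layers.GRU(64)),
    layers.Dense(1, activation="sigmoid"),        # phishing vs legitimate
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```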
Although many traditional methods for email phishing detection exist, there is a need to improve accuracy and reduce false-positive rates. Our work investigates one-dimensional CNN-based models (1D-CNNPD) to detect phishing emails in order to address these challenges. Additionally, further improvement is achieved with the augmentation of the base 1D-CNNPD model with recurrent layers, namely, LSTM, Bi-LSTM, GRU, and Bi-GRU, and experimented with the four resulting models. Two benchmark datasets were used to evaluate the performance of our models: Phishing Corpus and Spam Assassin. Our results indicate that, in general, the augmentations improve the performance of the 1D-CNNPD base model. Specifically, the 1D-CNNPD with Bi-GRU yields the best results. Overall, the performance of our models is comparable to the state of the art of CNN-based phishing email detection. The Advanced 1D-CNNPD with Leaky ReLU and Bi-GRU achieved 100% precision, 99.68% accuracy, an F1 score of 99.66%, and a recall of 99.32%. We observe that increasing model depth typically leads to an initial performance improvement, succeeded by a decline. In conclusion, this study highlights the effectiveness of augmented 1D-CNNPD models in detecting phishing emails with improved accuracy. The reported performance measure values indicate the potential of these models in advancing the implementation of cybersecurity solutions to combat email phishing attacks.",Email Spam and Phishing Detection "Enterprise security is increasingly being threatened by social engineering attacks, such as phishing, which deceive employees into giving access to enterprise data. To protect both the users themselves and enterprise data, more and more organizations provide cyber security training that seeks to teach employees/customers to identify and report suspicious content. By its very nature, such training seeks to focus on signals that are likely to persist across a wide range of attacks.
Further, it expects the user to apply the learnings from these training on e-mail messages that were not filtered by existing, automatic enterprise security (e.g., spam filters and commercial phishing detection software). However, relying on such training now shifts the detection of phishing from an automatic process to a human driven one which is fallible especially when a user errs due to distraction, forgetfulness, etc. In this work we explore treating this type of detection as a natural language processing task and modifying training pipelines accordingly. We present a dataset with annotated labels where these labels are created from the classes of signals that users are typically asked to identify in such training. We also present baseline classifier models trained on these classes of labels. With a comparative analysis of performance between human annotators and the models on these labels, we provide insights which can contribute to the improvement of the respective curricula for both machine and human training.",Email Spam and Phishing Detection "In the digital age, spam emails have emerged as a persistent and pervasive nuisance, inundating our inboxes and bringing with them an array of potential threats. These threats encompass phishing attacks, the distribution of malicious software, and breaches of privacy. With the rapid expansion of internet users, this problem has grown exponentially and has been exploited for illicit and unethical purposes. The act of sending unauthorized and potentially harmful links through email has seen a significant uptick in recent years, posing a direct threat to our systems and personal security. Recognizing the urgent need to identify and mitigate fraudulent emails, this project is dedicated to the development of robust techniques for spam email detection. Leveraging the power of machine learning and deep learning algorithms, including Multinomial Naive Bayes, Recurrent Neural Networks, and Support Vector Machines, we aim to tackle this issue head-on. Our approach involves applying these advanced algorithms to large datasets, enabling us to select the most effective solution for spam email detection. The criteria for our selection process are centered around precision and accuracy, ensuring that we employ the algorithm that offers the highest level of reliability and performance in safeguarding our email inboxes.",Email Spam and Phishing Detection "Fake news is an information that has been carefully manipulated to mislead readers by using false facts and figures. Since the introduction of the Internet and social media, fake news has grown to be a significant problem. Identifying fake news has become an important area of research in Natural Language Processing (NLP). The key challenge is determining the veracity of news stories. There is an increasing difficulty in studying and designing a technological strategy to combat fake news without compromising speed and collaborative access to high-quality information. Despite the fact that various technologies have been developed to assist in the detection of false news, and despite significant breakthroughs, identifying fake news stays ineffective. In this research, a new framework has been proposed that utilizes Porter Stemmer, TF-IDF vectorizer for pre-processing and double layer Bi-LSTM for extracting the refined features to obtain better learning. 
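The Porter-stemming and TF-IDF preprocessing mentioned in the entry that continues below is commonly implemented with NLTK and scikit-learn; the following is a generic sketch with placeholder headlines, not the paper's exact pipeline.

```python
# Sketch: Porter stemming followed by TF-IDF vectorisation of news text.
# Sample texts are placeholders.
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer

stemmer = PorterStemmer()

def stem_text(text):
    # Lowercase, split on whitespace, and stem each token.
    return " ".join(stemmer.stem(tok) for tok in text.lower().split())

headlines = [
    "Scientists discovering unbelievable cures overnight",
    "Government announces new economic policies",
]
stemmed = [stem_text(h) for h in headlines]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(stemmed)              # features for a downstream classifier
print(X.shape, vectorizer.get_feature_names_out()[:5])
```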
In this model, initially, the summarized input vector is formed by concatenating the most relevant text attributes, such as the headline and news body, for further processing. The proposed model has been validated by evaluating its performance on three experimental datasets, namely Kaggle fake_real_news, Liar and Politifact Fake_Real.",Fake News Detection "Fake news can mislead public opinion, weaken social order, limit the legitimacy of government, and lead to a serious threat to social stability. Therefore, the early detection of fake news on online platforms is extremely important. Most of the previous literature has focused on finding fake news in resource-rich languages like English, Hindi, and Spanish. The current work utilizes an Urdu-language dataset for fake news detection. Two different models have been proposed in the paper. The first one is an ensemble-based technique and the second one is a multi-layer dense neural network. The multi-layer dense neural network-based approach performed better with character n-gram TF-IDF features to achieve a macro F1-score of 0.8101.",Fake News Detection "Due to the enormous and exponential advancement of online social networks, the triad of Facebook, Twitter and WhatsApp has posed a great challenge in the form of fake news. In recent years, many events, like the false propaganda of the ‘US presidential election’, opinion spamming in the ‘Brexit referendum’, and long-tail series of viral rumors after many natural calamities around the world, created a lot of chaos and law-and-order problems. Simultaneously, this rapid explosion of fake news also attracted the attention of different researchers to investigate its real causes and thus to develop tools and techniques to discover and mitigate rumors across online media as soon as possible. In this regard, Machine Learning (ML) and Natural Language Processing (NLP) algorithms have emerged as vital and essential tools to detect fake news in the current age. NLP, when aided by machine learning, has produced many remarkable results that previously were possible only through manual fact-checking or normal text detection processes. We have systematically discussed the role of NLP and machine learning in the fake news detection process, and various detection techniques based on these. The basic terminology of NLP and machine learning is also explained in brief. Finally, we shed light on the future trends, open issues, challenges, and potential research oriented toward NLP and ML-based approaches.",Fake News Detection "This article aims to compare current state-of-the-art natural language processing (NLP) models fine-tuned for fake news detection based on a set of metrics and to assess their effectiveness as part of a disinformation management structure. The need for development in this area comes as a response to the overwhelming and unregulated spread of fake news, which represents one of the major difficulties of today's era. The development of AI technologies has a direct impact on the creation and spreading of misinformation and disinformation as a result of the multiple uses that technology may have. Currently, machine learning techniques are used for the development of large language models (LLM). These developments in science are also used in disinformation campaigns.
Related to this matter the concept of disinformation management has arisen as a cybersecurity issue integral in the current cyber threat landscape",Fake News Detection "This comprehensive investigation uses powerful NLP and multi-modal integration to detect fake information on social media. Our technique was tested using 10,000 news pieces from various social media networks. Advanced NLP models are crucial, with the state-of-the-art transformer model, BERT, collecting subtle contextual information with 94.2% accuracy. SVM and BERT had the greatest classification accuracy at 95.6% when combined with ensemble approaches. Multi-modal information sources including picture captions and user metadata improved classification performance, with the textual-image model achieving the greatest accuracy at 96.8%. The research was ethical, incorporating fairness analysis and transparency measures, to ensure detection trustworthiness and fairness. These findings show that sophisticated NLP models and multi-modal integration can improve fake news detection, enabling the creation of strong, real-world apps to combat disinformation and protect digital discourse.",Fake News Detection "Messages posted to online social networks (OSN) have recently caused a stir due to the deliberate spread of fake news or rumour. The goal of this research is to understand and analyse the characteristics of fake news, particularly in relation to sentiments, in order to automate the detection of fake news and rumours. We offer a notion that there is a relation between bogus communications or rumours and the sentiments of texts submitted online, based on actual evidence.. We validate our theory by comparing it to cutting-edge baseline text-only fake news detection methods that do not consider sentiments. We ran tests on a standard Twitter fake news dataset and found significant improvements.",Fake News Detection "Fake News Detection in Dravidian Languages is a shared task that identifies youtube comments in the Malayalam language for fake news detection. In this work, we have proposed a transformer-based model with cross-entropy loss and focal loss, which classifies the comments into fake or authentic news. We have used different transformer-based models for the dataset with modifications in the experimental setup, out of which the fine-tuned model, which is based on MuRIL with focal loss, achieved the best overall macro F1-score of 0.87, and we got second position in the final leaderboard.",Fake News Detection "The spreading of fake news has given rise to many problems in society. It is due to its ability to cause a lot of social and national damage with destructive impacts. Sometimes it gets very difficult to know if the news is genuine or fake. Therefore it is very important to detect if the news is fake or not. ""Fake News"" is a term used to represent fabricated news or propaganda comprising misinformation communicated through traditional media channels like print, and television as well as nontraditional media channels like social media. Techniques of NLP and Machine learning can be used to create models which can help to detect fake news. In this paper we have presented six LSTM models using the techniques of NLP and ML. The datasets in comma-separated values format, pertaining to political domain were used in the project. The different attributes like the title and text of the news headline/article were used to perform the fake news detection. 
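Initialising an LSTM classifier's embedding layer from pre-trained word vectors, as the GloVe/Word2vec models in the entry concluding below do, usually means copying known vectors into an embedding matrix and freezing it. The sketch below uses a tiny hand-made vocabulary and 4-dimensional placeholder vectors instead of real GloVe files.

```python
# Sketch: an LSTM fake-news classifier whose Embedding layer is initialised from
# pre-trained word vectors. The tiny vocabulary and 4-dim vectors are placeholders
# standing in for a full GloVe/Word2vec vocabulary.
import numpy as np
from tensorflow.keras import layers, models, initializers

word_index = {"<pad>": 0, "breaking": 1, "shocking": 2, "budget": 3, "council": 4}
pretrained = {"breaking": [0.1, 0.2, 0.0, 0.4], "budget": [0.5, 0.1, 0.3, 0.2]}

dim = 4
matrix = np.zeros((len(word_index), dim))
for word, idx in word_index.items():
    if word in pretrained:
        matrix[idx] = pretrained[word]            # copy known vectors, leave the rest zero

model = models.Sequential([
    layers.Input(shape=(None,)),                  # variable-length token-id sequences
    layers.Embedding(len(word_index), dim,
                     embeddings_initializer=initializers.Constant(matrix),
                     trainable=False),            # frozen pre-trained embeddings
    layers.LSTM(16),
    layers.Dense(1, activation="sigmoid"),        # fake vs real
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```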
The results showed that the proposed solution performs well in terms of providing an output with good accuracy, precision and recall. The performance analysis made between all the models showed that the models which have used GloVe and Word2vec method work better than the models using TF-IDF. Further, a larger dataset for better output and also other factors such as the author ,publisher of the news can be used to determine the credibility of the news. Also, further research can also be done on images, videos, images containing text which can help in improving the models in future. Keywords: Fake news detection, LSTM(long short term memory),Word2Vec,TF-IDF,Natural Language Processing.",Fake News Detection "Fake news has become a major concern due to its spread on social media. To combat this, various machine learning (ML) techniques have been proposed. However, there is a lack of research on the performance of transformer models using datasets from a wide range of domains. This paper investigates the performance of ML algorithms on three fake news datasets: LIAR, FNC-1 and Balanced Dataset for Fake News Analysis. Pretrained transformer language models such as BERT, RoBERTa, ALBERT and DistilBERT were chosen for this paper. The performance of the models was consistent across all datasets. RoBERTa obtained an accuracy of 69% when trained on the LIAR dataset, an 11% improvement over the existing traditional and deep learning ML model implementations, and an accuracy of 97% when trained on the FNC-1 dataset, proving to be the best-performing model across all the fake news detection datasets utilized in the experiments. DistilBERT trains at a significantly faster rate than the other three variants. The experimental results from the paper can help the research community to continue investigating and gain insights into fake news detection.",Fake News Detection "Fake news is incorrect information. With the advent of false news on social media and other platforms, it is more important than ever to know the difference. Fake news contributes to riots, mayhem, mob violence, and other social and economic turmoil. Fake news can be created to purposely mislead or deceive readers, promote a biased point of view, a cause or goal, or just for fun. Fake news can be shared on social media, printed in publications due to political pressure, etc. Misinformation may cause public discontent, riots, and even distrust amongst individuals or nations. It also has the ability to tap into broad public sentiment in novel ways. The emergence of fake news has radically transformed news and media coverage. In recent years, people's news consuming patterns have altered. The internet has become the major source of information. However, most online content is untrustworthy and may even be deceptive. Humans can't identify certain fake news from real news. Fake news is created to mislead or deceive readers, support a biased point of view, a cause or goal, or just for fun. Fake news can be spread via unauthenticated user IDs, social media, and newspaper publishing. The proposed project will use hybrid machine learning model to detect bogus news in order to recognize the fake news easier.",Fake News Detection "In the age of digital media, fake news is a serious problem because it spreads misinformation and harms individuals, organizations, and even entire nations which is a challenging aspect. This study proposes a machine learning approach for detecting fake news. 
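Comparing the four classical learners named in the entry that concludes below (logistic regression, decision tree, random forest, and passive-aggressive) on TF-IDF features is straightforward in scikit-learn; the snippet is an illustrative sketch with placeholder articles, not the paper's evaluation.

```python
# Sketch: compare four classical classifiers on TF-IDF features of news text.
# Articles and labels are placeholders.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, PassiveAggressiveClassifier
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

articles = ["aliens endorse candidate, experts stunned", "parliament passes budget bill",
            "miracle cure hidden by doctors", "central bank holds interest rates"]
labels = [1, 0, 1, 0]    # 1 = fake, 0 = real

for clf in (LogisticRegression(max_iter=1000), DecisionTreeClassifier(),
            RandomForestClassifier(n_estimators=100), PassiveAggressiveClassifier()):
    pipe = make_pipeline(TfidfVectorizer(), clf)
    pipe.fit(articles, labels)
    print(type(clf).__name__, "train accuracy:", pipe.score(articles, labels))
```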
In the proposed approach, a categorization model is developed with four different types of machine learning algorithms, evaluating the content and aesthetic components of news stories. The performance of the proposed model is analyzed by using a large dataset of real and fake news articles and the results show that it outperforms many existing systems. The proposed findings demonstrate the potential of machine learning techniques, such as logistic regression, decision tree, random forest, and passive aggressive algorithms to address the fake news detection challenges.",Fake News Detection "News is the most vital source of information for common people about what is happening around the world. Newspapers are an authentic source of news, but nowadays social networks have become the emerging source of news. Due to easy access to these social networks, the news can be easily manipulated which gives rise to fake news. Fake news can be used for economic as well as political benefits. It can be used as a weapon to spread hate among the community which can harm society. So it is crucial to detect fake news to avoid its consequences. There is no existing platform that can verify the news and categorize it. This paper proposes a system that can be used for real-time prediction of news to be real or fake. This system is based on natural language processing to extract features from the data and then these features are used for the training of machine learning classifiers such as Naive Bayes, Support Vector Machine (SVM), Random Forest (RF), Stochastic Gradient Descent (SGD), and Logistic Regression (LR). Each of the classifier performance is evaluated on various parameters. Then the best performing classifier is deployed as a website using flask API for real-time prediction of the news",Fake News Detection "The phenomenon of fake news disseminates fabricated information presented in a news-like fashion, posing significant challenges for news agencies regarding accurate processing and verification. The dissemination of fake material could incite or defame prominent entities or individuals and may even serve the personal agendas of its makers, thereby posing societal encounters. Differentiating between fake and real news poses a substantial problem, mostly stemming from the constraints imposed by limited topic knowledge and time limitations. Based on the survey findings, Banten, DKI Jakarta, and West Java are the evident regions with the highest exposure to hoaxes and misinformation among their populations. An artificial intelligence (AI) methodology, the transformers, employs natural language processing (NLP), leveraging deep learning architectures to mitigate fake news. Transformers use a robust attention mechanism to concurrently process textual data and generate comprehensive and contextually informed word representations. A prior investigation demonstrates the higher performance of BERT, a transformer-based model, compared to the non-transformer approach. However, several studies have indicated that the performance of BERT models is potentially enhanced by utilizing advanced variants such as ALBERT and RoBERTa. Thus, further investigation is necessary to improve the utilization of modified BERT models in detecting fabricated news in Bahasa Indonesia. 
This study investigates various transformer models and shows that ALBERT performs better than the other models, achieving an accuracy of 87.6%, precision of 86.9%, F1-score of 86.9%, and a run-time of 174.5 seconds per epoch.",Fake News Detection "Fake news has been evolving around us for a very long time. The gradual growth of social media platforms has provided an easily accessible platform for publishing news in front of an audience, whether that news is true or false. The spread of fake news has increased compared to earlier times. Nowadays, fake news detection has become a tough challenge for both Natural Language Processing (NLP) and Machine Learning (ML) experts. For detecting fake news, fact-checking is also very important. In this paper, we focus on the analysis of recently published papers in this domain and the analysis of different techniques for detecting fake news. Through this survey, we gain insight into the detection process of fake news using different natural language processing, machine learning, and deep learning techniques.",Fake News Detection "The Internet and social media have altered how individuals access news in the age of instantaneous information distribution. While this development has increased access to information, it has also created a significant problem: the spread of fake news and information. Fake news is rapidly spreading on digital platforms, which has a negative impact on the media ecosystem, public opinion, decision-making, and social cohesion. Natural Language Processing (NLP), which offers a variety of approaches to identify content as authentic, has emerged as a potent weapon in the growing war against disinformation. This paper takes an in-depth look at how NLP technology can be used to detect fake news and reveals the challenges and opportunities it presents.",Fake News Detection "There has been a rapid and vast increase in fake news, defined as probably incorrect information spread with the goal of fraud. The spread of this misinformation is a severe danger that can cause political polarisation. Fake news can turn people against political parties or make them hate a particular community. Thus, fake news is a phenomenon that significantly impacts people's social lives. This system will use NLP techniques to detect fake and misleading news stories from non-reputable sources. We apply the term frequency-inverse document frequency (TF-IDF) of bi-grams and probabilistic context-free grammar (PCFG) detection to a corpus of about 11,000 articles. We find that TF-IDF of bi-grams fed into a Stochastic Gradient Descent model performs exceptionally well at identifying non-credible sources, with PCFGs having slight effects on recall. However, we are sceptical about the generalizability of these findings and include ample discussion on the next steps for exploration in this space.",Fake News Detection "Fighting fake news is a difficult and challenging task. With an increasing impact on the social and political environment, fake news exerts an unprecedentedly dramatic influence on people's lives. In response to this phenomenon, initiatives addressing automated fake news detection have gained popularity, generating widespread research interest. However, most approaches targeting English and low-resource languages experience problems when devising such solutions. This study focuses on the progress of such investigations, while highlighting existing solutions, challenges, and observations shared by various research groups.
In addition, given the limited amount of automated analyses performed on Romanian fake news, we inspect the applicability of the available approaches in the Romanian context, while identifying future research paths.",Fake News Detection "Fake news is a big concern since it spreads widely over social media and other media channels, creating significant social and national damage with devastating consequences. In order to address this issue, substantial research how to be recognized on fake news detection has been done. The purpose of this study is to analyze the existing research on fake news detection and select the best conventional machine learning frameworks to develop an algorithm for supervised machine learning. The algorithm will be able to classify false reports as true or fraudulent using textual analysis methods such as NLP. The proposed process involves data preprocessing and vectorization, where the NLP library will be used to perform tokenization and feature extraction of text data, utilizing tools such as Count Vectorizer and Tiff Vectorizer. Further, feature selection methods will be employed to evaluate and determine the most fitting features to achieve the highest precision based on confusion matrix results. The outcome of the decision tree classifier algorithm gives the accuracy of 90%.",Fake News Detection "Fake news production, accessibility, and consumption have all increased with the rise of internet-connected gadgets and social media platforms. A good fake news detection system is essential because the news readers receive can affect their opinions. Several works on fake news detection have been done using machine learning and deep learning approaches. Recently, the deep learning approach has been preferred over machine learning because of its ability to comprehend the intricacies of textual data. The introduction of transformer architecture changed the NLP paradigm and distinguished itself from recurrent models by enabling the processing of sentences as a whole rather than word by word. The attention mechanisms introduced in Transformers allowed them to understand the relationship between far-apart tokens in a sentence. Numerous deep learning works on fake news detection have been published by focusing on different features to determine the authenticity of a news source. We performed an extensive analysis of the comprehensive NELA-GT 2020 dataset, which revealed that the title and content of a news source contain discernible information critical for determining its integrity. To this objective, we introduce ‘FakeNews Transformer’ — a specialized Transformer-based architecture that considers the news story’s title and content to assess its veracity. Our proposed work achieved an accuracy of 74.0% on a subset of the NELA-GT 2020 dataset. To our knowledge, FakeNews Transformer is the first published work that considers both title and content for evaluating a news article; thus, we compare the performance of our work against two BERT and two LSTM models working independently on title and content. Our work outperformed the BERT and LSTM models working independently on title by 7.6% and 9.6%, while performing better than the BERT and LSTM models working independently on content by 8.9% and 10.5%, respectively.",Fake News Detection "News plays a significant role in shaping people's beliefs and opinions. Fake news has always been a problem, which wasn't exposed to the mass public until the past election cycle for the 45th President of the United States. 
While quite a few detection methods have been proposed to combat fake news since 2015, they focus mainly on linguistic aspects of an article without any fact checking. In this paper, we argue that these models have the potential to misclassify fact-tampering fake news as well as under-written real news. Through experiments on Fakebox, a state-of-the-art fake news detector, we show that fact tampering attacks can be effective. To address these weaknesses, we argue that fact checking should be adopted in conjunction with linguistic characteristics analysis, so as to truly separate fake news from real news. A crowdsourced knowledge graph is proposed as a straw man solution to collecting timely facts about news events.",Fake News Detection "With the proliferation of e-commerce platforms, the authenticity of online reviews has become increasingly crucial. In this research, we present a method for detecting spam reviews in e-commerce platforms, employing the Random Forest algorithm. Leveraging a dataset comprising reviews from the Amazon Yelp dataset, we employ a combination of Natural Language Processing (NLP) techniques and text processing methods to uncover underlying patterns distinguishing between genuine and fake reviews. Real-time data scraping from the Amazon website facilitated the acquisition of a diverse range of reviews, subsequently stored in a CSV file for analysis. The reviews stored in the CSV file are fed to the model for prediction. Our model effectively discerns between authentic and spam reviews, offering a valuable tool for maintaining the integrity of e-commerce platforms and ensuring informed consumer decision-making. Keywords: Web Scraping, Text Mining, Sentiment Analysis.",Fake Review Detection "Fake online reviews are becoming a major problem nowadays with the growing number of online purchases. Recently, natural language processing (NLP) methods that analyze the content of reviews have been increasingly used to detect fake reviews. The problem becomes extremely difficult due to the lack of reliable data caused by the difficulty in labeling fake and honest reviews. In this paper, we not only provide a structured taxonomy of this topic, but also present extensive experiments using a state-of-the-art language model, BERT (Bidirectional Encoder Representations from Transformers), on different online review datasets. By efficiently fine-tuning this model, we outperform existing detection models by achieving 91% accuracy on the balanced crowdsourced dataset of hotel, restaurant, and doctor reviews and 73% accuracy on the imbalanced third-party Yelp dataset of restaurant reviews.",Fake Review Detection "We are in the era of the internet, where people are more techno-savvy and surf the internet before buying a single item. Since buying a product online is easy and convenient these days, people are also tending towards it, as it saves time and sometimes money. Many branded products can also be bought without thinking much about quality, as the name is enough for a branded item. Nowadays, various vendors also advertise their products through social media like Facebook, WhatsApp, etc. Thus, it is extremely important to check their reliability before buying a product. A buyer or client wants to check the opinions of other buyers regarding that product before purchasing. Most of the time, a review given by a user is not considered genuine, as the review was given without buying the product. Sometimes a review contains unrelated words. This makes a false impression on another customer, who may then cancel the purchase.
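Active learning for review labelling, as in the entry that concludes below, is often driven by uncertainty sampling: the model asks an annotator to label the reviews it is least confident about. The sketch below is a generic illustration with placeholder reviews and a single query round, not the paper's method.

```python
# Sketch: one round of uncertainty-sampling active learning for review labels.
# Reviews are placeholders; in practice the selected items go to a human annotator.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled = ["best product ever buy now five stars", "arrived late but works as described"]
y = np.array([1, 0])                                # 1 = fake, 0 = genuine
unlabeled = ["amazing amazing amazing must buy", "battery lasts about two days"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(labeled, y)

probs = clf.predict_proba(unlabeled)
uncertainty = 1 - probs.max(axis=1)                 # low max-probability = uncertain
query_order = np.argsort(-uncertainty)              # most uncertain first
print("ask the annotator about:", [unlabeled[i] for i in query_order[:1]])
```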
Such an activity is often referred to as a fake review. Thus, detecting fake reviews has become an important issue, both for customers to make better purchase decisions and for traders to make their products reliable. This paper presents an active learning method for detecting fake and genuine reviews.",Fake Review Detection "Fake reviews have been a major problem in online platforms, with detrimental effects on customer trust. Different machine learning and natural language processing methods have been used recently to classify fake reviews from authentic ones. Due to the lack of labelled and reliable data in this domain, the right selection of input features plays a critical role in extracting the most useful and relevant information from the review content. This research investigates the impact of inconsistency between a review's rating and its sentiment score in detecting deceptive reviews. Our approach presents two sets of experiments: first with and then without the inclusion of a rating-sentiment inconsistency feature. We use deep learning classifiers and GloVe (Global Vectors for Word Representation) word embeddings to compare the performance of fake review detection models. The results indicate that incorporating the inconsistency feature in the BiGRU (Bidirectional Gated Recurrent Unit) classifier can lead the model to achieve more than 90% accuracy. However, further study is needed to demonstrate its effectiveness, since the inconsistency feature also leads to a reduction in accuracy for some other models.",Fake Review Detection "With the development of e-commerce, the number of counterfeit products is increasing and the rights and interests of customers have been seriously infringed. A product can be evaluated objectively by reviews and ratings. However, the topics of reviews are diverse while customers tend to focus on only a few aspects, and many reviews have wrong scores that are inconsistent with the content. Natural language processing (NLP) is helpful in mining the opinions of reviews automatically. In this paper, the goal is to improve fake product detection through text classification technology. Specifically, we use CNN and LSTM models to judge whether a review is quality-related or not, which can remove useless reviews, and aspect-based sentiment analysis with an attention mechanism to determine the sentiment polarity of the aspect in question in order to get ratings for different aspects. We experiment on the Self-Annotated datasets and the results show that by using text classification technology, the performance of fake product detection can be greatly improved.",Fake Review Detection "Fake review detection and its elimination from a given dataset using different Natural Language Processing (NLP) techniques is important in several respects. In this article, two different Machine Learning (ML) models are trained on a fake review dataset to predict how genuine the reviews in a given dataset are. The rate of fake reviews in the e-commerce industry and on other platforms is increasing, even though users depend on product reviews to trust a company's products before purchasing items found online on different websites and applications. So this fake review problem must be addressed so that large e-commerce companies such as Flipkart, Amazon, etc. can rectify the issue and eliminate fake reviewers and spammers, preventing users from losing trust in online shopping platforms.
This model can be used by websites and applications with a few thousand users, where it can predict the authenticity of reviews, based on which the website owners can take necessary action. This model is developed using Naïve Bayes and random forest methods. By applying these models one can instantly know the number of spam reviews on a website or application. To counter such spammers, a sophisticated model is required, which needs to be trained on millions of reviews. In this work the “Amazon Yelp dataset” is used to train the models; only a very small portion of it is used for training, on a very small scale, and the approach can be scaled up to achieve higher accuracy and flexibility.",Fake Review Detection "The rise of online platforms and e-commerce has revolutionized consumer behavior, making product reviews a vital source of information for purchasing decisions. However, the prevalence of fake reviews has undermined the credibility and trustworthiness of online reviews, leading to the need for effective fake review detection systems. This research paper presents a novel approach to address this challenge by leveraging supervised machine learning techniques for the detection of fake reviews. The proposed system begins by constructing a comprehensive dataset consisting of genuine and fake reviews, along with relevant features such as review text, reviewer information, and rating patterns. These features are carefully selected to capture the distinguishing characteristics of fake reviews, including the presence of biased sentiments, unnatural language patterns, and inconsistent reviewer behavior. A supervised machine learning model, such as a support vector machine (SVM), KNN, or Logistic Regression, is trained on the labeled dataset to learn the complex patterns and relationships between the review features and their authenticity. The model undergoes an iterative process of feature engineering, selection, and hyperparameter tuning to optimize its performance.",Fake Review Detection "Sentiment analysis (SA) is based on natural language processing (NLP) techniques used to extract the user's feelings and opinions about any manufactured goods or services provided. Opinion mining is the other name for sentiment analysis. Sentiment analysis is very useful in the decision-making process. With greater Internet use, SA is a powerful tool for studying the opinions of customers about any product or services provided by any business organization or a company. Several approaches and techniques have come into existence in past years for sentiment analysis. In this paper, we offer an exhaustive description of the techniques used for SA, the approaches used for SA, and the applications of sentiment analysis.",Fake Review Detection "This paper investigates the potential of semi-supervised Generative Adversarial Networks (GANs) to fine-tune pretrained language models in order to classify Bengali fake reviews from real reviews with a few annotated data. With the rise of social media and e-commerce, the ability to detect fake or deceptive reviews is becoming increasingly important in order to protect consumers from being misled by false information. Any machine learning model will have trouble identifying a fake review, especially for a low resource language like Bengali.
We have demonstrated that the proposed semi-supervised GAN-LM architecture (generative adversarial network on top of a pretrained language model) is a viable solution for classifying Bengali fake reviews, as the experimental results suggest that even with only 1024 annotated samples, BanglaBERT with semi-supervised GAN (SSGAN) achieved an accuracy of 83.59% and an F1-score of 84.89%, outperforming other pretrained language models - BanglaBERT generator, Bangla BERT Base and BanglaElectra - by almost 3%, 4% and 10% respectively in terms of accuracy. The experiments were conducted on a manually labeled food review dataset consisting of a total of 6,014 real and fake reviews collected from various social media groups. Researchers who are experiencing difficulty recognizing not just fake reviews but other classification issues owing to a lack of labeled data may find a solution in our proposed methodology.",Fake Review Detection "The world of the internet has a great impact on online shopping, as new buyers use the experience of others about a product or service. An opinion or review comment given by someone in order to ruin the reliability of a product or service is considered a fake review or spam. Thus, it is extremely important to verify review reliability before buying a product. Natural language processing techniques are widely used for spam detection. Different NLP techniques have been proposed earlier for detecting review spam, such as active learning, n-gram patterns, etc. This paper presents an active learning method using different classification algorithms for detecting fake and genuine reviews. The classification algorithms suggested and implemented are rough-set, decision tree, random forest and support vector machine. This paper studies the effectiveness of the algorithms used for fake review detection.",Fake Review Detection "Fake review detection has the characteristics of huge stream data processing scale, unlimited data increment, dynamic change, and so on. However, the existing fake review detection methods mainly target limited and static review data. In addition, deceptive fake reviews have always been a difficult point in fake review detection due to their hidden and diverse characteristics. To solve the above problems, this article proposes a fake review detection model based on sentiment intensity and PU learning (SIPUL), which can continuously learn the prediction model from the constantly arriving streaming data. First, when the streaming data arrive, the sentiment intensity is introduced to divide the reviews into different subsets (i.e., a strong sentiment set and a weak sentiment set). Then, the initial positive and negative samples are extracted from the subsets using the marking mechanism of selection completely at random (SCAR) and Spy technology. Second, a semi-supervised positive-unlabeled (PU) learning detector is built based on the initial samples to detect fake reviews in the data stream iteratively. According to the detection results, the data of the initial samples and the PU learning detector are continuously updated. Finally, the old data are continually deleted according to the historical record points, so that the training sample data remain within a manageable size and overfitting is prevented. Experimental results show that the model can effectively detect fake reviews, especially deceptive reviews.",Fake Review Detection "In a web-based world driven by e-commerce, customers are quick to turn to online shopping services.
However, the products available for purchase cannot be personally inspected so buyers turn to online product reviews. Consumers trust these reviews and are likely to spend more at stores with good evaluations. Sellers are well aware of this phenomenon and are not averse to using unethical methods to boost the reputation of their own products, or demerit the products of their rivals. In other words, fake reviews are quite rampant and are responsible for heavily affecting consumer purchasing decisions and business profits. In response, many fake review detection models have been extensively explored in the last decade. However, there is still a lack of robustness in these approaches. Our research addresses this gap in the field of fake review detection. We deploy the BERT and LSTM models coupled with the Monte Carlo Dropout (MCD) technique, on the Yelp Labelled Dataset comprising 10,000 hotel reviews from North America. MCD provides a representation of uncertainty by randomly dropping neurons in multiple predictions of the network. This gives us an approximation of the uncertainty. Since, fake review detection is a risky task we employ MCD to make our system robust and reliable. Our study yields an accuracy of 91.75% using the MCD-embedded BERT model. It outperforms the LSTM model overall.",Fake Review Detection "Online reviews are often the primary factor in a customer’s decision to purchase a product or service, and are a valuable source of information that can be used to determine public opinion on these products or services. Because of their impact, manufacturers and retailers are highly concerned with customer feedback and reviews. Reliance on online reviews gives rise to the potential concern that wrongdoers may create false reviews to artificially promote or devalue products and services. This practice is known as Opinion (Review) Spam, where spammers manipulate and poison reviews (i.e., making fake, untruthful, or deceptive reviews) for profit or gain. Since not all online reviews are truthful and trustworthy, it is important to develop techniques for detecting review spam. By extracting meaningful features from the text using Natural Language Processing (NLP), it is possible to conduct review spam detection using various machine learning techniques. Additionally, reviewer information, apart from the text itself, can be used to aid in this process. In this paper, we survey the prominent machine learning techniques that have been proposed to solve the problem of review spam detection and the performance of different approaches for classification and detection of review spam. The majority of current research has focused on supervised learning methods, which require labeled data, a scarcity when it comes to online review spam. Research on methods for Big Data are of interest, since there are millions of online reviews, with many more being generated daily. To date, we have not found any papers that study the effects of Big Data analytics for review spam detection. The primary goal of this paper is to provide a strong and comprehensive comparative study of current research on detecting review spam using various machine learning techniques and to devise methodology for conducting further investigation.",Fake Review Detection "Opinion review became important thing for both individuals and organizations. 
Although opinions were subjective, the collection of opinions carried views, experiences, and interests that helped individuals make buying decisions and organizations improve their products from existing feedback. However, many entities took advantage of this by creating deceptive fake reviews. In this research, the authors created a fake review detection system using the Convolutional Neural Network (CNN) method with feature extraction. The feature extractions included word embedding and lexicon embedding with naïve concatenation, multichannel, and separated convolution integration. The research aimed to obtain accuracy, precision, recall, and F1-score for both features with unbalanced and balanced classes. The authors used a literature study to collect data and performed simulations. The authors' models were evaluated on both 6,632 hotel reviews and 60,763 restaurant reviews in the Yelp filtered datasets, with a ratio of 1 fake review to 6 genuine reviews in both topics. Our analysis of quantitative data revealed that the lexicon embedding feature had a significant effect on the base CNN model. The lexicon embedding separated convolution exhibited the best overall performance, achieving 78% accuracy, 81% recall, 74% precision, and 77% F1-score in balanced classes.",Fake Review Detection "Fake review detection and its elimination from the given data set using different natural language processing techniques is important in several aspects. The fake review dataset is trained by applying two different machine learning models to predict how genuine the reviews in a given data set are. The fake review problem must be addressed by large ecommerce industries such as Amazon, Flipkart, etc.",Fake Review Detection "Customer reviews play an important role in influencing purchasing decisions on ecommerce websites, which are becoming increasingly popular for online shopping. The appearance of phoney reviews, on the other hand, might have a substantial impact on the credibility and dependability of these platforms. As a result, fake review identification has developed as a significant study field, with machine learning, artificial intelligence, and data science techniques emerging as promising approaches to solving this issue. In this review paper, we present a complete overview of the most recent strategies for detecting fraudulent reviews on ecommerce websites, with a focus on the use of machine learning, artificial intelligence, and data science. We evaluate the usefulness of several approaches, such as feature-based, behaviour-based, and deep learning-based techniques, in detecting false reviews. We also discuss the obstacles and future directions in fake review detection research, including imbalanced datasets, adversarial attacks, multimodal fake reviews, real-time detection, explainability, ethical implications, and domain knowledge incorporation. The goal of this review article is to provide a thorough overview of the present research environment in false review identification on ecommerce websites utilising machine learning, artificial intelligence, and data science, as well as to guide future research in this area.",Fake Review Detection "Online reviews become a valuable source of information that indicates the overall opinion about products and services, which may affect decision-making processes such as purchasing a product or service.
Fake reviews are considered spam reviews, which may have a great impact on online marketplace behavior. Extracting useful features from a review's text using Natural Language Processing (NLP) is not a straightforward step; in addition, it affects the overall performance and results. Many types of features could be used for conducting this task, such as Bag-of-Words, linguistic features, word counts and n-gram features. In this paper, we will investigate the effects of using two different feature selection methods on spam review detection: Bag-of-Words and word counts. Different machine learning algorithms were applied, such as Support Vector Machine, Decision Tree, Naïve Bayes and Random Forest. Experiments were conducted on a labeled balanced dataset of hotel reviews. The efficiency will be evaluated according to many evaluation measures such as precision, recall and accuracy.",Fake Review Detection "In today's world, consumer reviews are a part of everyday life. Users read reviews before a purchase, or use them to find the best product through comparison of product reviews. From the customer's viewpoint, reviews play a vital role in making a decision regarding an online purchase, while spammers write fake reviews which can boost or defame the reputation of any product. Spammers use these platforms illegally: financial benefits/incentives are involved in writing fake reviews, and spammers try to achieve their motive financially or to defeat a competitor, which causes an explosive growth of sentiment/opinion spamming through forged/fake reviews. Present studies and research analyse and categorize opinion spamming into three different detection targets (opinion spam, spammers, and collusive opinion spammer groups) so that false opinions can be avoided. Opinion spamming is further divided into three different types based on textual and linguistic, behavioral, and relational features. The motivation behind this work is to study the dynamics of spam diffusion and extract the latent features that fuel the diffusion process. The user-based features and content-based features have been used for the categorization of spam/non-spam content. The contributions of this work are building the dataset, which serves as the ground truth for classifying/analyzing the variation of fraud/genuine and non-spam/spam information diffusion, and analyzing the effects of topics on the diffusibility of non-spam and spam evidence/information. The paper carries out an in-depth analysis of Twitter spam diffusion.",Fake Review Detection "After the pandemic, our overall life is changing and challenging, which is why our demands are changing; now, we are focused on wellness, sustainability, technology, and the gig economy, and because of these trends we can observe a reflection in the changing desired needs and limitations of sellers and customers. Many customers can post a review on any website after making a purchase, whether it is an online purchase or an offline retail purchase. When customers buy a product online, they check the product reviews. This is very important for today's e-commerce product decisions. There is a financial gain associated with writing fake reviews, which is why there has been a significant increase in misleading statements about certain product reviews on websites. Misleading reviews are dangerous reviews. Positive product reviews can attract customers and increase sales.
Negative product reviews can reduce demand for that product and reduce sales. These misleading reviews are dangerous to a product's reputation. In this paper, we use Support Vector Machines (SVM), one of the most popular supervised learning algorithms, used for both classification and regression problems; however, it is mainly used for machine-learning classification problems.",Fake Review Detection "Aspect-based sentiment analysis (ABSA) is an NLP task that entails processing user-generated reviews to determine (i) the target being evaluated, (ii) the aspect category to which it belongs, and (iii) the sentiment expressed towards the target and aspect pair. In this article, we propose transforming ABSA into an abstract summary-like conditional text generation task that uses targets, aspects, and polarities to generate auxiliary statements. To demonstrate the efficacy of our task formulation and a proposed system, we fine-tune a pre-trained model for conditional text generation tasks to get new state-of-the-art results on a few restaurant domains and urban neighborhoods domain benchmark datasets.",Aspect-Based Sentiment Analysis (ABSA) "Sentiment analysis is one of the most important fields of natural language processing due to its wide range of applications and the benefits associated with using it. It is defined as identifying the sentiment polarity of natural language text.
Researchers have recently focused their attention on Arabic SA due to the massive amounts of user-generated content on social media and e-commerce websites in the Arabic world. Most of the research in this field works on the sentence and document levels. This study tackles aspect-level sentiment analysis for the Arabic language, which is a less studied version of SA. Because Arabic NLP is challenging and there are few available Arabic resources and many Arabic dialects, limited studies have attempted aspect-based sentiment analysis on Arabic texts. Specifically, this study considers two ABSA tasks: aspect term polarity and aspect category polarity, using text normalization of the Arabic dialect before performing the classification task. We present a Seq2Seq model for dialect normalization that can serve as a pre-processing step for the ABSA classification task by reducing the number of OOV words. Thus, the model's accuracy increased. The results of the conducted experiments show that our models outperformed the existing models in the literature on both tasks and datasets.",Aspect-Based Sentiment Analysis (ABSA) "Aspect-based financial sentiment analysis (ABFSA) is a fine-grained task that can enrich and sharpen financial analysis by identifying sentiments towards specific entities (e.g., company, stock). As the application of ABSA to professional language, ABFSA is a challenging task requiring extensive domain knowledge while remaining understudied. Numeral understanding is crucial for financial text analysis, but existing NLP models for ABFSA lack such ability by mainly treating numerals as plain text. In addition, most studies on knowledge incorporation disregard necessary domain-specific connotations or suffer from the low coverage issue. In this paper, we propose a novel numeral-oriented network with a multi-source affective knowledge refinement strategy (NumAKEN) for ABFSA. NumAKEN utilizes a numeral encoding method based on DigitCNN to capture critical numeric concepts such as magnitude and category. A multi-source affective knowledge fusion strategy is designed for hybrid lexicon construction and incorporation, which can guide the model to capture significant sentiment clues as well as alleviate conflicting and coverage issues. Extensive experiments on two datasets illustrate that our NumAKEN model outperforms all state-of-the-art methods and verify the effectiveness of our model.",Aspect-Based Sentiment Analysis (ABSA) "The task of Aspect-based Opinion Mining (AbOM), an emerging research area where aspects are mined, the corresponding opinions are scrutinized, and sentiments continuously change, is gaining increased attention with the growing feedback of clients and the community across various social media streams. The gigantic improvements of deep learning (DL) techniques in natural language processing (NLP) tasks have motivated the research community to introduce novel DL models for AbSA, each investigating diverse research points from different perspectives that cope with imminent problems and composite circumstances of AbOM. Consequently, in this survey paper, we concentrate on the limitations of the current studies and challenges relevant to the mining of various aspects and their pertinent opinions, interrelationship delineations among different aspects, interactions, dependencies and contextual-semantic associations among various entities for enhanced opinion precision, and estimation of the automaticity of opinion polarity development.
A laborious investigation of the latest advancements is discussed according to their contribution toward spotlighting and alleviating the shortcomings related to Aspect Extraction (AE), AbOM, and opinion progression (OP). The reported performance for each scrutinized study of Aspect Extraction and Aspect Opinion Analysis is also given, revealing the numerical evaluation of the presented approach. Future research trends are introduced and deliberated by critically analysing the existing recent approaches, which will be supportive for researchers and advantageous for refining aspect-based opinion classification.",Aspect-Based Sentiment Analysis (ABSA) "Aspect-based Sentiment Analysis (ABSA) is a subdomain of Sentiment Analysis (SA) that focuses on detecting the sentiment toward features of a product or particular aspects, experience, or service. ABSA aims to go beyond simple sentiment classification of a sentence or document and present a more granular study of sentiment towards different aspects. ABSA has several real-time applications, which include social media monitoring, customer feedback analysis, and product reviews. Many difficulties exist in ABSA, including dealing with language variability and complexity, sentiment subjectivity, and managing multiple aspects in a single sentence. Recently, Deep Learning (DL) methods continued to be an active area of research and proved a promising model in ABSA. This study focuses on designing and developing ABSA models using DL concepts. The presented ABSA model aims to identify the sentiments in the direction of particular aspects or features of a product, service, or experience. The presented approach initially accomplishes diverse phases of data pre-processing to convert the input data into a meaningful form. In addition, the word2vec model is applied as a feature extraction approach. For sentiment analysis, three DL models are employed, namely Hopfield Network (HN), Convolutional Neural Network (CNN), and Bidirectional Long Short Term Memory (BiLSTM) approaches. The experimental validation of the DL models occurs utilizing a benchmark dataset. The simulation values highlighted that the CNN model exhibits improved sentiment classification results over other DL models.",Aspect-Based Sentiment Analysis (ABSA) "Aspect-Based Sentiment Analysis (ABSA) is an advanced NLP application that aims to identify aspect terms present in the given review and predict the sentiment associated with those aspect terms. ABSA is better than sentence-based sentiment classification because it considers the aspect terms present in the reviews to determine the sentiment rather than considering the individual sentence. Entrepreneurs could make use of ABSA to understand the customers' opinions about different aspects of their products or services. The task of Aspect-based sentiment analysis can be divided into two subtasks: Aspect Term Extraction (ATE) and Aspect Term Sentiment Classification (ATSC). In this paper, an SVM model is proposed for the task of ATE, and an Attention based LSTM model is proposed for the task of ATSC. The proposed models will be trained and tested on the SemEval-2014 dataset.",Aspect-Based Sentiment Analysis (ABSA) "While state-of-the-art NLP models have demonstrated excellent performance for aspect based sentiment analysis (ABSA), substantial evidence has been presented on their lack of robustness. This is especially manifested as significant degradation in performance when faced with out-of-distribution data.
Recent solutions that rely on counterfactually augmented datasets show promising results, but they are inherently limited because of the lack of access to explicit causal structure. In this paper, we present an alternative approach that relies on non-counterfactual data augmentation. Our proposal instead relies on using noisy, cost-efficient data augmentations that preserve semantics associated with the target aspect. Our approach then relies on modelling invariances between different versions of the data to improve robustness. A comprehensive suite of experiments shows that our proposal significantly improves upon strong pre-trained baselines on both standard and robustness-specific datasets. Our approach further establishes a new state-of-the-art on the ABSA robustness benchmark and transfers well across domains.",Aspect-Based Sentiment Analysis (ABSA) "Recent years have seen a considerable evolution in online shopping, as well as the ability to get text-based client feedback, comments, and recommendations. The sentiment analysis method examines a large collection of text data to determine the customer's opinion. One method for analyzing customer sentiment that is provided by natural language processing is AbSA. Understanding precisely how consumers feel about a product is the aim of AbSA. In this research, we discuss aspect-based sentiment analysis, which identifies the sentiment by using the aspect term. To illustrate our framework, we used user reviews for headphones and earphones that were gathered from the Amazon website. We used NLP to preprocess the data, and we then used the Pachinko Allocation model (PAM) to extract the aspect term and polarity from the review. Machine learning techniques have been explored to evaluate the model.",Aspect-Based Sentiment Analysis (ABSA) "AraMA comprises 10,750 Google Maps reviews for restaurants in Riyadh, Saudi Arabia. It covers four aspect categories (food, environment, service, and price) along with four sentiment polarities: positive, negative, neutral, and conflict. All AraMA reviews are labeled with at least two aspect categories. A second version, named AraMAMS, includes reviews labeled with at least two different sentiments, making it the first Arabic multi-aspect, multi-sentiment dataset. Aspect-based sentiment analysis (ABSA) is a field of SA that goes one step further than SA by automatically assigning sentiments to certain features or aspects in the text.",Aspect-Based Sentiment Analysis (ABSA) "Student recruitment and retention are important issues for all higher education institutions. Constant monitoring of student satisfaction levels is therefore crucial. Traditionally, students voice their opinions through official surveys organized by the universities. In addition to that, nowadays, social media and review websites such as “Rate my professors” are rich sources of opinions that should not be ignored. Automated mining of students' opinions can be realized via aspect-based sentiment analysis (ABSA). ABSA is a sub-discipline of natural language processing (NLP) that focusses on the identification of sentiments (negative, neutral, positive) and aspects (sentiment targets) in a sentence. The purpose of this paper is to introduce a system for ABSA of free text reviews expressed in student opinion surveys in the Serbian language.
Sentiment analysis was carried out at the finest level of text granularity – the level of the sentence segment (phrase and clause). The presented system relies on NLP techniques, machine learning models, rules and dictionaries. The corpora collected and annotated for system development and evaluation comprise students' reviews of teaching staff at the Faculty of Technical Sciences, University of Novi Sad, Serbia, and a corpus of publicly available reviews from the Serbian equivalent of the “Rate my professors” website. The research results indicate that positive sentiment can successfully be identified with an F-measure of 0.83, while negative sentiment can be detected with an F-measure of 0.94. The F-measure for aspects ranges between 0.49 and 0.89, depending on their frequency in the corpus. Furthermore, the authors have concluded that the quality of ABSA depends on the source of the reviews (official students' surveys vs review websites). The system for ABSA presented in this paper could improve the quality of service provided by Serbian higher education institutions through a more effective search and summary of students' opinions. For example, a particular educational institution could very easily find out which aspects of their service the students are not satisfied with and to which aspects of their service more attention should be directed. To the best of the authors' knowledge, this is the first study of ABSA carried out at the level of the sentence segment for the Serbian language. The methodology and findings presented in this paper provide a much-needed basis for further work on sentiment analysis for the Serbian language, which is under-resourced and under-researched in this area.",Aspect-Based Sentiment Analysis (ABSA) "Aspect-based sentiment analysis (ABSA) is a granular-level sentiment analysis task that aims to detect the sentiment polarities of a specified aspect in the text. Recent research shows considerable interest in modelling target and context through attention networks to attain effective feature representations for sentiment detection. We have proposed a synthetic attention in bidirectional encoder representations from transformers (SA-BERT) with an extreme gradient boosting (XGBoost) classifier to classify sentiment polarity in the review dataset. The proposed model generates dynamic word vector encodings of the aspect and corresponding context of the reviews. Then, the aspect and context of the reviews are meaningfully represented by a transformer that can input the word vectors in parallel. After that, the model uses the synthetic attention mechanism to learn essential parts of context and aspects in reviews. Finally, the model places the overall representation in the sentiment classification layer to predict sentiment polarity. Both the proposed SA-BERT and SA-BERT-XGBoost models achieved the highest accuracy (92.02 and 93.71%) on the restaurant16 dataset and the highest F-1 scores (81.19 and 81.64%) on the restaurant14 dataset, respectively. The average accuracy and F1 scores are approximately 2 and 3.04% higher than the baseline models (DLCF-DCA-CDM, R-GAT+BERT, ASGCN-DG, AEN-BERT and BERT-PT). Therefore, the proposed models outperform the baseline models.",Aspect-Based Sentiment Analysis (ABSA) "Aspect-based Sentiment Analysis (ABSA) has gained significant attention in recent years because of its ability to provide more fine-grained insights into customer preferences.
ABSA, an NLP-based data mining technique, focuses on identifying user sentiments related to different aspects of a product or service. This paper presents a comprehensive overview of ABSA including aspect extraction, sentiment analysis, and aspect-based summarization techniques, and discusses its challenges, applications, and future scope. In addition, the paper presents a thorough comparative analysis of deep learning-based models for ABSA and also discusses the commonly used datasets. Deep learning-based techniques have produced better outcomes than the traditional ABSA methods.",Aspect-Based Sentiment Analysis (ABSA) "Aspect-based sentiment analysis (ABSA), a popular research area in NLP, has two distinct parts: aspect extraction (AE) and labelling the aspects with sentiment polarity (ALSA). Although distinct, these two tasks are highly correlated. The work primarily hypothesizes that transferring knowledge from a pre-trained AE model can benefit the performance of ALSA models. Based on this hypothesis, word embeddings are obtained during AE and subsequently fed to the ALSA model. Empirically, this work shows that the added information significantly improves the performance of three different baseline ALSA models on two distinct domains. This improvement also translates well across domains between AE and ALSA tasks.",Aspect-Based Sentiment Analysis (ABSA) "Sentiment analysis (SA) has been used to monitor social media, help customers, perform market research, and gauge client sentiment to improve businesses' products and services over time. The conventional approach of SA often focuses on classifying sentiments into three main polarities: positive, negative, and neutral. Aspect based sentiment analysis (ABSA) is an advantageous methodology that facilitates a comprehensive analysis of consumer feedback. This approach not only determines the sentiment polarity of the text but also identifies the specific attributes of the product or service. While substantial research on ABSA has been conducted in English and other widely spoken languages, Bengali-language research on this topic is notably limited. This study explores machine learning and deep learning methodologies for analyzing Bengali text, particularly emphasizing five different aspects and their corresponding emotions. This research employs several machine learning approaches, such as Decision Tree, Random Forest, XGBoost, Gradient Boosting, and Naive Bayes, to investigate both aspect and sentiment. The Gradient Boost model exhibits the best level of accuracy, reaching 67%, in the task of sentiment detection. On the other hand, the Random Forest model achieves an accuracy of 87.4% in the task of aspect determination. In addition, several deep learning algorithms, including BiLSTM, BiGRU, and BERT, have been used in this study. Compared to the BiLSTM and BiGRU models, BERT has the highest sentiment analysis and aspect accuracy, with 73.09% and 88.84%, respectively.",Aspect-Based Sentiment Analysis (ABSA) "Aspect-based sentiment analysis (ABSA) is an NLP task that classifies fine-grained sentiment towards one specific aspect of the same text. While Pretrained language models (PLMs) like BERT have been widely researched in ABSA, attaching aspects to their corresponding sentiments remains challenging. In this paper, we propose the Multi-task Multi-prompt learning Model (3M) to introduce more appropriate semantic information for different task aims.
In 3M, the ABSA task is divided into two subtasks, a classification task for aspect categorization and a generation task for sentiment analysis. For each specific task, we design a respective prompt learning approach to help the model understand ABSA more accurately. Noting the overfitting of BERT on ABSA, we utilize a float loss method to keep the training loss from becoming too small. The experimental results show that 3M achieves results comparable to the SOTA.",Aspect-Based Sentiment Analysis (ABSA) "The analysis of the opinions of customers and users has always been of great interest in supporting decision-making in many fields, especially in marketing. Sentiment analysis (SA) is the umbrella term for techniques and approaches that analyze users' sentiments, emotions, and opinions in text or other media. The need for a better understanding of these opinions paved the way to novel approaches that focus on the analysis of the sentiment related to specific features of a product, giving birth to the field of aspect-based sentiment analysis (ABSA). Despite the increasing interest in this discipline, there is still confusion regarding the basic concepts of ABSA: terms like sentiment, affect, emotion, and opinion are used as synonyms while they represent different concepts. This often leads to an incorrect analysis of the users' opinions. This work presents an overview of the state-of-the-art techniques and approaches for ABSA, highlighting the main critical issues related to current trends in this field. Following this analysis, a new reference model for SA and ABSA, namely the KnowMIS-ABSA model, is proposed. The model is grounded on the consideration that sentiment, affect, emotion and opinion are very different concepts and that it is profoundly wrong to use the same metric and the same technique to measure them. Accordingly, we argue that different tools and metrics should be adopted to measure each of the dimensions of an opinion. A qualitative case study, regarding product reviews, is proposed to motivate the advantages of the KnowMIS-ABSA model.",Aspect-Based Sentiment Analysis (ABSA) "Aspect-based Sentiment Analysis (ABSA) is a fine-grained form of SA that greatly benefits customers and the real world. ABSA of customer reviews has become a trendy topic because of the profuse information that is shared through these reviews. While SA, also known as opinion mining, helps to find opinions, ABSA greatly impacts the business world by converting these reviews into a finer form with aspects and opinions or sentiments. These review words are interwoven internally, which depends on the semantics besides the syntax, and sometimes there are long dependencies. Recently, hybrid methods for ABSA have become popular, but most of them merely consider whether syntax and long dependencies exist, thus missing the inclusion of multi-word and infrequent aspects. In addition, in most literature, sentiment classification is shown directly without calculating the sentiment scores in ABSA. To this effect, this paper proposes a hybrid of syntax dependency and the lexicon for aspect and sentiment extraction and polarity classification by a Logistic Regression (LR) classifier to overcome the issues in ABSA. The proposed method is able to address the challenges of ABSA in a number of ways. First, it is able to extract multi-word and infrequent aspects by using syntactic dependency information. Second, it is able to calculate sentiment scores, which provides a more nuanced understanding of the overall sentiment expressed towards an aspect.
Third, it is able to capture long dependencies between words by using syntactic dependency and semantic information. The proposed hybrid model outperformed the other methods by an average of 8-10 percent on the standard public dataset in terms of accuracy.",Aspect-Based Sentiment Analysis (ABSA) "Following the proposal of the transformer, many pre-trained language models were developed, and the sentiment analysis (SA) task was improved. In this paper, we propose a method that uses an auxiliary sentence to describe the aspects that a sentence contains to help sentiment prediction. The first step is aspect detection, which uses a multi-aspect detection model to predict all the aspects that the sentence has, combining the predicted aspects and the original sentence as the Sentiment Analysis (SA) model's input. The second step is out-of-domain aspect-based sentiment analysis (ABSA): training a sentiment classification model with one kind of dataset and validating it with another kind of dataset. Finally, we created two baselines, which use no aspects and all aspects as the sentiment classification model's input, respectively. Comparing the two baselines' performance to our method, we found that our method is effective.",Aspect-Based Sentiment Analysis (ABSA) "Aspect-based Sentiment Analysis (ABSA) is a complex task within the domain of Sentiment Analysis (SA) which deals with classifying the sentiments related to particular aspects (or targets) in the given text. The ABSA task has gained popularity due to its various related sub-tasks. This work provides a comparative study of various approaches used to solve the ABSA task using the BERT technique. The selected approaches include a fine-tuned BERT model, adversarial training using BERT (Bidirectional Encoder Representations from Transformers) and the incorporation of disentangled attention in BERT, or DeBERTa, for the ABSA task. One of the challenges faced during implementation of the ABSA task is that it requires an in-depth understanding of the language. Experimental results indicate that the approach which uses the fine-tuned BERT model yields the best mean F1 score of 85.65, while the best mean accuracy score of 85.98 is yielded by the DeBERTa model.",Aspect-Based Sentiment Analysis (ABSA) "The chess domain is well-suited for creating an artificial intelligence (AI) system that mimics real-world challenges, including decision-making. Throughout the years, minimal attention has been paid to investigating insights derived from unstructured chess data sources. In this study, we examine the complicated relationships between multiple referenced moves in a chess-teaching textbook, and propose a novel method designed to encapsulate chess knowledge derived from move-action phrases. This study investigates the feasibility of using a modified sentiment analysis method as a means for evaluating chess moves based on text. Our proposed Aspect-Based Sentiment Analysis (ABSA) method represents an advancement in evaluating the sentiment associated with referenced chess moves. By extracting insights from move-action phrases, our approach aims to provide a more fine-grained and contextually aware ‘chess move’-based sentiment classification. Through empirical experiments and analysis, we evaluate the performance of our fine-tuned ABSA model, presenting results that confirm the efficiency of our approach in advancing aspect-based sentiment classification within the chess domain.
This research contributes to the area of game-playing by machines and shows the practical applicability of leveraging NLP techniques to understand the context of strategic games. Keywords: Natural Language Processing, Chess, Aspect-based Sentiment Analysis (ABSA), Chess Move Evaluation.",Aspect-Based Sentiment Analysis (ABSA) "Dialogue state tracking (DST) plays an important role in task-oriented dialogue systems. However, collecting a large amount of turn-by-turn annotated dialogue data is costly and inefficient. In this paper, we propose a novel turn-level active learning framework for DST to actively select turns in dialogues to annotate. Given the limited labelling budget, experimental results demonstrate the effectiveness of selective annotation of dialogue turns. Additionally, our approach can effectively achieve comparable DST performance to traditional training approaches with significantly less annotated data, which provides a more efficient way to annotate new dialogue data.",Dialogue State Tracking (DST) "The task-oriented dialogue systems aim to assist the users in completing specific tasks through natural language dialogue. Recently, word-level dialogue state tracking (DST) has become a core component of task-oriented dialogue systems. In this paper, we study the word-level DST task at the 8th dialogue system technology challenge (DSTC8), namely schema-guided dialogue state tracking, which focuses on cross-domain dialogue state tracking and zero-shot generalization to new services. Many approaches have been proposed to exploit the schema description for dialogue modeling, especially on unseen services. Despite their success, existing methods still suffer from two weaknesses: (1) the current methods do not fully exploit the dialogue history, which makes it difficult to solve the slot carryover problem from the multi-domain dialogues; (2) the current method treats the task as four independent sub tasks without considering the relevance of the subtasks. To address these issues, we propose a novel two-stage framework for schema-guided dialogue state tracking with selected dialogue history (TS-DST). Specifically, to solve the first issue, we propose a novel utterance selection module to select the most related previous utterances from the dialogue history by considering the specific schema element. To solve the second issue, we propose a two-stage framework to solve the four subtasks. Experiments conducted on the SGD dataset show that our method achieves new state-of-the-art performance. We also conduct ablation studies to demonstrate the effectiveness of the utterance selection module and the two-stage strategy.",Dialogue State Tracking (DST) "There has been significant interest in zero and few-shot learning for dialogue state tracking (DST) due to the high cost of collecting and annotating task-oriented dialogues. Recent work has demonstrated that in-context learning requires very little data and zero parameter updates, and even outperforms trained methods in the few-shot setting (Hu et al. 2022). We propose RefPyDST, which advances the state of the art with three advancements to in-context learning for DST. First, we formulate DST as a Python programming task, explicitly modeling language coreference as variable reference in Python. Second, since in-context learning depends highly on the context examples, we propose a method to retrieve a diverse set of relevant examples to improve performance. 
Finally, we introduce a novel re-weighting method during decoding that takes into account probabilities of competing surface forms, and produces a more accurate dialogue state prediction. We evaluate our approach using MultiWOZ and achieve state-of-the-art multi-domain joint-goal accuracy in zero and few-shot settings.",Dialogue State Tracking (DST) "In task-oriented dialogues, dialogue state tracking (DST) is a critical component as it identifies specific information for the user's purpose. However, as annotating DST data requires a significant amount of human effort, leveraging raw dialogue is crucial. To address this, we propose a new self-training (ST) framework with a verification model. Unlike previous ST methods that rely on extensive hyper-parameter searching to filter out inaccurate data, our verification methodology ensures the accuracy and validity of the dataset without using a fixed threshold. Furthermore, to mitigate overfitting, we augment the dataset by generating diverse user utterances. Even when using only 10% of the labeled data, our approach achieves comparable results to a fully labeled MultiWOZ2.0 dataset. The evaluation of scalability also demonstrates enhanced robustness in predicting unseen values.",Dialogue State Tracking (DST) "Zero-shot transfer learning for Dialogue State Tracking (DST) helps to handle a variety of task-oriented dialogue domains without the cost of collecting in-domain data. Existing works mainly study common data- or model-level augmentation methods to enhance the generalization but fail to effectively decouple the semantics of samples, limiting the zero-shot performance of DST. In this paper, we present a simple and effective “divide, conquer and combine” solution, which explicitly disentangles the semantics of seen data, and leverages the performance and robustness with the mixture-of-experts mechanism. Specifically, we divide the seen data into semantically independent subsets and train corresponding experts; the newly unseen samples are mapped and inferred with the mixture-of-experts using our designed ensemble inference. Extensive experiments on MultiWOZ2.1 upon T5-Adapter show our schema significantly and consistently improves the zero-shot performance, achieving the SOTA on settings without external knowledge, with only 10M trainable parameters.",Dialogue State Tracking (DST) "Dialogue State Tracking (DST) aims to keep track of users' intentions during the course of a conversation. In DST, modelling the relations among domains and slots is still an under-studied problem. Existing approaches that have considered such relations generally fall short in: (1) fusing prior slot-domain membership relations and dialogue-aware dynamic slot relations explicitly, and (2) generalizing to unseen domains. To address these issues, we propose a novel Dynamic Schema Graph Fusion Network (DSGFNet), which generates a dynamic schema graph to explicitly fuse the prior slot-domain membership relations and dialogue-aware dynamic slot relations. It also uses the schemata to facilitate knowledge transfer to new domains. DSGFNet consists of a dialogue utterance encoder, a schema graph encoder, a dialogue-aware schema graph evolving network, and a schema graph enhanced dialogue state decoder.
Empirical results on benchmark datasets (i.e., SGD, MultiWOZ2.1, and MultiWOZ2.2), show that DSGFNet outperforms existing methods.",Dialogue State Tracking (DST) "The schema-guided paradigm overcomes scalability issues inherent in building task-oriented dialogue (TOD) agents with static ontologies. Rather than operating on dialogue context alone, agents have access to hierarchical schemas containing task-relevant natural language descriptions. Fine-tuned language models excel at schema-guided dialogue state tracking (DST) but are sensitive to the writing style of the schemas. We explore methods for improving the robustness of DST models. We propose a framework for generating synthetic schemas which uses tree-based ranking to jointly optimise lexical diversity and semantic faithfulness. The robust generalisation of strong baselines is improved when augmenting their training data with prompts generated by our framework, as demonstrated by marked improvements in average Joint Goal Accuracy (JGA) and schema sensitivity (SS) on the SGD-X benchmark.",Dialogue State Tracking (DST) "Despite the recent advances in dialogue state tracking (DST), the joint goal accuracy (JGA) of the existing methods on MultiWOZ 2.1 still remains merely 60%. In our preliminary error analysis, we find that beam search produces a pool of candidates that is likely to include the correct dialogue state. Motivated by this observation, we introduce a novel framework, called BREAK (Beam search and RE-rAnKing), that achieves outstanding performance on DST. BREAK performs DST in two stages: (i) generating k-best dialogue state candidates with beam search and (ii) re-ranking the candidates to select the correct dialogue state. This simple yet powerful framework shows state-of-the-art performance on all versions of MultiWOZ and M2M datasets. Most notably, we push the joint goal accuracy to 80-90% on MultiWOZ 2.1-2.4, which is an improvement of 23.6%, 26.3%, 21.7%, and 10.8% over the previous best-performing models, respectively. The data and code will be available at https://github.com/tony-won/DST-BREAK",Dialogue State Tracking (DST) "Data efficiency is a critical challenge for cross-lingual task-oriented dialogue state tracking (DST) due to high cost of collecting large amount of task-related labeled training set for specific language. Therefore, we focus on adapting high-performance source language DST to target language by using only bilingual dictionary, without accessing labeled target data. We propose a novel data efficient cross-lingual DST framework (ECO-DST), which consists of cross-lingual encoder and language independent decoder. To support cross-lingual zero-shot adaptation, we leverage two advanced methods in encoder: 1) pre-trained cross-lingual model XLM-RoBERTa (XLM-R), 2) dynamic local phrase code-switching data augmentation for cross-lingual representation alignment. We evaluate the proposed method on The Ninth Dialogue System Technology Challenge (DSTC9) cross-lingual tasks. For target language DST, we compare our proposed framework with submitted systems in DSTC9, our model achieves state-of-the-art result on CrossWOZ dataset and promising result on MultiWOZ 2.1 dataset. Meanwhile on source language DST, the same model keeps competitive performance compared with original source DST model.",Dialogue State Tracking (DST) "Prompt-based methods with large pre-trained language models (PLMs) have shown impressive unaided performance across many NLP tasks. 
These models improve even further with the addition of a few labeled in-context exemplars to guide output generation. However, for more complex tasks such as dialogue state tracking (DST), designing prompts that reliably convey the desired intent is nontrivial, leading to unstable results. Furthermore, building in-context exemplars for dialogue tasks is difficult because conversational contexts are long while model input lengths are relatively short.To overcome these issues we first adapt a meta-learning scheme to the dialogue domain which stabilizes the ability of the model to perform well under various prompts. We additionally design a novel training method to improve upon vanilla retrieval mechanisms to find ideal in-context examples. Finally, we introduce a saliency model to limit dialogue text length, allowing us to include more exemplars per query. In effect, we are able to achieve highly competitive results for few-shot DST on MultiWOZ.",Dialogue State Tracking (DST) "Dialogue state tracking (DST) is designed to track the dialogue state during the conversations between users and systems, which is the core of task-oriented dialogue systems. Mainstream models predict the values for each slot with fully token-wise slot attention from dialogue history. However, such operations may result in overlooking the neighboring relationship. Moreover, it may lead the model to assign probability mass to irrelevant parts, while these parts contribute little. It becomes severe with the increase in dialogue length. Therefore, we investigate sparse local slot attention for DST in this work. Slot-specific local semantic information is obtained at a sub-sampled temporal resolution capturing local dependencies for each slot. Then these local representations are attended with sparse attention weights to guide the model to pay attention to relevant parts of local information for subsequent state value prediction. The experimental results on MultiWOZ 2.0 and 2.4 datasets show that the proposed approach effectively improves the performance of ontology-based dialogue state tracking, and performs better than token-wise attention for long dialogues.",Dialogue State Tracking (DST) "Though Dialogue State Tracking (DST) is a core component of spoken dialogue systems, recent work on this task mostly deals with chat corpora, disregarding the discrepancies between spoken and written language. In this paper, we propose OLISIA, a cascade system which integrates an Automatic Speech Recognition (ASR) model and a DST model. We introduce several adaptations in the ASR and DST modules to improve integration and robustness to spoken conversations. With these adaptations, our system ranked first in DSTC11 Track 3, a benchmark to evaluate spoken DST. We conduct an in-depth analysis of the results and find that normalizing the ASR outputs and adapting the DST inputs through data augmentation, along with increasing the pre-trained models size all play an important role in reducing the performance discrepancy between written and spoken conversations.",Dialogue State Tracking (DST) "Collecting and annotating task-oriented dialogues is time-consuming and costly; thus, zero and few shot learning could greatly benefit dialogue state tracking (DST). In this work, we propose an in-context learning (ICL) framework for zero-shot and few-shot learning DST, where a large pre-trained language model (LM) takes a test instance and a few exemplars as input, and directly decodes the dialogue state without any parameter updates. 
To better leverage a tabular domain description in the LM prompt, we reformulate DST into a text-to-SQL problem. We also propose a novel approach to retrieve annotated dialogues as exemplars. Empirical results on MultiWOZ show that our method IC-DST substantially outperforms previous fine-tuned state-of-the-art models in few-shot settings. In addition, we test IC-DST in zero-shot settings, in which the model only takes a fixed task instruction as input, finding that it outperforms previous zero-shot methods by a large margin.",Dialogue State Tracking (DST) "A challenge in the Dialogue State Tracking (DST) field is adapting models to new domains without using any supervised data, i.e., zero-shot domain adaptation. Parameter-Efficient Transfer Learning (PETL) has the potential to address this problem due to its robustness. However, it has yet to be applied to zero-shot scenarios, as it is not clear how to apply it in an unsupervised manner. Our method, Prompter, uses descriptions of target domain slots to generate dynamic prefixes that are concatenated to the keys and values at each layer's self-attention mechanism. This allows for the use of prefix-tuning in zero-shot settings. Prompter outperforms previous methods on both the MultiWOZ and SGD benchmarks. In generating prefixes, our analyses find that Prompter not only utilizes the semantics of slot descriptions but also how often the slots appear together in conversation. Moreover, Prompter's gains are due to its improved ability to distinguish ""none""-valued dialogue slots compared to baselines.",Dialogue State Tracking (DST) "Dialogue state tracking (DST) is an important step in dialogue management to keep track of users' beliefs. Existing works fine-tune all language model (LM) parameters to tackle the DST task, which requires significant data and computing resources for training and hosting. The cost grows exponentially in real-world deployments where dozens of fine-tuned LMs are used for different domains and tasks. To reduce parameter size and better utilize cross-task shared information, we propose to use soft prompt token embeddings to learn task properties. Without tuning LM parameters, our method drastically reduces the number of parameters needed to less than 0.5% of that of prior works while achieving better low-resource DST performance.",Dialogue State Tracking (DST) "Previous zero-shot dialogue state tracking (DST) methods only apply transfer learning, ignoring unlabelled data in the target domain. We transform zero-shot DST into few-shot DST by utilising such unlabelled data via joint and self-training methods. Our method incorporates auxiliary tasks that generate slot types as inverse prompts for main tasks, creating slot values during joint training. Cycle consistency between these two tasks enables the generation and selection of quality samples in unknown target domains for subsequent fine-tuning. This approach also facilitates automatic label creation, thereby optimizing the training and fine-tuning of DST models. We demonstrate this method's effectiveness on general language models in zero-shot scenarios, improving average joint goal accuracy by 8% across all domains in MultiWOZ.",Dialogue State Tracking (DST) "An important yet rarely tackled problem in dialogue state tracking (DST) is scalability for dynamic ontology (e.g., movie, restaurant) and unseen slot values. 
We focus on a specific condition, where the ontology is unknown to the state tracker, but the target slot value (except for none and dontcare), possibly unseen during training, can be found as a word segment in the dialogue context. Prior approaches often rely on candidate generation from n-gram enumeration or slot tagger outputs, which can be inefficient or suffer from error propagation. We propose BERT-DST, an end-to-end dialogue state tracker which directly extracts slot values from the dialogue context. We use BERT as the dialogue context encoder, whose contextualized language representations are suitable for scalable DST to identify slot values from their semantic context. Furthermore, we employ encoder parameter sharing across all slots with two advantages: (1) The number of parameters does not grow linearly with the ontology. (2) Language representation knowledge can be transferred among slots. Empirical evaluation shows BERT-DST with cross-slot parameter sharing outperforms prior work on the benchmark scalable DST datasets Sim-M and Sim-R, and achieves competitive performance on the standard DSTC2 and WOZ 2.0 datasets.",Dialogue State Tracking (DST) "In a real-world environment, Dialogue State Tracking (DST) should use speech recognition results to perform tasks. However, most existing DST research has been conducted in text-based environments. This study aims to build a model that efficiently performs Automatic Speech Recognition-based DST. To operate robustly against speech noise, we used CopyT5, which adopted a copy mechanism, and trained the model using augmented data including speech noise. Furthermore, CopyT5 was post-trained on the MultiWOZ dataset using T5's masked language modeling method in order to learn the dialogue context better. The copy mechanism also mitigated named entity errors that may occur during DST generation. Experiments confirmed that data augmentation, post-training, and the copy mechanism effectively improve DST performance.",Dialogue State Tracking (DST) "The traditional Dialogue State Tracking (DST) problem aims to track user preferences and intents in user-agent conversations. While sufficient for task-oriented dialogue systems supporting narrow-domain applications, the advent of Large Language Model (LLM)-based chat systems has introduced many real-world intricacies in open-domain dialogues. These intricacies manifest in the form of increased complexity in contextual interactions, extended dialogue sessions encompassing a diverse array of topics, and more frequent contextual shifts. To handle these intricacies arising from evolving LLM-based chat systems, we propose joint dialogue segmentation and state tracking per segment in open-domain dialogue systems. Assuming a zero-shot setting appropriate to a true open-domain dialogue system, we propose S3-DST, a structured prompting technique that harnesses Pre-Analytical Recollection, a novel grounding mechanism we designed for improving long context tracking. To demonstrate the efficacy of our proposed approach in joint segmentation and state tracking, we evaluate S3-DST on a proprietary anonymized open-domain dialogue dataset, as well as publicly available DST and segmentation datasets. 
Across all datasets and settings, S3-DST consistently outperforms the state-of-the-art, demonstrating its potency and robustness for the next generation of LLM-based chat systems.",Dialogue State Tracking (DST) "Dialogue State Tracking (DST), a crucial component of task-oriented dialogue (ToD) systems, keeps track of all important information pertaining to dialogue history: filling slots with the most probable values throughout the conversation. Existing methods generally rely on a predefined set of values and struggle to generalise to previously unseen slots in new domains. To overcome these challenges, we propose a domain-agnostic extractive question answering (QA) approach with shared weights across domains. To disentangle the complex domain information in ToDs, we train our DST with a novel domain filtering strategy by excluding out-of-domain question samples. With an independent classifier that predicts the presence of multiple domains given the context, our model tackles DST by extracting spans in active domains. Empirical results demonstrate that our model can efficiently leverage domain-agnostic QA datasets by two-stage fine-tuning while being both domain-scalable and open-vocabulary in DST. It shows strong transferability by achieving zero-shot domain-adaptation results on MultiWOZ 2.1 with an average JGA of 36.7%. It further achieves cross-lingual transfer with state-of-the-art zero-shot results, 66.2% JGA from English to German and 75.7% JGA from English to Italian on WOZ 2.0.",Dialogue State Tracking (DST) "Visual Question Answering (VQA) is an emerging field in Artificial Intelligence (AI) that aims to enable machines to understand and answer questions about visual content. In this survey paper, current state-of-the-art research is extensively surveyed to highlight limitations and future opportunities. Visual QA systems use Natural Language Processing and machine learning techniques to understand and respond to questions posed by users. The paper reviews the recent advances in neural network-based models and pre-trained language models. The paper also discusses the challenges facing visual QA systems, including the need for large-scale training data, the ability to handle complex and open-ended questions, and the need for robust evaluation metrics. Further, different types of datasets and evaluation metrics used in the literature are summarized, as well as the challenges and open research problems that remain to be addressed. Overall, it is concluded that VQA is a challenging task that requires a combination of visual understanding and natural language processing skills, and that there is still much scope for improvement in terms of accuracy and generalization.",Visual QA (VQA) "Scaling Visual Question Answering (VQA) to the open-domain and multi-hop nature of web searches requires fundamental advances in visual representation learning, knowledge aggregation, and language generation. In this work, we introduce WEBQA, a challenging new benchmark that proves difficult for large-scale state-of-the-art models which lack language-groundable visual representations for novel objects and the ability to reason, yet trivial for humans. WebQA mirrors the way humans use the web: 1) Ask a question, 2) Choose sources to aggregate, and 3) Produce a fluent language response. This is the behavior we should be expecting from IoT devices and digital assistants. Existing work prefers to assume that a model can either reason about knowledge in images or in text. 
WebQA includes a secondary text-only QA task to ensure improved visual performance does not come at the cost of language understanding. Our challenge for the community is to create unified multimodal reasoning models that answer questions regardless of the source modality, moving us closer to digital assistants that not only query language knowledge, but also the richer visual online world.",Visual QA (VQA) "Visual question answering on document images that contain textual, visual, and layout information, called document VQA, has received much attention recently. Although many datasets have been proposed for developing document VQA systems, most of the existing datasets focus on understanding the content relationships within a single image and not across multiple images. In this study, we propose a new multi-image document VQA dataset, SlideVQA, containing 2.6k+ slide decks composed of 52k+ slide images and 14.5k questions about a slide deck. SlideVQA requires complex reasoning, including single-hop, multi-hop, and numerical reasoning, and also provides annotated arithmetic expressions of numerical answers for enhancing the ability of numerical reasoning. Moreover, we developed a new end-to-end document VQA model that treats evidence selection and question answering as a unified sequence-to-sequence format. Experiments on SlideVQA show that our model outperformed existing state-of-the-art QA models, but it still lags far behind human performance. We believe that our dataset will facilitate research on document VQA.",Visual QA (VQA) "The visual commonsense reasoning (VCR) task leads to a cognitive level of understanding between the vision and linguistic domains. Three sub-tasks, i.e., $Q \rightarrow A$, $QA \rightarrow R$, and $Q \rightarrow AR$, require the ability to predict the correct answer and rational explanation according to the given image and question. Different from other visual reasoning tasks, such as VQA and GQA, VCR focuses on the exploration of the facts that clarify the causes, context, and consequences of the image and questions, which is the process of acquiring knowledge and thorough understanding. In this paper, we propose a rationale knowledge base (RKB) incorporating the convolution fusion mechanism to import the VCR-related knowledge. We emphasize that (1) the RKB is extracted and then trained over VCR's dataset (VCR-set) itself, and (2) the convolution fusion mechanism is subtly designed to be self-adaptive and computationally efficient. Experiments on the large-scale VCR-set demonstrate the effectiveness of our proposed method with respect to the three sub-tasks.",Visual QA (VQA) "Visual question answering (VQA) is an important and challenging multimodal task in computer vision. Recently, a few efforts have been made to bring the VQA task to aerial images, due to its potential real-world applications in disaster monitoring, urban planning, and digital earth product generation. However, not only the huge variation in the appearance, scale and orientation of the concepts in aerial images, but also the scarcity of well-annotated datasets restricts the development of VQA in this domain. In this paper, we introduce a new dataset, HRVQA, which provides 53,512 collected aerial images of 1024×1024 pixels and 1,070,240 semi-automatically generated QA pairs. To benchmark the understanding capability of VQA models for aerial images, we evaluate the relevant methods on HRVQA. 
Moreover, we propose a novel model, GFTransformer, with gated attention modules and a mutual fusion module. The experiments show that the proposed dataset is quite challenging, especially for the specific attribute-related questions. Our method achieves superior performance in comparison to the previous state-of-the-art approaches.",Visual QA (VQA) "Visual question answering (VQA) is a task where an image is given, and a series of questions are asked about the image. To build an efficient VQA algorithm, a large amount of QA data is required, which is very expensive to collect. Generating synthetic QA pairs based on templates is a practical way to obtain data. However, VQA models trained on those data do not perform well on complex, human-written questions. To address this issue, we propose a new method called chain of QA for human-written questions (CoQAH). CoQAH utilizes a sequence of QA interactions between a large language model and a VQA model trained on synthetic data to reason and derive logical answers for human-written questions. We tested the effectiveness of CoQAH on two types of human-written VQA datasets for 3D-rendered and chest X-ray images and found that it achieved state-of-the-art accuracy in both types of data. Notably, CoQAH outperformed general vision-language models, VQA models, and medical foundation models with no fine-tuning.",Visual QA (VQA) "Today's VQA models still tend to capture superficial linguistic correlations in the training set and fail to generalize to the test set with different QA distributions. To reduce these language biases, recent VQA works introduce an auxiliary question-only model to regularize the training of the targeted VQA model, and achieve dominating performance on diagnostic benchmarks for out-of-distribution testing. However, due to the complex model design, ensemble-based methods are unable to equip themselves with two indispensable characteristics of an ideal VQA model: 1) Visual-explainable: The model should rely on the right visual regions when making decisions. 2) Question-sensitive: The model should be sensitive to the linguistic variations in questions. To this end, we propose a novel model-agnostic Counterfactual Samples Synthesizing and Training (CSST) strategy. After training with CSST, VQA models are forced to focus on all critical objects and words, which significantly improves both visual-explainable and question-sensitive abilities. Specifically, CSST is composed of two parts: Counterfactual Samples Synthesizing (CSS) and Counterfactual Samples Training (CST). CSS generates counterfactual samples by carefully masking critical objects in images or words in questions and assigning pseudo ground-truth answers. CST not only trains the VQA models with both complementary samples to predict respective ground-truth answers, but also urges the VQA models to further distinguish the original samples and superficially similar counterfactual ones. To facilitate the CST training, we propose two variants of supervised contrastive loss for VQA, and design an effective positive and negative sample selection mechanism based on CSS. Extensive experiments have shown the effectiveness of CSST. Particularly, by building on top of the LMH+SAR model (Clark et al. 2019; Si et al. 
2021), we achieve record-breaking performance on all out-of-distribution benchmarks (e.g., VQA-CP v2, VQA-CP v1, and GQA-OOD).",Visual QA (VQA) "Although progress has been made in Composed Image Retrieval (CIR), we empirically find that a certain percentage of failure retrieval results are not consistent with their relative captions. To address this issue, this work provides a Visual Question Answering (VQA) perspective to boost the performance of CIR. The resulting VQA4CIR is a post-processing approach and can be directly plugged into existing CIR methods. Given the top-C retrieved images by a CIR method, VQA4CIR aims to decrease the adverse effect of the failure retrieval results being inconsistent with the relative caption. To find the retrieved images inconsistent with the relative caption, we resort to the ""QA generation to VQA"" self-verification pipeline. For QA generation, we suggest fine-tuning an LLM (e.g., LLaMA) to generate several pairs of questions and answers from each relative caption. We then fine-tune an LVLM (e.g., LLaVA) to obtain the VQA model. By feeding the retrieved image and question to the VQA model, one can find the images inconsistent with the relative caption when the answer by VQA is inconsistent with the answer in the QA pair. Consequently, the CIR performance can be boosted by modifying the ranks of inconsistently retrieved images. Experimental results show that our proposed method outperforms state-of-the-art CIR methods on the CIRR and Fashion-IQ datasets.",Visual QA (VQA) "Visual question answering (VQA) requires systems to perform concept-level reasoning by unifying unstructured (e.g., the context in question and answer; “QA context”) and structured (e.g., knowledge graph for the QA context and scene; “concept graph”) multimodal knowledge. Existing works typically combine a scene graph and a concept graph of the scene by connecting corresponding visual nodes and concept nodes, then incorporate the QA context representation to perform question answering. However, these methods only perform a unidirectional fusion from unstructured knowledge to structured knowledge, limiting their potential to capture joint reasoning over the heterogeneous modalities of knowledge. To perform more expressive reasoning, we propose VQA-GNN, a new VQA model that performs bidirectional fusion between unstructured and structured multimodal knowledge to obtain unified knowledge representations. Specifically, we inter-connect the scene graph and the concept graph through a super node that represents the QA context, and introduce a new multimodal GNN technique to perform inter-modal message passing for reasoning that mitigates representational gaps between modalities. On two challenging VQA tasks (VCR and GQA), our method outperforms strong baseline VQA methods by 3.2% on VCR (Q-AR) and 4.6% on GQA, suggesting its strength in performing concept-level reasoning. Ablation studies further demonstrate the efficacy of the bidirectional fusion and multimodal GNN method in unifying unstructured and structured multimodal knowledge.",Visual QA (VQA) "Aesthetic assessment of images can be categorized into two main forms: numerical assessment and language assessment. In this paper, we propose a new task of aesthetic language assessment: aesthetic visual question answering (AVQA) of images. We use images from www.flickr.com. The objective QA pairs are generated by the proposed aesthetic attributes analysis algorithms. 
Moreover, we introduce subjective QA pairs that are converted from aesthetic numerical labels and sentiment analysis from large-scale pre-trained models. We build the first aesthetic visual question answering dataset, AesVQA, which contains 72,168 high-quality images and 324,756 pairs of aesthetic questions. This is the first work that both addresses the task of aesthetic VQA and introduces subjectiveness into VQA tasks. The experimental results reveal that our methods outperform other VQA models on this new task.",Visual QA (VQA) "In recent years, multiple-choice Visual Question Answering (VQA) has become topical and achieved remarkable progress. However, most pioneering multiple-choice VQA models are heavily driven by statistical correlations in datasets, which cannot perform well on multimodal understanding and suffer from poor generalization. In this paper, we identify two kinds of spurious correlations, i.e., a Vision-Answer bias (VA bias) and a Question-Answer bias (QA bias). To systematically and scientifically study these biases, we construct a new video question answering (videoQA) benchmark, NExT-OOD, in an OOD setting and propose a graph-based cross-sample method for bias reduction. Specifically, the NExT-OOD is designed to quantify models' generalizability and measure their reasoning ability comprehensively. It contains three sub-datasets including NExT-OOD-VA, NExT-OOD-QA, and NExT-OOD-VQA, which are designed for the VA bias, QA bias, and VA&QA bias, respectively. We evaluate several existing multiple-choice VQA models on our NExT-OOD, and illustrate that their performance degrades significantly compared with the results obtained on the original multiple-choice VQA dataset. Besides, to mitigate the VA bias and QA bias, we explicitly consider the cross-sample information and design a contrastive graph matching loss in our approach, which provides adequate debiasing guidance from the perspective of the whole dataset, and encourages the model to focus on multimodal content instead of spurious statistical regularities. Extensive experimental results illustrate that our method significantly outperforms other bias reduction strategies, demonstrating the effectiveness and generalizability of the proposed approach.",Visual QA (VQA) "Vision-extended LLMs (VLLMs) have made significant strides in Visual Question Answering (VQA). Despite these advancements, VLLMs still encounter substantial difficulties in handling queries involving long-tail entities, with a tendency to produce erroneous or hallucinated responses. In this work, we introduce a novel evaluative benchmark named SnapNTell, specifically tailored for entity-centric VQA. This task aims to test the models' capabilities in identifying entities and providing detailed, entity-specific knowledge. We have developed the SnapNTell Dataset, distinct from traditional VQA datasets: (1) It encompasses a wide range of categorized entities, each represented by images and explicitly named in the answers; (2) It features QA pairs that require extensive knowledge for accurate responses. The dataset is organized into 22 major categories, containing 7,568 unique entities in total. For each entity, we curated 10 illustrative images and crafted 10 knowledge-intensive QA pairs. To address this novel task, we devised a scalable, efficient, and transparent retrieval-augmented multimodal LLM. Our approach markedly outperforms existing methods on the SnapNTell dataset, achieving a 66.5% improvement in the BELURT score. 
We will soon make the dataset and the source code publicly accessible.",Visual QA (VQA) "Document Question Answering (QA) presents a challenge in understanding visually-rich documents (VRD), particularly those dominated by lengthy textual content like research journal articles. Existing studies primarily focus on real-world documents with sparse text, while challenges persist in comprehending the hierarchical semantic relations among multiple pages to locate multimodal components. To address this gap, we propose PDF-MVQA, which is tailored for research journal articles, encompassing multiple pages and multimodal information retrieval. Unlike traditional machine reading comprehension (MRC) tasks, our approach aims to retrieve entire paragraphs containing answers or visually rich document entities like tables and figures. Our contributions include the introduction of a comprehensive PDF Document VQA dataset, allowing the examination of semantically hierarchical layout structures in text-dominant documents. We also present new VRD-QA frameworks designed to grasp textual contents and relations among document layouts simultaneously, extending page-level understanding to the entire multi-page document. Through this work, we aim to enhance the capabilities of existing vision-and-language models in handling challenges posed by text-dominant documents in VRD-QA.",Visual QA (VQA) "We introduce a novel visual question answering (VQA) task in the context of autonomous driving, aiming to answer natural language questions based on street-view clues. Compared to traditional VQA tasks, VQA in autonomous driving scenario presents more challenges. Firstly, the raw visual data are multi-modal, including images and point clouds captured by camera and LiDAR, respectively. Secondly, the data are multi-frame due to the continuous, real-time acquisition. Thirdly, the outdoor scenes exhibit both moving foreground and static background. Existing VQA benchmarks fail to adequately address these complexities. To bridge this gap, we propose NuScenes-QA, the first benchmark for VQA in the autonomous driving scenario, encompassing 34K visual scenes and 460K question-answer pairs. Specifically, we leverage existing 3D detection annotations to generate scene graphs and design question templates manually. Subsequently, the question-answer pairs are generated programmatically based on these templates. Comprehensive statistics prove that our NuScenes-QA is a balanced large-scale benchmark with diverse question formats. Built upon it, we develop a series of baselines that employ advanced 3D detection and VQA techniques. Our extensive experiments highlight the challenges posed by this new task.",Visual QA (VQA) "We introduce LEAF-QA, a comprehensive dataset of 250,000 densely annotated figures/charts, constructed from real-world open data sources, along with 2 million question-answer (QA) pairs querying the structure and semantics of these charts. LEAF-QA highlights the problem of multimodal QA, which is notably different from conventional visual QA (VQA), and has recently gained interest in the community. Furthermore, LEAF-QA is significantly more complex than previous attempts at chart QA, viz. FigureQA and DVQA, which present only limited variations in chart data. LEAF-QA being constructed from real-world sources, requires a novel architecture to enable question answering. 
To this end, LEAF-Net, a deep architecture involving chart element localization, question and answer encoding in terms of chart elements, and an attention network, is proposed. Different experiments are conducted to demonstrate the challenges of QA on LEAF-QA. The proposed architecture, LEAF-Net, also considerably advances the current state-of-the-art on FigureQA and DVQA.",Visual QA (VQA) "Multi-modal reasoning in visual question answering (VQA) has witnessed rapid progress recently. However, most reasoning models heavily rely on shortcuts learned from training data, which prevents their usage in challenging real-world scenarios. In this paper, we propose a simple but effective cross-modal contrastive learning strategy to get rid of the shortcut reasoning caused by imbalanced annotations and improve the overall performance. Different from existing contrastive learning with complex negative categories on the coarse (Image, Question, Answer) triplet level, we leverage the correspondences between the language and image modalities to perform finer-grained cross-modal contrastive learning. We treat each Question-Answer (QA) pair as a whole, and differentiate between images that conform with it and those against it. To alleviate the issue of sampling bias, we further build connected graphs among images. For each positive pair, we regard the images from different graphs as negative samples and derive a multi-positive version of contrastive learning. To the best of our knowledge, this is the first paper to reveal that a general contrastive learning strategy without delicate hand-crafted rules can contribute to robust VQA reasoning. Experiments on several mainstream VQA datasets demonstrate our superiority compared to the state of the art.",Visual QA (VQA) "Quality assurance (QA) plays a crucial role in manufacturing to ensure that products meet their specifications. However, manual QA processes are costly and time-consuming, thereby making artificial intelligence (AI) an attractive solution for automation and expert support. In particular, convolutional neural networks (CNNs) have gained a lot of interest in visual inspection. Next to AI methods, explainable artificial intelligence (XAI) systems, which achieve transparency and interpretability by providing insights into the decision-making process of the AI, are interesting methods for achieving quality inspections in manufacturing processes. In this study, we conducted a systematic literature review (SLR) to explore AI and XAI approaches for visual QA (VQA) in manufacturing. Our objective was to assess the current state of the art and identify research gaps in this context. Our findings revealed that AI-based systems predominantly focused on visual quality control (VQC) for defect detection. Research addressing VQA practices, like process optimization, predictive maintenance, or root cause analysis, is rarer. Least often cited are papers that utilize XAI methods. In conclusion, this survey emphasizes the importance and potential of AI and XAI in VQA across various industries. By integrating XAI, organizations can enhance model transparency, interpretability, and trust in AI systems. Overall, leveraging AI and XAI improves VQA practices and decision-making in industries.",Visual QA (VQA) "The modern operating room is becoming increasingly complex, requiring innovative intra-operative support systems. 
While the focus of surgical data science has largely been on video analysis, integrating surgical computer vision with natural language capabilities is emerging as a necessity. Our work aims to advance visual question answering (VQA) in the surgical context with scene graph knowledge, addressing two main challenges in the current surgical VQA systems: removing question-condition bias in the surgical VQA dataset and incorporating scene-aware reasoning in the surgical VQA model design. METHODS First, we propose a surgical scene graph-based dataset, SSG-VQA, generated by employing segmentation and detection models on publicly available datasets. We build surgical scene graphs using spatial and action information of instruments and anatomies. These graphs are fed into a question engine, generating diverse QA pairs. We then propose SSG-VQA-Net, a novel surgical VQA model incorporating a lightweight Scene-embedded Interaction Module, which integrates geometric scene knowledge in the VQA model design by employing cross-attention between the textual and the scene features. RESULTS Our comprehensive analysis shows that our SSG-VQA dataset provides a more complex, diverse, geometrically grounded, unbiased and surgical action-oriented dataset compared to existing surgical VQA datasets and SSG-VQA-Net outperforms existing methods across different question types and complexities. We highlight that the primary limitation in the current surgical VQA systems is the lack of scene knowledge to answer complex queries. CONCLUSION We present a novel surgical VQA dataset and model and show that results can be significantly improved by incorporating geometric scene features in the VQA model design. We point out that the bottleneck of the current surgical visual question-answer model lies in learning the encoded representation rather than decoding the sequence. Our SSG-VQA dataset provides a diagnostic benchmark to test the scene understanding and reasoning capabilities of the model.",Visual QA (VQA) "Visual Question Answering (VQA) stands to benefit from the boost of increasingly sophisticated Pretrained Language Model (PLM) and Computer Vision-based models. In particular, many language modality studies have been conducted using image captioning or question generation with the knowledge ground of PLM in terms of data augmentation. However, image generation of VQA has been implemented in a limited way to modify only certain parts of the original image in order to control the quality and uncertainty. In this paper, to address this gap, we propose a method that utilizes the diffusion model, pre-trained with various tasks and images, to inject the prior knowledge base into generated images and secure diversity without losing generality about the answer. In addition, we design an effective training strategy by considering the difficulty of questions to address the multiple images per QA pair and to compensate for the weakness of the diffusion model. VQA model trained on our strategy improves significant performance on the dataset that requires factual knowledge without any knowledge information in language modality.",Visual QA (VQA) "Visual question answering (VQA) is the task of answering questions about an image. The task assumes an understanding of both the image and the question to provide a natural language answer. VQA has gained popularity in recent years due to its potential applications in a wide range of fields, including robotics, education, and healthcare. 
In this paper, we focus on knowledge-augmented VQA, where answering the question requires commonsense knowledge, world knowledge, and reasoning about ideas and concepts not present in the image. We propose a multimodal framework that uses language guidance (LG) in the form of rationales, image captions, scene graphs, etc to answer questions more accurately. We benchmark our method on the multi-choice question-answering task of the A-OKVQA, Science-QA, VSR, and IconQA datasets using CLIP and BLIP models. We show that the use of language guidance is a simple but powerful and effective strategy for visual question answering. Our language guidance improves the performance of CLIP by 7.6% and BLIP-2 by 4.8% in the challenging A-OKVQA dataset. We also observe consistent improvement in performance on the Science-QA, VSR, and IconQA datasets when using the proposed language guidances.",Visual QA (VQA) "Retrieval-augmented generation (RAG) methods have been receiving increasing attention from the NLP community and achieved state-of-the-art performance on many NLP downstream tasks. Compared with conventional pre-trained generation models, RAG methods have remarkable advantages such as easy knowledge acquisition, strong scalability, and low training cost. Although existing RAG models have been applied to various knowledge-intensive NLP tasks, such as open-domain QA and dialogue systems, most of the work has focused on retrieving unstructured text documents from Wikipedia. In this paper, I first elaborate on the current obstacles to retrieving knowledge from a single-source homogeneous corpus. Then, I demonstrate evidence from both existing literature and my experiments, and provide multiple solutions on retrieval-augmented generation methods across heterogeneous knowledge.",Open-Domain QA "Recent advancements in open-domain question answering (ODQA), that is, finding answers from large open-domain corpus like Wikipedia, have led to human-level performance on many datasets. However, progress in QA over book stories (Book QA) lags despite its similar task formulation to ODQA. This work provides a comprehensive and quantitative analysis about the difficulty of Book QA: (1) We benchmark the research on the NarrativeQA dataset with extensive experiments with cutting-edge ODQA techniques. This quantifies the challenges Book QA poses, as well as advances the published state-of-the-art with a ∼7% absolute improvement on ROUGE-L. (2) We further analyze the detailed challenges in Book QA through human studies.1 Our findings indicate that the event-centric questions dominate this task, which exemplifies the inability of existing QA models to handle event-oriented scenarios.",Open-Domain QA "Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering. However, this knowledge is stored implicitly in the parameters of a neural network, requiring ever-larger networks to cover more facts. To capture knowledge in a more modular and interpretable way, we augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend over documents from a large corpus such as Wikipedia, used during pre-training, fine-tuning and inference. For the first time, we show how to pre-train such a knowledge retriever in an unsupervised manner, using masked language modeling as the learning signal and backpropagating through a retrieval step that considers millions of documents. 
We demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the challenging task of Open-domain Question Answering (Open-QA). We compare against state-of-the-art models for both explicit and implicit knowledge storage on three popular Open-QA benchmarks, and find that we outperform all previous methods by a significant margin (4-16% absolute accuracy), while also providing qualitative benefits such as interpretability and modularity.",Open-Domain QA "In open-domain question answering, dense passage retrieval has become a new paradigm to retrieve relevant passages for finding answers. Typically, the dual-encoder architecture is adopted to learn dense representations of questions and passages for semantic matching. However, it is difficult to effectively train a dual-encoder due to challenges including the discrepancy between training and inference, the existence of unlabeled positives, and limited training data. To address these challenges, we propose an optimized training approach, called RocketQA, to improve dense passage retrieval. We make three major technical contributions in RocketQA, namely cross-batch negatives, denoised hard negatives and data augmentation. The experimental results show that RocketQA significantly outperforms previous state-of-the-art models on both MSMARCO and Natural Questions. We also conduct extensive experiments to examine the effectiveness of the three strategies in RocketQA. Besides, we demonstrate that the performance of end-to-end QA can be improved based on our RocketQA retriever.",Open-Domain QA "This paper studies the problem of open-domain question answering, with the aim of answering a diverse range of questions leveraging knowledge resources. Two types of sources, QA-pair and document corpora, have been actively leveraged with the following complementary strengths. The former is highly precise when a paraphrase of the given question $q$ was seen and answered during training, often posed as a retrieval problem, while the latter generalizes better for unseen questions. A natural follow-up is thus leveraging both models, while naive pipelining or integration approaches have failed to bring additional gains over either model alone. Our distinction is interpreting the problem as calibration, which estimates the confidence of predicted answers as an indicator to decide when to use a document or QA-pair corpus. The effectiveness of our method was validated on widely adopted benchmarks such as Natural Questions and TriviaQA.",Open-Domain QA "Open-domain Question Answering (OpenQA) is an important task in Natural Language Processing (NLP), which aims to answer a question in the form of natural language based on large-scale unstructured documents. Recently, there has been a surge in the amount of research literature on OpenQA, particularly on techniques that integrate with neural Machine Reading Comprehension (MRC). While these research works have advanced performance to new heights on benchmark datasets, they have been rarely covered in existing surveys on QA systems. In this work, we review the latest research trends in OpenQA, with particular attention to systems that incorporate neural MRC techniques. Specifically, we begin by revisiting the origin and development of OpenQA systems. We then introduce the modern OpenQA architecture named ""Retriever-Reader"" and analyze the various systems that follow this architecture as well as the specific techniques adopted in each of the components. 
We then discuss key challenges to developing OpenQA systems and offer an analysis of benchmarks that are commonly used. We hope our work will keep researchers informed of the recent advancements and the open challenges in OpenQA research, so as to stimulate further progress in this field.",Open-Domain QA "While research on explaining predictions of open-domain QA systems (ODQA) to users is gaining momentum, most works have failed to evaluate the extent to which explanations improve user trust. While a few works evaluate explanations using user studies, they employ settings that may deviate from the end-user's usage in the wild: ODQA is most ubiquitous in voice assistants, yet current research only evaluates explanations using a visual display, and may erroneously extrapolate conclusions about the most performant explanations to other modalities. To alleviate these issues, we conduct user studies that measure whether explanations help users correctly decide when to accept or reject an ODQA system's answer. Unlike prior work, we control for explanation modality, e.g., whether they are communicated to users through a spoken or visual interface, and contrast effectiveness across modalities. Our results show that explanations derived from retrieved evidence passages can outperform strong baselines (calibrated confidence) across modalities, but the best explanation strategy in fact changes with the modality. We show common failure cases of current explanations, emphasize end-to-end evaluation of explanations, and caution against evaluating them in proxy modalities that are different from deployment.",Open-Domain QA "The retriever-reader pipeline has shown promising performance in open-domain QA but suffers from a very slow inference speed. Recently proposed question retrieval models tackle this problem by indexing question-answer pairs and searching for similar questions. These models have shown a significant increase in inference speed, but at the cost of lower QA performance compared to the retriever-reader models. This paper proposes a two-step question retrieval model, SQuID (Sequential Question-Indexed Dense retrieval), and distant supervision for training. SQuID uses two bi-encoders for question retrieval. The first-step retriever selects top-k similar questions, and the second-step retriever finds the most similar question from the top-k questions. We evaluate the performance and the computational efficiency of SQuID. The results show that SQuID significantly increases the performance of existing question retrieval models with a negligible loss in inference speed.",Open-Domain QA "Open-domain question answering (QA) systems are often built with retrieval modules. However, retrieving passages from a given source is known to suffer from insufficient knowledge coverage. Alternatively, prompting large language models (LLMs) to generate contextual passages based on their parametric knowledge has been shown to improve QA performance. Yet, LLMs tend to ""hallucinate"" content that conflicts with the retrieved knowledge. Based on the intuition that answers supported by both sources are more likely to be correct, we propose COMBO, a Compatibility-Oriented knowledge Merging for Better Open-domain QA framework, to effectively leverage the two sources of information. Concretely, we match LLM-generated passages with retrieved counterparts into compatible pairs, based on discriminators trained with silver compatibility labels. 
Then a Fusion-in-Decoder-based reader model handles passage pairs to arrive at the final answer. Experiments show that COMBO outperforms competitive baselines on three out of four tested open-domain QA benchmarks. Further analysis reveals that our proposed framework demonstrates greater efficacy in scenarios with a higher degree of knowledge conflicts.",Open-Domain QA "Natural language processing (NLP) systems based on deep neural networks have drastically improved the ability to gain the knowledge stored in text form from the vast amount of information stored on the web or on Wikipedia using search engines and QA systems. These systems are expected to extract the precise information corresponding to a query independent of the domain. Open-domain question answering (ODQA) systems have gained prominence to extract genuine answers for queries in natural language form. Although ODQA systems are famous for web search, they can be used in many applications like gaining investment insights from financial documents, addressing queries from internal enterprise wikis, and customer support for newly launched services, among other uses. Chatbots, document classification systems and other applications use Machine Reading Comprehension (MRC), which is an integral part of ODQA systems. MRC is the critical processing bottleneck in ODQA systems, which must process several documents with minimal latency.",Open-Domain QA "Open-Domain Question Answering (ODQA) requires models to answer factoid questions with no context given. The common way for this task is to train models on a large-scale annotated dataset to retrieve related documents and generate answers based on these documents. In this paper, we show that the ODQA architecture can be dramatically simplified by treating Large Language Models (LLMs) as a knowledge corpus and propose a Self-Prompting framework for LLMs to perform ODQA so as to eliminate the need for training data and an external knowledge corpus. Concretely, we first generate multiple pseudo QA pairs with background passages and one-sentence explanations for these QAs by prompting LLMs step by step and then leverage the generated QA pairs for in-context learning. Experimental results show our method surpasses previous state-of-the-art methods by +8.8 EM on average on three widely used ODQA datasets, and even achieves comparable performance with several retrieval-augmented fine-tuned models.",Open-Domain QA "This work presents a novel pipeline that demonstrates what is achievable with a combined effort of state-of-the-art approaches, surpassing 50% exact match on the NaturalQuestions and EfficientQA datasets. Specifically, it proposes the novel R2-D2 (Rank twice, reaD twice) pipeline composed of a retriever, a reranker, an extractive reader, a generative reader, and a simple way to combine them. Furthermore, previous work often comes with a massive index of external documents that scales in the order of tens of GiB. This work presents a simple approach for pruning the contents of a massive index such that the open-domain QA system, altogether with index, OS, and library components, fits into a 6 GiB Docker image while retaining only 8% of the original index contents and losing only 3% EM accuracy.",Open-Domain QA "With the rise of large-scale pre-trained language models, open-domain question-answering (ODQA) has become an important research topic in NLP. 
Based on the popular pre-training fine-tuning approach, we posit that an additional in-domain pre-training stage using a large-scale, natural, and diverse question-answering (QA) dataset can be beneficial for ODQA. Consequently, we propose a novel QA dataset based on the Common Crawl project in this paper. Using the readily available schema.org annotation, we extract around 130 million multilingual question-answer pairs, including about 60 million English data points. With this previously unseen number of natural QA pairs, we pre-train popular language models to show the potential of large-scale in-domain pre-training for the task of question-answering. In our experiments, we find that pre-training question-answering models on our Common Crawl Question Answering dataset (CCQA) achieves promising results in zero-shot, low-resource and fine-tuned settings across multiple tasks, models and benchmarks.",Open-Domain QA "We study open-domain question answering with structured, unstructured and semi-structured knowledge sources, including text, tables, lists and knowledge bases. Departing from prior work, we propose a unifying approach that homogenizes all sources by reducing them to text and applies the retriever-reader model, which has so far been limited to text sources only. Our approach greatly improves the results on knowledge-base QA tasks by 11 points, compared to the latest graph-based methods. More importantly, we demonstrate that our unified knowledge (UniK-QA) model is a simple and yet effective way to combine heterogeneous sources of knowledge, advancing the state-of-the-art results on two popular question answering benchmarks, NaturalQuestions and WebQuestions, by 3.5 and 2.6 points, respectively.",Open-Domain QA "Open-domain question answering (OpenQA) involves a retriever for selecting relevant passages from large text corpora (e.g., Wikipedia) and a reading comprehension (RC) model for extracting answers from these retrieved passages. The retrieved passages are often noisy. Since OpenQA relies heavily on efficient passages for better answer prediction, many passage ranker models have been proposed to filter out noisy passages. However, their performance is limited because their ranker model scores each passage separately by modelling only the relationship between the query and the passage. Thus, they cannot capture local context information. Their ranker model also ignores the rich initial rank of passages ranked by a search engine. This paper presents a Passage Ranker model that captures local-context information through cross-passage interaction. Our ranker model integrates the initial ranking and uses modified attention in the cross-passage interaction to compute a better confidence score for each passage. Moreover, we integrate SRL into our passage reader and train it on the proposed sampled data. Our semantic reader can absorb contextual semantics. Experimental results on four public OpenQA datasets show that our model significantly outperforms recent OpenQA baselines.",Open-Domain QA "The paper presents an open-domain Question Answering system for Romanian, answering COVID-19 related questions. The QA system pipeline involves automatic question processing, automatic query generation, web searching for the top 10 most relevant documents, and answer extraction using a fine-tuned BERT model for Extractive QA, trained on a COVID-19 data set that we have manually created. 
The paper presents the QA system and its integration with the Romanian language technologies portal RELATE, the COVID-19 data set, and different evaluations of the QA performance.",Open-Domain QA "Open-domain conversational QA (ODCQA) calls for effective question rewriting (QR), as the questions in a conversation typically lack proper context for the QA model to interpret. In this paper, we compare two types of QR approaches, generative and expansive QR, in end-to-end ODCQA systems with the recently released QReCC and OR-QuAC benchmarks. While it is common practice to apply the same QR approach for both the retriever and the reader in the QA system, our results show that such a strategy is generally suboptimal and suggest that expansive QR is better for the sparse retriever and generative QR is better for the reader. Furthermore, while conversation history modeling with dense representations outperforms QR, we show the advantages of applying both jointly, as QR boosts performance especially when limited history turns are considered.",Open-Domain QA "Motivation: Question Answering (QA) is a highly focused topic in the field of Natural Language Processing (NLP). Recent progress in neural network models and the availability of large datasets like SQuAD have played a significant role in improving performance in open domains. However, there remains a need to further effectively implement these systems in more specific domains, especially in the biomedical field, to help medical practitioners provide accurate solutions for inquiries related to medicine and healthcare, including specific subjects such as the COVID-19 disease. Fortunately, recent models, such as transformers, have opened up avenues and modern techniques for developing accurate systems. Aims: In this work, we aim to leverage transformer models and Transfer Learning to effectively train models in the biomedical domain. By taking a pre-trained model for Question Answering tasks and further fine-tuning it on specific domains, we enhance the system's performance in the biomedical domain. Our ultimate goal is to develop a QA model specifically tailored for COVID-19 QA. Results: We have trained BERT and RoBERTa models on the COVID-QA dataset and achieved competitive results on COVID-19 QA. Our RoBERTa model achieved Exact Match (EM) and F1 scores of 0.38 and 0.64, respectively, on COVID-QA, indicating successful performance in COVID-19 QA.",Open-Domain QA "Open-domain textual question answering (QA) systems have been a hot topic in Natural Language Processing (NLP) for quite some time. These systems are designed to find answers from a large number of textual sources, such as Wikipedia or search engines. Due to the rapid development of deep learning, the performance of QA systems has been significantly improved, especially on machine reading comprehension. In this chapter, we provide an overview of open-domain QA systems, then introduce models on paragraph ranking, candidate answer extraction, and answer selection.",Open-Domain QA "Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. 
When evaluated on a wide range of open-domain QA datasets, our dense retriever greatly outperforms a strong Lucene-BM25 system by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish a new state-of-the-art on multiple open-domain QA benchmarks.",Open-Domain QA "Multiple-choice question answering (MCQA) is one of the most challenging tasks in machine reading comprehension since it requires more advanced reading comprehension skills such as logical reasoning, summarization, and arithmetic operations. Unfortunately, most existing MCQA datasets are small in size, which increases the difficulty of model learning and generalization. To address this challenge, we propose a multi-source meta transfer (MMT) framework for low-resource MCQA. In this framework, we first extend meta learning by incorporating multiple training sources to learn a generalized feature representation across domains. To bridge the distribution gap between training sources and the target, we further introduce the meta transfer that can be integrated into the multi-source meta training. More importantly, the proposed MMT is independent of backbone language models. Extensive experiments demonstrate the superiority of MMT over state-of-the-art methods, and continuous improvements can be achieved on different backbone networks in both supervised and unsupervised domain adaptation settings.",Multiple Choice QA (MCQA) "Multiple-Choice Question Answering (MCQA) is one of the challenging tasks in machine reading comprehension. The main challenge in MCQA is to extract ""evidence"" from the given context that supports the correct answer. In the OpenbookQA dataset [1], the requirement of extracting ""evidence"" is particularly important due to the mutual independence of sentences in the context. Existing work tackles this problem by annotated evidence or distant supervision with rules, which overly rely on human effort. To address the challenge, we propose a simple yet effective approach termed evidence filtering to model the relationships between the encoded contexts with respect to different options collectively, and to potentially highlight the evidence sentences and filter out unrelated sentences. In addition to the effective reduction of human effort by our approach, through extensive experiments on OpenbookQA we show that the proposed approach outperforms the models that use the same backbone and more training data; and our parameter analysis also demonstrates the interpretability of our approach.",Multiple Choice QA (MCQA) "Training AI models that generalize across tasks and domains has long been among the open problems driving AI research. The emergence of Foundation Models made it easier to obtain expert models for a given task, but the heterogeneity of data that may be encountered at test time often means that any single expert is insufficient. We consider the Fusion of Experts (FoE) problem of fusing outputs of expert models with complementary knowledge of the data distribution and formulate it as an instance of supervised learning. Our method is applicable to both discriminative and generative tasks and leads to significant performance improvements in image and text classification, text summarization, multiple-choice QA, and automatic evaluation of generated text. 
We also extend our method to the ""frugal"" setting, where it is desired to reduce the number of expert model evaluations at test time.",Multiple Choice QA (MCQA) "The task of multiple choice question answering (MCQA) is to identify the correct answer from multiple candidates given a passage and a question. It is typically approached by estimating the matching score among the triple of the passage, question and candidate answers. Existing methods decouple this estimation into pairwise or dual matching, ignoring the third component. This paper introduces a Context-guided Triple Matching algorithm, which models the matching among the triple simultaneously. Specifically, the proposed matching takes one component from the triple as the context, and estimates the semantic matching between the other two. Additionally, a contrastive term is adopted to model the dissimilarity between the correct answer and the distractors. The proposed algorithm is validated on several benchmarking MCQA datasets and outperforms the state-of-the-art models by a large margin.",Multiple Choice QA (MCQA) "The rapid evolution of Natural Language Processing (NLP) has favored major languages such as English, leaving a significant gap for many others due to limited resources. This is especially evident in the context of data annotation, a task whose importance cannot be overstated, but which is time-consuming and costly. Thus, any dataset for resource-poor languages is precious, in particular when it is task-specific. Here, we explore the feasibility of repurposing existing datasets for a new NLP task: we repurposed the Belebele dataset (Bandarkar et al., 2023), which was designed for multiple-choice question answering (MCQA), to enable extractive QA (EQA) in the style of machine reading comprehension. We present annotation guidelines and a parallel EQA dataset for English and Modern Standard Arabic (MSA). We also present QA evaluation results for several monolingual and cross-lingual QA pairs including English, MSA, and five Arabic dialects. Our aim is to enable others to adapt our approach for the 120+ other language variants in Belebele, many of which are deemed under-resourced. We also conduct a thorough analysis and share our insights from the process, which we hope will contribute to a deeper understanding of the challenges and the opportunities associated with task reformulation in NLP research.",Multiple Choice QA (MCQA) "Depression is one of the most prevalent mental health disorders. Although there are effective treatments, the main challenge lies in providing early and effective risk detection. Medical experts use self-reporting questionnaires to make their diagnoses, but these questionnaires have some limitations. Social stigmas and the lack of awareness often negatively affect the success of these self-report questionnaires. This article aims to describe techniques to automatically estimate depression severity from users on social media. We explored the use of pre-trained language models over the subject's writings. We addressed the task "Measuring the Severity of the Signs of Depression" of eRisk 2020, an initiative in the CLEF Conference. In this task, participants have to fill in the Beck Depression Inventory (BDI-II) questionnaire. Our proposal explores the application of pre-trained Multiple-Choice Question Answering (MCQA) models to predict users' answers to the BDI-II questionnaire using their posts on social media. 
These MCQA models are built over the BERT (Bidirectional Encoder Representations from Transformers) architecture. Our results showed that multiple-choice question answering models could be a suitable alternative for estimating the degree of depression, even when small amounts of training data are available (20 users).",Multiple Choice QA (MCQA) "This paper presents our submission to the SemEval 2024 Task 5: The Legal Argument Reasoning Task in Civil Procedure. We present two approaches to solving the task of legal answer validation, given an introduction to the case, a question and an answer candidate. Firstly, we fine-tuned pre-trained BERT-based models and found that models trained on domain knowledge perform better. Secondly, we performed few-shot prompting on GPT models and found that reformulating the answer validation task to be a multiple-choice QA task remarkably improves the performance of the model. Our best submission is a BERT-based model that achieved 7th place out of 20.",Multiple Choice QA (MCQA) "Medical multiple-choice question answering (MCQA) is a challenging evaluation for medical natural language processing and a helpful task in itself. Medical questions may describe patient symptoms and ask for the correct diagnosis, which requires domain knowledge and complex reasoning. Standard language modeling pretraining alone is not sufficient to achieve the best results with BERT-base size (Devlin et al., 2019) encoders. Jin et al. (2020) showed that focusing masked language modeling on disease name prediction when using medical encyclopedic paragraphs as input leads to considerable MCQA accuracy improvement. In this work, we show that (1) fine-tuning on a generated MCQA dataset outperforms the masked language modeling based objective and (2) correctly masking the cues to the answers is critical for good performance. We release new pretraining datasets and achieve state-of-the-art results on 4 MCQA datasets, notably +5.7% with a base-size model on MedQA-USMLE.",Multiple Choice QA (MCQA) "Multiple-choice question answering (MCQA) for machine reading comprehension (MRC) is challenging. It requires a model to select a correct answer from several candidate options related to text passages or dialogue. To select the correct answer, such models must have the ability to understand natural language, comprehend textual representations, and infer the relationship between candidate options, questions, and passages. Previous models calculated representations between passages and question-option pairs separately, thereby ignoring the effect of other relation-pairs. In this study, we propose a human reading comprehension attention (HRCA) model and a passage-question-option (PQO) matrix-guided HRCA model called HRCA+ to increase accuracy. The HRCA model updates the information learned from the previous relation-pair to the next relation-pair. HRCA+ utilizes the textual information and the interior relationship between every two parts in a passage, a question, and the corresponding candidate options. Our proposed method outperforms other state-of-the-art methods. On the SemEval-2018 Task 11 dataset, our proposed method improved accuracy levels from 95.8% to 97.2%, and on the DREAM dataset, it improved accuracy levels from 90.4% to 91.6% without extra training data and from 91.8% to 92.6% with extra training data.",Multiple Choice QA (MCQA) "Multiple-choice question answering (MCQA) is a challenging task that requires selecting the correct answer from a set of options based on a given question. 
There is a trend toward using pre-trained encoder-decoder models to solve MCQA. Previous works concentrate on the decoder and use the generated text to enhance model performance. However, few studies have optimized the use of encoders for the characteristics of MCQA. In this work, we propose a dynamic exclusion model for MCQA named ExcMC, which mimics human thinking in selection. It dynamically eliminates several incorrect options to optimize the encoder usage. ExcMC outperforms existing comparable works on two widely-used MCQA datasets, demonstrating the effectiveness of our model.",Multiple Choice QA (MCQA) "A trending paradigm for multiple-choice question answering (MCQA) is using a text-to-text framework. By unifying data in different tasks into a single text-to-text format, it trains a generative encoder-decoder model which is both powerful and universal. However, a side effect of twisting a generation target to fit the classification nature of MCQA is the under-utilization of the decoder and the knowledge that can be decoded. To exploit the generation capability and underlying knowledge of a pre-trained encoder-decoder model, in this paper, we propose a generation-enhanced MCQA model named GenMC. It generates a clue from the question and then leverages the clue to enhance a reader for MCQA. It outperforms text-to-text models on multiple MCQA datasets.",Multiple Choice QA (MCQA) "In the spoken multiple-choice question answering (SMCQA) task, given a passage, a question, and multiple choices all in the form of speech, the machine needs to pick the correct choice to answer the question. A common strategy is to employ an automatic speech recognition (ASR) system to translate speech contents into auto-transcribed text. Therefore, an SMCQA task is reduced to a classic MCQA task. Under this strategy, bidirectional encoder representations from transformers (BERT) can achieve a certain level of performance despite ASR errors. However, previous studies have evidenced that acoustic-level statistics can compensate for text inaccuracies caused by ASR systems, thereby improving the performance of an SMCQA system. Accordingly, we concentrate on designing a BERT-based SMCQA framework, which not only inherits the advantages of contextualized language representations learned by BERT, but also integrates acoustic-level information with text-level information in a systematic and theoretical way. Considering the temporal characteristics of speech, we first formulate multi-turn audio-extracter hierarchical convolutional neural networks (MA-HCNNs), which encode acoustic-level features under various temporal scopes. Based on MA-HCNNs, we propose a multi-turn audio-extracter BERT-based (MA-BERT) framework for the SMCQA task. A series of experiments demonstrates remarkable improvements in accuracy over selected baselines and SOTA systems on a published Chinese SMCQA dataset.",Multiple Choice QA (MCQA) "Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligent systems to understand human language. Multiple-Choice QA (MCQA) is one of the most difficult tasks in MRC because it often requires more advanced reading comprehension skills such as logical reasoning, summarization, and arithmetic operations, compared to the extractive counterpart where answers are usually spans of text within given passages. Moreover, most existing MCQA datasets are small in size, making the task even harder. 
We introduce MMM, a Multi-stage Multi-task learning framework for Multi-choice reading comprehension. Our method involves two sequential stages: a coarse-tuning stage using out-of-domain datasets and a multi-task learning stage using a larger in-domain dataset, which helps the model generalize better with limited data. Furthermore, we propose a novel multi-step attention network (MAN) as the top-level classifier for this task. We demonstrate that MMM significantly advances the state-of-the-art on four representative MCQA datasets.",Multiple Choice QA (MCQA) "Multiple Choice Question Answering (MCQA) is a well-established task in the field of Machine Reading Comprehension (MRC). Its objective is to identify the correct answer from a given set of options, based on the provided background passage and question. Recent advancements in large-scale Pre-trained Language Models (PLMs) have yielded impressive performance in MCQA. However, achieving such performance requires a significant number of training samples, leading to time-consuming and labor-intensive sample acquisition and annotation processes. To overcome the limitation posed by the availability of training samples, this paper explores the potential of leveraging the [Mask] token, which is commonly used in Masked Language Modeling (MLM) during the self-supervised training of PLMs. Specifically, the paper introduces a straightforward yet effective approach called [Mask] based Data Augmentation (MDA). The proposed method involves injecting [Mask] tokens into background passages to create masked versions of the original data. Moreover, a self-evaluator is introduced to regulate the masking process, with the objective of minimizing the negative impact caused by augmentation noise. The effectiveness of the proposed method is empirically validated using various benchmark MCQA datasets. Experimental results demonstrate considerable improvements over the state of the art.",Multiple Choice QA (MCQA) "In this paper, we propose a novel approach to improve the performance of a multiple-choice question answering (MCQA) system using distributed semantic similarity and a classification approach. We mainly focus on science-based MCQs, which are particularly difficult to handle. Our proposed method is based on the hypothesis that, in a distributional semantic model, the correct answer will be more strongly related to the question than the other options are. We use the IJCNLP Shared Task 5 and SciQ datasets for our experiments. We build three models (Model 1, Model 2, and Model 3) based on the dataset format. The basic difference between the IJCNLP Task 5 and SciQ datasets is that the SciQ dataset provides supporting text with each question, whereas the IJCNLP Task 5 dataset does not. Model 1 and Model 2 are built mainly for the IJCNLP Task 5 dataset, whereas Model 3 is built mainly for the SciQ dataset. Model 2 is designed to handle the dependencies between options (i.e., all of these, two of them, none of them), whereas Model 1 is the basic MCQA model and cannot capture such dependencies. We also compare results on the SciQ dataset with supporting text (using Model 3) and without supporting text (using Model 1), and we compare our system with other existing methods. Though in some cases the performance of our proposed method is not satisfactory, our approach is simple and robust, which allows it to be more easily integrated into complex applications. 
This work investigates different techniques for choosing the correct answer to a given question in an MCQA system. These experiments may therefore be useful for improving the performance of current science-based question answering (QA) systems. For the IJCNLP Task 5 dataset, we achieved 44.5% using Model 2 and the PubMed dataset; similarly, for the SciQ dataset we achieved 82.25% using Model 3 and the PubMed dataset.",Multiple Choice QA (MCQA) "Multiple Choice Question Answering (MCQA) aims to automatically choose a correct answer from candidate options when given a passage and a question. Existing approaches generally model attention mechanisms based on whole-passage information or manually tag key sentences for weakly supervised learning, which leads to the models focusing extensively on redundant information and costly manual annotation. In this paper, we approach evidence sentence extraction in an unsupervised way to precisely pinpoint evidence sentences and minimize the impact of redundant information while avoiding costly manual annotations. Specifically, we propose a novel model called Term Similarity-aware Extensive and Intensive Reading (TS-EIR), which dynamically and automatically refines critical information by term similarity. In detail, it intelligently selects sentences more relevant to the question from the passage and deeply extracts features with an enhanced graph convolutional neural network. We apply the proposed TS-EIR to a typical pre-trained language model, BERT, for encoding and evaluate it on the RACE and DREAM benchmarks, which verify that our model achieves substantial performance improvements over the current baseline.",Multiple Choice QA (MCQA) "With transformer-based pre-trained language models, multiple-choice question answering (MCQA) systems can reach a certain level of performance. This study focuses on inheriting the benefits of contextualized language representations acquired by language models and on transferring and sharing information among MCQA datasets. In this work, a method called multi-stage fine-tuning, based on a curriculum learning strategy, is presented; it sequences not the training samples but the source datasets, in a meaningful rather than randomized order. Consequently, an extensive series of experiments over various MCQA datasets shows that the proposed method achieves remarkable performance improvements over classical fine-tuning with the T5 and RoBERTa baselines. Moreover, the experiments are conducted on merged source datasets, and the proposed method achieves improved performance. This study shows that increasing the number of source datasets and even using some small-scale datasets helps build well-generalized models. Moreover, a higher similarity between the source datasets and the target also plays a vital role in performance.",Multiple Choice QA (MCQA) "Multiple-Choice Question Answering (MCQA) is the most challenging area of Machine Reading Comprehension (MRC) and Question Answering (QA), since it not only requires natural language understanding, but also problem-solving techniques. We propose a novel method, Wrong Answer Ensemble (WAE), which can be applied to various MCQA tasks easily. When solving MCQA problems, humans intuitively exclude unlikely options. Mimicking this strategy, we train our model with both a wrong-answer loss and a correct-answer loss to generalize its features and exclude likely but wrong options. 
An experiment on a dialogue-based examination dataset shows the effectiveness of our approach. Our method improves the results on a fine-tuned transformer by 2.7%.",Multiple Choice QA (MCQA) "Multiple-choice question answering (MCQA) is one of the most challenging tasks in machine reading comprehension. The MCQA task requires selecting the most appropriate answer from several relevant options for a given question. In recent years, many works have concentrated on designing models from the perspective of using the information of the question and options at a large granularity level. However, few studies have explored how the model uses the information to find the correct answer at a fine granularity level or a multi-granularity level. This paper proposes a multi-granularity representation enhancement method to exploit information at different granularities. The method introduces large-grained candidate option information into the question to guide the selection of fine-grained critical information and to facilitate information interaction between the answer and the question, which is in line with human reasoning processes. Experimental results show that the method proposed in this paper can effectively improve the accuracy of MCQA tasks without introducing external knowledge.",Multiple Choice QA (MCQA) "Machine Reading (MR) is the art of understanding text by machine, and one of the best tools to evaluate a machine's level of understanding is a Reading Comprehension System (RCS) with Multiple Choice Questions (MCQs). In this paper, we propose a new knowledge representation for understanding the given text, called the Linguistic Knowledge Document (LKD). The LKD is generated from the given comprehension text. Natural Logic is used for generating the LKD. It is like an inference engine that contains all possible inferences for each sentence in the comprehension text. The proposed LKD acts like a human brain for the machine when answering the questions posed by the MCQA system. We use a token-based alignment model to find answers from the LKD. We evaluate our system on the RACE dataset and compare the results with recent methods; the comparison shows that the proposed model outperforms them.",Multiple Choice QA (MCQA) "Interests play an essential role in the process of learning; thus, enriching learners' interests will yield an enhanced experience in MOOCs. Learners interact freely and spontaneously on social media through different forms of user-generated content that contains hidden information revealing their real interests and preferences. In this paper, we aim to identify and extract topical interests from the text content shared by learners on social media to enrich their course preferences in MOOCs. We apply an NLP pipeline and topic modeling techniques to the textual features using three well-known topic models: Latent Dirichlet Allocation, Latent Semantic Analysis, and BERTopic. The results of our experimentation have shown that BERTopic performed better on the scraped dataset.",NLP for Social Media "In the past few years, memes have become a new way of communicating on the Internet. As memes are images with embedded text, they can quickly spread hate, offence, and violence. Classifying memes is very challenging because of their multimodal nature and region-specific interpretation. A shared task was organized to develop models that can identify trolls from multimodal social media memes. 
This work presents a computational model that we developed as part of our participation in the task. Training data comes in two forms: an image with embedded Tamil code-mixed text and an associated caption. We investigated the visual and textual features using CNN, VGG16, Inception, m-BERT, XLM-R, and XLNet. Multimodal features are extracted by combining image (CNN, ResNet50, Inception) and text (Bi-LSTM) features via an early-fusion approach. Results indicate that the textual approach with XLNet achieved the highest weighted f_1-score of 0.58, which enabled our model to secure 3rd place in this task.",NLP for Social Media "During the last decade, the use of social media has exploded. One of the internet's most popular and accessible social media sites is Twitter, where people can post micro-blog messages, called tweets, expressing their views on any subject. Indonesia is one of the countries with the most active social media users. At the same time, Indonesia has experienced a great number of natural disasters because it is located on the Pacific Ring of Fire. Effective disaster management is needed in Indonesia, especially with the help of social media data. Analysis of the big data generated during turbulent and disorganized emergency situations is a perfect fit for effective management. During a disaster, it is vital to make proper decisions to help affected people with their needs. This research investigates methods used in social media analysis, applying a deep learning model to the supervised task of classifying disaster-related and unrelated tweets, and Latent Dirichlet Allocation (LDA) as unsupervised learning to extract detailed categories from the disaster Twitter data. The LSTM model architecture obtained higher accuracy than the CNN. Moreover, LDA topic modeling produced promising results on detailed topics from the datasets, which could provide useful information to help disaster management.",NLP for Social Media "The growing use of social media has led to the development of several machine learning (ML) and natural language processing (NLP) tools to process the unprecedented amount of social media content to make actionable decisions. However, these ML and NLP algorithms have been widely shown to be vulnerable to adversarial attacks. These vulnerabilities allow adversaries to launch a diversified set of adversarial attacks on these algorithms in different applications of social media text processing. In this article, we provide a comprehensive review of the main approaches for adversarial attacks and defenses in the context of social media applications with a particular focus on key challenges and future research directions. In detail, we cover literature on six key applications: 1) rumor detection; 2) satire detection; 3) clickbait and spam identification; 4) hate speech detection; 5) misinformation detection; and 6) sentiment analysis. We then highlight the concurrent and anticipated future research questions and provide recommendations and directions for future work.",NLP for Social Media "User-generated social media data is constantly changing as new trends influence online discussion, causing distribution shift in test data for social media NLP applications. In addition, training data is often subject to change as user data is deleted. Most current NLP systems are static and rely on fixed training data. 
As a result, they are unable to adapt to temporal change – both test distribution shift and deleted training data – without frequent, costly re-training. In this paper, we study temporal adaptation through the task of longitudinal hash-tag prediction and propose a non-parametric technique as a simple but effective solution: non-parametric classifiers use datastores which can be updated, either to adapt to test distribution shift or training data deletion, without re-training. We release a new benchmark dataset comprising 7.13M Tweets from 2021, along with their hashtags, broken into consecutive temporal buckets. We compare parametric neural hash-tag classification and hashtag generation models, which need re-training for adaptation, with a non-parametric, training-free dense retrieval method that returns the nearest neighbor's hashtags based on text embedding distance. In experiments on our longitudinal Twitter dataset we find that dense nearest neighbor retrieval has a relative performance gain of 64.12% over the best parametric baseline on test sets that exhibit distribution shift, without requiring gradient-based re-training. Furthermore, we show that our datastore approach is particularly well-suited to dynamically deleted user data, with negligible computational cost and performance loss. Our novel benchmark dataset and empirical analysis can support future inquiry into the important challenges presented by temporality in the deployment of AI systems on real-world user data.",NLP for Social Media "Recently, emotion analysis has gained increased attention from NLP researchers due to its various applications in opinion mining, e-commerce, comprehensive search, healthcare, personalized recommendations and online education. Developing an intelligent emotion analysis model is challenging in resource-constrained languages like Tamil. Therefore, a shared task was organized to identify the underlying emotion of a given comment expressed in the Tamil language. The paper presents our approach to classifying the textual emotion in Tamil into 11 classes: ambiguous, anger, anticipation, disgust, fear, joy, love, neutral, sadness, surprise and trust. We investigated various machine learning (LR, DT, MNB, SVM), deep learning (CNN, LSTM, BiLSTM) and transformer-based models (Multilingual-BERT, XLM-R). Results reveal that the XLM-R model outperforms all other models, achieving the highest macro f_1-score (0.33).",NLP for Social Media "The surge in internet use to express personal thoughts and beliefs makes it increasingly feasible for the social NLP research community to find and validate associations between social media posts and mental health status. Cross-sectional and longitudinal studies of social media data bring to the fore the importance of real-time responsible AI models for mental health analysis. Aiming to classify the research directions for social computing and tracking advances in the development of machine learning (ML) and deep learning (DL) based models, we propose a comprehensive survey on quantifying mental health on social media. We compose a taxonomy for mental healthcare and highlight recent attempts at examining social well-being with personal writings on social media. We define all the possible research directions for mental healthcare and investigate the thread of handling online social media data for stress, depression and suicide detection in this work. 
The key features of this manuscript are (i) feature extraction and classification, (ii) recent advancements in AI models, (iii) publicly available datasets, and (iv) new frontiers and future research directions. We compile this information to introduce young researchers and academic practitioners to the field of computational intelligence for mental health analysis on social media. In this manuscript, we carry out a quantitative synthesis and a qualitative review of a corpus of over 92 relevant research articles.",NLP for Social Media "Language models pretrained on domain-specific data have been proven to be effective for in-domain natural language processing (NLP) tasks. Our work aims to develop a language model that is effective for NLP tasks on data from diverse social media platforms. We pretrained a language model on Twitter and Reddit posts in English consisting of 929M sequence blocks for 112K steps. We benchmarked our model and 3 transformer-based models (BERT, BERTweet, and RoBERTa) on 40 social media text classification tasks. The results showed that although our model did not perform the best on all of the tasks, it outperformed the baseline model, BERT, on most of the tasks, which illustrates the effectiveness of our model. Also, our work provides some insights into how to improve the efficiency of training PLMs.",NLP for Social Media "In this paper we present TweetNLP, an integrated platform for Natural Language Processing (NLP) in social media. TweetNLP supports a diverse set of NLP tasks, including generic focus areas such as sentiment analysis and named entity recognition, as well as social media-specific tasks such as emoji prediction and offensive language identification. Task-specific systems are powered by reasonably-sized Transformer-based language models specialized on social media text (in particular, Twitter) which can be run without the need for dedicated hardware or cloud services. The main contributions of TweetNLP are: (1) an integrated Python library for a modern toolkit supporting social media analysis using our various task-specific models adapted to the social domain; (2) an interactive online demo for codeless experimentation using our models; and (3) a tutorial covering a wide variety of typical social media applications.",NLP for Social Media "Social media data such as Twitter messages ("tweets") pose a particular challenge to NLP systems because of their short, noisy, and colloquial nature. Tasks such as Named Entity Recognition (NER) and syntactic parsing require highly domain-matched training data for good performance. To date, there is no complete training corpus for both NER and syntactic analysis (e.g., part of speech tagging, dependency parsing) of tweets. While there are some publicly available annotated NLP datasets of tweets, they are only designed for individual tasks. In this study, we aim to create Tweebank-NER, an English NER corpus based on Tweebank V2 (TB2), train state-of-the-art (SOTA) Tweet NLP models on TB2, and release an NLP pipeline called Twitter-Stanza. We annotate named entities in TB2 using Amazon Mechanical Turk and measure the quality of our annotations. We train the Stanza pipeline on TB2 and compare with alternative NLP frameworks (e.g., FLAIR, spaCy) and transformer-based models. The Stanza tokenizer and lemmatizer achieve SOTA performance on TB2, while the Stanza NER tagger, part-of-speech (POS) tagger, and dependency parser achieve competitive performance against non-transformer models. 
The transformer-based models establish a strong baseline on Tweebank-NER and achieve new SOTA performance in POS tagging and dependency parsing on TB2. We release the dataset and make both the Stanza pipeline and BERTweet-based models available "off-the-shelf" for use in future Tweet NLP research.",NLP for Social Media "The idea of "citizen sensing" and "humans as sensors" is crucial for the social Internet of Things, an integral part of cyber–physical–social systems (CPSSs). Social media data, which can be easily collected from the social world, has become a valuable resource for research in many different disciplines, e.g., crisis/disaster assessment, social event detection, or the recent COVID-19 analysis. Useful information, or knowledge derived from social data, could better serve the public if it could be processed and analyzed in more efficient and reliable ways. Advances in deep neural networks have significantly improved the performance of many social media analysis tasks. However, deep learning models typically require a large amount of labeled data for model training, while most CPSS data is not labeled, making it impractical to build effective learning models using traditional approaches. In addition, the current state-of-the-art, pretrained natural language processing (NLP) models do not make use of existing knowledge graphs, thus often leading to unsatisfactory performance in real-world applications. To address these issues, we propose a new zero-shot learning method which makes effective use of existing knowledge graphs for the classification of very large amounts of social text data. Experiments were performed on a large, real-world tweet data set related to COVID-19; the evaluation results show that the proposed method significantly outperforms six baseline models implemented with state-of-the-art deep learning models for NLP.",NLP for Social Media "Twitter is a microblogging service for sending short, public text messages (tweets) that has recently received more attention in the scientific community. In the works of Sasaki et al. (2010) and Earle et al. (2011), the authors explored real-time interaction on Twitter for detecting natural hazards (e.g., earthquakes, typhoons) based on users' tweets. An inherent challenge for such an application is natural language processing (NLP), which basically consists in converting words into numbers (vectors and tensors) in order to make predictions and classifications mathematically/computationally. Recently, advanced computational tools have been made available for dealing with text computationally. In this report we implement NLP machine learning with TensorFlow, an end-to-end open-source platform for machine learning applications, to process and classify events based on files containing only text.",NLP for Social Media "With technological advancements and their reach, social media has become an essential part of our daily lives. Social media platforms allow propagandists to spread propaganda more easily and faster than ever before. Machine learning and natural language processing applications for addressing the problem of propaganda on social media have attracted researchers' attention in recent years. Several techniques and tools have been proposed to counter the propagation of propaganda over social media. This work analyses the trends in recent research studies that address this issue. Our purpose is to conduct a comprehensive literature review of studies focusing on this area. 
We perform meta-analysis, categorization, and classification of several existing scholarly articles to increase understanding of the state of the art in this field.",NLP for Social Media "SocialNLP is an inter-disciplinary area of natural language processing (NLP) and social computing. SocialNLP has three directions: (1) addressing issues in social computing using NLP techniques; (2) solving NLP problems using information from social networks or social media; and (3) handling new problems related to both social computing and natural language processing. The 11th SocialNLP workshop is held at TheWebConf 2023. We accepted nine papers, with an acceptance ratio of 56%. We sincerely thank all authors, program committee members, and workshop chairs for their great contributions and help in this edition of the SocialNLP workshop.",NLP for Social Media "Current urbanization trends are leading to heightened demand for smarter technologies to facilitate a variety of applications in intelligent transportation systems. Automated crowdsensing constitutes a strong base for ITS applications by providing novel and rich data streams regarding congestion tracking and real-time navigation. Along with these well-leveraged data streams, drivers and passengers tend to report traffic information to social media platforms. Despite their abundance, the use of social media data in ITS has been gaining more and more attention. In this article, we develop an automated Natural Language Processing (NLP)-based framework to empower and complement traffic reporting solutions by text mining social media, extracting desired information, and generating alerts and warnings for drivers. We employ a fine-tuned Bidirectional Encoder Representations from Transformers classification model to filter and classify data. Then, we apply a question-answering model to extract the necessary information characterizing each reported incident, such as its location, time of occurrence, and nature. Afterwards, we convert the collected information into alerts to be integrated into personal navigation assistants. Finally, we compare the recently posted incident reports from both official authorities and social media in order to provide more complete incident pictures and suggest some open research directions.",NLP for Social Media "For timely and efficient reactions to disasters, collecting vital and accurate information is essential. In recent decades, social media platforms such as Twitter, Facebook, LinkedIn, and Instagram have become valuable sources of information in times of disaster. However, the reliability, volume, and velocity of information remain a major concern; this is particularly true of information issued from disaster locations. This paper proposes an approach for tracking the location of people in danger during times of disaster. The procedure is based on the Twitter API, using Natural Language Processing and Big Data tools. A number of tweets were analyzed and an accuracy of 86.11% was achieved.",NLP for Social Media "This paper presents our contributions to the MediaEval 2021 task, namely ""WaterMM: Water Quality in Social Multimedia"". The task aims at analyzing social media posts relevant to water quality with a particular focus on aspects like water color, smell, taste, and related illnesses. To this aim, a multimodal dataset containing both textual and visual information along with meta-data is provided. 
Considering the quality and quantity of available content, we mainly focus on textual information by employing three different models individually and jointly in a late-fusion manner. These models include (i) Bidirectional Encoder Representations from Transformers (BERT), (ii) the Robustly Optimized BERT Pre-training Approach (XLM-RoBERTa), and (iii) a custom Long Short-Term Memory (LSTM) model, obtaining overall F1-scores of 0.794, 0.717, and 0.663 on the official test set, respectively. In the fusion scheme, all the models are treated equally and no significant improvement is observed in performance over the best-performing individual model.",NLP for Social Media "Marketing has changed fundamentally in the new millennium. At the same time, sustainable marketing strategies have evolved to meet the challenges of environmental issues. In this study, we examined the trends in sustainable marketing strategies and the role of social media in these. Based on specific keywords aligned with the objective, this study collected 33 published articles from the Scopus database from 1991 to 2022 (2012–2022). The KNIME (Konstanz Information Miner) and VOSviewer tools were deployed to provide detailed classification and prediction of the various trends in sustainable marketing, with a particular focus on the role of social media. The study method applied text mining and latent semantic analysis to predict the latest trends. The top three trends were Green Marketing and Consumer Behavior, Sustainable Social Media Marketing, and Influencer Social Media Marketing Practices. This NLP-based review and the clustering of research directions provide immense value to marketers and policymakers.",NLP for Social Media "From the day the web appeared, the era of long-distance social communication began. However, no one could have imagined that the web would become a vast collection of remarkable services such as social networking. In today's world, staying connected virtually has become a part of human life. People from various age groups spend hours every day on such sites. Although people are genuinely connected through online media, these platforms carry enormous risks with them, for instance digital attacks, which include cyberbullying.",NLP for Social Media "Internet usage has made social media an integral part of our everyday lives. With the aid of the Natural Language Toolkit (NLTK), sentiment analysis refers to the process of identifying and analyzing a piece of writing in order to determine whether its sentiment, opinions, views, and emotions are positive, negative, or neutral towards a specific issue, item, etc. People today depend on social media to stay connected. Users can share their views on Twitter, a widely used communication site. People can write short messages and leave comments. An organization can analyze Twitter sentiments to find out how its image is discussed by individuals. With numerous applications across different domains, there are many methods of sentiment analysis. The two main strategies for analyzing opinions are knowledge-based approaches and machine learning. In this study, the Twitter data was collected from tweets tagged in relation to voting systems. Text mining was used to pre-process the tweets. Then, using term frequency and inverse document frequency, a vector space model was constructed, and sentiment analysis was carried out with Random Forest, Decision Tree, and Logistic Regression classifiers. 
Experiments are discussed and conclusions are drawn.",NLP for Social Media "The recent advances of deep learning have dramatically changed how machine learning, especially in the domain of natural language processing, can be applied to the legal domain. However, this shift to data-driven approaches calls for larger and more diverse datasets, which are nevertheless still few in number, especially in non-English languages. Here we present the first large-scale benchmark of Korean legal AI datasets, LBOX OPEN, that consists of one legal corpus, two classification tasks, two legal judgement prediction (LJP) tasks, and one summarization task. The legal corpus consists of 147k Korean precedents (259M tokens), of which 63k were sentenced in the last 4 years and 96k are from the first and the second level courts in which factual issues are reviewed. The two classification tasks are case name (11.3k) and statute (2.8k) prediction from the factual descriptions of individual cases. The LJP tasks consist of (1) 10.5k criminal examples where the model is asked to predict the fine amount, imprisonment with labor, and imprisonment without labor ranges for the given facts, and (2) 4.7k civil examples where the inputs are facts and a claim for relief and the outputs are the degrees of claim acceptance. The summarization task consists of the Supreme Court precedents and the corresponding summaries (20k). We also release realistic variants of the datasets by extending the domain (1) to infrequent case categories in the case name (31k examples) and statute (17.7k) classification tasks, and (2) to long input sequences in the summarization task (51k). Finally, we release LCUBE, the first Korean legal language model trained on the legal corpus from this study. Given the uniqueness of the Law of South Korea and the diversity of the legal tasks covered in this work, we believe that LBOX OPEN contributes to the multilinguality of global legal research. LBOX OPEN and LCUBE will be publicly available.",NLP for the Legal Domain "Lately, propelled by the phenomenal advances around the transformer architecture, the legal NLP field has enjoyed spectacular growth. To measure progress, well curated and challenging benchmarks are crucial. However, most benchmarks are English-only, and in legal NLP specifically there is no multilingual benchmark available yet. Additionally, many benchmarks are saturated, with the best models clearly outperforming the best humans and achieving near-perfect scores. We survey the legal NLP literature and select 11 datasets covering 24 languages, creating LEXTREME. To provide a fair comparison, we propose two aggregate scores, one based on the datasets and one on the languages. The best baseline (XLM-R large) achieves both a dataset aggregate score and a language aggregate score of 61.3. This indicates that LEXTREME is still very challenging and leaves ample room for improvement. To make it easy for researchers and practitioners to use, we release LEXTREME on huggingface together with all the code required to evaluate models and a public Weights and Biases project with all the runs.",NLP for the Legal Domain "This paper presents the Open Knowledge Extraction (OKE) tools combined with natural language analysis of the sentence in order to enrich the semantics of the legal knowledge extracted from legal text. In particular, the use case is international private law, with specific regard to the Rome I Regulation EC 593/2008, Rome II Regulation EC 864/2007, and Brussels I bis Regulation EU 1215/2012. 
A Knowledge Graph (KG) is built using OKE and Natural Language Processing (NLP) methods jointly with the main ontology design patterns defined for the legal domain (e.g., event, time, role, agent, right, obligation, jurisdiction). Using critical questions highlighted by legal experts in the domain, we have built a question answering tool capable of supporting information retrieval and answering these queries. The system should help the legal expert retrieve the relevant legal information connected with topics, concepts, entities, and normative references in order to support his/her search activities.",NLP for the Legal Domain "Extracting and formalising legal norms from legal documents is a time-consuming and complex procedure. Therefore, automatic methods that can accelerate this process are in high demand. In this paper, we address two major questions related to this problem: (i) what are the challenges in formalising legal documents into a machine-understandable formalism? (ii) to what extent can the state-of-the-art data-driven approaches developed in the Natural Language Processing (NLP) community be used to automate the normative mining process? The results of our experiments indicate that NLP technologies such as relation extraction and semantic parsing are promising research avenues to advance research in this area.",NLP for the Legal Domain "We present JEC-QA, the largest question answering dataset in the legal domain, collected from the National Judicial Examination of China. The examination is a comprehensive evaluation of professional skills for legal practitioners. College students are required to pass the examination to be certified as a lawyer or a judge. The dataset is challenging for existing question answering methods, because both retrieving relevant materials and answering questions require the ability of logical reasoning. Due to the high demand for multiple reasoning abilities when answering legal questions, the state-of-the-art models can only achieve about 28% accuracy on JEC-QA, while skilled humans and unskilled humans can reach 81% and 64% accuracy respectively, which indicates a huge gap between humans and machines on this task. We will release JEC-QA and our baselines to help improve the reasoning ability of machine comprehension models.",NLP for the Legal Domain "BERT has achieved impressive performance in several NLP tasks. However, there has been limited investigation of its adaptation guidelines in specialised domains. Here we focus on the legal domain, where we explore several approaches for applying BERT models to downstream legal tasks, evaluating on multiple datasets. Our findings indicate that the previous guidelines for pre-training and fine-tuning, often blindly followed, do not always generalize well in the legal domain. Thus we propose a systematic investigation of the available strategies when applying BERT in specialised domains. These are: (a) use the original BERT out of the box, (b) adapt BERT by additional pre-training on domain-specific corpora, and (c) pre-train BERT from scratch on domain-specific corpora. We also propose a broader hyper-parameter search space when fine-tuning for downstream tasks and we release LEGAL-BERT, a family of BERT models intended to assist legal NLP research, computational law, and legal technology applications.",NLP for the Legal Domain "Transformer-based models have become the de facto standard in the field of Natural Language Processing (NLP). 
By leveraging large unlabeled text corpora, they enable efficient transfer learning, leading to state-of-the-art results on numerous NLP tasks. Nevertheless, for low-resource languages and highly specialized tasks, transformer models tend to lag behind more classical approaches (e.g. SVM, LSTM) due to the lack of such corpora. In this paper we focus on the legal domain and we introduce a Romanian BERT model pre-trained on a large specialized corpus. Our model outperforms several strong baselines for legal judgement prediction on two different corpora consisting of cases from trials involving banks in Romania.",NLP for the Legal Domain "Natural language processing (NLP) based approaches have recently received attention for the legal systems of several countries. It is of interest to study the wide variety of legal systems that have so far not received any attention. In particular, for the legal system of the Republic of Turkey, codified in Turkish, no works have been published. We first review the state of the art of NLP in law, and then study the problem of predicting verdicts for several different courts, using several different algorithms. This study is much broader than earlier studies in the number of different courts and the variety of algorithms it includes. Therefore it provides a reference point and baseline for further studies in this area. We further hope the scope and systematic nature of this study can set a framework that can be applied to the study of other legal systems. We present novel results on predicting the rulings of the Turkish Constitutional Court and Courts of Appeal, using only fact descriptions, and without seeing the actual rulings. The methods that are utilized are based on Decision Trees (DTs), Random Forests (RFs), Support Vector Machines (SVMs) and state-of-the-art deep learning (DL) methods; specifically Gated Recurrent Units (GRUs), Long Short-Term Memory networks (LSTMs) and bidirectional LSTMs (BiLSTMs), with the integration of an attention mechanism for each model. The prediction results for all algorithms are given in a comparative and detailed manner. We demonstrate that outcomes of the courts of the Turkish legal system can be predicted with high accuracy, especially with deep learning based methods. The presented results exhibit similar performance to earlier work in the literature for other languages and legal systems.",NLP for the Legal Domain "One of the principal tasks of machine learning with major applications is text classification. This paper focuses on the legal domain and, in particular, on the classification of lengthy legal documents. The main challenge that this study addresses is the limitation that current models impose on the length of the input text. In addition, the present paper shows that dividing the text into segments and later combining the resulting embeddings with a BiLSTM architecture to form a single document embedding can improve results. These advancements are achieved by utilising a simpler structure, rather than an increasingly complex one, which is often the case in NLP research. The dataset used in this paper is obtained from an online public database containing lengthy legal documents with highly domain-specific vocabulary; thus, comparing our results with those produced by models on commonly used datasets would be unjustified. 
This work provides the foundation for future work in document classification in the legal field.",NLP for the Legal Domain "Legal Artificial Intelligence (LegalAI) focuses on applying the technology of artificial intelligence, especially natural language processing, to benefit tasks in the legal domain. In recent years, LegalAI has drawn increasing attention rapidly from both AI researchers and legal professionals, as LegalAI is beneficial to the legal system for liberating legal professionals from a maze of paperwork. Legal professionals often think about how to solve tasks with rule-based and symbol-based methods, while NLP researchers concentrate more on data-driven and embedding methods. In this paper, we introduce the history, the current state, and the future directions of research in LegalAI. We illustrate the tasks from the perspectives of legal professionals and NLP researchers and show several representative applications in LegalAI. We conduct experiments and provide an in-depth analysis of the advantages and disadvantages of existing works to explore possible future directions. ",NLP for the Legal Domain "Many specialized domains remain untouched by deep learning, as large labeled datasets require expensive expert annotators. We address this bottleneck within the legal domain by introducing the Contract Understanding Atticus Dataset (CUAD), a new dataset for legal contract review. CUAD was created with dozens of legal experts from The Atticus Project and consists of over 13,000 annotations. The task is to highlight salient portions of a contract that are important for a human to review. We find that Transformer models have nascent performance, but that this performance is strongly influenced by model design and training dataset size. Despite these promising results, there is still substantial room for improvement. As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.",NLP for the Legal Domain "In recent years, there has been an increased interest in the application of Natural Language Processing (NLP) to legal documents. The use of convolutional and recurrent neural networks along with word embedding techniques has presented promising results when applied to textual classification problems, such as sentiment analysis and topic segmentation of documents. This paper proposes the use of NLP techniques for textual classification, with the purpose of categorizing the descriptions of the services provided by the Public Prosecutor's Office of the State of Paraná to the population in one of the areas of law covered by the institution. Our main goal is to automate the process of assigning petitions to their respective areas of law, with a consequent reduction in the costs and time associated with this process, while allowing the allocation of human resources to more complex tasks. In this paper, we compare different approaches to word representations in the aforementioned task, including document-term matrices and a few different word embeddings. With regard to the classification models, we evaluated three different families: linear models, boosted trees, and neural networks. 
The best results were obtained with a combination of Word2Vec trained on a domain-specific corpus and a Recurrent Neural Network (RNN) architecture (more specifically, LSTM), leading to an accuracy of 90% and an F1-score of 85% in the classification of eighteen categories (law areas).",NLP for the Legal Domain "Searching legal texts for relevant information is a complex and expensive activity. The search solutions offered by present-day legal portals are targeted primarily at legal professionals. These solutions are not adequate for requirements analysts whose objective is to extract domain knowledge including stakeholders, rights and duties, and business processes that are relevant to legal requirements. Semantic Web technologies now enable smart search capabilities and can be exploited to help requirements analysts in elaborating legal requirements. In our previous work, we developed an automated framework for extracting semantic metadata from legal texts. In this paper, we investigate the use of our metadata extraction framework as an enabler for smart legal search with a focus on requirements engineering activities. We report on our industrial experience helping the Government of Luxembourg provide an advanced search facility over Luxembourg's Income Tax Law. The experience shows that semantic legal metadata can be successfully exploited for answering requirements engineering-related legal queries. Our results also suggest that our conceptualization of semantic legal metadata can be further improved with new information elements and relations.",NLP for the Legal Domain "Legal technology is currently receiving a lot of attention from various angles. In this contribution we describe the main technical components of a system that is currently under development in the European innovation project Lynx, which includes partners from industry and research. The key contribution of this paper is a workflow manager that enables the flexible orchestration of workflows based on a portfolio of Natural Language Processing and Content Curation services as well as a Multilingual Legal Knowledge Graph that contains semantic information and meaningful references to legal documents. We also describe different use cases with which we experiment and develop prototypical solutions.",NLP for the Legal Domain "Legal-ES is an open source resource kit for legal Spanish. It consists of a large-scale Spanish corpus of open legal texts and different kinds of language models including word embeddings and topic models. The corpus includes over 1000 million words covering a collection of legislative and administrative open-access documents in Spanish from different sources representing international, national and regional entities. The corpus is pre-processed and tokenized using spaCy. For the word embeddings, gensim was used on the collection of tokens, producing a representation space that is especially suited to reflect the inherent characteristics of the legal domain. We also calculate topic models to obtain a convenient tool for understanding the main topics in the corpus and for navigating through the documents by exploiting the semantic similarity among them. We will analyse the time structure of a dynamic topic model to infer changes in the legal production of the Spanish jurisdiction that have occurred over the analysed time frame.",NLP for the Legal Domain "In this paper, we summarize the current state of the field of NLP & Law with a specific focus on recent technical and substantive developments. 
To support our analysis, we construct and analyze a nearly complete corpus of more than six hundred NLP & Law related papers published over the past decade. Our analysis highlights several major trends. Namely, we document an increasing number of papers written, tasks undertaken, and languages covered over the course of the past decade. We observe an increase in the sophistication of the methods that researchers have deployed in this applied context. Slowly but surely, Legal NLP is beginning to match not only the methodological sophistication of general NLP but also the professional standards of data availability and code reproducibility observed within the broader scientific community. We believe all of these trends bode well for the future of the field, but many questions in both the academic and commercial sphere still remain open.",NLP for the Legal Domain "Bidirectional Encoder Representations from Transformers (BERT) has achieved state-of-the-art performance on several text classification tasks, such as GLUE and sentiment analysis. Recent work in the legal domain has started to use BERT on tasks such as legal judgement prediction and violation prediction. A common practice in using BERT is to fine-tune a pre-trained model on a target task and truncate the input texts to the size of the BERT input (e.g., at most 512 tokens). However, due to the unique characteristics of legal documents, it is not clear how to effectively adapt BERT in the legal domain. In this work, we investigate how to deal with long documents and how important it is to pre-train on documents from the same domain as the target task. We conduct experiments on two recent datasets: the ECHR Violation Dataset and the Overruling Task Dataset, which are multi-label and binary classification tasks, respectively. Importantly, the average number of tokens in a document from the ECHR Violation Dataset is more than 1,600, while the documents in the Overruling Task Dataset are shorter (the maximum number of tokens is 204). We thoroughly compare several techniques for adapting BERT to long documents and compare different models pre-trained on the legal and other domains. Our experimental results show that we need to explicitly adapt BERT to handle long documents, as truncation leads to less effective performance. We also found that pre-training on documents that are similar to the target task results in more effective performance in several scenarios.",NLP for the Legal Domain "Legal documents such as contracts contain complex and domain-specific jargon, long and nested sentences, and often numerous details that may be difficult to understand for laypeople without domain expertise. In this paper, we explore the problem of text simplification (TS) in the legal domain. The main challenge is the lack of complex-simple parallel datasets for the legal domain. We investigate some of the existing datasets, methods, and metrics in the TS literature for simplifying legal texts, and perform human evaluation to analyze the gaps. We present some of the challenges involved, and outline a few open questions that need to be addressed for future research in this direction.",NLP for the Legal Domain "Today’s legislation lags behind the needs and practices of the Internet, electronic governance and digital economy in general.
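Referring back to the long-document study above: one common workaround for the 512-token limit is to split a document into overlapping chunks, encode each chunk, and pool the results. The sketch below illustrates that generic idea only; the model choice and pooling strategy are assumptions, not necessarily the adaptation technique evaluated in that paper.

# Hedged sketch: encode a long legal document with a BERT-style model by
# chunking it into overlapping 512-token windows and mean-pooling the [CLS]
# vectors. Generic illustration; model name and pooling choice are assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed_long_document(text, max_len=512, stride=256):
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    cls_id, sep_id = tokenizer.cls_token_id, tokenizer.sep_token_id
    chunk_vectors = []
    start = 0
    while True:
        window = ids[start:start + max_len - 2]          # leave room for [CLS]/[SEP]
        input_ids = torch.tensor([[cls_id] + window + [sep_id]])
        attention_mask = torch.ones_like(input_ids)
        with torch.no_grad():
            out = model(input_ids=input_ids, attention_mask=attention_mask)
        chunk_vectors.append(out.last_hidden_state[:, 0, :])  # [CLS] of this chunk
        if start + max_len - 2 >= len(ids):
            break
        start += stride
    return torch.cat(chunk_vectors, dim=0).mean(dim=0)   # document-level embedding

doc_vector = embed_long_document("The applicant complained under Article 6 ... " * 300)
print(doc_vector.shape)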
It is widely believed that enhancement of the legal system with IT to facilitate law-making and law enforcement can contribute to improving the business and social environment, as well as people’s quality of life. Our paper is a study of the foundations and means for building Legal Knowledge-Based Systems and for the transition to so-called computational law. In particular, we outline the application of indexing technologies for legal and regulatory documents to legal language processing (LLP) and construct a domain ontology for real estate legislation. The implementation of the approach may decrease the number of errors, over-complexities and ambiguities in legal texts, allow automated search for relevant documents, and categorize complicated legal relations. These should save practitioners from spending too much time on routine tasks, simplify decision-making in law enforcement, reduce subjectivity, and ultimately contribute to creating uniform and consistent legislation.",NLP for the Legal Domain "We explore in this study the effects of domain adaptation in NLP using the state-of-the-art pre-trained language model BERT. Using its German pre-trained version and a dataset from OpenLegalData containing over 100,000 German court decisions, we fine-tuned the language model and inserted legal domain vocabulary to create a German Legal BERT model. We evaluate the performance of this model on downstream tasks including classification, regression and similarity. For each task, we compare simple yet robust machine learning methods such as TFIDF and FastText against different BERT models, mainly the Multilingual BERT, the German BERT and our fine-tuned German Legal BERT. For the classification task, the reported results reveal that all models were equally performant. For the regression task, our German Legal BERT model was able to slightly improve over FastText and the other BERT models but was still considerably outperformed by TFIDF. In a within-subject study (N=16), we asked subjects to evaluate the relevance of documents retrieved by similarity compared to a reference case law. Our findings indicate that the German Legal BERT, to a small degree, was able to capture better legal information for comparison. We observed that further fine-tuning a BERT model in the legal domain when the pre-trained language model already included legal data yields marginal gains in performance.",NLP for the Legal Domain "Large pre-trained vision-language models like CLIP have shown great potential in learning representations that are transferable across a wide range of downstream tasks. Different from traditional representation learning, which is based mostly on discretized labels, vision-language pre-training aligns images and texts in a common feature space, which allows zero-shot transfer to a downstream task via prompting, i.e., classification weights are synthesized from natural language describing classes of interest. In this work, we show that a major challenge for deploying such models in practice is prompt engineering, which requires domain expertise and is extremely time-consuming: one needs to spend a significant amount of time on tuning words, since a slight change in wording could have a huge impact on performance. Inspired by recent advances in prompt learning research in natural language processing (NLP), we propose Context Optimization (CoOp), a simple approach specifically for adapting CLIP-like vision-language models for downstream image recognition.
Concretely, CoOp models a prompt’s context words with learnable vectors while the entire set of pre-trained parameters is kept fixed. To handle different image recognition tasks, we provide two implementations of CoOp: unified context and class-specific context. Through extensive experiments on 11 datasets, we demonstrate that CoOp requires as few as one or two shots to beat hand-crafted prompts by a decent margin and is able to gain significant improvements over prompt engineering with more shots, e.g., with 16 shots the average gain is around 15% (with the highest reaching over 45%). Despite being a learning-based approach, CoOp achieves superb domain generalization performance compared with the zero-shot model using hand-crafted prompts.",Prompt Engineering "Paraphrase generation, a crucial task in Natural Language Processing (NLP), is pivotal for the effectiveness of AI chatbots. However, generating high-quality paraphrases that are contextually relevant, semantically equivalent, and linguistically diverse remains a challenge. This paper explores the use of prompt engineering to enhance the paraphrasing capabilities of AI chatbots, specifically focusing on ChatGPT, Bing, and Bard. We introduce a new dataset of 5000 sentences generated by ChatGPT across diverse topics and propose two distinct prompts for paraphrase generation: a direct approach and an engineered prompt. The engineered prompt explicitly instructs the chatbot to generate paraphrases that exhibit lexical diversity, phrasal variations, syntactical differences, fluency, language acceptability, and relevance, while preserving the original meaning. We conduct a comprehensive evaluation of the generated paraphrases using a range of metrics, including BERTScore, STS-B, and METEOR for semantic similarity; ROUGE, BLEU, and GLEU for diversity; and CoLA and Perplexity for language acceptability or fluency. Our findings reveal that the use of the engineered prompt results in higher-quality paraphrases across all three chatbots, demonstrating the potential of prompt engineering as a tool for improving chatbot communication.",Prompt Engineering "This research investigates the application of Large Language Models (LLMs) to augment conversational agents in process mining, aiming to tackle its inherent complexity and diverse skill requirements. While LLM advancements present novel opportunities for conversational process mining, generating efficient outputs is still a hurdle. We propose an innovative approach that amends many issues in existing solutions, informed by prior research on Natural Language Processing (NLP) for conversational agents. Leveraging LLMs, our framework improves both accessibility and agent performance, as demonstrated by experiments on public question sets and data sets. Our research sets the stage for future explorations into LLMs' role in process mining and concludes with propositions for enhancing LLM memory, implementing real-time user testing, and examining diverse data sets.",Prompt Engineering "This paper describes how one researcher learned to overcome artificial intelligence (AI) paralysis and embrace ChatPDF. This freely available AI application uses natural language processing (NLP) to respond to user queries about an uploaded PDF. Researcher insights from experimenting with the AI tool ChatPDF for qualitative data analysis are presented, highlighting the advantages, pitfalls, and application-related considerations.
As a two-phase curiosity experiment, the researcher engaged in a theory-building exercise to explore key concepts for understanding when using ChatPDF to assist researchers in qualitative data analysis. The experiment generated insights about the purposeful use of AI tools that incorporate NLP for analysis and the risks of inaccuracy when researchers are not familiar with the data or skilled in prompt engineering. Insights raise questions about whether ChatPDF is a viable research assistant for qualitative researchers, ethical issues with specific forms of qualitative data, and the potential of AI tools for community and student researchers.",Prompt Engineering "GPT-3 and several other language models (LMs) can effectively address various natural language processing (NLP) tasks, including machine translation and text summarization. Recently, they have also been successfully employed in the business process management (BPM) domain, e.g., for predictive process monitoring and process extraction from text. This, however, typically requires fine-tuning the employed LM, which, among others, necessitates large amounts of suitable training data. A possible solution to this problem is the use of prompt engineering, which leverages pre-trained LMs without fine-tuning them. Recognizing this, we argue that prompt engineering can help bring the capabilities of LMs to BPM research. We use this position paper to develop a research agenda for the use of prompt engineering for BPM research by identifying the associated potentials and challenges.",Prompt Engineering "This paper presents a comprehensive exploration of the evolution of prompt engineering and generation in the field of natural language processing (NLP). Starting from the early language models and information retrieval systems, we trace the key developments that have shaped prompt engineering over the years. The introduction of attention mechanisms in 2015 revolutionized language understanding, leading to advancements in controllability and context-awareness. Subsequent breakthroughs in reinforcement learning techniques further enhanced prompt engineering, addressing issues like exposure bias and biases in generated text. We examine the significant contributions in 2018 and 2019, focusing on fine-tuning strategies, control codes, and template-based generation. The paper also discusses the growing importance of fairness, human-AI collaboration, and low-resource adaptation. In 2020 and 2021, contextual prompting and transfer learning gained prominence, while 2022 and 2023 witnessed the emergence of advanced techniques like unsupervised pre-training and novel reward shaping. Throughout the paper, we reference specific research studies that exemplify the impact of various developments on prompt engineering. The journey of prompt engineering continues, with ethical considerations being paramount for the responsible and inclusive future of AI systems.",Prompt Engineering "Pre-trained models have been shown effective in many code intelligence tasks, such as automatic code summarization and defect prediction. These models are pre-trained on large-scale unlabeled corpus and then fine-tuned in downstream tasks. However, as the inputs to pre-training and downstream tasks are in different forms, it is hard to fully explore the knowledge of pre-trained models. Besides, the performance of fine-tuning strongly relies on the amount of downstream task data, while in practice, the data scarcity scenarios are common. 
Recent studies in the natural language processing (NLP) field show that prompt tuning, a new paradigm for tuning, alleviates the above issues and achieves promising results in various NLP tasks. In prompt tuning, the prompts inserted during tuning provide task-specific knowledge, which is especially beneficial for tasks with relatively scarce data. In this article, we empirically evaluate the usage and effect of prompt tuning in code intelligence tasks. We conduct prompt tuning on the popular pre-trained models CodeBERT and CodeT5 and experiment with four code intelligence tasks including defect prediction, code search, code summarization, and code translation. Our experimental results show that prompt tuning consistently outperforms fine-tuning in all four tasks. In addition, prompt tuning shows great potential in low-resource scenarios, e.g., improving the BLEU scores of fine-tuning by more than 26% on average for code summarization. Our results suggest that instead of fine-tuning, we could adapt prompt tuning for code intelligence tasks to achieve better performance, especially when lacking task-specific data. We also discuss the implications for adapting prompt tuning in code intelligence tasks.",Prompt Engineering "Pre-trained models have shown their power in sequential recommendation. Recently, prompt tuning has been widely explored and verified in NLP pre-training, as it could help to more effectively and efficiently extract useful knowledge from pre-trained models for downstream tasks, especially in cold-start scenarios. However, it is challenging to bring prompt-tuning from NLP to recommendation, since the tokens in recommendation (i.e., items) do not have explicit explainable semantics, and the sequence modeling should be personalized. In this work, we first introduce prompts to recommendation and propose a novel Personalized prompt-based recommendation (PPR) framework for cold-start recommendation.
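For orientation only, the sketch below illustrates the generic soft-prompt mechanism behind the prompt-tuning approaches discussed above: a few learnable "virtual token" embeddings are prepended to the input while the pre-trained backbone stays frozen. The backbone choice and hyperparameters are assumptions; this is neither the CodeBERT/CodeT5 setup nor the PPR implementation described next.

# Hedged sketch of soft prompt tuning: learnable prompt embeddings are
# prepended to the token embeddings of a frozen encoder. Backbone and sizes
# are assumptions for illustration only.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
for p in encoder.parameters():
    p.requires_grad = False  # the pre-trained backbone stays fixed

n_prompt_tokens, hidden = 8, encoder.config.hidden_size
soft_prompt = nn.Parameter(torch.randn(n_prompt_tokens, hidden) * 0.02)

def forward_with_prompt(texts):
    batch = tokenizer(texts, padding=True, return_tensors="pt")
    token_embeds = encoder.get_input_embeddings()(batch["input_ids"])
    prompt = soft_prompt.unsqueeze(0).expand(token_embeds.size(0), -1, -1)
    inputs_embeds = torch.cat([prompt, token_embeds], dim=1)
    prompt_mask = torch.ones(token_embeds.size(0), n_prompt_tokens,
                             dtype=batch["attention_mask"].dtype)
    attention_mask = torch.cat([prompt_mask, batch["attention_mask"]], dim=1)
    out = encoder(inputs_embeds=inputs_embeds, attention_mask=attention_mask)
    return out.last_hidden_state[:, 0, :]  # representation fed to a task head

features = forward_with_prompt(["user clicked item_12 item_98 item_5"])  # toy input
print(features.shape)

optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)  # only the prompt is updated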
Specifically, we build the personalized soft prefix prompt via a prompt generator based on user profiles and enable sufficient training of prompts via prompt-oriented contrastive learning with both prompt- and behavior-based augmentations. We conduct extensive evaluations on various tasks. In both few-shot and zero-shot recommendation, PPR models achieve significant improvements over baselines on various metrics in three large-scale open datasets. We also conduct ablation tests and sparsity analysis for a better understanding of PPR. Moreover, we further verify PPR's universality on different pre-trained models, and conduct explorations on PPR's other promising downstream tasks including cross-domain recommendation and user profile prediction.",Prompt Engineering "Training and evaluating language models increasingly requires the construction of meta-datasets: diverse collections of curated data with clear provenance. Natural language prompting has recently led to improved zero-shot generalization by transforming existing, supervised datasets into a diversity of novel pretraining tasks, highlighting the benefits of meta-dataset curation. While successful in general-domain text, translating these data-centric approaches to biomedical language modeling remains challenging, as labeled biomedical datasets are significantly underrepresented in popular data hubs. To address this challenge, we introduce BigBIO, a community library of 126+ biomedical NLP datasets, currently covering 12 task categories and 10+ languages. BigBIO facilitates reproducible meta-dataset curation via programmatic access to datasets and their metadata, and is compatible with current platforms for prompt engineering and end-to-end few/zero-shot language model evaluation. We discuss our process for task schema harmonization, data auditing, contribution guidelines, and outline two illustrative use cases: zero-shot evaluation of biomedical prompts and large-scale, multi-task learning. BigBIO is an ongoing community effort and is available at https://github.com/bigscience-workshop/biomedical",Prompt Engineering "This paper presents an AI-assisted programming tool called Copilot for Xcode for program composition and design to support human software developers. By seamlessly integrating cloud-based Large Language Models (LLM) with Apple's local development environment, Xcode, this tool enhances productivity and unleashes creativity for software development in the Apple software ecosystem (e.g., iOS apps, macOS). Leveraging advanced natural language processing (NLP) techniques, Copilot for Xcode effectively processes source code tokens and patterns within code repositories, enabling features such as code generation, autocompletion, documentation, and error detection. Software developers can also query and make ""small"" decisions for program composition, some of which can be made simultaneously, and this is facilitated through prompt engineering in a chat interface of Copilot for Xcode. Finally, we present simple case studies as evidence of the effectiveness of utilizing NLP in Xcode to prompt popular LLM services like OpenAI ChatGPT for program composition and design.",Prompt Engineering "Large language models trained on a mixture of NLP tasks that are converted into a text-to-text format using prompts can generalize into novel forms of language and handle novel tasks. A large body of work within prompt engineering attempts to understand the effects of input forms and prompts in achieving superior performance.
We consider an alternative measure and inquire whether the way in which an input is encoded affects social biases promoted in outputs. In this paper, we study T0, a large-scale multi-task text-to-text language model trained using prompt-based learning. We consider two different forms of semantically equivalent inputs: question-answer format and premise-hypothesis format. We use an existing bias benchmark for the former (BBQ) and create the first bias benchmark for natural language inference (BBNLI) with hand-written hypotheses, while also converting each benchmark into the other form. The results on the two benchmarks suggest that, given two different formulations of essentially the same input, T0 acts conspicuously more biased in the question-answering form, which is seen during training, than in the premise-hypothesis form, which is unlike its training examples.",Prompt Engineering "Pre-trained language models (PLMs) have demonstrated significant proficiency in solving a wide range of general natural language processing (NLP) tasks. Researchers have observed a direct correlation between the performance of these models and their sizes. As a result, the sizes of these models have notably expanded in recent years, persuading researchers to adopt the term large language models (LLMs) to characterize the larger-sized PLMs. The size expansion comes with a distinct capability called in-context learning (ICL), which represents a special form of prompting and allows the models to be utilized through the presentation of demonstration examples without modifications to the model parameters. Although ICL is appealing, privacy concerns have become a major obstacle to its widespread usage. Multiple studies have examined the privacy risks linked to ICL and prompting in general, and have devised techniques to alleviate these risks. Thus, there is a necessity to organize these mitigation techniques for the benefit of the community. This survey provides a systematic overview of the privacy protection methods employed during ICL and prompting in general. We review, analyze, and compare different methods under this paradigm. Furthermore, we provide a summary of the resources accessible for the development of these frameworks. Finally, we discuss the limitations of these frameworks and offer a detailed examination of the promising areas that necessitate further exploration.",Prompt Engineering "The expansion of Chinese natural language processing (NLP) has stimulated research in the broader NLP domain. However, existing large language models have limitations in comprehending and reasoning in Chinese. This paper addresses these limitations by enhancing Chinese language models' comprehension and reasoning capabilities while minimizing resource requirements. We propose LLaMA-LoRA, a neural prompt engineering framework that builds upon the LLaMA-13B model and incorporates the Low-Rank Adaptation (LoRA) of Large Language Models technique for refinement. Chain-of-Thought (CoT) prompts are crucial for generating intermediate reasoning chains in language models, but their effectiveness can be limited by isolated language patterns. Erroneous reasoning resulting from conventional prompts negatively impacts model performance. Automatic prompts are introduced to encourage reasoning chain generation and accurate answer inference. Training the model with an extensive corpus of Chinese CoT data enhances its comprehension and reasoning abilities.
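As a generic, from-scratch illustration of the Low-Rank Adaptation idea mentioned above (a frozen weight matrix augmented with a trainable low-rank update), the sketch below shows the mechanism only; the rank, scaling, and layer sizes are assumptions, and this is not the LLaMA-LoRA configuration.

# Minimal illustration of LoRA: the frozen base projection is augmented with
# a trainable low-rank update B @ A. Rank and scaling values are assumptions.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pre-trained weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # only the small A and B matrices receive gradient updates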
The LLaMA-LoRA model demonstrates exceptional performance across numerous Chinese language tasks, surpassing benchmark performance achieved by related language models such as GPT-3.5, Chat-GLM, and OpenAssistant, delivering accurate, comprehensive, and professional answers. The availability of our open-source model code facilitates further research in the field of Chinese text logical reasoning thinking chains.",Prompt Engineering "Despite significant progress in the fields of machine learning and deep learning, there remains a sense of mistrust regarding the use of these models in real-world scenarios. This mistrust can be partly attributed to semantic biases in text, especially within the realm of commercial natural language processing (NLP). In this work, we analyze genre bias in movie reviews using the Word Embedding Association Test (WEAT). We compare bias across foundational transformer models, including BERT, DistilBert, RoBERTa, T5, XLNet, and GPT2, along with traditional approaches like Glove and Word2Vec. Our analysis shows that while the underlying data contains bias, different models exhibit varied bias levels due to their distinct architectures and training objectives. To mitigate bias, we propose a simple yet effective prompt engineering technique. Incorporating prompts led to a noticeable reduction in bias across different genres, with the effect sizes indicating that using prompts decreased bias by approximately 35% on average compared to scenarios without prompts. Our work provides new analysis that sheds light on prompt engineering techniques to address the pressing issue of semantic bias in NLP models. We believe continued research in this direction can lead to more transparent and fair AI systems.",Prompt Engineering "With the emergence of models such as chatGPT and Baidu AI Wenxin Yiyan, the research and application of NLP (Natural Language Processing) is increasingly centered on PLMs (Pretrained Language Models). This marks a new height for current machine learning models. This article first introduces the background of large language models, then introduces pre-training + fine-tuning and the currently popular prompt paradigm among the four major paradigms of NLP. It explains the workflow and function of prompts, with a focus on prompt engineering and prompt structures, and looks ahead to future challenges for prompting.",Prompt Engineering "In recent years, semantic embeddings for text have played a growing role in the field of natural language processing (NLP), and they have shown great potential in real-life applications like search and recommendation systems. Therefore, models for generating semantic text embeddings have received extensive study. State-of-the-art solutions for text embeddings have evolved from traditional methods (like Word2Vec, Glove, etc.) to deep neural network-based solutions (such as LSTM, Transformer, and pre-trained models like BERT and RoBERTa). In addition, frameworks like Sentence Transformer have already lowered the bar of training models for semantic text representation using customized models and datasets. In this paper, we investigated several well-trained models from the Massive Text Embedding Benchmark (MTEB) on the Huggingface website. Inspired by the extensive use of prompt engineering in large language models like Llama or GPT3, we proposed STEP: a novel method that uses prompts to improve the performance of text embeddings on downstream tasks, making it applicable to almost any pre-trained language model for text embeddings.
Moreover, STEP does not need to modify the base model structure. In the experiment, we applied STEP to five pre-trained models chosen from MTEB, and trained and evaluated our approach on two separate datasets; the final results indicated that our approach can improve performance on tasks related to semantic textual similarity.",Prompt Engineering "This paper presents a comprehensive survey and review of financial prompt patterns, exploring the innovative integration of ChatGPT in finance-related tasks. With the rapid evolution of AI and its increasing application in various sectors, the finance industry stands at the forefront of this technological revolution. Our research delves into the myriad ways in which prompt engineering with ChatGPT can enhance financial analyses, risk assessment, investment strategy formulation, and customer service in the finance sector. We systematically categorize and evaluate a wide array of prompt patterns, drawing insights from real-world applications and theoretical frameworks. This survey not only identifies the current state of prompt engineering in finance but also forecasts future trends, challenges, and opportunities. By providing a detailed examination of various prompt designs and their outcomes, this paper aims to serve as a foundational guide for practitioners and researchers seeking to leverage ChatGPT's capabilities for optimized financial decision-making and innovation. The findings underscore the transformative potential of tailored prompts in elevating the accuracy, efficiency, and scope of financial services and strategies.",Prompt Engineering "A major roadblock in the seamless digitization of medical records remains the lack of interoperability of existing records. Extracting relevant medical information required for further treatment planning or even research is a time-consuming, labour-intensive task that consumes much of doctors' valuable time. In this demo paper we present MedPromptExtract, an automated tool that uses a combination of semi-supervised learning, large language models, natural language processing and prompt engineering to convert unstructured medical records into structured data that is amenable to further analysis.",Prompt Engineering "Large language models (LLMs) have shown remarkable capabilities in Natural Language Processing (NLP), especially in domains where labeled data is scarce or expensive, such as the clinical domain. However, to unlock the clinical knowledge hidden in these LLMs, we need to design effective prompts that can guide them to perform specific clinical NLP tasks without any task-specific training data. This is known as in-context learning, which is an art and science that requires understanding the strengths and weaknesses of different LLMs and prompt engineering approaches. In this paper, we present a comprehensive and systematic experimental study on prompt engineering for five clinical NLP tasks: Clinical Sense Disambiguation, Biomedical Evidence Extraction, Coreference Resolution, Medication Status Extraction, and Medication Attribute Extraction. We assessed the prompts proposed in recent literature, including simple prefix, simple cloze, chain of thought, and anticipatory prompts, and introduced two new types of prompts, namely heuristic prompting and ensemble prompting. We evaluated the performance of these prompts on three state-of-the-art LLMs: GPT-3.5, BARD, and LLAMA2.
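To make the contrast that follows concrete, here is a purely illustrative pair of prompt templates for one such task (medication status extraction); the wording, label set, and in-context examples are invented for illustration and are not the templates evaluated in the study above.

# Illustrative only: zero-shot vs. few-shot prompt templates for a clinical
# extraction task. The labels and example notes are assumptions, not the
# study's actual prompts or data.
zero_shot_prompt = (
    "Extract the medication and its status (active, discontinued, or held) "
    "from the clinical note below.\n"
    "Note: {note}\n"
    "Answer:"
)

few_shot_prompt = (
    "Extract the medication and its status (active, discontinued, or held).\n"
    "Note: Patient was told to stop taking lisinopril last week.\n"
    "Answer: lisinopril - discontinued\n"
    "Note: Continue metformin 500 mg twice daily.\n"
    "Answer: metformin - active\n"
    "Note: {note}\n"
    "Answer:"
)

print(zero_shot_prompt.format(note="Hold warfarin prior to surgery."))
print(few_shot_prompt.format(note="Hold warfarin prior to surgery."))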
We also contrast zero-shot prompting with few-shot prompting, and provide novel insights and guidelines for prompt engineering for LLMs in clinical NLP. To the best of our knowledge, this is one of the first works on the empirical evaluation of different prompt engineering approaches for clinical NLP in this era of generative AI, and we hope that it will inspire and inform future research in this area.",Prompt Engineering "Twitter is a social media platform that allows users to share thoughts or information with others for all to see. However, Twitter users often use abbreviations, slang, and incorrect grammar because tweets are limited to 280 characters. Topic detection often suffers from low accuracy; one method that can be used to overcome this problem is feature expansion. Feature expansion on Twitter adds semantic information by expanding the original text so that it resembles a larger document. In that way, feature expansion is used to reduce word mismatches. This study uses GloVe-based feature expansion with Convolutional Neural Network (CNN) and Gated Recurrent Unit (GRU) classification methods. The results show that the topic detection system with the GloVe feature expansion and CNN-GRU hybrid classification has an accuracy of 94.41%.",Acronyms and Abbreviations Detection and Expansion "Abbreviation is a method of word formation that aims to construct the shortened term from the first letters of the initial phrase. Implicit abbreviations frequently cause comprehension difficulties for unprepared readers. In this paper, we propose an efficient ML-based algorithm that identifies abbreviations in Russian texts. The method achieves a ROC AUC score of 0.926 and an F1 score of 0.706, which are competitive with the baselines. Along with the pipeline, we also release what is, to our knowledge, the first Russian dataset relevant to this task.",Acronyms and Abbreviations Detection and Expansion "Expanding abbreviations is an important text normalization technique used for the purpose of either increasing developer comprehension or supporting the application of natural-language-based tools to source code identifiers. This paper closely studies abbreviations and where their expansions occur in different software artifacts. Without abbreviation expansion, developers will spend more time comprehending the code they need to update, and tools analyzing software may obtain weak or non-generalizable results. There are numerous techniques for expanding abbreviations, most of which struggle to reach an average expansion accuracy of 59-62% on general source code identifiers. In this paper, we reveal some characteristics of abbreviations and their expansions through an empirical study of 861 abbreviation-expansion pairs extracted from 5 open-source systems in addition to analyzing previous literature. We use these characteristics to identify how current approaches may be complementary and how their results should be reported in the future to help maximize both our understanding of how they compare with other expansion techniques and their reproducibility.",Acronyms and Abbreviations Detection and Expansion "Pre-processing plays an essential role in disambiguating the meaning of short texts, not only in applications that classify short texts but also for clustering and anomaly detection.
Pre-processing can have a considerable impact on overall system performance; however, it is less explored in the literature in comparison to feature extraction and classification. This paper analyzes twelve different pre-processing techniques on three pre-classified Twitter datasets on hate speech and observes their impact on the classification tasks they support. It also proposes a systematic approach to text pre-processing that applies different pre-processing techniques in order to retain features without information loss. In this paper, two different word-level feature extraction models are used, and the performance of the proposed package is compared with state-of-the-art methods. To validate gains in performance, both traditional and deep learning classifiers are used. The experimental results suggest that some pre-processing techniques negatively impact performance, and these are identified, along with the best-performing combination of pre-processing techniques.",Acronyms and Abbreviations Detection and Expansion "Acronym extraction (AE) is the task of identifying acronyms and their expanded forms in texts, which is necessary for various NLP applications. Despite major progress on this task in recent years, one limitation of existing AE research is that it is limited to the English language and certain domains (i.e., scientific and biomedical). As such, the challenges of AE in other languages and domains remain mainly unexplored. The lack of annotated datasets in multiple languages and domains has been a major obstacle to research in this area. To address this limitation, we propose a new dataset for multilingual multi-domain AE. Specifically, 27,200 sentences in 6 typologically different languages and 2 domains, i.e., Legal and Scientific, are manually annotated for AE. Our extensive experiments on the proposed dataset show that AE in different languages and different learning settings has unique challenges, emphasizing the necessity of further research on multilingual and multi-domain AE.",Acronyms and Abbreviations Detection and Expansion "We present our systems submitted for the shared tasks of Acronym Identification (AI) and Acronym Disambiguation (AD) held under the Workshop on SDU. We mainly experiment with BERT and SciBERT. In addition, we assess the effectiveness of ""BIOless"" tagging and blending, along with the prowess of ensembling, in AI. For AD, we formulate the problem as a span prediction task, experiment with different training techniques and also leverage the use of external data. Our systems rank 11th and 3rd in the AI and AD tasks, respectively.",Acronyms and Abbreviations Detection and Expansion "In this information-accumulating world, each of us must learn continuously. To participate in a new field, or even a sub-field, one must be aware of the terminology, including the acronyms that specialists know so well but newcomers do not. Building on state-of-the-art acronym tools, our end-to-end acronym expander system called AcX takes a document, identifies its acronyms, and suggests expansions that are either found in the document or appropriate given the subject matter of the document. As far as we know, AcX is the first open source and extensible system for acronym expansion that allows mixing and matching of different inference modules. As of now, AcX works for English, French, and Portuguese, with other languages in progress.
This paper describes the design and implementation of AcX, proposes three new acronym expansion benchmarks, compares state-of-the-art techniques on them, and proposes ensemble techniques that improve on any single technique. Finally, the paper evaluates the performance of AcX and the related MadDog system in end-to-end experiments on a new human-annotated dataset of Wikipedia documents. Our experiments show that AcX outperforms MadDog but that human performance is still substantially better than the best automated approaches. Thus, achieving Acronym Expansion at a human level is still a rich and open challenge.",Acronyms and Abbreviations Detection and Expansion "Computational syntactic processing is a fundamental technique in natural language processing. It normally serves as a pre-processing method to transform natural language into structured and normalized texts, yielding syntactic features for downstream task learning. In this work, we present a systematic survey of low-level syntactic processing techniques, namely: microtext normalization, sentence boundary disambiguation, part-of-speech tagging, text chunking, and lemmatization. We summarize and categorize widely used methods in the aforementioned syntactic analysis tasks, investigate the challenges, and identify possible research directions to overcome these challenges in future work.",Acronyms and Abbreviations Detection and Expansion "Typing every character in a text message may require more time or effort than strictly necessary. Skipping spaces or other characters may be able to speed input and reduce a user’s physical input effort. This can be particularly important for people with motor impairments. In a large crowdsourced study, we found workers frequently abbreviated text by omitting mid-word vowels. We designed a recognizer optimized for expanding noisy abbreviated input where users often omit spaces and mid-word vowels. We show that using neural language models for selecting conversational-style training text and for rescoring the recognizer’s n-best sentences improved accuracy. On noisy touchscreen data collected from hundreds of users, we found accurate abbreviated input was possible even if a third of the characters was omitted. Finally, in a study where users had to dwell for a second on each key, sentence-abbreviated input was competitive with a conventional keyboard with word predictions. After practice, users wrote abbreviated sentences at 9.6 words-per-minute versus word input at 9.9 words-per-minute.",Acronyms and Abbreviations Detection and Expansion "Acronyms and abbreviations are the short forms of longer phrases, and they are ubiquitously employed in various types of writing. Despite their usefulness for saving space in writing and readers' time in reading, they also pose challenges for understanding the text, especially if the acronym is not defined in the text or if it is used far from its definition in long texts. To alleviate this issue, there are considerable efforts both from the research community and software developers to build systems for identifying acronyms and finding their correct meanings in the text. However, none of the existing works provide a unified solution capable of processing acronyms in various domains while being publicly available.
Thus, we provide the first web-based acronym identification and disambiguation system, which can process acronyms from various domains, including the scientific, biomedical, and general domains.",Acronyms and Abbreviations Detection and Expansion "Correctly interpreting an ambiguous word in a given context is a critical step for medical natural language processing tasks. Medical word sense disambiguation assumes that all meanings (senses) of an ambiguous word are predetermined in a sense inventory. However, the sense inventory sometimes does not cover all senses or is outdated as new concepts arise in the practice of medicine. Obtaining all word senses is therefore a prerequisite for word sense disambiguation. A classical method for word sense induction is string expansion, a rule-based method that searches the corpus for full forms of an abbreviation or acronym. Yet, it cannot be applied to ambiguous words that are not abbreviations. In this paper, we study methods that can semi-automatically discover word senses from a large-scale medical corpus, regardless of whether the word is an abbreviation. We conducted a comparative evaluation of four unsupervised data-driven methods, including context clustering, two types of word clustering, and sparse coding in word vector space. Overall, sparse coding outperforms the other methods. This demonstrates the feasibility of using sparse coding to discover more complete word senses. By comparing the senses discovered by sparse coding with those in the sense inventory, we observed new word senses. For more than half of the ambiguous words in the MSH WSD data set (a sense inventory maintained by the National Library of Medicine), sparse coding detected more than one new word sense. This result shows an opportunity for enhancing medical word sense inventories with unsupervised data-driven methods.",Acronyms and Abbreviations Detection and Expansion "Abbreviations are broadly used in clinical texts, and most of them have more than one meaning, which makes them highly ambiguous. Determining the right sense of an abbreviation is considered a Word Sense Disambiguation (WSD) task in clinical natural language processing (NLP). Many approaches have been applied to disambiguate abbreviations in clinical narratives. Supervised machine learning approaches in particular have been studied extensively in this field and have proven to perform well on this problem. We have investigated four strategies that integrate pre-trained word embeddings as features to train two supervised machine learning models: Support Vector Machines (SVM) and Naive Bayes (NB). Our training features include information about the context of the target abbreviation, applied to 500 sentences for each of the 13 abbreviations extracted from public clinical notes data sets from the University of Minnesota-affiliated (UMN) Fairview Health Services in the Twin Cities. Our results showed that SVM performs better than NB in all four strategies; the highest accuracy was 97.08%, using a model pre-trained on Wikipedia, PubMed and PMC (PubMed Central) texts.",Acronyms and Abbreviations Detection and Expansion "In the medical domain, user-generated social media text is increasingly used as a valuable complementary knowledge source to scientific medical literature: it contains the unprompted experiences of the patient. Yet, lexical normalization of such data has not been addressed properly.
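Looking back at the clinical abbreviation disambiguation study above, the sketch below illustrates one such feature strategy in generic form: represent an ambiguous abbreviation by the average embedding of its context words and train an SVM over those features. The abbreviation, sentences, and random "embeddings" are toy placeholders, not the UMN data or a real pre-trained model.

# Hedged sketch: context-embedding features + SVM for abbreviation sense
# disambiguation. Random vectors stand in for a real pre-trained embedding
# model; the abbreviation "ra" and its senses are invented examples.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
vocab = ["joint", "pain", "swelling", "morning", "stiffness",
         "echo", "shows", "dilated", "cardiac", "chamber"]
toy_embeddings = {w: rng.normal(size=50) for w in vocab}

def context_vector(sentence, target="ra"):
    # Average the embeddings of in-vocabulary context words around the target.
    words = [w for w in sentence.lower().split() if w != target and w in toy_embeddings]
    if not words:
        return np.zeros(50)
    return np.mean([toy_embeddings[w] for w in words], axis=0)

train_sentences = [
    ("ra with joint pain and morning stiffness", "rheumatoid arthritis"),
    ("swelling of the joint consistent with ra", "rheumatoid arthritis"),
    ("echo shows dilated ra chamber", "right atrium"),
    ("cardiac ultrasound with dilated ra", "right atrium"),
]
X = np.stack([context_vector(s) for s, _ in train_sentences])
y = [sense for _, sense in train_sentences]

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([context_vector("ra pain and stiffness in the joint")]))  # predicted sense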
This paper presents a sequential, unsupervised pipeline for automatic lexical normalization of domain-specific abbreviations and spelling mistakes. This pipeline led to an absolute reduction of out-of-vocabulary terms of 0.82% and 0.78% in two cancer-related forums. Our approach mainly targeted, and thus corrected, medical concepts. Consequently, our pipeline may significantly improve downstream IR tasks.",Acronyms and Abbreviations Detection and Expansion "Communication nowadays often requires expressing ideas in short text. This kind of communication is delivered in various media such as short message service (SMS) messages, Facebook statuses, Twitter posts, chat messages, comments, and other forms of short text. These various kinds of short text are known as microtext. Microtext usually consists of one sentence or less, is written informally, and contains abbreviations, acronyms, emoticons, hashtags, and other such elements. These features of microtext pose a particular challenge for text classification. They cannot be processed directly as in traditional text processing, because this may lead to inaccuracy. Therefore, microtext normalization is required to transform these features into well-written text before applying text processing. This research aims to normalize some of these features, namely abbreviations and acronyms. The normalization applies dictionary-based and longest common subsequence (LCS) approaches to microtext in Bahasa Indonesia. The dictionary-based approach shows excellent performance compared to LCS; however, it is limited to pre-defined abbreviations and acronyms, whereas LCS offers dynamic microtext normalization. The combination of both approaches increases normalization performance slightly.",Acronyms and Abbreviations Detection and Expansion "Most new concepts in both the Russian and English languages are expressed using phrases or compound words, because such complex words make it possible to represent a particular concept with completeness and accuracy. But multicomponent terms (complex words and phrases) are cumbersome; therefore, there is a need to abbreviate them in one way or another. In some cases this leads to the use of short versions of the term in the form of only one main component, while in others, various types of abbreviations are used, which can save time. However, their imprecise or incorrect translation can change or confuse the intended meaning. The paper discusses the differences in using abbreviations and acronyms in British and American scientific texts, as well as the difficulties of their translation and optimal strategies for interlanguage adaptation. The investigation is performed using various research techniques, including a comparative method, a continuous sampling method, semantic structure analysis, and contextual analysis. It is shown that the existing modern classifications of abbreviations greatly differ in the linguistic scientific literature and that lexical units are abbreviated using various methods. It is found that there exist various traditions of their usage in scientific and technical texts. It is demonstrated that various standards for introducing, spelling, and punctuating abbreviations and acronyms in British and American scientific journals pose additional difficulties for translators in the field of science and technology, provoke translation errors, and require the use of normalization and explication as the main strategies for their translation.
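Referring back to the dictionary/LCS normalization approach above, the following sketch shows the LCS idea in isolation: score candidate expansions of a shortened token by how much of the token appears, in order, inside each candidate. The length-normalized scoring rule and the example Indonesian tokens are assumptions made for illustration, not the paper's exact method.

# Small sketch of longest common subsequence (LCS) matching for expanding
# shortened tokens. Candidate lists and the normalization rule are assumed
# examples, not the paper's dictionary or scoring.
def lcs_length(a: str, b: str) -> int:
    # Classic dynamic-programming LCS over characters.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def best_expansion(token, candidates):
    # Normalize by candidate length so very short candidates are not unduly favoured.
    return max(candidates, key=lambda c: lcs_length(token, c) / len(c))

print(best_expansion("yg", ["yang", "juga"]))      # expected: "yang"
print(best_expansion("tdk", ["tidak", "tanduk"]))  # expected: "tidak"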
The paper may be of interest to those who translate scientific texts for a British and American readership.",Acronyms and Abbreviations Detection and Expansion "An acronym is a textual form used to refer to an entity and to stress important concepts. Over the last two decades, many researchers have worked on mining acronym-expansion pairs from plain text and the Web. This is mainly used in language processing, information retrieval, Web search, ontology mapping, question answering, SMS, and social media posting. The number of acronyms grows day by day, and discovering their definitions/expansions is becoming a challenging task because of their diverse characteristics. Manually edited online repositories have acronym-definition pairs, but it is an overwhelming task to update all possible definitions systematically. To extend this support, different approaches have been employed for the automatic detection of acronym definitions from text and Web documents. This paper presents those approaches and also reviews the Web-based methods used for disambiguating and ranking expansions and for finding their popularity scores and context words. The scope for future work in this research area is also discussed.",Acronyms and Abbreviations Detection and Expansion "To parse free-text medical notes into structured data such as disease names, drugs, procedures, and other important medical information, it is first necessary to detect medical entities. It is important for an Electronic Medical Record (EMR) to have structured data with semantic interoperability to serve as a seamless communication platform whenever a patient migrates from one physician to another. However, in free-text notes, medical entities are often expressed using informal abbreviations. An informal abbreviation is a non-standard or undetermined abbreviation, made in diverse writing styles, which may burden the semantic interoperability between EMR systems. Therefore, detection of informal abbreviations is required to tackle this issue. Objectives: We attempt to achieve highly reliable detection of informal abbreviations made in diverse writing styles. Methods: In this study, we apply the Long Short-Term Memory (LSTM) model to detect informal abbreviations in free-text medical notes. Additionally, we use sliding windows to tackle the limited-data issue and a sample generator for the class-imbalance issue, while introducing additional pre-trained features (bag-of-words and word2vec vectors) to the model. Results: The LSTM model was able to detect informal abbreviations with a precision of 93.6%, a recall of 57.6%, and an F1-score of 68.9%. Conclusion: Our method was able to recognize informal abbreviations using a small data set with high precision. The detection can be used to recognize informal abbreviations in real time while the physician is typing and to raise appropriate indicators to confirm the intended meaning of the informal abbreviation, thus increasing semantic interoperability.",Acronyms and Abbreviations Detection and Expansion "The adoption of Electronic Health Record (EHR) and other e-health infrastructures over the years has been characterized by an increase in medical errors. This is primarily a result of the widespread usage of medical acronyms and abbreviations with multiple possible senses (i.e., ambiguous acronyms). The advent of Artificial Intelligence (AI) technology, specifically Natural Language Processing (NLP), has presented a promising avenue for tackling the intricate issue of automatic sense resolution of acronyms.
Notably, the application of Machine Learning (ML) techniques has proven to be highly effective in the development of systems aimed at this objective, garnering significant attention and interest within the research and industry domains in recent years. The significance of automating the resolution of medical acronym senses cannot be overstated, especially in the context of modern healthcare delivery with the widespread use of EHR. However, it is disheartening to note that comprehensive studies examining the global adoption of EHR, assessing the impact of acronym usage on medical errors within EHR systems, and reporting on the latest trends and advancements in ML-based NLP solutions for disambiguating medical acronyms remain severely limited. In this current study, we present a detailed overview of medical error, its origins, unintended effects, and EHR-related errors as a subclass of clinical error. Furthermore, this paper investigates the adoption of EHR systems in developed and developing nations, and the review concludes with an examination of various artificial intelligence techniques, particularly machine learning algorithms, for medical acronym and abbreviation disambiguation in EHRs.",Acronyms and Abbreviations Detection and Expansion "Providing precise definitions of all project-specific terms is a crucial task in requirements engineering. In order to support the glossary building process, many previous tools rely on the assumption that the requirements set has a certain level of quality. Question/problem: Yet, the parallel detection and correction of quality weaknesses in the context of glossary terms is beneficial to requirements definition. In this paper, we focus on detecting uncontrolled usage of abbreviations by identifying abbreviation-expansion pair (AEP) candidates. Principal ideas/results: We compare our feature-based approach (ILLOD) to other similarity measures to detect AEPs. This comparison shows that feature-based methods are more accurate than syntactic and semantic similarity measures. The goal is to extend glossary term extraction (GTE) and synonym clustering with AEP-specific methods. First experiments with a PROMISE data set extended with uncontrolled abbreviations show that ILLOD is able to extract abbreviations as well as match their expansions viably in a real-world setting and is well suited to augment previous term clusters with clusters that combine AEP candidates. Contribution: In this paper, we present ILLOD, a novel feature-based approach to AEP detection, and propose a workflow for its integration into the clustering of glossary term candidates.",Acronyms and Abbreviations Detection and Expansion "Abbreviations often have several distinct meanings, often making their use in text ambiguous. Expanding them to their intended meaning in context is important for Machine Reading tasks such as document search, recommendation and question answering. Existing approaches mostly rely on manually labeled examples of abbreviations and their correct long-forms. Such data sets are costly to create and result in trained models with limited applicability and flexibility. Importantly, most current methods must be subjected to a full empirical evaluation in order to understand their limitations, which is cumbersome in practice. In this paper, we present an entirely unsupervised abbreviation disambiguation method (called UAD) that picks up abbreviation definitions from unstructured text.
Creating distinct tokens per meaning, we learn context representations as word vectors. We demonstrate how to further boost abbreviation disambiguation performance by obtaining better context representations using additional unstructured text. Our method is the first abbreviation disambiguation approach with a transparent model that allows performance analysis without requiring full-scale evaluation, making it highly relevant for real-world deployments. In our thorough empirical evaluation, UAD achieves high performance on large real-world data sets from different domains and outperforms both baseline and state-of-the-art methods. UAD scales well and supports thousands of abbreviations with multiple different meanings within a single model. In order to spur more research into abbreviation disambiguation, we publish a new data set, which we also use in our experiments.",Acronyms and Abbreviations Detection and Expansion "While Transformers have had significant success in paragraph generation, they treat sentences as linear sequences of tokens and often neglect their hierarchical information. Prior work has shown that decomposing the levels of granularity (e.g., word, phrase, or sentence) for input tokens has produced substantial improvements, suggesting the possibility of enhancing Transformers via more fine-grained modeling of granularity. In this work, we propose a continuous decomposition of granularity for neural paraphrase generation (C-DNPG). In order to efficiently incorporate granularity into sentence encoding, C-DNPG introduces a granularity-aware attention (GA-Attention) mechanism which extends multi-head self-attention with: 1) a granularity head that automatically infers the hierarchical structure of a sentence by neurally estimating the granularity level of each input token; and 2) two novel attention masks, namely, granularity resonance and granularity scope, to efficiently encode granularity into attention. Experiments on two benchmarks, Quora question pairs and Twitter URLs, have shown that C-DNPG outperforms baseline models by a remarkable margin and achieves state-of-the-art results in terms of many metrics. Qualitative analysis reveals that C-DNPG indeed captures fine-grained levels of granularity effectively.",Paraphrase and Rephrase Generation "Text style transfer and paraphrasing of texts are actively growing areas of NLP; dozens of methods for solving these tasks have recently been introduced. In both tasks, the system is supposed to generate a text which should be semantically similar to the input text. Therefore, these tasks are dependent on methods of measuring textual semantic similarity. However, it is still unclear which measures are the best for automatically evaluating content preservation between original and generated text. According to our observations, many researchers still use BLEU-like measures, while there exist more advanced measures, including neural-based ones, that significantly outperform classic approaches. The current problem is the lack of a thorough evaluation of the available measures. We close this gap by conducting a large-scale computational study comparing 57 measures based on different principles on 19 annotated datasets. We show that measures based on cross-encoder models outperform alternative approaches in almost all cases.
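As a small, hedged illustration of what scoring content preservation with a cross-encoder looks like in practice: the model name below is an assumption (any STS-style cross-encoder would serve), and this is not the Mutual Implication Score implementation introduced next.

# Hedged sketch: scoring how well a generated text preserves the content of
# the source with a cross-encoder. Model name is an assumed public checkpoint.
from sentence_transformers import CrossEncoder

scorer = CrossEncoder("cross-encoder/stsb-roberta-base")
pairs = [
    ("The law was passed in 2019.", "Parliament approved the act in 2019."),
    ("The law was passed in 2019.", "The weather was cold in 2019."),
]
scores = scorer.predict(pairs)  # higher score = stronger content preservation
for (src, gen), s in zip(pairs, scores):
    print(f"{s:.2f}  {src!r} -> {gen!r}")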
We also introduce the Mutual Implication Score (MIS), a measure that uses the idea of paraphrasing as a bidirectional entailment; it outperforms all other measures on the paraphrase detection task and performs on par with the best measures on the text style transfer task.",Paraphrase and Rephrase Generation "Paraphrase generation aims to rewrite a text with different words while keeping the same meaning. Previous work performs the task based solely on the given dataset while ignoring the availability of external linguistic knowledge. However, it is intuitive that a model can generate more expressive and diverse paraphrases with the help of such knowledge. To fill this gap, we propose the Knowledge-Enhanced Paraphrase Network (KEPN), a transformer-based framework that can leverage external linguistic knowledge to facilitate paraphrase generation. (1) The model integrates synonym information from the external linguistic knowledge into the paraphrase generator, which is used to guide the decision on whether to generate a new word or replace it with a synonym. (2) To locate the synonym pairs more accurately, we adopt an incremental encoding scheme to incorporate position information of each synonym. Besides, a multi-task architecture is designed to help the framework jointly learn the selection of synonym pairs and the generation of expressive paraphrases. Experimental results on both English and Chinese datasets show that our method significantly outperforms the state-of-the-art approaches in terms of both automatic and human evaluation.",Paraphrase and Rephrase Generation "This paper presents a new linguistic resource for the generation of paraphrases in Portuguese, based on the lexicon-grammar framework. The resource components include: (i) a lexicon-grammar based dictionary of 2100 predicate nouns co-occurring with the support verb ser de ‘be of’, such as in ser de uma ajuda inestimável ‘be of invaluable help’; (ii) a lexicon-grammar based dictionary of 6000 predicate nouns co-occurring with the support verb fazer ‘do’ or ‘make’, such as in fazer uma comparação ‘make a comparison’; and (iii) a lexicon-grammar based dictionary of about 5000 human intransitive adjectives co-occurring with the copula verbs ser and/or estar ‘be’, such as in ser simpático ‘be kind’ or estar entusiasmado ‘be enthusiastic’. A set of local grammars explores the properties described in the linguistic resources, enabling a variety of text transformation tasks for paraphrasing applications. The paper highlights the different complementary and synergistic components and integration efforts, and presents some preliminary evaluation results on the inclusion of such resources in the eSPERTo paraphrase generation system.",Paraphrase and Rephrase Generation "Paraphrase generation plays an essential role in natural language processing (NLP), and it has many downstream applications. However, training supervised paraphrase models requires many annotated paraphrase pairs, which are usually costly to obtain. On the other hand, the paraphrases generated by existing unsupervised approaches are usually syntactically similar to the source sentences and are limited in diversity. In this paper, we demonstrate that it is possible to generate syntactically diverse paraphrases without the need for annotated paraphrase pairs. We propose the Syntactically controlled Paraphrase Generator (SynPG), an encoder-decoder based model that learns to disentangle the semantics and the syntax of a sentence from a collection of unannotated texts.
The disentanglement enables SynPG to control the syntax of output paraphrases by manipulating the embedding in the syntactic space. Extensive experiments using automatic metrics and human evaluation show that SynPG performs better syntactic control than unsupervised baselines, while the quality of the generated paraphrases is competitive. We also demonstrate that the performance of SynPG is competitive or even better than supervised models when the unannotated data is large. Finally, we show that the syntactically controlled paraphrases generated by SynPG can be utilized for data augmentation to improve the robustness of NLP models.",Paraphrase and Rephrase Generation "Despite significant progress in text generation models, a serious limitation is their tendency to produce text that is factually inconsistent with information in the input. Recent work has studied whether textual entailment systems can be used to identify factual errors; however, these sentence-level entailment models are trained to solve a different problem than generation filtering and they do not localize which part of a generation is non-factual. In this paper, we propose a new formulation of entailment that decomposes it at the level of dependency arcs. Rather than focusing on aggregate decisions, we instead ask whether the semantic relationship manifested by individual dependency arcs in the generated output is supported by the input. Human judgments on this task are difficult to obtain; we therefore propose a method to automatically create data based on existing entailment or paraphrase corpora. Experiments show that our dependency arc entailment model trained on this data can identify factual inconsistencies in paraphrasing and summarization better than sentence-level methods or those based on question generation, while additionally localizing the erroneous parts of the generation.",Paraphrase and Rephrase Generation "Paraphrase generation reflects the ability to understand the meaning from the language surface form and rephrase it to other expressions. Recent paraphrase generation works have paid attention to unsupervised approaches based on Pre-trained Language Models (PLMs) to avoid heavy reliance on parallel data by utilizing PLMs’ generation ability. However, the generated pairs of existing unsupervised methods are usually weak either in semantic equivalence or expression diversity. In this paper, we present a novel unsupervised paraphrase generation framework called Paraphrase Machine. By employing multi-aspect equivalence constraints and multi-granularity diversifying mechanisms, Paraphrase Machine is able to achieve good semantic equivalence and expressive diversity, producing a high-quality unsupervised paraphrase dataset. Based on this dataset, we train a general paraphrase model, which can be directly applied to rewrite the input sentence of various domains without any fine-tuning, and achieves substantial gains of 9.1% and 3.3% absolutely in BLEU score over previous SOTA on Quora and MSCOCO. By further fine-tuning our model with domain-specific training sets, the improvement can be increased to even 18.0% and 4.6%. 
Most importantly, by applying it to language understanding and generation tasks in the low-resource setting, we demonstrate that our model can serve as a universal data augmentor to boost few-shot performance (e.g., an average 2.0% gain on GLUE).",Paraphrase and Rephrase Generation "Paraphrasing and summarizing with artificial intelligence have brought about tremendous changes in various industries by giving different and efficient suggestions of how sentences can be written in other forms while maintaining the meaning intact for users. Incorporating AI into paraphrasing is deemed useful to a wide range of industries, as sentence recommendations can be tailored to the needs of different industries. The model can be trained with datasets relevant to the industries to tune the outputs according to their respective needs. This quality of machine learning is a boon to people who create content, as it paves the way for automated results with the efficiency of a human. The core idea of artificial intelligence is to mimic the human thinking process, and as computer engineers we have a role in leveraging these models to help the people who need them most. The proposed methodology provides greater efficiency and higher accuracy, adequacy, and diversity than the other methodologies.",Paraphrase and Rephrase Generation "Paraphrase generation using deep learning has been a research hotspot of natural language processing in the past few years. While previous studies tackle the problem from different aspects, the essence of paraphrase generation is to retain the key semantics of the source sentence and rewrite the rest of the content. Inspired by this observation, we propose a novel two-stage model, PGKPR, for paraphrase generation with keyword and part-of-speech reconstruction. The rationale is to simultaneously capture the possible keywords of a source sentence and the relations between them to facilitate the rewriting. In the first stage, we identify the possible keywords using a prediction attribution technique, where the words obtaining higher attribution scores are more likely to be the keywords. In the second stage, we train a transformer-based model via multi-task learning for paraphrase generation. The novel learning tasks are the reconstruction of the keywords and of the part-of-speech tags, respectively, from a perturbed sequence of the source sentence. The learned encodings are then decoded to generate the paraphrase. We conduct experiments on two commonly used datasets, and demonstrate the superior performance of PGKPR over comparative models on multiple evaluation metrics.",Paraphrase and Rephrase Generation "Question paraphrasing aims to restate a given question with different expressions but keep the original meaning. Recent approaches are mostly based on neural networks following a sequence-to-sequence fashion; however, these models tend to generate unpredictable results. To overcome this drawback, we propose a pipeline model based on templates. It follows three steps: a) it identifies the template of the input question, b) retrieves candidate templates, and c) fills the candidate templates with the original topic words.
Experimental results on two self-constructed datasets show that our model outperforms the sequence-to-sequence model by a large margin, and the advantage is more pronounced when the size of the training sample is small.",Paraphrase and Rephrase Generation "Text Simplification improves the readability of sentences through several rewriting transformations, such as lexical paraphrasing, deletion, and splitting. Current simplification systems are predominantly sequence-to-sequence models that are trained end-to-end to perform all these operations simultaneously. However, such systems limit themselves to mostly deleting words and cannot easily adapt to the requirements of different target audiences. In this paper, we propose a novel hybrid approach that leverages linguistically-motivated rules for splitting and deletion, and couples them with a neural paraphrasing model to produce varied rewriting styles. We introduce a new data augmentation method to improve the paraphrasing capability of our model. Through automatic and manual evaluations, we show that our proposed model establishes a new state-of-the-art for the task, paraphrasing more often than the existing systems, and can control the degree of each simplification operation applied to the input texts.",Paraphrase and Rephrase Generation "The problem of generating a set of diverse paraphrase sentences while (1) not compromising the meaning of the original sentence, and (2) imposing diversity in various semantic aspects, such as lexical or syntactic structure, is examined. Existing work on paraphrase generation has focused more on the former, and the latter was trained as a fixed style transfer, such as transferring from positive to negative sentiments, even at the cost of losing semantics. In this work, we consider style transfer as a means of imposing diversity, with a paraphrasing correctness constraint that the target sentence must remain a paraphrase of the original sentence. However, our goal is to maximize the diversity for a set of k generated paraphrases, denoted as the diversified paraphrase (DP) problem. Our key contribution is deciding the style guidance at generation time in the direction of increasing the diversity of the output with respect to previously generated paraphrases. As pre-materializing training data for all style decisions is impractical, we train with biased data, but with debiasing guidance. Compared to state-of-the-art methods, our proposed model can generate more diverse and yet semantically consistent paraphrase sentences. That is, our model, trained with the MSCOCO dataset, achieves the highest embedding scores, .94/.95/.86, similar to state-of-the-art results, but with a lower mBLEU score (more diverse) by 8.73%.",Paraphrase and Rephrase Generation "This paper addresses the quality issues in existing Twitter-based paraphrase datasets, and discusses the necessity of using two separate definitions of paraphrase for identification and generation tasks. We present a new Multi-Topic Paraphrase in Twitter (MultiPIT) corpus that consists of a total of 130k sentence pairs with crowdsourcing (MultiPIT_crowd) and expert (MultiPIT_expert) annotations using two different paraphrase definitions for paraphrase identification, in addition to a multi-reference test set (MultiPIT_NMR) and a large automatically constructed training set (MultiPIT_Auto) for paraphrase generation.
With improved data annotation quality and task-specific paraphrase definition, the best pre-trained language model fine-tuned on our dataset achieves state-of-the-art performance of 84.2 F1 for automatic paraphrase identification. Furthermore, our empirical results also demonstrate that the paraphrase generation models trained on MultiPIT_Auto generate more diverse and high-quality paraphrases compared to their counterparts fine-tuned on other corpora such as Quora, MSCOCO, and ParaNMT.",Paraphrase and Rephrase Generation "Paraphrase generation is a challenging task that involves expressing the meaning of a sentence using synonyms or different phrases, either to achieve variations or a certain stylistic response. Most previous sequence-to-sequence (Seq2Seq) models focus on either generating variations or preserving the content. We mainly address the issue of preserving the content of a sentence while generating diverse paraphrases. In this paper, we propose a novel approach for paraphrase generation using a variational autoencoder (VAE) and a Pointer Generator Network (PGN). The proposed model uses a copy mechanism to control the content transfer, a VAE to introduce variations, and a training technique to restrict the gradient flow for efficient learning. Our evaluations on the QUORA and MS COCO datasets show that our model outperforms the state-of-the-art approaches and the generated paraphrases are highly diverse as well as consistent with their original meaning.",Paraphrase and Rephrase Generation "Paraphrase Generation is one of the most important and challenging tasks in the field of Natural Language Generation. Paraphrasing techniques help to identify or to extract/generate phrases/sentences conveying a similar meaning. The paraphrasing task can be bifurcated into two sub-tasks, namely Paraphrase Identification (PI) and Paraphrase Generation (PG). Most of the existing state-of-the-art systems have the potential to solve only one problem at a time. This paper proposes a light-weight unified model that can simultaneously classify whether a given pair of sentences are paraphrases of each other and can also generate multiple paraphrases given an input sentence. The Paraphrase Generation module aims to generate fluent and semantically similar paraphrases, and the Paraphrase Identification system aims to classify whether a sentence pair are paraphrases of each other or not. The proposed approach uses an amalgamation of data sampling or data variety with a granular fine-tuned Text-To-Text Transfer Transformer (T5) model. This paper proposes a unified approach which aims to solve the problems of Paraphrase Identification and generation by using carefully selected data points and a fine-tuned T5 model. The highlight of this study is that the same light-weight model trained with the objective of Paraphrase Generation can also be used for solving the Paraphrase Identification task. Hence, the proposed system is light-weight in terms of both the model’s size and the data used to train it, which facilitates quick learning of the model without having to compromise on the results. The proposed system is then evaluated against popular evaluation metrics like BLEU (BiLingual Evaluation Understudy), ROUGE (Recall-Oriented Understudy for Gisting Evaluation), METEOR, WER (Word Error Rate), and GLEU (Google-BLEU) for Paraphrase Generation, and classification metrics like accuracy, precision, recall, and F1-score for the Paraphrase Identification system.
The proposed model achieves state-of-the-art results on both the Paraphrase Identification and Paraphrase Generation tasks.",Paraphrase and Rephrase Generation "Paraphrase plays an important role in various Natural Language Processing (NLP) problems, such as question answering, information retrieval, conversation systems, etc. Previous approaches mainly concentrate on producing paraphrases with similar semantics, namely fidelity, while recent ones begin to focus on the diversity of generated paraphrases. However, most of the existing models fail to explicitly emphasize both metrics above. To fill this gap, we propose a submodular optimization-based VAE-transformer model to generate more consistent and diverse phrases. Through extensive experiments on datasets like Quora and Twitter, we demonstrate that our proposed model outperforms state-of-the-art baselines on BLEU, METEOR, TERp and n-distinct grams. Furthermore, through an ablation study, our results suggest that incorporating the VAE and submodular functions could effectively promote fidelity and diversity, respectively.",Paraphrase and Rephrase Generation "Paraphrasing is a useful natural language processing task that can contribute to more diverse generated or translated texts. Natural language inference (NLI) and paraphrasing share some similarities and can benefit from a joint approach. We propose a novel methodology for the extraction of paraphrasing datasets from NLI datasets and for cleaning existing paraphrasing datasets. Our approach is based on bidirectional entailment; namely, if two sentences can be mutually entailed, they are paraphrases. We evaluate our approach using several large pretrained transformer language models in the monolingual and cross-lingual setting. The results show the high quality of the extracted paraphrasing datasets and surprisingly high noise levels in two existing paraphrasing datasets.",Paraphrase and Rephrase Generation "This study explores four methods of generating paraphrases in Malayalam, utilizing resources available for English paraphrasing and pre-trained Neural Machine Translation (NMT) models. We evaluate the resulting paraphrases using both automated metrics, such as BLEU, METEOR, and cosine similarity, and human annotation. Our findings suggest that automated evaluation measures may not be fully appropriate for Malayalam, as they do not consistently align with human judgment. This discrepancy underscores the need for more nuanced paraphrase evaluation approaches, especially for highly agglutinative languages.",Paraphrase and Rephrase Generation "Automatic paraphrase generation plays a key role in many natural language applications. The dominant paraphrase generation models are encoder-decoder neural networks with attention, where the decoder uses the information of the source text while predicting the target text. However, the outputs of these paraphrase models often suffer from the semantic error problem. This problem is caused by the inadequate information available to the decoder. In this work, we introduce a novel neural model to solve this problem, called Collaboration between the Forward and the Backward Decoder. Specifically, the hidden states of the backward decoder are used as supplementary information for the forward decoder. Therefore, the forward decoder can generate more reasonable paraphrase text using target-side future context. Conversely, the backward decoder employs the hidden states of the forward decoder to prevent the semantic error problem.
As two experimental examples show, the proposed model can generate high-quality paraphrases through this collaboration mechanism. The empirical study on two benchmark datasets demonstrates that our model outperforms some baselines and achieves state-of-the-art performance.",Paraphrase and Rephrase Generation "Paraphrasing exists at different granularity levels, such as the lexical, phrasal, and sentential levels. This paper presents the Decomposable Neural Paraphrase Generator (DNPG), a Transformer-based model that can learn and generate paraphrases of a sentence at different levels of granularity in a disentangled way. Specifically, the model is composed of multiple encoders and decoders with different structures, each of which corresponds to a specific granularity. The empirical study shows that the decomposition mechanism of DNPG makes paraphrase generation more interpretable and controllable. Based on DNPG, we further develop an unsupervised domain adaptation method for paraphrase generation. Experimental results show that the proposed model achieves competitive in-domain performance compared to the state-of-the-art neural models, and significantly better performance when adapting to a new domain.",Paraphrase and Rephrase Generation "Popular solutions to Named Entity Recognition (NER) include conditional random fields, sequence-to-sequence models, or utilizing the question-answering framework. However, they are not suitable for nested and overlapping spans with large ontologies and for predicting the position of the entities. To fill this gap, we introduce a new model for the NER task -- an RNN transducer (RNN-T). These models are trained using paired input and output sequences without explicitly specifying the alignment between them, similar to other seq-to-seq models. RNN-T models learn the alignment using a loss function that sums over all alignments. In NER tasks, however, the alignment between words and target labels is available from the human annotations. We propose a fixed alignment RNN-T model that utilizes the given alignment, while preserving the benefits of RNN-Ts such as modeling output dependencies. As a more general case, we also propose a constrained alignment model where users can specify a relaxation of the given input alignment and the model will learn an alignment within the given constraints. In other words, we propose a family of seq-to-seq models which can leverage alignments between input and target sequences when available. Through empirical experiments on a challenging real-world medical NER task with multiple nested ontologies, we demonstrate that our fixed alignment model outperforms the standard RNN-T model, improving F1-score from 0.70 to 0.74.",NER for Nested Entities "Named entity recognition (NER) is a well-studied task in natural language processing. Traditional NER research only deals with flat entities and ignores nested entities. The span-based methods treat entity recognition as a span classification task. Although these methods have the innate ability to handle nested NER, they suffer from high computational cost, ignorance of boundary information, under-utilization of the spans that partially match with entities, and difficulties in long entity recognition. To tackle these issues, we propose a two-stage entity identifier. First, we generate span proposals by filtering and boundary regression on the seed spans to locate the entities, and then label the boundary-adjusted span proposals with the corresponding categories.
Our method effectively utilizes the boundary information of entities and partially matched spans during training. Through boundary regression, entities of any length can theoretically be covered, which improves the ability to recognize long entities. In addition, many low-quality seed spans are filtered out in the first stage, which reduces the time complexity of inference. Experiments on nested NER datasets demonstrate that our proposed method outperforms previous state-of-the-art models.",NER for Nested Entities "Nested entities are observed in many domains due to their compositionality and cannot be easily recognized by the widely-used sequence labeling framework. A natural solution is to treat the task as a span classification problem. To learn better span representations and increase classification performance, it is crucial to effectively integrate heterogeneous factors including inside tokens, boundaries, labels, and related spans which could contribute to nested entity recognition. To fuse these heterogeneous factors, we propose a novel triaffine mechanism including triaffine attention and scoring. Triaffine attention uses boundaries and labels as queries and uses inside tokens and related spans as keys and values for span representations. Triaffine scoring interacts with boundaries and span representations for classification. Experiments show that our proposed method outperforms previous span-based methods, achieves state-of-the-art F1 scores on the nested NER datasets GENIA and KBP2017, and shows comparable results on ACE2004 and ACE2005.",NER for Nested Entities "Named entity recognition (NER) is a well-studied task in natural language processing. However, the widely-used sequence labeling framework usually has difficulty detecting entities with nested structures. The span-based method, which can easily detect nested entities in different subsequences, is naturally suitable for the nested NER problem. However, previous span-based methods have two main issues. First, classifying all subsequences is computationally expensive and very inefficient at inference. Second, the span-based methods mainly focus on learning span representations but lack explicit boundary supervision. To tackle the above two issues, we propose a boundary-enhanced neural span classification model. In addition to classifying the span, we propose incorporating an additional boundary detection task to predict those words that are boundaries of entities. The two tasks are jointly trained under a multitask learning framework, which enhances the span representation with additional boundary supervision. In addition, the boundary detection model has the ability to generate high-quality candidate spans, which greatly reduces the time complexity during inference. Experiments show that our approach outperforms all existing methods and achieves 85.3, 83.9, and 78.3 scores in terms of F1 on the ACE2004, ACE2005, and GENIA datasets, respectively.",NER for Nested Entities "Nested named entities (nested NEs) refer to the situation where one named entity is included or nested within another named entity, which cannot be recognized by the traditional sequence labeling methods. Recently, span-based methods have become the mainstream methods for nested Named Entity Recognition (nested NER). The fundamental concept behind these methods is to enumerate nearly all potential spans as entity mentions and subsequently classify them.
However, span-based methods independently classify spans without considering the semantic relations among them, which negatively impacts the span representation. To address the issue, we propose a novel deep learning architecture for nested NER that explores interactive and contrastive relations among spans. Specifically, we design a scale transformation mechanism that embeds geometric information into span representations, which enhances the model's ability to encode interactive relations between spans. Additionally, we introduce a supervised contrastive learning loss that pulls apart highly overlapping spans in the embedding space to encode the contrastive relations. Experiments show that our method achieves state-of-the-art or competitive performance on three public nested NER datasets, thus validating its effectiveness.",NER for Nested Entities "Named Entity Recognition (NER) is a fundamental problem in natural language processing (NLP). Apart from flat entities, nested entities also commonly exist in real-life textual data. However, the current methods are not capable of handling nested structures for NER effectively. In this paper, we propose a novel segment-driven modeling method (NeurSEG) for the nested NER problem, which can effectively extract entities from the nested structures in complex nesting scenarios. The proposed NeurSEG model first finds the nested label of each word in a sentence and determines the positional relationships between neighbouring words, and then extracts the entities and predicts the corresponding entity types. In addition, we also propose an augmented training method for further improving the performance. We have conducted extensive experiments on both flat and nested NER benchmark datasets. The performance results have shown that our proposed NeurSEG model achieves promising performance while retaining its runtime efficiency for the nested NER task. Moreover, the proposed model has also achieved very competitive results when compared with the existing models for the flat NER task, demonstrating its capability for tackling both nested and flat NER tasks.",NER for Nested Entities "Named entity recognition (NER) is a basic task in natural language processing. Traditionally, sequence labeling methods are applied to named entity recognition and achieve good performance. However, sequence labeling methods cannot be directly applied to recognize nested named entities, where an entity is included in another entity. Recently, some new methods have been proposed for nested named entity recognition. Most of them ignore that entity type information can help recognize entity boundaries, or ignore that entity boundary information can help recognize entity types, which limits the performance of nested NER. Considering the effect of entity type information and entity boundary information, in this paper we propose a multi-agent communication module to utilize these two kinds of information. Our multi-agent communication module contains a type labeling agent and a boundary labeling agent. The type labeling agent can utilize boundary information from the boundary labeling agent to recognize entity types, and the boundary labeling agent can utilize type information from the type labeling agent to recognize entity boundaries. They communicate and collaborate iteratively to finish the entity boundary recognition. Compared with previous methods, with the assistance of entity type information and entity boundary information, the performance of boundary recognition improves.
The improvement of boundary recognition is beneficial to recognizing nested named entities, which improves the performance of nested named entity recognition. Empirical experiments are conducted on three nested NER datasets, and the experimental results show the effectiveness of our model.",NER for Nested Entities "In this paper, we propose a novel bipartite flat-graph network (BiFlaG) for nested named entity recognition (NER), which contains two subgraph modules: a flat NER module for outermost entities and a graph module for all the entities located in inner layers. Bidirectional LSTM (BiLSTM) and graph convolutional network (GCN) are adopted to jointly learn flat entities and their inner dependencies. Different from previous models, which only consider the unidirectional delivery of information from innermost layers to outer ones (or outside-to-inside), our model effectively captures the bidirectional interaction between them. We first use the entities recognized by the flat NER module to construct an entity graph, which is fed to the next graph module. The richer representation learned from the graph module carries the dependencies of inner entities and can be exploited to improve outermost entity predictions. Experimental results on three standard nested NER datasets demonstrate that our BiFlaG outperforms previous state-of-the-art models.",NER for Nested Entities "Named entity recognition (NER) is one of the best studied tasks in natural language processing. However, most approaches are not capable of handling nested structures which are common in many applications. In this paper, we introduce a novel neural network architecture that first merges tokens and/or entities into entities forming nested structures, and then labels each of them independently. Unlike previous work, our merge and label approach predicts real-valued instead of discrete segmentation structures, which allows it to combine word and nested entity embeddings while maintaining differentiability. We evaluate our approach using the ACE 2005 Corpus, where it achieves state-of-the-art F1 of 74.6, further improved with contextual embeddings (BERT) to 82.4, an overall improvement of close to 8 F1 points over previous approaches trained on the same data. Additionally, we compare it against BiLSTM-CRFs, the dominant approach for flat NER structures, demonstrating that its ability to predict nested structures does not impact performance in simpler cases.",NER for Nested Entities "Nested named entity recognition (NER) aims to identify the entity boundaries and recognize categories of the named entities in a complex hierarchical sentence. Some work has been done using character-level, word-level, or lexicon-level based models. However, such research ignores the role of complementary annotations. In this paper, we propose a trigger-based graph neural network (Trigger-GNN) to address nested NER. It obtains the complementary annotation embeddings through entity trigger encoding and semantic matching, and tackles nested entities using an efficient graph message passing architecture, the aggregation-update mode. We posit that using entity triggers as external annotations can add complementary supervision signals over whole sentences. This helps the model to learn and generalize more efficiently and cost-effectively.
Experiments show that the Trigger-GNN consistently outperforms the baselines on four public NER datasets, and it can effectively alleviate the nested NER problem.",NER for Nested Entities "Nested Named Entity Recognition (NER) is an information extraction task that aims to identify entities that may be nested within other entity mentions. Despite the availability of several corpora with nested entities in the Spanish clinical domain, most previous work has overlooked them due to the lack of models and a clear annotation scheme for dealing with the task. To fill this gap, this paper provides an empirical study of straightforward methods for tackling the nested NER task on two Spanish clinical datasets, Clinical Trials and the Chilean Waiting List. We assess the advantages and limitations of two sequence labeling approaches: one based on Multiple LSTM-CRF architectures and another on Joint labeling models. To better understand the differences between these models, we compute task-specific metrics that adequately measure the ability of models to detect nested entities and perform a fine-grained comparison across models. Our experimental results show that employing domain-specific language models trained from scratch significantly improves the performance obtained with strong domain-specific and general-domain baselines, achieving state-of-the-art results on both datasets. Specifically, we obtained F1 scores of 89.21 and 83.16 on Clinical Trials and the Chilean Waiting List, respectively. Interestingly enough, we observe that the task-specific metrics and analysis properly reflect the limitations of the models when recognizing nested entities. Finally, we perform a case study on an aggregated NER dataset created from several clinical corpora in Spanish. We highlight how entity length and the simultaneous recognition of inner and outer entities are the most critical variables for the nested NER task.",NER for Nested Entities "The Open EPPI corpus comprises 151 full-text papers annotated by domain experts for entity mentions, protein-protein interactions (PPIs), and normalisation of entities to publicly available ontologies. The corpus is publicly available at [ANON]. We benchmark recent nested NER and relation extraction models. Results show that, although existing nested NER models achieve good performance on outermost and innermost entity mentions, they struggle with other types of nested mentions. Benchmark results for relation extraction show substantial room for improvement, with precision under 70 and recall around 40 to 52.",NER for Nested Entities "Named entity recognition (NER) is the task of detecting and classifying entity spans in text. When entity spans overlap with each other, the task is called nested NER. Span-based methods have been widely used to tackle nested NER. Most of these methods obtain a score matrix, where each entry corresponds to a span. However, previous work ignores spatial relations in the score matrix. In this paper, we propose using a Convolutional Neural Network (CNN) to model these spatial relations. Despite being simple, experiments on three commonly used nested NER datasets show that our model surpasses several recently proposed methods with the same pre-trained encoders. Further analysis shows that using a CNN can help the model find more nested entities. Besides, we find that different papers use different sentence tokenizations for the three nested NER datasets, which will influence the comparison.
Thus, we release a pre-processing script to facilitate future comparison.",NER for Nested Entities "Named-entity recognition (NER) is one of the primary components in various natural language processing tasks such as relation extraction, information retrieval, question answering, etc. The majority of the research work deals with flat entities. However, it was observed that the entities were often embedded within other entities. Most of the current state-of-the-art models deal with the problem of embedded/nested entity recognition with very complex neural network architectures. In this research work, we proposed to solve the problem of nested named-entity recognition using the transfer-learning approach. For this purpose, different variants of fine-tuned, pretrained, BERT-based language models were used for the problem using the joint-labeling modeling technique. Two nested named-entity-recognition datasets, i.e., GENIA and GermEval 2014, were used for the experiment, with four and two levels of annotation, respectively. Also, the experiments were performed on the JNLPBA dataset, which has flat annotation. The performance of the above models was measured using F1-score metrics, commonly used as the standard metrics to evaluate the performance of named-entity-recognition models. In addition, the performance of the proposed approach was compared with the conditional random field and the Bi-LSTM-CRF model. It was found that the fine-tuned, pretrained, BERT-based models outperformed the other models significantly without requiring any external resources or feature extraction. The results of the proposed models were compared with various other existing approaches. The best-performing BERT-based model achieved F1-scores of 74.38, 85.29, and 80.68 for the GENIA, GermEval 2014, and JNLPBA datasets, respectively. It was found that the transfer learning (i.e., pretrained BERT models after fine-tuning) based approach for the nested named-entity-recognition task could perform well and is a more generalized approach in comparison to many of the existing approaches.",NER for Nested Entities "Nested named entity recognition (nested NER) is a fundamental task in natural language processing. Various span-based methods have been proposed to detect nested entities with span representations. However, span-based methods do not consider the relationship between a span and other entities or phrases, which is helpful in the NER task. Besides, span-based methods have trouble predicting long entities due to limited span enumeration length. To mitigate these issues, we present the Propose-and-Refine Network (PnRNet), a two-stage set prediction network for nested NER. In the propose stage, we use a span-based predictor to generate some coarse entity predictions as entity proposals. In the refine stage, proposals interact with each other, and richer contextual information is incorporated into the proposal representations. The refined proposal representations are used to re-predict entity boundaries and classes. In this way, errors in coarse proposals can be eliminated, and the boundary prediction is no longer constrained by the span enumeration length limitation. Additionally, we build multi-scale sentence representations, which better model the hierarchical structure of sentences and provide richer contextual information than token-level representations. 
Experiments show that PnRNet achieves state-of-the-art performance on four nested NER datasets and one flat NER dataset.",NER for Nested Entities "Named Entity Recognition (NER) is an important task in Natural Language Processing that aims to identify text spans belonging to predefined categories. Traditional NER systems ignore nested entities, which are entities contained in other entity mentions. Although several methods have been proposed to address this case, most of them rely on complex task-specific structures and ignore potentially useful baselines for the task. We argue that this creates an overly optimistic impression of their performance. This paper revisits the Multiple LSTM-CRF (MLC) model, a simple, overlooked, yet powerful approach based on training independent sequence labeling models for each entity type. Extensive experiments with three nested NER corpora show that, despite the simplicity of this model, its performance is better than or at least as good as that of more sophisticated methods. Furthermore, we show that the MLC architecture achieves state-of-the-art results on the Chilean Waiting List corpus by including pre-trained language models. In addition, we implemented an open-source library that computes task-specific metrics for nested NER. The results suggest that metrics used in previous work do not adequately measure the ability of a model to detect nested entities, while our metrics provide new evidence on how existing approaches handle the task.",NER for Nested Entities "In existing research on nested named entity recognition, span-based methods are employed to treat named entity recognition problems as span classification tasks, wherein finetuned pretrained models are utilized to facilitate more efficient recognition of nested entities. However, issues such as domain knowledge deficiency and failure to realize multi-classification are still unsolved. To address these issues, this paper puts forward a Multi-Head Model Based on Knowledge Embedding (MKE), which: (1) introduces domain-specific background knowledge in the form of entity matrices, allowing the background knowledge to be embedded without any loss, and (2) transforms named entity recognition into a multi-head selection process, followed by scoring the candidate spans using the attention score model. This method realizes entity multi-classification while also ensuring accurate identification of nested entity boundaries. The experimental results show that the background knowledge embedding realized by the entity matrix method not only elevates the recognition accuracy but also achieves state-of-the-art performance on seven nested and flat named entity recognition datasets.",NER for Nested Entities "Nested entities commonly exist in news articles and biomedical corpora. Nested NER remains a great challenge in the field of named entity recognition (NER). Unlike the structural models in previous work, this paper presents a comprehensive study of nested NER by means of text-of-interest (ToI) detection. We present a novel ToI-CNN with dual transformer encoders (ToI-CNN + DTE) model for this purpose. We design a directional self-attention mechanism to encode contextual representations over the whole sentence in the forward and backward directions. The features of the entities are extracted from the contextual token representations by a convolutional neural network.
Moreover, we use the HAT pooling operation to convert the variable-length ToIs to a fixed-length vector, which is connected to a fully connected network for classification. The layer where the nested entities are located can be evaluated by multi-task learning jointly with layer classification. The experimental results show that our model achieves excellent performance in F1 score, training cost, and layer evaluation on the nested NER datasets.",NER for Nested Entities "With AI techniques, contract analysis can significantly ease the work for humans. This paper addresses a lengthy nested NER problem, element tagging on insurance policies (ETIP). Compared to NER, ETIP deals not only with different types of entities, which vary from a short phrase to a long sentence, but also with phrase or clause entities that could be nested. We present a novel hybrid framework of deep learning and heuristic filtering to recognize the lengthy nested elements. First, a convolutional neural network is constructed to obtain good initial candidates of sliding windows with high softmax probability. Then, the concatenation operator on adjacent candidate segments is introduced to create phrase, clause, or sentence candidates. We design an effective voting strategy to resolve the classification conflict of the concatenated candidates and present a theoretical proof of F1-score optimization. In experiments, we have collected a large Chinese insurance contract dataset to test the performance of the proposed method. An extensive set of experiments is performed to investigate how sliding window candidates can work effectively in our filtering and voting strategy. The optimal parameters are determined by statistical analysis of the experimental data. The results show the promising performance of our method on the ETIP problem.",NER for Nested Entities "Named entity recognition (NER) is a widely studied task in natural language processing. Recently, a growing number of studies have focused on nested NER. Span-based methods treat named entity recognition as a span classification task and can deal with nested entities naturally. However, they suffer from the class imbalance problem, because non-entity spans account for the majority of all spans. To address this issue, we propose a two-stage model for nested NER. We utilize an entity proposal module to filter out easy non-entity spans for efficient training. In addition, we combine all variants of the model to improve the overall accuracy of our system. Our method achieves 1st place on the Vietnamese NER shared task at the 8th International Workshop on Vietnamese Language and Speech Processing (VLSP) with an F1-score of 62.71 on the private test dataset.",NER for Nested Entities
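Editorial note: several of the nested-NER abstracts collected above (the two-stage entity identifier, the triaffine mechanism, the boundary-enhanced classifier, PnRNet, and the CNN-over-score-matrix model) start from the same span-based baseline: enumerate candidate spans up to a maximum width and classify each one independently, so overlapping (nested) entities are handled naturally. The sketch below is only an illustration of that shared baseline, not code from any of the cited papers; the encoder hand-off, the start/end/width span representation, the label inventory, and the maximum span length are all assumptions made for the example.

import torch
import torch.nn as nn

class SpanClassifier(nn.Module):
    """Minimal span-enumeration classifier: scores every span up to max_span_len."""

    def __init__(self, hidden_dim: int, num_labels: int, max_span_len: int = 8):
        super().__init__()
        self.max_span_len = max_span_len
        self.width_emb = nn.Embedding(max_span_len, 25)            # span-width feature (assumed size)
        self.scorer = nn.Linear(2 * hidden_dim + 25, num_labels)   # num_labels includes a "non-entity" class

    def forward(self, token_reprs: torch.Tensor):
        """token_reprs: (seq_len, hidden_dim) contextual token embeddings from any encoder."""
        seq_len = token_reprs.size(0)
        spans, feats = [], []
        # Enumerate every span up to max_span_len; nested entities are covered
        # because overlapping spans are scored independently of each other.
        for start in range(seq_len):
            for end in range(start, min(start + self.max_span_len, seq_len)):
                width = torch.tensor(end - start)
                feats.append(torch.cat([token_reprs[start],        # boundary (start) representation
                                        token_reprs[end],          # boundary (end) representation
                                        self.width_emb(width)]))   # width embedding
                spans.append((start, end))
        logits = self.scorer(torch.stack(feats))                   # (num_spans, num_labels)
        return spans, logits

if __name__ == "__main__":
    # Usage with random stand-in "encoder outputs" (in practice: BERT/BiLSTM token representations).
    torch.manual_seed(0)
    model = SpanClassifier(hidden_dim=64, num_labels=5)
    spans, logits = model(torch.randn(12, 64))
    preds = logits.argmax(dim=-1)
    # Keep spans not predicted as the non-entity class (label 0 by convention in this sketch).
    entities = [(s, e, int(l)) for (s, e), l in zip(spans, preds) if int(l) != 0]
    print(entities)

The refinements described in the abstracts (boundary regression and filtering, triaffine attention, proposal refinement, or a CNN over the span score matrix) can all be read as ways of adding interaction or supervision on top of this independent per-span scoring.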