Dataset Viewer
Auto-converted to Parquet
Columns: Abstracts (string, lengths 379–1.97k) · Class (string, 21 classes)
Sign language is one of the oldest and most natural forms of language for communication, but since most people do not know sign language and interpreters are very difficult to come by, we have come up with a real-time method using neural networks for fingerspelling-based Indian Sign Language. We collected a dataset of depth-based segmented RGB images for classifying 36 different gestures (alphabets and numerals). The system takes a hand gesture as input and returns the corresponding recognized character as output in real time on the monitor screen. For classification we used a Convolutional Neural Network. Our method provides 95.7% accuracy for the 36 hand gestures.
Sign Language and Fingerspelling Recognition
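A minimal sketch of the kind of CNN classifier the abstract above describes, assuming 64x64 RGB inputs and 36 classes (26 letters plus 10 numerals); the layer sizes are illustrative, not the authors' exact architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_gesture_cnn(num_classes=36):
    # Two conv/pool stages followed by a small dense head; softmax over
    # the 36 alphabet and numeral gestures.
    model = models.Sequential([
        layers.Input(shape=(64, 64, 3)),       # depth-segmented RGB crop
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_gesture_cnn()
```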
Sign language recognition is one of the most challenging tasks of today's era. Most researchers working in this domain have focused on different types of implementations for sign recognition, which require the development of smart prototypes for capturing and classifying sign gestures. Keeping in mind the aspects of prototype design, sensor-based, vision-based, and hybrid prototypes have been designed. The authors of this paper designed sensor-based assistive gloves to capture signs for the alphabet and digits. These signs are a small but important fraction of the ASL dictionary, since they play an essential role in fingerspelling, a universal signed linguistic strategy for expressing personal names, technical terms, gaps in the lexicon, and emphasis. A scaled-conjugate-gradient-based backpropagation algorithm is used to train a fully connected neural network on a self-collected dataset of isolated static postures of digits, alphabetic, and alphanumeric characters. The authors also analyzed the impact of activation functions on the performance of the neural networks. Successful implementation of the recognition network produced promising results for this small dataset of static gestures of digits, alphabetic, and alphanumeric characters.
Sign Language and Fingerspelling Recognition
Sign language users tend to be socially restricted due to the general population's lack of knowledge of sign language. Some attempts have been made to develop technologies that improve this aspect by translating sign language. However, these approaches generally use a third-person camera for collecting the information, limiting sign users to environments prepared for this purpose. In this study, we develop a first-person-view Japanese fingerspelling recognition system using an Optical See-Through Head Mount Display (OSTHMD). The system estimates the hand posture from the camera mounted on the OSTHMD and applies machine learning to the hand posture data to classify the hand gestures and convert them into speech. 37 Japanese Sign Language fingerspelling signs were successfully recognized using a Microsoft pose extractor. Next, using a support vector machine, 37 out of 53 Japanese Sign Language fingerspelling signs were successfully identified with more than a 70% identification rate. Finally, the identified labels were converted into speech using a speech output module with the Azure API. The main purpose of this research is to propose a system that enables sign language users to communicate with hearing people without environmental restrictions.
Sign Language and Fingerspelling Recognition
Although not a global language, sign language is an essential tool for the deaf community. Communication between this community and the hearing population is severely hampered, as human-based interpretation can be both costly and time-consuming. In this paper, we present a real-time American Sign Language (ASL) generation and recognition system that makes use of deep learning and Convolutional Neural Networks (CNNs). Despite differences in lighting, skin tones, and backdrops, our technology is capable of correctly identifying and generating ASL signs. We trained our model on a large dataset of ASL signs in order to obtain a high level of accuracy. Our findings show that our system achieves high accuracy rates in both training and validation, at 98.53% and 98.84%, respectively. Our approach uses the advantages of CNNs to accomplish quick and precise recognition of individual letters and words, making it particularly effective for fingerspelling recognition. We believe that our technology has the ability to transform communication between the hearing community and the deaf and hard-of-hearing communities by providing a dependable and cost-effective means of sign language interpretation. Our method could help people who use sign language communicate more easily and live better in a range of environments, including schools, hospitals, and public places.
Sign Language and Fingerspelling Recognition
Sign Language Recognition (SLR) is a Computer Vision (CV) and Machine Learning (ML) task, with potential applications that would be beneficial to the Deaf community, which includes not only deaf persons but also hearing people who use Sign Languages. SLR is particularly challenging due to the lack of training datasets for CV and ML models, which impacts their overall accuracy and robustness. In this paper, we explore the use of synthetic images to augment a dataset of fingerspelling signs and we evaluate whether this could be used to reliably increase the performance of an SLR system. Our model is based on a pretrained convolutional network, fine-tuned using synthetic images, and tested using a corpus dataset of real recordings of native signers. An accuracy of 71% recognition was achieved using skeletal wireframe image training datasets and using the MediaPipe pose estimation model in the test pipeline. This compares favourably with state-of-the-art CV models which achieve up to 62% accuracy with "in-the-wild" fingerspelling test datasets.
Sign Language and Fingerspelling Recognition
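A sketch of the test-pipeline idea above: extract hand landmarks with MediaPipe and render them as a skeletal wireframe image that a classifier trained on synthetic wireframes could consume. The canvas size and drawing style are assumptions, not the paper's exact setup.

```python
import cv2
import numpy as np
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

def to_wireframe(bgr_image, size=224):
    """Return a skeletal wireframe image for the first detected hand,
    or None if no hand is found."""
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        results = hands.process(cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    canvas = np.zeros((size, size, 3), dtype=np.uint8)  # blank background
    mp_draw.draw_landmarks(canvas, results.multi_hand_landmarks[0],
                           mp_hands.HAND_CONNECTIONS)
    return canvas
```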
Sign language is a method of communication using hand gestures, usually used by Deaf people. In Indonesia, there are two types of sign language, namely SIBI and BISINDO; in everyday life, BISINDO is more often used. Communication gaps often occur between Deaf people and hearing people, so media are needed that can bridge their communication. One technology that can be used is SLR (Sign Language Recognition). SLR has various kinds of approaches, one of which is vision-based SLR. Vision-based SLR has the advantage of not requiring a special device attached to the hand; gestures are simply made with bare hands in front of the camera. In this study, we created a machine learning model with a vision-based SLR approach. The model uses the CNN (Convolutional Neural Network) architecture and was trained and tested on a BISINDO alphabet (A-Z) dataset that we created ourselves. This model achieves 99.28% validation accuracy, 98.57% testing accuracy, and 98.07% real-time testing accuracy.
Sign Language and Fingerspelling Recognition
The goal of sign language technologies is to develop a bridging solution for the communication gap between the hearing-impaired community and the rest of society. Real-time Sign Language Recognition (SLR) is a state-of-the-art subject that promises to facilitate communication between the hearing-impaired community and others. Our research uses transfer learning to provide vision-based sign language recognition. We investigated recent works that use CNN-based methods and provided a literature review on deep learning systems for the sign language recognition (SLR) problem. This paper discusses the architecture of deep learning methods for SLR systems and explains a transfer learning application for fingerspelling sign classification. For the experiments, we used the Azerbaijani Sign Language Fingerspelling dataset and achieved 88.0% accuracy.
Sign Language and Fingerspelling Recognition
The recent development of disability studies in academic bodies has expedited the promotion of research on disability. With computer-aided tools, communication between an impaired person and someone who does not understand sign language can be made accessible. A large number of people across the world use sign languages (e.g., British Sign Language (BSL), American Sign Language (ASL), Indian Sign Language (ISL), etc.) with hand gestures for communication. In BSL recognition, the involvement of both hands overlapping each other is the main challenge. Moreover, BSL comprises signs that are ambiguous with respect to viewpoint. Existing traditional techniques tend to be unstable, less accurate, and inefficient. In this work, the BSL fingerspelling alphabet recognition problem is explored using a deep learning framework to address the above-mentioned concerns. A Convolutional Neural Network (CNN) is employed to detect and classify the 26 alphabet letters after being trained on the BSL corpus dataset. The proposed work outperforms existing works with better precision (6%), recall (4%), and F-measure (5%). It reported better results on the BSL corpus dataset and webcam videos. The model achieved better accuracy (98.0%) for a large lexicon of words than previous models (Goh & Holden [6]: 69.5%, Rambhau [9]: 79.2%, and Liwicki et al. [8]: 92.5%). The 3D-CNN-based proposal offers robust hand detection, more accurate sign recognition, better scalability, and less ambiguity in BSL fingerspelling recognition.
Sign Language and Fingerspelling Recognition
Sign language is used by deaf and hard-of-hearing people to exchange information within their own community and with other people. Fingerspelling recognition from isolated sign language has attracted research interest in computer vision and human-computer interaction. The need for real-time recognition of isolated sign language has grown with the emergence of better capturing devices such as Kinect sensors. The purpose of this paper is to design a user-independent framework for automatic recognition of American Sign Language that can recognize several one-handed dynamic isolated signs and interpret their meaning. As one contribution, we built datasets as raw data for alphabets (A-Z) and numbers (1-20) using the left-hand 3D centroid point (XL, YL, ZL), or switching to the right hand (XR, YR, ZR). The proposed approach was tested on gestures that involve the left or right hand, was compared with other approaches, and gave better accuracy. Two machine learning methods are used for the classification part: Hidden Conditional Random Field (HCRF) and Random Decision Forest (RDF). A third contribution addresses low-lighting conditions and cluttered backgrounds. This research work achieves recognition accuracy of over 99.7%.
Sign Language and Fingerspelling Recognition
Sign Language Recognition (SLR) is a complex gesture recognition problem because of the quick and highly coarticulated motion involved in gestures. This research work focuses on the fingerspelling recognition task, which constitutes 35% of American Sign Language (ASL). Fingerspelling spells out a word letter by letter and is used for signing words that do not have designated ASL signs, such as technical terms, content words, and proper nouns. In our proposed work on ASL fingerspelling recognition, we consider the ChicagoFSWild dataset, which consists of occlusions and images captured under varying illumination and lighting conditions (in-the-wild environments). The optical flow is obtained from the Lucas-Kanade algorithm, a prior is generated, and images are resized and cropped with a face-ROI technique to get the region of interest (ROI). The visual attention mechanism attends to the ROI iteratively. A ResNet pretrained on ImageNet is used for the extraction of spatial features. A Bi-LSTM network with Connectionist Temporal Classification (CTC) is used to predict the sign. It achieves an accuracy of 57% on the ChicagoFSWild dataset for the fingerspelling recognition task.
Sign Language and Fingerspelling Recognition
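A hedged sketch of the recognition head described above: per-frame visual features fed to a Bi-LSTM with a CTC loss over the fingerspelling alphabet. Feature extraction (the ResNet) and the attention mechanism are omitted, and the sizes are assumptions.

```python
import torch
import torch.nn as nn

class FingerspellingHead(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, num_letters=26):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True,
                           bidirectional=True)
        # +1 output class for the CTC blank symbol (index 0)
        self.proj = nn.Linear(2 * hidden, num_letters + 1)

    def forward(self, feats):                 # feats: (batch, time, feat_dim)
        out, _ = self.rnn(feats)
        return self.proj(out).log_softmax(-1)  # (batch, time, classes)

model = FingerspellingHead()
ctc = nn.CTCLoss(blank=0)
feats = torch.randn(2, 40, 512)               # 2 clips, 40 frames each
log_probs = model(feats).permute(1, 0, 2)     # CTC expects (T, N, C)
targets = torch.randint(1, 27, (2, 8))        # letter indices 1..26
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), 40),
           target_lengths=torch.full((2,), 8))
```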
Natural Language Processing (NLP) is a vital field of artificial intelligence that automates the study of human language. However, for Malay manuscripts (MM) written in old Jawi, exposure to this field is limited. Besides, most studies related to MM and NLP have focused on rule-based approaches or rule-based machine transliteration (RBMT). Hence, the objective of this study is to propose a statistical approach for old-Jawi-to-modern-Jawi transliteration of Malay manuscript contents using Phrase-Based Statistical Machine Translation (PBSMT) as its model. To achieve this, the Word Error Rate (WER) quality score was computed on the transliteration output. Besides, issues formerly encountered by the rule-based approach, such as vowel limitation, homographs, reduplication, letter errors, and combinations of multiple words, were observed in the implementation. Moreover, this paper adopts an exploratory approach as its research strategy and a mixed method as its research method. The data for the analysis were extracted from a MM titled Bidāyat al-Mubtadī bi-Fālillah al-Muhdī. The WER quality score was computed for the evaluation of the SMT output. Afterwards, related issues were identified and assessed. The research found that the quality score of PBSMT for old-Jawi-to-modern-Jawi transliteration was high in terms of WER; the issues of the rule-based approach were generally addressed by PBSMT, except homographs. The research is, however, limited to PBSMT as its sole SMT model. Moreover, the corpus size was limited to one manuscript, while SMT relies on corpus size. Nevertheless, the research contributes to wider coverage of Malay, one of the under-resourced languages in NLP, in the form of old and modern Jawi. Besides, to the best of the researcher's knowledge, it is also the first to apply an SMT (PBSMT) approach to old Jawi transliteration. Most importantly, the study contributes to MM studies.
Rule-based MT (RBMT)
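A worked example of the WER metric used above: word-level edit distance (substitutions, insertions, and deletions) divided by the reference length.

```python
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

print(wer("the old jawi text", "the olde jawi text"))  # 0.25
```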
This paper presents a comparison of post-editing (PE) changes performed on English-to-Finnish neural (NMT), rule-based (RBMT) and statistical machine translation (SMT) output, combining a product-based and a process-based approach. A total of 33 translation students acted as participants in a PE experiment providing both post-edited texts and edit process data. Our product-based analysis of the post-edited texts shows statistically significant differences in the distribution of edit types between machine translation systems. Deletions were the most common edit type for the RBMT, insertions for the SMT, and word form changes as well as word substitutions for the NMT system. The results also show significant differences in the correctness and necessity of the edits, particularly in the form of a large number of unnecessary edits in the RBMT output. Problems related to certain verb forms and ambiguity were observed for NMT and SMT, while RBMT was more likely to handle them correctly. Process-based comparison of effort indicators shows a slight increase of keystrokes per word for NMT output, and a slight decrease in average pause length for NMT compared to RBMT and SMT in specific text blocks. A statistically significant difference was observed in the number of visits per sub-segment, which is lower for NMT than for RBMT and SMT. The results suggest that although different types of edits were needed for outputs from NMT, RBMT and SMT systems, the difference is not necessarily reflected in process-based effort indicators.
Rule-based MT (RBMT)
Machine translation has witnessed great development in recent decades, and we have entered the era of neural machine translation (NMT). A review of MT is necessary for a better understanding of the relationship between MT, human translators, and translation teaching in this era in which MT has flourished. This paper first reviews machine translation (MT) development over the past decades, focusing on the features, applications, and drawbacks of each main paradigm: rule-based machine translation (RBMT), corpus-based machine translation (CBMT), and long short-term memory (LSTM), a main paradigm of NMT. It continues with a discussion of what MT means for human translators and translation teaching in universities. It concludes that MT should not and cannot replace human translators, who will always be vital in some fields and aspects; only a good integration of the two can ensure satisfying output, with post-editing by human translators, to meet the increasingly demanding market. This implies that translation teaching in universities should embrace MT knowledge.
Rule-based MT (RBMT)
This article revisits machine translation (MT) errors and proposes a function-oriented MT post-editing (MTPE) typology in a new technological context. Driven by the technological advances of neural machine translation (NMT) systems over the past several years, the author argues that we should re-examine MT errors created by NMT systems and understand whether the NMT system can resolve the issues that rule-based MT (RBMT) and statistical MT (SMT) systems have encountered. A mixed-methods approach is used to complete this study, and technical texts, journalistic texts, and web-based company texts are chosen as analytical materials. The three-phased procedure consists of (1) cross-checking the differences between source texts (STs), MT outputs, and corresponding human translations (HTs) to identify MT errors, (2) proposing a three-tier MTPE typology to supplement the current binary MTPE typology, and (3) exploring the empirical and theoretical implications of this research. The findings differ from previous MTPE studies in three aspects: (1) amending linguistic, pragmatic, and affective MT errors with the strategies of "accurate-enough editing," "clear-enough editing," and "attractive-enough editing," rather than the strategies of light editing and full editing; (2) replacing the existing editor-driven MTPE typology with a function-driven MTPE typology; and (3) using a progressive, flexible MTPE typology to meet the textual functions of different types of MT texts. Overall, this article re-examines MT errors and MTPE strategies, and proposes an alternative MTPE typology from the perspective of textual functions in the NMT scenario. It aims to add some novel insights to contemporary MT studies.
Rule-based MT (RBMT)
To build an Indonesian Machine Translation (MT) system, not only is syntactic analysis related to the correct spelling of words needed, but also contextual analysis covering the type and function of words, morphology, and semantics. Dictionaries are needed to translate Indonesian base words and to capture good word translations through the semantics and context of words in a sentence or document. This study aims to extract Indonesian and Tolaki words for building a good MT system, comparing developments in Indonesian MT that focus on deep cases of morphology and syntax. We developed a morphological tool (morphtool) to capture the morphological elements of Indonesian and Tolaki words. For the deep syntactic case, we build rules to capture the function and type of a word, which can affect the translation of the word itself in the sentence. We combine supervised and unsupervised techniques to perform text extraction on words, sentences, and documents through the morphophonemic rules of Indonesian-Tolaki syntax. Then, we use a hybrid MT, combining Statistical MT (SMT) and Rule-Based MT (RBMT), for the sentence translation process. The hybrid MT evaluation of Indonesian-Tolaki-to-English translation shows an accuracy of 0.74, while English-to-Indonesian-Tolaki translation shows an accuracy of 0.71. These results indicate that the proposed MT method works better than the SMT and RBMT methods alone, with an average accuracy of around 70%.
Rule-based MT (RBMT)
In this paper we describe a rule-based, bi-directional machine translation system for the Finnish–English language pair. The baseline system was based on the existing data of FinnWordNet, omorfi and apertium-eng. We have built the disambiguation, lexical selection and translation rules by hand. The dictionaries and rules have been developed based on the shared task data. In this article we describe the use of the shared task data as a kind of test-driven development workflow for RBMT, and show that it fits perfectly into a modern software-engineering continuous-integration workflow and yields large increases in BLEU scores with minimal effort.
Rule-based MT (RBMT)
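A minimal sketch of the evaluation step in the test-driven RBMT workflow above: after each rule or dictionary change, re-translate the shared-task dev set and track corpus-level BLEU. The sentences here are stand-ins, not shared-task data.

```python
import sacrebleu

hypotheses = ["the cat sits on the mat"]    # RBMT output for the dev set
references = [["the cat sat on the mat"]]   # one reference stream
print(sacrebleu.corpus_bleu(hypotheses, references).score)
```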
Corpus-based approaches to machine translation (MT) have difficulties when the amount of parallel corpora to use for training is scarce, especially if the languages involved in the translation are highly inflected. This problem can be addressed from different perspectives, including data augmentation, transfer learning, and the use of additional resources, such as those used in rule-based MT. This paper focuses on the hybridisation of rule-based MT and neural MT for the Breton–French under-resourced language pair in an attempt to study to what extent the rule-based MT resources help improve the translation quality of the neural MT system for this particular under-resourced language pair. We combine both translation approaches in a multi-source neural MT architecture and find that, even though the rule-based system has a low performance according to automatic evaluation metrics, using it leads to improved translation quality.
Rule-based MT (RBMT)
Machine translation translates text from one language into another and has undergone a great evolution. MT models have been continuously improved, aiming to bring translation quality closer to human translation. This article briefly summarizes the development history of machine translation and introduces the main models of each stage of development. The initial machine translation paradigms were Rule-Based Machine Translation (RBMT) and Statistical Machine Translation (SMT). The recent mainstream approach is Neural Machine Translation (NMT), which includes input and output representations, the attention mechanism, and the BLEU evaluation method. On this basis, there are also many extensions and innovations, such as GPKD and other models that improve evaluation results. In general, machine translation can replace a part of human translation. However, it cannot completely replace human beings, because human thinking differs from machine logic. People and machines have to cooperate with each other to improve overall efficiency.
Rule-based MT (RBMT)
This article addresses the problems of word-order confusion, context dependency, and ambiguity in traditional machine translation (MT) methods for verb recognition. By applying advanced artificial intelligence algorithms, verb recognition can be better processed and the quality and accuracy of MT can be improved. Building on neural machine translation (NMT), basic attention mechanisms, historical attention information, mechanisms that dynamically obtain information related to the generated words, and constraint mechanisms were introduced to embed semantic information, represent polysemy, and annotate the semantic roles of verbs. This article used the Workshop on Machine Translation (WMT), British National Corpus (BNC), Gutenberg, Reuters Corpus, and OpenSubtitles corpora, and augmented the data in these corpora. The improved NMT model was compared with traditional NMT models, Rule-Based Machine Translation (RBMT), and Statistical Machine Translation (SMT). The experimental results showed that the average verb semantic matching degree of the improved NMT model across the 5 corpora was 0.85, and the average Bilingual Evaluation Understudy (BLEU) score was 0.90. The improved NMT model can effectively improve the accuracy of verb recognition in MT, providing new methods for verb recognition in MT.
Rule-based MT (RBMT)
Machine translation (MT) systems translate text between different languages by automatically learning in-depth knowledge of bilingual lexicons, grammar and semantics from the training examples. Although neural machine translation (NMT) has led the field of MT, we have a poor understanding of how and why it works. In this paper, we bridge the gap by assessing the bilingual knowledge learned by NMT models with phrase table -- an interpretable table of bilingual lexicons. We extract the phrase table from the training examples that an NMT model correctly predicts. Extensive experiments on widely-used datasets show that the phrase table is reasonable and consistent against language pairs and random seeds. Equipped with the interpretable phrase table, we find that NMT models learn patterns from simple to complex and distill essential bilingual knowledge from the training examples. We also revisit some advances that potentially affect the learning of bilingual knowledge (e.g., back-translation), and report some interesting findings. We believe this work opens a new angle to interpret NMT with statistic models, and provides empirical support for recent advances in improving NMT models.
Rule-based MT (RBMT)
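A toy sketch of what a phrase table contains, in the spirit of the analysis above: bilingual lexicon entries with relative-frequency translation probabilities extracted from word-aligned sentence pairs. Real extraction uses consistency constraints over full alignments; this simplification counts single-word pairs only.

```python
from collections import Counter

def extract_pairs(src_tokens, tgt_tokens, alignment):
    """alignment: set of (src_idx, tgt_idx) word-alignment links."""
    return [(src_tokens[i], tgt_tokens[j]) for i, j in alignment]

corpus = [
    (["das", "haus"], ["the", "house"], {(0, 0), (1, 1)}),
]
counts = Counter()
for src, tgt, align in corpus:
    counts.update(extract_pairs(src, tgt, align))

# Relative-frequency translation probability p(tgt | src)
src_totals = Counter()
for (s, _), c in counts.items():
    src_totals[s] += c
table = {pair: c / src_totals[pair[0]] for pair, c in counts.items()}
print(table)  # {('das', 'the'): 1.0, ('haus', 'house'): 1.0}
```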
The landscape of transformer model inference is increasingly diverse in model size, model characteristics, latency and throughput requirements, hardware requirements, etc. With such diversity, designing a versatile inference system is challenging. DeepSpeed-Inference addresses these challenges by (1) a multi-GPU inference solution to minimize latency while maximizing throughput for both dense and sparse transformers when the model fits in aggregate GPU memory, and (2) a heterogeneous inference solution that leverages CPU/NVMe/GPU memory to enable high-throughput inference for models larger than aggregate GPU memory. DeepSpeed-Inference reduces latency by 6.4× and increases throughput by 1.5× over the state-of-the-art. It enables trillion-parameter-scale inference under real-time latency constraints by leveraging hundreds of GPUs, an unprecedented scale for inference. It can serve models 25× larger than GPU-only solutions, while delivering a high throughput of 84 TFLOPS (over 50% of A6000 peak).
Transformer Models
The transformer is the most important algorithmic innovation in the Natural Language Processing (NLP) field in recent years. Unlike Recurrent Neural Network (RNN) models, transformers can process along the sequence-length dimension in parallel, which leads to better accuracy on long sequences. However, deploying them efficiently for online services in data centers equipped with GPUs is not easy. First, the additional computation introduced by transformer structures makes it more challenging to meet the latency and throughput constraints of serving. Second, NLP tasks take in sentences of variable length. The variability of input dimensions brings a severe problem for efficient memory management and serving optimization. To solve the above challenges, this paper presents a transformer serving system called TurboTransformers, which consists of a computing runtime and a serving framework. Three innovative features make it stand out from other similar works. An efficient parallel algorithm is proposed for GPU-based batch reduction operations, like Softmax and LayerNorm, which are major hot spots besides BLAS routines. A memory allocation algorithm, which better balances the memory footprint and allocation/free efficiency, is designed for variable-length input situations. A serving framework equipped with a new batch scheduler using dynamic programming achieves the optimal throughput on variable-length requests. The system can achieve state-of-the-art transformer model serving performance on GPU platforms and can be seamlessly integrated into PyTorch code with a few lines of code.
Transformer Models
Transformer is the state-of-the-art model in recent machine translation evaluations. Two strands of research promise to improve models of this kind: the first uses wide networks (a.k.a. Transformer-Big) and has been the de facto standard for development of the Transformer system, and the other uses deeper language representation but faces the difficulty arising from learning deep networks. Here, we continue the line of research on the latter. We claim that a truly deep Transformer model can surpass the Transformer-Big counterpart by 1) proper use of layer normalization and 2) a novel way of passing the combination of previous layers to the next. On WMT’16 English-German and NIST OpenMT’12 Chinese-English tasks, our deep system (30/25-layer encoder) outperforms the shallow Transformer-Big/Base baseline (6-layer encoder) by 0.4-2.4 BLEU points. As another bonus, the deep model is 1.6X smaller in size and 3X faster in training than Transformer-Big.
Transformer Models
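A sketch of the "proper layer normalization" ingredient from the abstract above: a pre-norm encoder layer (normalize before each sublayer instead of after), which is what makes very deep Transformer encoders trainable. The dense inter-layer combination from the paper is omitted, and the dimensions are illustrative.

```python
import torch
import torch.nn as nn

class PreNormEncoderLayer(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads,
                                          dropout=dropout, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_model))
        self.drop = nn.Dropout(dropout)

    def forward(self, x):
        h = self.norm1(x)  # pre-norm: normalize before the sublayer
        x = x + self.drop(self.attn(h, h, h, need_weights=False)[0])
        x = x + self.drop(self.ff(self.norm2(x)))
        return x
```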
Transformer architectures are highly expressive because they use self-attention mechanisms to encode long-range dependencies in the input sequences. In this paper, we present a literature review on Transformer-based (TB) models, providing a detailed overview of each model in comparison to the Transformer’s standard architecture. This survey focuses on TB models used in the field of Natural Language Processing (NLP) for textual-based tasks. We begin with an overview of the fundamental concepts at the heart of the success of these models. Then, we classify them based on their architecture and training mode. We compare the advantages and disadvantages of popular techniques in terms of architectural design and experimental value. Finally, we discuss open research directions and potential future work to help solve current TB application challenges in NLP.
Transformer Models
The question answering system is frequently applied in the area of natural language processing (NLP) because of the wide variety of applications. It consists of answering questions using natural language. The problem is, in general, solved by employing a dataset that consists of an input text, a query, and the text segment or span from the input text that provides the question’s answer. The ability to make human-level predictions from data has improved significantly thanks to deep learning models, particularly the Transformer architecture, which has been state-of-the-art in text-based models in recent years. This paper reviews studies related to the use of transformer models in the implementation of question-answering (QA) systems. The paper’s first focus is on the attention and transformer models. A brief description of the architectures is presented by classifying them into models based on encoders, decoders, or both (encoder-decoder). Following that, we examine the most recent research trends in textual QA datasets by highlighting the architecture of QA systems and categorizing them according to various criteria. We also survey a significant set of evaluation metrics that have been developed in order to evaluate the models’ performance. Finally, we highlight solutions built to simplify the implementation of Transformer models.
Transformer Models
Transformer-based sequence-to-sequence architectures, while achieving state-of-the-art results on a large number of NLP tasks, can still suffer from overfitting during training. In practice, this is usually countered either by applying regularization methods (e.g. dropout, L2-regularization) or by providing huge amounts of training data. Additionally, Transformer and other architectures are known to struggle when generating very long sequences. For example, in machine translation, the neural-based systems perform worse on very long sequences when compared to the preceding phrase-based translation approaches (Koehn and Knowles, 2017). We present results which suggest that the issue might also be in the mismatch between the length distributions of the training and validation data combined with the aforementioned tendency of the neural networks to overfit to the training data. We demonstrate on simple string editing tasks and a machine translation task that the Transformer model performance drops significantly when facing sequences of length diverging from the length distribution in the training data. Additionally, we show that the observed drop in performance is due to the hypothesis length corresponding to the lengths seen by the model during training rather than the length of the input sequence.
Transformer Models
Transformer-based models are the state-of-the-art for Natural Language Understanding (NLU) applications. Models are getting bigger and better on various tasks. However, Transformer models remain computationally challenging since they are not efficient at inference-time compared to traditional approaches. In this paper, we present FastFormers, a set of recipes to achieve efficient inference-time performance for Transformer-based models on various NLU tasks. We show how carefully utilizing knowledge distillation, structured pruning and numerical optimization can lead to drastic improvements on inference efficiency. We provide effective recipes that can guide practitioners to choose the best settings for various NLU tasks and pretrained models. Applying the proposed recipes to the SuperGLUE benchmark, we achieve from 9.8x up to 233.9x speed-up compared to out-of-the-box models on CPU. On GPU, we also achieve up to 12.4x speed-up with the presented methods. We show that FastFormers can drastically reduce cost of serving 100 million requests from 4,223 USD to just 18 USD on an Azure F16s_v2 instance. This translates to a sustainable runtime by reducing energy consumption 6.9x - 125.8x according to the metrics used in the SustaiNLP 2020 shared task.
Transformer Models
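A sketch of the knowledge-distillation ingredient from the recipes above: the student matches the teacher's softened logits via KL divergence, blended with ordinary cross-entropy on gold labels. The temperature and mixing weight are typical values, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T=2.0, alpha=0.5):
    # KL between temperature-softened distributions, scaled by T^2 so
    # gradients keep the same magnitude as the hard loss.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```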
Transformer-based deep NLP models are trained using hundreds of millions of parameters, limiting their applicability in computationally constrained environments. In this paper, we study the cause of these limitations by defining a notion of Redundancy, which we categorize into two classes: General Redundancy and Task-specific Redundancy. We dissect two popular pretrained models, BERT and XLNet, studying how much redundancy they exhibit at a representation-level and at a more fine-grained neuron-level. Our analysis reveals interesting insights, such as: i) 85% of the neurons across the network are redundant and ii) at least 92% of them can be removed when optimizing towards a downstream task. Based on our analysis, we present an efficient feature-based transfer learning procedure, which maintains 97% performance while using at most 10% of the original neurons.
Transformer Models
In this paper, we present a new approach to time series forecasting. Time series data are prevalent in many scientific and engineering disciplines. Time series forecasting is a crucial task in modeling time series data, and is an important area of machine learning. In this work we developed a novel method that employs Transformer-based machine learning models to forecast time series data. This approach works by leveraging self-attention mechanisms to learn complex patterns and dynamics from time series data. Moreover, it is a generic framework and can be applied to univariate and multivariate time series data, as well as time series embeddings. Using influenza-like illness (ILI) forecasting as a case study, we show that the forecasting results produced by our approach are favorably comparable to the state-of-the-art.
Transformer Models
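A minimal sketch of a Transformer-encoder forecaster for a univariate series like the ILI case study above: embed a sliding window, apply self-attention, and regress the next value. The sizes and the one-step-ahead head are assumptions for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

class TSTransformer(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2, window=52):
        super().__init__()
        self.embed = nn.Linear(1, d_model)
        self.pos = nn.Parameter(torch.zeros(1, window, d_model))  # learned positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):              # x: (batch, window, 1)
        h = self.encoder(self.embed(x) + self.pos)
        return self.head(h[:, -1])     # forecast the next point

model = TSTransformer()
y_hat = model(torch.randn(8, 52, 1))   # e.g. 52 weekly ILI observations
```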
We introduce DropHead, a structured dropout method specifically designed for regularizing the multi-head attention mechanism which is a key component of transformer. In contrast to the conventional dropout mechanism which randomly drops units or connections, DropHead drops entire attention heads during training to prevent the multi-head attention model from being dominated by a small portion of attention heads. It can help reduce the risk of overfitting and allow the models to better benefit from the multi-head attention. Given the interaction between multi-headedness and training dynamics, we further propose a novel dropout rate scheduler to adjust the dropout rate of DropHead throughout training, which results in a better regularization effect. Experimental results demonstrate that our proposed approach can improve transformer models by 0.9 BLEU score on WMT14 En-De translation task and around 1.0 accuracy for various text classification tasks.
Transformer Models
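A sketch of the DropHead idea described above: during training, zero out entire attention heads (rescaling the survivors) rather than individual units. It is applied here to a tensor of per-head outputs; the dropout-rate scheduler from the paper is omitted.

```python
import torch

def drop_head(head_outputs, p, training):
    """head_outputs: (batch, n_heads, seq_len, head_dim)."""
    if not training or p == 0.0:
        return head_outputs
    b, h = head_outputs.shape[:2]
    # One Bernoulli keep/drop decision per (example, head)
    keep = (torch.rand(b, h, 1, 1, device=head_outputs.device) > p).float()
    # Rescale so the expected sum over heads is unchanged
    return head_outputs * keep / (1.0 - p)
```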
Generalization is a key element behind a strong performing neural network: models that generalize perform well even with novel inputs. We investigated a specific form of generalization known as systematic compositionality, the algebraic capacity to understand and produce a potentially infinite number of novel combinations from known components [Chomsky and Lightfoot, 2002, Montague, 1970]. The principle of systematic compositionality is especially adequate in explaining efficient language learning in humans. For example, once a child learns the meaning of the word “jump” and the meaning of the word “twice,” he or she can understand the utterance “jump twice.” However, it is not clear whether neural networks, particularly RNNs, compose systematically as humans do. Identifying systematic compositionality in RNNs, or lack thereof, can give insight to their need for large sets of training examples.
Recurrent Neural Networks (RNNs)
Recurrent neural networks (RNNs) have demonstrated very impressive performances in learning sequential data, such as in language translation and music generation. Here, we show that the intrinsic computational aspect of RNNs is very similar to that of classical stress update algorithms in modeling history-dependent materials, with an emphasis on viscoelasticity. Several numerical examples are designed, including 1-dimensional and 3-dimensional cases, which testify to the ability of the RNN model to compute the viscoelastic response when predicting on unseen test data. Additionally, it is found that an RNN model trained only on linear and step strain inputs can perform very well on prediction of completely different quadratic strain inputs, demonstrating a certain level of generalization ability in extrapolation. Moreover, it is observed that the extrapolation ability depends on the types of strain inputs. The performance is better for continuous strain inputs than for jump strain inputs. The differences in the generalization ability of RNN models in viscoelasticity and other history-dependent materials are discussed. It suggests that RNN data-driven modeling can be an alternative to conventional viscoelasticity models.
Recurrent Neural Networks (RNNs)
Recurrent neural networks (RNNs) have been widely adopted in research areas concerned with sequential data, such as text, audio, and video. However, RNNs consisting of sigma cells or tanh cells are unable to learn the relevant information of input data when the input gap is large. By introducing gate functions into the cell structure, the long short-term memory (LSTM) could handle the problem of long-term dependencies well. Since its introduction, almost all the exciting results based on RNNs have been achieved by the LSTM. The LSTM has become the focus of deep learning. We review the LSTM cell and its variants to explore the learning capacity of the LSTM cell. Furthermore, the LSTM networks are divided into two broad categories: LSTM-dominated networks and integrated LSTM networks. In addition, their various applications are discussed. Finally, future research directions are presented for LSTM networks.
Recurrent Neural Networks (RNNs)
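For reference, the standard LSTM cell reviewed above computes, at each step t (with sigmoid σ and elementwise product ⊙; this is the textbook formulation, not a specific variant from the survey):

```latex
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)}\\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{(candidate cell)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(cell state)}\\
h_t &= o_t \odot \tanh(c_t) && \text{(hidden state)}
\end{aligned}
```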
Recurrent neural networks (RNNs) are widely used throughout neuroscience as models of local neural activity. Many properties of single RNNs are well characterized theoretically, but experimental neuroscience has moved in the direction of studying multiple interacting areas, and RNN theory needs to be likewise extended. We take a constructive approach towards this problem, leveraging tools from nonlinear control theory and machine learning to characterize when combinations of stable RNNs will themselves be stable. Importantly, we derive conditions which allow for massive feedback connections between interacting RNNs. We parameterize these conditions for easy optimization using gradient-based techniques, and show that stability-constrained "networks of networks" can perform well on challenging sequential-processing benchmark tasks. Altogether, our results provide a principled approach towards understanding distributed, modular function in the brain.
Recurrent Neural Networks (RNNs)
Back-propagation through time (BPTT) has been widely used for training Recurrent Neural Networks (RNNs). BPTT updates RNN parameters on an instance by back-propagating the error in time over the entire sequence length, and as a result, leads to poor trainability due to the well-known gradient explosion/decay phenomena. While a number of prior works have proposed to mitigate the vanishing/explosion effect through careful RNN architecture design, these RNN variants still train with BPTT. We propose a novel forward-propagation algorithm, FPTT, where at each time, for an instance, we update RNN parameters by optimizing an instantaneous risk function. Our proposed risk is a regularization penalty at time t that evolves dynamically based on previously observed losses, and allows for RNN parameter updates to converge to a stationary solution of the empirical RNN objective. We consider both sequence-to-sequence as well as terminal loss problems. Empirically, FPTT outperforms BPTT on a number of well-known benchmark tasks, thus enabling architectures like LSTMs to solve long-range dependency problems.
Recurrent Neural Networks (RNNs)
This paper addresses the synchronization of multiple fractional-order recurrent neural networks (RNNs) with time-varying delays under event-triggered communications. Based on the assumption of the existence of strong connectivity or a spanning tree in the communication digraph, two sets of sufficient conditions are derived for achieving event-triggered synchronization. Moreover, an additional condition is derived to preclude Zeno behaviors. As a generalization of existing results, the criteria herein are also applicable to the event-triggered synchronization of multiple integer-order RNNs with or without delays. Two numerical examples are elaborated to illustrate the new results.
Recurrent Neural Networks (RNNs)
In this paper, we address the Clifford-valued distributed optimization subject to linear equality and inequality constraints. The objective function of the optimization problems is composed of the sum of convex functions defined in the Clifford domain. Based on the generalized Clifford gradient, a system of multiple Clifford-valued recurrent neural networks (RNNs) is proposed for solving the distributed optimization problems. Each Clifford-valued RNN minimizes a local objective function individually, with local interactions with others. The convergence of the neural system is rigorously proved based on the Lyapunov theory. Two illustrative examples are delineated to demonstrate the viability of the results in this article.
Recurrent Neural Networks (RNNs)
Variants of deep networks have been widely used for hyperspectral image (HSI)-classification tasks. Among them, in recent years, recurrent neural networks (RNNs) have attracted considerable attention in the remote sensing community. However, complex geometries cannot be learned easily by the traditional recurrent units [e.g., long short-term memory (LSTM) and gated recurrent unit (GRU)]. In this article, we propose a geometry-aware deep recurrent neural network (Geo-DRNN) for HSI classification. We build this network upon two modules: a U-shaped network (U-Net) and RNNs. We first input the original HSI patches to the U-Net, which can be trained with very few images and obtain a preliminary classification result. We then add RNNs on top of the U-Net so as to mimic the human brain in continuously refining the output classification map. However, instead of using the traditional dot product in each gate of the RNNs, we introduce a Net-Gated GRU that increases the nonlinear representation power. Finally, we use a pretrained ResNet as a regularizer to further improve the ability of the proposed network to describe complex geometries. To this end, we construct a geometry-aware ResNet loss, which leverages the pretrained ResNet’s knowledge about the different structures in the real world. Our experimental results on real HSIs and road topology images demonstrate that our approach outperforms the state-of-the-art classification methods and can learn complex geometries.
Recurrent Neural Networks (RNNs)
This paper presents a sentiment analysis solution for tweets using Recurrent Neural Networks (RNNs). The method can classify tweets with an 80.74% accuracy rate on a binary task, after experimenting with 20 different design approaches. The solution integrates an attention mechanism aiming to enhance the network, with a two-way localization system: at the memory cell level and at the network level. We present an in-depth literature review of Twitter sentiment analysis and the building blocks that grounded the design decisions of our solution, which is employed as a core classification component within a sentiment indicator of the SynergyCrowds platform.
Recurrent Neural Networks (RNNs)
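A hedged sketch of the network-level part of the design above: a recurrent encoder over tweet tokens with an attention pooling layer feeding a binary classifier. The vocabulary and layer sizes are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class AttnRNNClassifier(nn.Module):
    def __init__(self, vocab=20000, emb=128, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True,
                           bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, 2)   # binary sentiment logits

    def forward(self, tokens):                # tokens: (batch, seq)
        h, _ = self.rnn(self.emb(tokens))     # (batch, seq, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)
        context = (w * h).sum(dim=1)          # attention-weighted pooling
        return self.out(context)
```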
State-of-the-art solutions in the areas of "Language Modelling & Generating Text", "Speech Recognition", "Generating Image Descriptions" or "Video Tagging" have been using Recurrent Neural Networks as the foundation for their approaches. Understanding the underlying concepts is therefore of tremendous importance if we want to keep up with recent or upcoming publications in those areas. In this work we give a short overview over some of the most important concepts in the realm of Recurrent Neural Networks which enables readers to easily understand the fundamentals such as but not limited to "Backpropagation through Time" or "Long Short-Term Memory Units" as well as some of the more recent advances like the "Attention Mechanism" or "Pointer Networks". We also give recommendations for further reading regarding more complex topics where it is necessary.
Recurrent Neural Networks (RNNs)
Large Language Models (LLMs) have emerged as a groundbreaking technology with their unparalleled text generation capabilities across various applications. Nevertheless, concerns persist regarding the accuracy and appropriateness of their generated content. A contemporary methodology, self-correction, has been proposed as a remedy to these issues. Building upon this premise, this paper critically examines the role and efficacy of self-correction within LLMs, shedding light on its true potential and limitations. Central to our investigation is the notion of intrinsic self-correction, whereby an LLM attempts to correct its initial responses based solely on its inherent capabilities, without the crutch of external feedback. In the context of reasoning, our research indicates that LLMs struggle to self-correct their responses without external feedback, and at times, their performance even degrades after self-correction. Drawing from these insights, we offer suggestions for future research and practical applications in this field.
Large Language Models (LLMs)
Large Language Models (LLMs) are a type of artificial intelligence that has been revolutionizing various fields, including biomedicine. They have the capability to process and analyze large amounts of data, understand natural language, and generate new content, making them highly desirable in many biomedical applications and beyond. In this workshop, we aim to introduce the attendees to an in-depth understanding of the rise of LLMs in biomedicine, and how they are being used to drive innovation and improve outcomes in the field, along with associated challenges and pitfalls.
Large Language Models (LLMs)
Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH.
Large Language Models (LLMs)
Large language models (LLMs) can perform complex reasoning by generating intermediate reasoning steps. Providing these steps for prompting demonstrations is called chain-of-thought (CoT) prompting. CoT prompting has two major paradigms. One leverages a simple prompt like "Let's think step by step" to facilitate step-by-step thinking before answering a question. The other uses a few manual demonstrations one by one, each composed of a question and a reasoning chain that leads to an answer. The superior performance of the second paradigm hinges on the hand-crafting of task-specific demonstrations one by one. We show that such manual efforts may be eliminated by leveraging LLMs with the "Let's think step by step" prompt to generate reasoning chains for demonstrations one by one, i.e., let's think not just step by step, but also one by one. However, these generated chains often come with mistakes. To mitigate the effect of such mistakes, we find that diversity matters for automatically constructing demonstrations. We propose an automatic CoT prompting method: Auto-CoT. It samples questions with diversity and generates reasoning chains to construct demonstrations. On ten public benchmark reasoning tasks with GPT-3, Auto-CoT consistently matches or exceeds the performance of the CoT paradigm that requires manual designs of demonstrations. Code is available at https://github.com/amazon-research/auto-cot
Large Language Models (LLMs)
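A sketch of the recipe above: use the zero-shot "Let's think step by step" prompt to have the model write its own reasoning chains, then stitch diverse (question, chain) pairs into a few-shot prompt. The llm() function is a hypothetical placeholder for any text-generation call, and the diversity-based question selection (clustering, in the paper) is assumed to have happened upstream.

```python
def llm(prompt: str) -> str:
    # Placeholder for an actual LLM API call.
    raise NotImplementedError

def build_demo(question: str) -> str:
    # Zero-shot CoT: the model generates its own reasoning chain.
    chain = llm(f"Q: {question}\nA: Let's think step by step.")
    return f"Q: {question}\nA: Let's think step by step. {chain}"

def auto_cot_prompt(diverse_questions, test_question: str) -> str:
    demos = "\n\n".join(build_demo(q) for q in diverse_questions)
    return f"{demos}\n\nQ: {test_question}\nA: Let's think step by step."
```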
Since the recent prosperity of Large Language Models (LLMs), there have been interleaved discussions regarding how to reduce hallucinations from LLM responses, how to increase the factuality of LLMs, and whether Knowledge Graphs (KGs), which store the world knowledge in a symbolic form, will be replaced with LLMs. In this paper, we try to answer these questions from a new angle: How knowledgeable are LLMs? To answer this question, we constructed Head-to-Tail, a benchmark that consists of 18K question-answer (QA) pairs regarding head, torso, and tail facts in terms of popularity. We designed an automated evaluation method and a set of metrics that closely approximate the knowledge an LLM confidently internalizes. Through a comprehensive evaluation of 16 publicly available LLMs, we show that existing LLMs are still far from being perfect in terms of their grasp of factual knowledge, especially for facts of torso-to-tail entities.
Large Language Models (LLMs)
Generative Large Language Models (LLMs) such as GPT-3 are capable of generating highly fluent responses to a wide variety of user prompts. However, LLMs are known to hallucinate facts and make non-factual statements which can undermine trust in their output. Existing fact-checking approaches either require access to the output probability distribution (which may not be available for systems such as ChatGPT) or external databases that are interfaced via separate, often complex, modules. In this work, we propose "SelfCheckGPT", a simple sampling-based approach that can be used to fact-check the responses of black-box models in a zero-resource fashion, i.e. without an external database. SelfCheckGPT leverages the simple idea that if an LLM has knowledge of a given concept, sampled responses are likely to be similar and contain consistent facts. However, for hallucinated facts, stochastically sampled responses are likely to diverge and contradict one another. We investigate this approach by using GPT-3 to generate passages about individuals from the WikiBio dataset, and manually annotate the factuality of the generated passages. We demonstrate that SelfCheckGPT can: i) detect non-factual and factual sentences; and ii) rank passages in terms of factuality. We compare our approach to several baselines and show that our approach has considerably higher AUC-PR scores in sentence-level hallucination detection and higher correlation scores in passage-level factuality assessment compared to grey-box methods.
Large Language Models (LLMs)
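A toy sketch of the SelfCheckGPT intuition above: score each sentence of a response by its (dis)agreement with N stochastically sampled responses. Agreement here is crude unigram overlap; the paper uses stronger consistency measures (e.g. BERTScore-, QA-, or n-gram-based variants).

```python
def overlap(sentence: str, sample: str) -> float:
    s, t = set(sentence.lower().split()), set(sample.lower().split())
    return len(s & t) / max(len(s), 1)

def hallucination_score(sentence: str, samples: list[str]) -> float:
    # High score = low support across samples = likely hallucinated.
    support = sum(overlap(sentence, smp) for smp in samples) / len(samples)
    return 1.0 - support

samples = ["He was born in 1970 in Oslo.", "He was born in 1970."]
print(hallucination_score("He was born in 1970.", samples))  # low score
```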
Large Language Models (LLMs) have demonstrated remarkable zero-shot generalization across various language-related tasks, including search engines. However, existing work utilizes the generative ability of LLMs for Information Retrieval (IR) rather than direct passage ranking. The discrepancy between the pre-training objectives of LLMs and the ranking objective poses another challenge. In this paper, we first investigate generative LLMs such as ChatGPT and GPT-4 for relevance ranking in IR. Surprisingly, our experiments reveal that properly instructed LLMs can deliver competitive, even superior results to state-of-the-art supervised methods on popular IR benchmarks. Furthermore, to address concerns about data contamination of LLMs, we collect a new test set called NovelEval, based on the latest knowledge and aiming to verify the model's ability to rank unknown knowledge. Finally, to improve efficiency in real-world applications, we delve into the potential for distilling the ranking capabilities of ChatGPT into small specialized models using a permutation distillation scheme. Our evaluation results show that a distilled 440M model outperforms a 3B supervised model on the BEIR benchmark.
Large Language Models (LLMs)
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+.
Large Language Models (LLMs)
The performance of large language models (LLMs) on existing reasoning benchmarks has significantly improved over the past years. In response, we present JEEBench, a considerably more challenging benchmark dataset for evaluating the problem solving abilities of LLMs. We curate 515 challenging pre-engineering mathematics, physics and chemistry problems from the highly competitive IIT JEE-Advanced exam. Long-horizon reasoning on top of deep in-domain knowledge is essential for solving problems in this benchmark. Our evaluation on various open-source and proprietary models reveals that the highest performance, even after using techniques like self-consistency, self-refinement and chain-of-thought prompting, is less than 40%. The typical failure modes of GPT-4, the best model, are errors in algebraic manipulation, difficulty in grounding abstract concepts into mathematical equations accurately and failure in retrieving relevant domain-specific concepts. We also observe that by mere prompting, GPT-4 is unable to assess risk introduced by negative marking for incorrect answers. For this, we develop a post-hoc confidence-thresholding method over self-consistency, which enables effective response selection. We hope that our challenging benchmark will guide future research in problem-solving using LLMs.
Large Language Models (LLMs)
Large language models (LLMs) have emerged as a widely-used tool for information seeking, but their generated outputs are prone to hallucination. In this work, our aim is to allow LLMs to generate text with citations, improving their factual correctness and verifiability. Existing work mainly relies on commercial search engines and human evaluation, making it challenging to reproduce and compare different modeling approaches. We propose ALCE, the first benchmark for Automatic LLMs' Citation Evaluation. ALCE collects a diverse set of questions and retrieval corpora and requires building end-to-end systems to retrieve supporting evidence and generate answers with citations. We develop automatic metrics along three dimensions -- fluency, correctness, and citation quality -- and demonstrate their strong correlation with human judgements. Our experiments with state-of-the-art LLMs and novel prompting strategies show that current systems have considerable room for improvement -- For example, on the ELI5 dataset, even the best models lack complete citation support 50% of the time. Our analyses further highlight promising future directions, including developing better retrievers, advancing long-context LLMs, and improving the ability to synthesize information from multiple sources.
Large Language Models (LLMs)
Bilingual Lexicon Induction (BLI) aims at inducing word translations in two distinct languages. The bilingual dictionaries generated via BLI are essential for cross-lingual NLP applications. Most existing methods assume that a mapping matrix can be learned to project the embedding of a word in the source language to that of a word in the target language which shares the same meaning. However, due to the complicated nature of linguistic regularities, a single matrix may not provide a sufficiently large parameter space or tailor to the semantics of words across different domains and topics. In this paper, we propose a Soft Piecewise Mapping Model (SPMM). It generates word alignments in two languages by learning multiple mapping matrices with an orthogonal constraint. Each matrix encodes the embedding translation knowledge over a distribution of latent topics in the embedding spaces. Such a learning problem can be formulated as an extended version of Wahba's problem, with a closed-form solution derived. To address the limited size of training data for low-resource languages and emerging domains, an iterative boosting method based on SPMM is used to augment training dictionaries. Experiments conducted on both general and domain-specific corpora show that SPMM is effective and outperforms previous methods.
Bilingual Lexicon Induction (BLI)
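For readers unfamiliar with Wahba's problem referenced above: the single-matrix case reduces to orthogonal Procrustes, whose closed-form solution comes from an SVD. The sketch below shows that base case only; SPMM's extension to multiple topic-weighted matrices is not reproduced here.

```python
import numpy as np

def orthogonal_map(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Find orthogonal W minimizing ||X @ W - Y||_F for paired rows of
    source embeddings X and target embeddings Y (both n x d)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy usage: map 5 random 'source' vectors onto rotated copies of themselves.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # a random rotation
Y = X @ Q
W = orthogonal_map(X, Y)
assert np.allclose(X @ W, Y, atol=1e-6)  # W recovers the rotation Q
```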
Much recent work in bilingual lexicon induction (BLI) views word embeddings as vectors in Euclidean space. As such, BLI is typically solved by finding a linear transformation that maps embeddings to a common space. Alternatively, word embeddings may be understood as nodes in a weighted graph. This framing allows us to examine a node's graph neighborhood without assuming a linear transform, and exploits new techniques from the graph matching optimization literature. These contrasting approaches have not been compared in BLI so far. In this work, we study the behavior of Euclidean versus graph-based approaches to BLI under differing data conditions and show that they complement each other when combined.
Bilingual Lexicon Induction (BLI)
Most Bilingual Lexicon Induction (BLI) methods retrieve word translation pairs by finding the closest target word for a given source word based on cross-lingual word embeddings (WEs). However, we find that solely retrieving translations from the source-to-target perspective leads to some false positive translation pairs, which significantly harm the precision of BLI. To address this problem, we propose a novel and effective method to improve translation pair retrieval in cross-lingual WEs. Specifically, we consider both source-side and target-side perspectives throughout the retrieval process to alleviate false positive word pairings that emanate from a single perspective. On a benchmark dataset of BLI, our proposed method achieves competitive performance compared to existing state-of-the-art (SOTA) methods. It demonstrates effectiveness and robustness across six experimental languages, including similar language pairs and distant language pairs, under both supervised and unsupervised settings.
Bilingual Lexicon Induction (BLI)
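A minimal sketch of the bidirectional idea in the abstract above: keep a translation pair only when the source-to-target and target-to-source nearest-neighbour searches agree. Plain cosine similarity is assumed here; the paper's exact scoring may differ.

```python
import numpy as np

def mutual_nn_pairs(src: np.ndarray, tgt: np.ndarray) -> list[tuple[int, int]]:
    """src: (n_s, d) source embeddings; tgt: (n_t, d) target embeddings."""
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    sim = src @ tgt.T                 # cosine similarity matrix
    fwd = sim.argmax(axis=1)          # best target for each source word
    bwd = sim.argmax(axis=0)          # best source for each target word
    # Keep only mutual nearest neighbours (both perspectives agree).
    return [(s, int(t)) for s, t in enumerate(fwd) if bwd[t] == s]
```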
Bilingual word lexicons map words in one language to their synonyms in another language. Numerous papers have explored bilingual lexicon induction (BLI) in high-resource scenarios, framing a typical pipeline that consists of two steps: (i) unsupervised bitext mining and (ii) unsupervised word alignment. At the core of those steps are pre-trained large language models (LLMs). In this paper we present an analysis of the BLI pipeline for German and two of its dialects, Bavarian and Alemannic. This setup poses a number of unique challenges, attributed to the scarcity of resources, the relatedness of the languages, and the lack of standardization in the orthography of dialects. We analyze the BLI outputs with respect to word frequency and pairwise edit distance. Finally, we release an evaluation dataset consisting of manual annotations for 1K bilingual word pairs labeled according to their semantic similarity.
Bilingual Lexicon Induction (BLI)
Bilingual Lexicon Induction (BLI) is a core task in multilingual NLP that still, to a large extent, relies on calculating cross-lingual word representations. Inspired by the global paradigm shift in NLP towards Large Language Models (LLMs), we examine the potential of the latest generation of LLMs for the development of bilingual lexicons. We ask the following research question: Is it possible to prompt and fine-tune multilingual LLMs (mLLMs) for BLI, and how does this approach compare against and complement current BLI approaches? To this end, we systematically study 1) zero-shot prompting for unsupervised BLI and 2) few-shot in-context prompting with a set of seed translation pairs, both without any LLM fine-tuning, as well as 3) standard BLI-oriented fine-tuning of smaller LLMs. We experiment with 18 open-source text-to-text mLLMs of different sizes (from 0.3B to 13B parameters) on two standard BLI benchmarks covering a range of typologically diverse languages. Our work is the first to demonstrate strong BLI capabilities of text-to-text mLLMs. The results reveal that few-shot prompting with in-context examples from nearest neighbours achieves the best performance, establishing new state-of-the-art BLI scores for many language pairs. We also conduct a series of in-depth analyses and ablation studies, providing more insights on BLI with (m)LLMs, along with their limitations.
Bilingual Lexicon Induction (BLI)
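A hypothetical illustration of few-shot in-context prompting for BLI with seed translation pairs, as studied above; the actual templates used in the paper are not reproduced here, and the language names are placeholders.

```python
def bli_prompt(source_word: str, seed_pairs: list[tuple[str, str]],
               src_lang: str = "German", tgt_lang: str = "English") -> str:
    """Build a few-shot prompt from seed pairs (e.g. nearest neighbours)."""
    lines = [f"Translate the following {src_lang} words to {tgt_lang}."]
    for s, t in seed_pairs:  # in-context examples
        lines.append(f"{src_lang}: {s} -> {tgt_lang}: {t}")
    lines.append(f"{src_lang}: {source_word} -> {tgt_lang}:")
    return "\n".join(lines)

print(bli_prompt("Hund", [("Katze", "cat"), ("Haus", "house")]))
```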
Word embedding models such as Word2vec and FastText simultaneously learn dual representations of input vectors and output vectors. In contrast, almost all existing unsupervised bilingual lexicon induction (UBLI) methods use only input vectors without utilizing output vectors. In this article, we propose a novel approach to making full use of both input and output vectors for more robust UBLI. We discover the Common Difference Property: one orthogonal transformation can connect not only the input vectors of two languages but also the output vectors. Therefore, we can learn just one transformation to induce two different dictionaries from the input and output vectors, respectively. From these two quite different dictionaries, a more accurate lexicon with less noise can be induced by taking their intersection during the UBLI procedure. Extensive experiments show that our method achieves substantially more robust and stronger results than state-of-the-art methods on distant language pairs, while retaining comparable performance on similar language pairs.
Bilingual Lexicon Induction (BLI)
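The intersection step described above can be sketched as follows: induce one dictionary from the input vectors and one from the output vectors using the same orthogonal transform `W` (assumed here to have been learned beforehand, e.g. by an unsupervised method), and keep only the pairs both views agree on.

```python
import numpy as np

def induce(src: np.ndarray, tgt: np.ndarray, W: np.ndarray) -> dict[int, int]:
    """Nearest-neighbour dictionary from mapped source to target vectors."""
    mapped = src @ W
    mapped = mapped / np.linalg.norm(mapped, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    return {i: int(j) for i, j in enumerate((mapped @ tgt.T).argmax(axis=1))}

def intersect_dictionaries(src_in, tgt_in, src_out, tgt_out, W):
    """Keep only pairs induced by both the input- and output-vector views."""
    d_in = induce(src_in, tgt_in, W)
    d_out = induce(src_out, tgt_out, W)
    return {i: t for i, t in d_in.items() if d_out.get(i) == t}
```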
Contextualized word embeddings have emerged as the most important tool for performing NLP tasks in a large variety of languages. In order to improve cross-lingual representation and transfer learning quality, contextualized embedding alignment techniques, such as mapping and model fine-tuning, are employed. Existing techniques, however, are time-, data- and computational-resource-intensive. In this paper we analyze these techniques by utilizing three tasks: bilingual lexicon induction (BLI), word retrieval and cross-lingual natural language inference (XNLI) for a high-resource (German-English) and a low-resource (Bengali-English) language pair. In contrast to previous works which focus only on a few popular models, we compare five multilingual and seven monolingual language models and investigate the effect of various aspects on their performance, such as vocabulary size, number of languages used for training and number of parameters. Additionally, we propose a parameter-, data- and runtime-efficient technique which can be trained with 10% of the data, in less than 10% of the time, and with less than 5% of the trainable parameters compared to model fine-tuning. We show that our proposed method is competitive with resource-heavy models, even outperforming them in some cases, despite relying on fewer resources.
Bilingual Lexicon Induction (BLI)
Bilingual Lexicon Induction (BLI) aims to map words in one language to their translations in another, and is typically approached by learning linear projections to align monolingual word representation spaces. Two classes of word representations have been explored for BLI: static word embeddings and contextual representations, but no prior study has combined both. In this paper, we propose a simple yet effective mechanism to combine static word embeddings and contextual representations to utilize the advantages of both paradigms. We test the combination mechanism on various language pairs under the supervised and unsupervised BLI benchmark settings. Experiments show that our mechanism consistently improves performance over robust BLI baselines on all language pairs, with average improvements of 3.2 points in the supervised setting and 3.1 points in the unsupervised setting.
Bilingual Lexicon Induction (BLI)
Bilingual Lexicon Induction (BLI), where words are translated between two languages, is an important NLP task. While noticeable progress on BLI in rich-resource languages has been achieved using static word embeddings, word translation performance can be further improved by incorporating information from contextualized word embeddings. In this paper, we introduce ProMap, a novel approach for BLI that leverages the power of prompting pretrained multilingual and multidialectal language models to address these challenges. To overcome the employment of subword tokens in these models, ProMap relies on an effective padded prompting of language models with a seed dictionary that achieves good performance when used independently. We also demonstrate the effectiveness of ProMap in re-ranking results from other BLI methods, such as those based on aligned static word embeddings. When evaluated on both rich-resource and low-resource languages, ProMap consistently achieves state-of-the-art results. Furthermore, ProMap enables strong performance in few-shot scenarios (even with less than 10 training examples), making it a valuable tool for low-resource language translation. Overall, we believe our method offers an exciting and promising direction for BLI in general and for low-resource languages in particular.
Bilingual Lexicon Induction (BLI)
Bilingual lexicon induction (BLI) with limited bilingual supervision is a crucial yet challenging task in multilingual NLP. Current state-of-the-art BLI methods rely on the induction of cross-lingual word embeddings (CLWEs) to capture cross-lingual word similarities; such CLWEs are obtained 1) via traditional static models (e.g., VecMap), or 2) by extracting type-level CLWEs from multilingual pretrained language models (mPLMs), or 3) through combining the former two options. In this work, we propose a novel semi-supervised post-hoc reranking method termed BLICEr (BLI with Cross-Encoder Reranking), applicable to any precalculated CLWE space, which improves their BLI capability. The key idea is to 'extract' cross-lingual lexical knowledge from mPLMs, and then combine it with the original CLWEs. This crucial step is done via 1) creating a word similarity dataset, comprising positive word pairs (i.e., true translations) and hard negative pairs induced from the original CLWE space, and then 2) fine-tuning an mPLM (e.g., mBERT or XLM-R) in a cross-encoder manner to predict the similarity scores. At inference, we 3) combine the similarity score from the original CLWE space with the score from the BLI-tuned cross-encoder. BLICEr establishes new state-of-the-art results on two standard BLI benchmarks spanning a wide spectrum of diverse languages: it substantially outperforms a series of strong baselines across the board. We also validate the robustness of BLICEr with different CLWEs.
Bilingual Lexicon Induction (BLI)
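A rough sketch of the inference-time interpolation BLICEr describes: combine a precalculated CLWE similarity score with a cross-encoder score. The model checkpoint and interpolation weight `lam` below are placeholders, not the paper's BLI-tuned mBERT/XLM-R cross-encoder.

```python
from sentence_transformers import CrossEncoder

def blicer_style_score(pairs, clwe_scores, lam=0.5,
                       model_name="cross-encoder/ms-marco-MiniLM-L-6-v2"):
    """pairs: list of (source_word, target_word) tuples; clwe_scores: list
    of floats from the original CLWE space, aligned with `pairs`."""
    ce = CrossEncoder(model_name)
    ce_scores = ce.predict(pairs)      # one relevance score per word pair
    # Interpolate the CLWE similarity with the cross-encoder score.
    return [lam * c + (1.0 - lam) * float(x)
            for c, x in zip(clwe_scores, ce_scores)]
```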
In this article, we describe recent trends in the detection of hate speech and offensive language on social media, drawing on the latest studies and scientific contributions. The article describes current trends and the most widely used methods in connection with the detection of hate speech and offensive language. At the same time, we focus on the importance of emoticons, hashtags, and swearing in the field of social networks. We point out the topicality of the selected topic, describe the next direction of our work, and suggest possible solutions to current problems in this field of research.
Hate and Offensive Speech Detection
Preprocessing is a crucial step for each task related to text classification. Preprocessing can have a significant impact on classification performance, but at present there are few large-scale studies evaluating the effectiveness of preprocessing techniques and their combinations. In this work, we explore the impact of 26 widely used text preprocessing techniques on the performance of hate and offensive speech detection algorithms. We evaluate six common machine learning models, such as logistic regression, random forest, linear support vector classifier, convolutional neural network, bidirectional encoder representations from transformers (BERT), and RoBERTa, on four common Twitter benchmarks. Our results show that some preprocessing techniques are useful for improving the accuracy of models while others may even cause a loss of efficiency. In addition, the effectiveness of preprocessing techniques varies depending on the chosen dataset and the classification method. We also explore two ways to combine the techniques that have proved effective during a separate evaluation. Our results show that combining techniques can produce different results. In our experiments, combining techniques works better for traditional machine learning methods than for other methods.
Hate and Offensive Speech Detection
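For concreteness, here are a few of the widely used preprocessing steps such studies evaluate, composed into one function. Which steps actually help is dataset- and model-dependent (that is exactly what the study above measures), so treat this as illustrative rather than recommended.

```python
import re
import string

def preprocess_tweet(text: str) -> str:
    text = text.lower()                                  # lowercasing
    text = re.sub(r"https?://\S+", "", text)             # remove URLs
    text = re.sub(r"@\w+", "", text)                     # remove @mentions
    text = re.sub(r"#", "", text)                        # keep hashtag words
    text = text.translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text).strip()             # collapse whitespace

print(preprocess_tweet("Check this out!!! https://t.co/x #hate @user"))
# -> "check this out hate"
```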
Offensive language and hate speech are rampant on social media platforms (Facebook, Twitter, etc.) in Egypt, appearing in tweets, Facebook posts and comments, and elsewhere. It is an increasingly far-reaching problem that needs immediate attention. This paper focuses on the problem of detecting and classifying both offensive language and hate speech using state-of-the-art techniques in text classification. Pre-trained transformer models have gained a reputation for astounding general language understanding and can be fine-tuned for language-specific tasks like text classification. We collected a custom Egyptian-Arabic dialect dataset of about 8,000 text samples, manually labelled into 5 distinct classes (Neutral, Offensive, Sexism, Religious Discrimination, Racism), and used it to fine-tune and evaluate multiple Arabic pre-trained transformer models based on different transformer architectures and pre-training approaches for the downstream natural language processing task of text classification. We achieved an average accuracy of about 96% across all fine-tuned transformer models.
Hate and Offensive Speech Detection
The easy accessibility of online platforms allows individuals to express their ideas and share experiences without restriction, thanks to freedom of speech. Since social media platforms lack a general framework to distinguish hate speech from neutral speech, such content spreads under the cover of anonymity. The propagation of hate speech on social media distresses society in many ways, such as harming the mental health of targeted audiences, degrading social interaction, and leading to the destruction of property. This research proposes an SVM binary classifier with TF-IDF, n-gram, and Word2vec feature extraction to detect hate speech for the Afaan Oromoo language. To construct the dataset for this study, we first crawled data from Facebook posts and comments using the Facepager and ScrapeStorm APIs, then labeled the collected data into two classes: hate and neutral. The general objective of this research is to design a framework that classifies hate and neutral speech. We also compare the results of different machine learning algorithms. The experiments are evaluated using accuracy, F-score, recall, and precision. The framework based on SVM with n-gram combination and TF-IDF achieves 96% on all metrics.
Hate and Offensive Speech Detection
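A minimal scikit-learn sketch of the SVM-with-TF-IDF-n-grams setup the abstract reports; the crawled Afaan Oromoo data is not available here, so the toy examples below are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["example neutral sentence", "example hateful sentence"]  # placeholders
labels = [0, 1]                                   # 0 = neutral, 1 = hate

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3)),  # unigrams to trigrams with TF-IDF
    LinearSVC(),
)
clf.fit(texts, labels)
print(clf.predict(["another example sentence"]))
```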
On social media networks like Twitter, Facebook, and Tumblr, people frequently share information. However, these platforms are also notorious for the spread of hate speech and insults, often posted anonymously. Hate speech involves using violent, abusive, or aggressive language towards a particular group based on factors such as gender, race, religion, or region. The prevalence of hate speech on these websites is a major concern, and manually detecting it can be time-consuming. To address this issue, this study presents an automated hate speech detection model that is evaluated on a publicly available Twitter dataset. The proposed method emphasizes data pre-processing, including stemming, term frequency-inverse document frequency (TF-IDF) for feature extraction, and various sampling techniques (random sampler, synthetic minority over-sampling technique (SMOTE), and ALL-KNN) to balance an imbalanced dataset. The logistic regression, support vector machine (SVM), and k-nearest neighbor (k-NN) machine learning classifiers were trained and tested using hold-out cross-validation to reduce overfitting and evaluate performance. The performance was evaluated using metrics such as accuracy, precision, and the confusion matrix. The results showed that the logistic regression classifier using the SMOTE approach had the best performance, with an accuracy of 82% and macro-averaged precision, recall, and F1-score of 80%, 82%, and 79%, respectively.
Hate and Offensive Speech Detection
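The best-performing combination reported above (TF-IDF features, SMOTE oversampling on the training split, logistic regression) can be sketched with scikit-learn and imbalanced-learn; the split details below are assumptions.

```python
from imblearn.over_sampling import SMOTE
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def train(texts, labels):
    vec = TfidfVectorizer()
    X = vec.fit_transform(texts)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=42)
    # SMOTE needs several minority examples (default k_neighbors=5).
    X_tr, y_tr = SMOTE(random_state=42).fit_resample(X_tr, y_tr)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    return vec, clf
```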
The user-generated content on the internet, including that on social media, may contain offensive language and hate speech, which negatively affect the mental health of the whole internet society and may lead to hate crimes. Intelligent models for automatic detection of offensive language and hate speech have attracted significant attention recently. In this paper, we propose an automatic method for detecting offensive language and fine-grained hate speech from Arabic tweets. We compare BERT with two conventional machine learning techniques (SVM, logistic regression). We also investigate the use of sentiment analysis and emoji descriptions as appended features along with the textual content of the tweets. The experiments show that the BERT-based model gives the best results, surpassing the best benchmark systems in the literature, on all three tasks: (a) offensive language detection with 84.3% F1-score, (b) hate speech detection with 81.8% F1-score, and (c) fine-grained hate-speech recognition (e.g., race, religion, social class, etc.) with 45.1% F1-score. The use of sentiment analysis slightly improves the performance of the models when detecting offensive language and hate speech but has no positive effect on the performance of the models when recognising the type of hate speech. The use of textual emoji descriptions as features can improve or deteriorate the performance of the models depending on the number of examples per class and whether the emojis are considered among the distinctive features between classes or not.
Hate and Offensive Speech Detection
The prevalence of social media platforms has prompted the need to detect language intended to harm or intimidate another person or group of people in online posts and comments. On Twitter, for instance, users are susceptible to cyberbullying and hate speech, which may develop into physical and psychological violence. A transformer-based approach is presented in this study to address the offensive speech detection issue. This model employs versions of the CAMeLBERT model and is validated using a mixture of four benchmark Arabic Twitter datasets annotated for the hate speech detection task, including the (OSACT5 2022) workshop shared task dataset. The presented model was capable of recognizing Arabic tweets containing offensive speech with 87.15% accuracy and an 83.6% F1 score.
Hate and Offensive Speech Detection
Social media often serves as a breeding ground for various hateful and offensive content. Identifying such content on social media is crucial due to its impact on people of any race, gender, or religion in an unprejudiced society. However, while there is extensive research in hate speech detection in English, there is a gap in hateful content detection in low-resource languages like Bengali. Besides, a current trend on social media is the use of Romanized Bengali for regular interactions. To overcome the existing research's limitations, in this study, we develop an annotated dataset of 10K Bengali posts consisting of 5K actual and 5K Romanized Bengali tweets. We implement several baseline models for the classification of such hateful posts. We further explore the interlingual transfer mechanism to boost classification performance. Finally, we perform an in-depth error analysis by looking into the posts misclassified by the models. While training the actual and Romanized datasets separately, we observe that XLM-Roberta performs the best. Further, we witness that on joint training and few-shot training, MuRIL outperforms other models by interpreting the semantic expressions better. We make our code and dataset public for others.
Hate and Offensive Speech Detection
With online social platforms becoming more and more accessible to the common masses, the volume of public utterances on a range of issues, events, persons, etc. has increased profoundly. Though most of this content is a manifestation of the personal feelings of individuals, a lot of it comprises hate and offensive speech. The exchange of hate and offensive speech has now become a global phenomenon with increased intolerance among societies. However, companies running these social media platforms need to discern and remove such unwanted content. This article focuses on automatic detection of hate and offensive speech from Twitter data by employing both conventional machine learning algorithms and deep learning architectures. We conducted extensive experiments on a benchmark 25K Twitter dataset with traditional machine learning algorithms as well as deep learning architectures. The results we obtained using deep learning architectures are better than state-of-the-art methods for hate and offensive speech detection.
Hate and Offensive Speech Detection
Internet and social media usage has skyrocketed over the past two decades, changing how people communicate with one another on a basic level. Numerous favourable outcomes have resulted from this, but the risks and harms that come with it are also present. It is impossible for humans to manually control the amount of damaging content, such as hate speech, that is available online. Research into automated methods for hate speech identification has therefore drawn increasing attention from academics. In this work, we investigate various publicly accessible datasets by merging them into a single homogeneous dataset and classifying them into two categories: hate or non-hate. We establish a baseline model and enhance its performance scores using various optimisation strategies. After achieving a competitive performance score, we develop a tool that quickly locates and evaluates a page with an effective measure, and then retrains our model using the resulting feedback. We demonstrate the superior performance of our multilingual approach in three languages, English, German, and Spanish, with performance equal to or better than most monolingual models.
Hate and Offensive Speech Detection
Because of the rapid advancement of technology over the last several years, the number of internet users is growing at an exponential rate, and as a result, email has become a popular means of exchanging information over the internet. Sending data and communicating with peers via email is the most cost-effective method. These email services also cause problems for users by delivering electronic junk mail, often known as spam mail. Spam email is a privacy concern that is linked to numerous commercial and dangerous websites, causing phishing, virus distribution, and a slew of other problems. This study examines several features that have been used for email spam classification, offers an overview of a handful of classifiers and algorithms that have been successfully evaluated, and presents exploratory data analysis. The proposed email spam classifier uses three parallel layers of machine learning and deep learning techniques, followed by a decision function to determine whether or not the emails are spam. During testing, it was found that the proposed classifier beats similar systems on the standard dataset with an accuracy of 98.4%.
Email Spam and Phishing Detection
Email spam has become a vital issue with the high-speed growth of internet users. Some people use spam for illegal conduct, phishing, and fraud, sending malicious links through spam emails that can harm our systems and sneak into them. The goal of email spam detection is to prevent spam messages from landing in the user's inbox, thereby improving the user experience. This project identifies spam emails using a machine learning approach. Machine learning is one of the applications of Artificial Intelligence that allows systems to learn and improve from experience without being explicitly programmed. This paper discusses the Naive Bayes machine learning algorithm. It is a probabilistic classifier, which means it predicts on the basis of the probability of an object, and it is selected for email spam detection for its strong precision and accuracy.
Email Spam and Phishing Detection
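A minimal, hedged sketch of such a Naive Bayes spam classifier using scikit-learn; a real project would train on a labelled email corpus rather than the toy strings below.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now, click this link",      # spam
    "meeting moved to 3pm, see agenda attached",  # ham
]
labels = ["spam", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)
print(model.predict(["free prize link inside"]))        # -> ['spam']
print(model.predict_proba(["free prize link inside"]))  # class probabilities
```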
Email spam has become a prevalent issue in recent times; with the growing number of internet users, spam emails are also on the rise. Many individuals use them for illegal and unethical activities such as phishing and fraud. Spammers send dangerous links through spam emails, which can harm our systems and gain access to personal information. It has become easier for criminals to create fake profiles and email accounts. They often impersonate real individuals in their spam emails, making them difficult to identify. This project aims to identify and detect fraudulent spam messages. The paper explores the use of machine learning techniques and algorithms applied to data sets, with the goal of selecting the best methods for maximum precision and accuracy in email spam detection.
Email Spam and Phishing Detection
Anything connected to the internet is vulnerable: mobile phones, personal laptops, tablets, routers, and smart speakers, for example. Cybercriminals need only one point of weakness, such as an unprotected device, a weak password, or an attachment, to potentially enter a system. There is a need to pause before proceeding with any mail, downloading any document, or accessing any link in a message, because there is a risk of phishing. Every day, 320 billion spam emails are sent. According to spam mail statistics, roughly 1 out of every 3000 emails is a spam mail containing phishing links, malware, fake messages, fake offers, and the like. Hackers try to obtain confidential information about people, companies, and bank account details. In 2023, spam mail is still a significant real-world problem because some people are still unaware of spam emails and cannot detect spam mail manually. There is therefore a need for a spam detector system that can detect spam emails with high accuracy. This paper discusses the implementation, execution, and results of deep learning algorithms, namely LSTM (one-directional), BiLSTM (bi-directional), BERT, and Convolutional Neural Networks, using a dataset downloaded from Kaggle. An accuracy of 98% was obtained with the CNN, 96% with the LSTM (one-directional) model, 97% with the BiLSTM (bi-directional) model, and 99% with the BERT model. The best accuracy of 99%, with high recall, lower precision, and a high F1 score, was attained by the BERT model for spam detection.
Email Spam and Phishing Detection
With the influx of technological advancements and the increased simplicity in communication, especially through emails, the upsurge in the volume of unsolicited bulk emails (UBEs) has become a severe threat to global security and economy. Spam emails not only waste users' time, but also consume a lot of network bandwidth, and may also include malware as executable files. Alternatively, phishing emails falsely claim users' personal information to facilitate identity theft and are comparatively more dangerous. Thus, there is an intrinsic need for the development of more robust and dependable UBE filters that facilitate automatic detection of such emails. There are several countermeasures to spam and phishing, including blacklisting and content-based filtering. However, in addition to content-based features, behavior-based features are well-suited to the detection of UBEs. Machine learning models are being extensively used by leading internet service providers like Yahoo, Gmail, and Outlook to filter and classify UBEs successfully. There are far too many options to consider, owing to the need to facilitate UBE detection and the recent advances in this domain. In this paper, we aim to elucidate how to extract email content- and behavior-based features, which features are appropriate in the detection of UBEs, and how to select the most discriminating feature set. Furthermore, to accurately handle the menace of UBEs, we facilitate an exhaustive comparative study using several state-of-the-art machine learning algorithms. Our proposed models resulted in an overall accuracy of 99% in the classification of UBEs. The text is accompanied by snippets of Python code to enable the reader to implement the approaches elucidated in this paper.
Email Spam and Phishing Detection
Phishing emails pose a severe risk to online users, necessitating effective identification methods to safeguard digital communication. Detection techniques are continuously researched to address the evolution of phishing strategies. Machine learning (ML) is a powerful tool for automated phishing email detection, but existing techniques like support vector machines and Naive Bayes have proven slow or ineffective in handling spam filtering. This study attempts to provide a phishing email detector and reliable classifier using a hybrid machine classifier with term frequency-inverse document frequency (TF-IDF) and an effective feature extraction technique (FET) on a real-world dataset from Kaggle. Exploratory data analysis is conducted to enhance understanding of the dataset and identify any conspicuous errors and outliers to facilitate the detection process. The FET converts the data text into a numerical representation that can be used for ML algorithms. The model’s performance is evaluated using accuracy, precision, recall, F1 score, receiver operating characteristic (ROC) curve and area under the ROC curve metrics. The research findings indicate that the hybrid model utilising TF-IDF achieved superior performance, with an accuracy of 87.5%. The paper offers valuable knowledge on using ML to identify phishing emails and highlights the importance of combining various models.
Email Spam and Phishing Detection
Spam is the act of sending unsolicited emails to a large number of users for phishing, spreading malware, etc. Internet Service Providers (ISPs) and email inbox providers (like Gmail, Yahoo Mail, AOL, etc.) rely on spam filters, firewalls, and blacklist directories to prevent unsolicited spam emails from entering your inbox. Spam mails are overrunning email inboxes, which significantly slows down internet performance. It is crucial to properly analyze the connections between these spammers and spam, because the majority of us tend to provide them with crucial information, such as our contact details. Because much of the cost of spamming is borne by parties other than the sender, spam effectively serves as advertising at only the cost of mailing. The study of existing work shows that machine learning and deep learning are frequently employed to identify email spam effectively. This research paper is a secondary work in which we study and implement various machine learning and deep learning approaches to identify email spam in Python. The four algorithms, KNN, Naive Bayes, BiLSTM, and Deep CNN, show that they can be utilized effectively to detect spam, with the Deep CNN outperforming the other three on accuracy and F1 score.
Email Spam and Phishing Detection
The risk of cyberattacks against businesses has risen considerably, with Business Email Compromise (BEC) schemes taking the lead as one of the most common phishing attack methods. The daily evolution of this attack mechanism's methods has shown a very high level of proficiency against organisations. Since the majority of BEC emails lack a payload, they have become challenging for organisations to identify or detect using typical spam filtering and static feature extraction techniques. Hence, an efficient and effective BEC phishing detection approach is required to provide various organisations with an effective solution to protect against such attacks. This paper provides a systematic review and examination of the state of the art of BEC phishing detection techniques to provide a detailed understanding of the topic, allowing researchers to identify the main principles of BEC phishing detection, the common Machine Learning (ML) algorithms used, the features used to detect BEC phishing, and the common datasets used. Based on the selected search strategy, 38 articles (of 950 articles) were chosen for closer examination. The selected articles were discussed and summarised to highlight their contributions as well as their limitations. In addition, the features of BEC phishing used for detection were described, and the ML algorithms and datasets used in BEC phishing detection models were discussed. Finally, open issues and future research directions of ML-based BEC phishing detection were discussed.
Email Spam and Phishing Detection
Breakthroughs in technology are happening as we speak, but the threat of their misuse is also increasing. Even a tiny amount of exposure within an organization can potentially force the organization out of business. In a digital world, information is the greatest asset. A phishing attack is an attack on the critical information of an individual or an organization. In a phishing attack, the perpetrator uses emails to lure individuals or people from different organizations into using infected URLs, attachments, and offers. The emails contain URLs, sender email information, and reply email information, masked with a legitimate source to hide the malicious content. Because an individual or organization receives a vast number of emails every day, it is difficult to detect the infected ones. In such cases, machine learning algorithms categorize emails into spam and legitimate mail. A Naive Bayesian network is a supervised machine learning algorithm and an effective way to classify a large number of emails; the Naive Bayesian classifier is fast in classifying large datasets. To further improve performance, count vectorization is applied, and a blacklisting algorithm is used to determine the legitimacy of the sender's email. In this paper, we analyze machine learning algorithms for the classification of emails.
Email Spam and Phishing Detection
The proliferation of phishing sites and emails poses significant challenges to existing cybersecurity efforts. Despite advances in spam filters and email security protocols, problems with oversight and false positives persist. Users often struggle to understand why emails are flagged as spam, risking the possibility of missing important communications or mistakenly trusting phishing emails. This study introduces ChatSpamDetector, a system that uses large language models (LLMs) to detect phishing emails. By converting email data into a prompt suitable for LLM analysis, the system provides a highly accurate determination of whether an email is phishing or not. Importantly, it offers detailed reasoning for its phishing determinations, assisting users in making informed decisions about how to handle suspicious emails. We conducted an evaluation using a comprehensive phishing email dataset and compared our system to several LLMs and baseline systems. We confirmed that our system using GPT-4 has superior detection capabilities with an accuracy of 99.70%. Advanced contextual interpretation by LLMs enables the identification of various phishing tactics and impersonations, making them a potentially powerful tool in the fight against email-based phishing threats.
Email Spam and Phishing Detection
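The flow ChatSpamDetector describes, serializing an email into a prompt and asking an LLM for a verdict with reasoning, might look roughly like the sketch below; the prompt wording, model name, and use of the `openai` client are our assumptions, not the paper's implementation.

```python
from openai import OpenAI

def detect_phishing(subject: str, sender: str, body: str) -> str:
    # Serialize the email into a prompt asking for a verdict plus reasoning.
    prompt = (
        "You are an email security analyst. Decide whether the email below "
        "is phishing. Answer 'phishing' or 'legitimate', then explain why.\n"
        f"From: {sender}\nSubject: {subject}\n\n{body}"
    )
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the paper evaluates GPT-4
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```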
Fake news production, accessibility, and consumption have all increased with the rise of internet-connected gadgets and social media platforms. A good fake news detection system is essential because the news readers receive can affect their opinions. Several works on fake news detection have been done using machine learning and deep learning approaches. Recently, the deep learning approach has been preferred over machine learning because of its ability to comprehend the intricacies of textual data. The introduction of transformer architecture changed the NLP paradigm and distinguished itself from recurrent models by enabling the processing of sentences as a whole rather than word by word. The attention mechanisms introduced in Transformers allowed them to understand the relationship between far-apart tokens in a sentence. Numerous deep learning works on fake news detection have been published by focusing on different features to determine the authenticity of a news source. We performed an extensive analysis of the comprehensive NELA-GT 2020 dataset, which revealed that the title and content of a news source contain discernible information critical for determining its integrity. To this objective, we introduce ‘FakeNews Transformer’ — a specialized Transformer-based architecture that considers the news story’s title and content to assess its veracity. Our proposed work achieved an accuracy of 74.0% on a subset of the NELA-GT 2020 dataset. To our knowledge, FakeNews Transformer is the first published work that considers both title and content for evaluating a news article; thus, we compare the performance of our work against two BERT and two LSTM models working independently on title and content. Our work outperformed the BERT and LSTM models working independently on title by 7.6% and 9.6%, while performing better than the BERT and LSTM models working independently on content by 8.9% and 10.5%, respectively.
Fake News Detection
The strategy for identifying fake news blends Natural Language Processing (NLP) techniques, Reinforcement Learning (RL), and blockchain technology. Identifying false information on Twitter is essential because of the platform's broad appeal and significant impact on public conversation. For millions of people globally, Twitter is their main source of news, which makes it a prime place for information to spread quickly. The procedure commences with gathering a comprehensive dataset of news articles and their corresponding metadata, followed by NLP-based pre-processing to cleanse and tokenize the text. Pertinent attributes, such as word frequency and readability, are then extracted and utilized to train an RL agent. This agent is trained to distinguish between authentic and fabricated news through a system of rewards and penalties. After training, the RL agent uses the extracted features to classify fresh news as true or false. While the potential role of blockchain technology is mentioned, further explanation is necessary. This inventive strategy aims to halt the sharing of misleading and untrue information in the realm of digital news.
Fake News Detection
The paper presents our solutions for the MediaEval 2020 task, namely FakeNews: Corona Virus and 5G Conspiracy Multimedia Twitter-Data-Based Analysis. The task aims to analyze tweets related to COVID-19 and 5G conspiracy theories to detect misinformation spreaders, and is composed of two sub-tasks, namely (i) text-based and (ii) structure-based fake news detection. For the first sub-task, we propose six different solutions relying on Bag of Words (BoW) and BERT embeddings. Three of the methods address a binary classification task, differentiating 5G-conspiracy tweets from the remaining COVID-19-related tweets, while the others treat the task as a ternary classification problem. On the ternary classification task, our BoW- and BERT-based methods obtained F1-scores of 0.606 and 0.566 on the development set, respectively. On binary classification, the BoW- and BERT-based solutions obtained average F1-scores of 0.666 and 0.693, respectively. For structure-based fake news detection, we rely on Graph Neural Networks (GNNs), achieving an average ROC of 0.95 on the development set.
Fake News Detection
In today's digital age, the swift spread of information has revolutionized the way news consumers stay informed. However, this convenience comes with a downside: the propagation of fake news, which can spread misinformation, manipulate public opinion, and undermine the credibility of legitimate sources. The term "fake news" refers to intentionally fabricated or misleading information that is frequently presented as news for a variety of motives, including commercial, social, or political gain. Machine Learning (ML), with its ability to analyze large datasets and discern patterns, has emerged as a promising solution for tackling the issue of fake news. By leveraging techniques such as Natural Language Processing (NLP), classification algorithms, and anomaly detection, ML models can be trained to identify and differentiate between authentic news and fake news. This in turn protects news consumers from being misled, protects products and services from defamation, and helps counter political smearing. Machine learning algorithms can analyze historical data and make accurate predictions about whether news is fake or not. In this study, the proposed machine learning-based news analysis model utilizes a feature selection technique to categorize the news. The model explores different classification algorithms, including Decision Tree (DT), Passive Aggressive Classifier (PAC), Logistic Regression (LR), and Random Forest (RF), to build the fake news prediction model. The experimental results show that the Passive Aggressive Classifier outperforms the other models with an accuracy rate of 93%. The proposed model can help news channels, social media, and consumers distinguish between fake and real news and minimize the risk of being misled.
Fake News Detection
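A sketch of the best-performing model from the abstract above, a Passive Aggressive Classifier over TF-IDF features; the toy dataset and hyperparameters below are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

texts = [
    "city council approves new budget",       # real (toy)
    "aliens endorse mayoral candidate",       # fake (toy)
    "economy grew two percent last quarter",  # real (toy)
    "miracle cure suppressed by doctors",     # fake (toy)
]
labels = [0, 1, 0, 1]  # 0 = real, 1 = fake

X_tr, X_te, y_tr, y_te = train_test_split(
    texts, labels, test_size=0.5, stratify=labels, random_state=7)
vec = TfidfVectorizer(stop_words="english", max_df=0.7)
clf = PassiveAggressiveClassifier(max_iter=50)
clf.fit(vec.fit_transform(X_tr), y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(vec.transform(X_te))))
```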
Given the ubiquity of fake news online, a reliable mechanism for automated detection is needed. This project proposes a new end-to-end detection pipeline, which uses Natural Language Processing (NLP) techniques for automated evidence extraction from online sources given an input claim of arbitrary length. This project also compiles a dataset of input claims and evidence larger than state-of-the-art datasets. Distant supervision is used to generate weakly labelled training data and increase sample size. The resultant dataset displays topical variation and variations in length and features. The final ensemble models demonstrate high detection accuracy and micro-average F1 scores. The results validate distant supervision as a viable strategy for model training and data collection. A ConvNet-RNN hybrid was found to be the best performing style-based model, while a Siamese LSTM with layer-weight sharing was found to be the best performing truth-based model. Generally, truth-based models outperformed style-based models, and ensembling different models leads to performance gains over any single classifier.
Fake News Detection
With the widespread use of social media platforms within our modern society, these platforms have become a popular medium for disseminating news across the globe. While some of these platforms are considered reliable sources for sharing news, others publicize the information without much validation. The transmission of fake news on social media impacts people's behavior and negatively influences people's decisions. During the COVID-19 outbreak, it was more evident than ever. This has led to a demand for conducting research studies to explore sophisticated approaches to assess the integrity of news worldwide. The main objective of this research paper was to outline our proposed experimental methodology to detect and assess fake news using Data Mining and Natural Language Processing. The presented research effort provides a method to verify the authenticity of the news disseminated in social networks by dividing the process into four significant stages: news aggregation, publication collection, data analysis, and matching results.
Fake News Detection
With the development of technology, the spread of fake news on social networks is increasing. Many researchers and organizations have taken action to detect fake news manually or automatically. In this study, various Machine Learning Algorithms and Transformer based approaches are used to select the best performing model that can distinguish news as fake or real. In order to contribute to the Turkish literature in the field of Natural Language Processing (NLP), the dataset is specifically prepared in Turkish. The words were vectorized using Word2Vec, BERT and SBERT and classified using Machine Learning Algorithms such as Support Vector Machines, Naive Bayes, Logistic Regression, KNN and BERT/SBERT deep learning models. The highest F1 score of 0.99 was obtained from the transformer-based BERT and SBERT.
Fake News Detection
Currently, fake news easily goes viral on social networks, which is a cause for concern worldwide. One alternative for detecting this type of information is the use of Machine Learning and Natural Language Processing. Nevertheless, due to the high volume of information, it is crucial to define mechanisms that are easy to implement and deploy. The aim of this research is to demonstrate that the use of basic Neural Networks together with a modified hyperparameter optimization algorithm allows obtaining results similar to those obtained when using SVM and NLP. The source of information covers verified trending news in the country as well as false headlines from mid-2021 to February 2023. The experiments show that 86% of true news and 78% of fake news can be accurately identified with the proposed approach, with a mean error around 0.049.
Fake News Detection
The upsurge of fake news in recent times, facilitated due to the swift dissemination of information on social media, has necessitated the development of advanced detection techniques. This research focuses on optimizing the Hugging Face Transformer models – a cutting-edge Natural Language Processing (NLP) tool – to enhance fake news detection. These models are widely recognized for their superior performance in understanding and generating human language. However, their application in fake news detection remains under-explored. Thus, this paper explores how Python, a high-level programming language known for its simplicity and robustness, can be used to fine-tune these models for this purpose. The primary objective is to improve their speed, efficiency, and accuracy in detecting fake news. We propose a comprehensive framework that uses Python-based methodologies to tweak various aspects of the Hugging Face Transformer models, such as their architecture, training paradigms, and hyperparameters. The expected outcome is a significant improvement in the models’ performance metrics, which will be evaluated using standard benchmarks within the fake-news detection domain. Overall, this research paves the way for harnessing the full potential of Hugging Face Transformers in curbing the menace of fake news, thus contributing to a more reliable and truthful digital information ecosystem.
Fake News Detection
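Fine-tuning a Hugging Face Transformer for fake-news classification with the Trainer API might look roughly as below; the checkpoint, hyperparameters, and in-memory toy dataset are placeholders, not the paper's configuration.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2)

# Toy in-memory dataset; a real run would load a labelled news corpus.
data = Dataset.from_dict({"text": ["real story ...", "made-up story ..."],
                          "label": [0, 1]})
data = data.map(lambda b: tok(b["text"], truncation=True,
                              padding="max_length", max_length=64),
                batched=True)

args = TrainingArguments(output_dir="fake-news-model", num_train_epochs=1,
                         per_device_train_batch_size=2,
                         learning_rate=2e-5)  # hyperparameters to tune
Trainer(model=model, args=args, train_dataset=data).train()
```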
Fake news has been a problem ever since the internet boomed. The easier access and exponential growth of the knowledge offered on social media networks have made it difficult to reliably differentiate between false and true information. Opposing such fake news is important because the world's views and mindsets are shaped by information. People form their own opinions through day-to-day news, and if this information is false, it can have devastating consequences. The credibility of social media networks is also at stake where the spread of fake information prevails. Machine learning and Natural Language Processing have played a significant role in the classification of such data, though with some limitations. The need of the hour is to stop this kind of fake news, especially in developing countries like India, and to focus on correct, proper news articles that will not affect people's mentality negatively.
Fake News Detection
Recently, product/service reviews and online businesses have become as inseparable as the blood–heart relationship, as reviews greatly impact customers' purchase decisions. There is an increasing incentive to manipulate reviews, mostly profit-motivated, as positive reviews imply high purchases and vice versa. Therefore, a suitable fake review detection approach is paramount in ensuring fair e-business competition and sustainability. Most existing methods mainly utilize discrete review features such as text similarity, rating deviation, review content, product information, the semantic meaning of reviews, and reviewer behaviors. Some recent researchers have attempted multi-feature (review- and reviewer-centric) integration. However, such approaches face two issues: (1) review representations are extracted independently, ignoring correlations between them; (2) there is no unified framework that can jointly learn latent text feature vectors, aspect ratings, and overall ratings. To address these issues, we propose a novel Deep Hybrid Model for fake review detection, which jointly learns from latent text feature vectors, aspect ratings, and overall ratings. Initially, it computes contextualized review text vectors, extracts aspects, and calculates respective rating values. Then, contextualized word vectors, overall ratings, and aspect ratings are concatenated. Finally, the model learns to classify reviews from this unified multi-dimensional feature representation. Extensive experiments on a publicly available dataset demonstrate that the proposed approach significantly outperforms state-of-the-art baseline approaches.
Fake Review Detection
Nowadays, app usage has grown with the increasing popularity of mobile devices, and users prefer smartphones for all types of mobile applications. Users generally decide whether to download an application based on how many users have already downloaded it, its ratings and reviews, its comments, and so on. In the mobile app market, ranking fraud refers to fraudulent activity intended to push mobile apps up the popularity list, and it has become increasingly common for app developers to resort to such fake mechanisms. This paper proposes semantic analysis of app reviews for fraud detection in mobile apps. First, we propose to detect the misrepresentation by mining the active periods, also called leading sessions, of the mobile apps. Furthermore, we inspect two types of evidence, namely ranking-based evidence and review-based evidence, and use natural language processing (NLP) to extract action words. Next, we convert reviews to ratings and finally perform pattern analysis on sessions with app data gathered from the app store. The paper validates the effectiveness of the proposed approach and also shows the scalability of the detection algorithm.
Fake Review Detection
Online reviews are a growing market, but it is struggling with fake reviews. They undermine both the value of reviews to the user, and their trust in the review sites. However, fake positive reviews can boost a business, and so a small industry producing fake reviews has developed. The two sides are facing an arms race that involves more and more natural language processing (NLP). So far, NLP has been used mostly for detection, and works well on human-generated reviews. But what happens if NLP techniques are used to generate fake reviews as well? We investigate the question in an adversarial setup, by assessing the detectability of different fake-review generation strategies. We use generative models to produce reviews based on meta-information, and evaluate their effectiveness against deception-detection models and human judges. We find that meta-information helps detection, but that NLP-generated reviews conditioned on such information are also much harder to detect than conventional ones.
Fake Review Detection
Online shopping stores have grown steadily over the past few years. Due to the massive growth of these businesses, the detection of fake reviews has attracted attention. Fake reviews seriously mislead customers and thereby undermine the honesty and authenticity of online shopping environments. So far, a variety of fake review classifiers have been proposed that take into account the actual content of the review. To improve the accuracy of existing fake review classification and detection approaches, we propose to use the BERT (Bidirectional Encoder Representations from Transformers) model to extract word embeddings from texts (i.e., reviews). The embeddings are then fed into various basic classifiers such as SVM (Support Vector Machine), Random Forests, Naive Bayes, and others. The confusion matrix method was also used to evaluate and graphically represent the results. The results indicate that the SVM classifier outperforms the others in terms of accuracy and F1-score, with an accuracy of 87.81%, which is 7.6% higher than the classifier used in the previous study [5].
Fake Review Detection
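The recipe above (BERT embeddings fed to classical classifiers such as an SVM) can be sketched as follows; mean pooling over the last hidden state is one common choice, and the paper may pool differently.

```python
import torch
from sklearn.svm import SVC
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts: list[str]) -> torch.Tensor:
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc).last_hidden_state      # (batch, seq, hidden)
    mask = enc["attention_mask"].unsqueeze(-1)   # ignore padding tokens
    return (out * mask).sum(1) / mask.sum(1)     # mean-pooled embeddings

reviews = ["great product, works as described", "best thing ever buy now!!!"]
labels = [0, 1]  # 0 = genuine, 1 = fake (toy labels)
clf = SVC().fit(embed(reviews).numpy(), labels)
print(clf.predict(embed(["amazing amazing must buy"]).numpy()))
```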
In the COVID-19 scenario, the majority of people have taken an interest in online shopping, and many order merchandise based on previous reviews. These reviews play an important role in purchase decisions. However, spammers may manufacture fake reviews, and because of such behavior, customers can be misled into making the wrong decision. To overcome this problem, we identify users who have posted reviews more than once, and the admin can delete such reviews based on the customer review information.
Fake Review Detection
In order to enhance brand benefits or discredit competitors, some merchants hire fake reviewers to post large amounts of fake reviews on e-commerce platforms. This behavior inevitably harms consumers' interests and causes unfair market competition for other merchants. Research on fake review detection mainly focuses on mining the content of the reviews, the behavioral features of the reviewers, or building models using deep learning. However, most existing research has not taken into account the differences in motivation between fake positive and fake negative reviews, the review-time distribution features of genuine reviewers, or how to effectively integrate multi-modal features. In this paper, we collect restaurant review datasets from Yelp.com in three different regions and propose a fake review detection method based on a neural network model called BERT-Multi feature-TextCNN-BiGRU-Attention (BMTBA). First, we use the BERT pre-training model to train a restaurant review language model. Then, we propose a multimodal fusion method that combines the BERT pre-trained word vector sequences with extracted multidimensional statistical features as input (including a newly proposed reviewer feature called Review weekday). Finally, considering that the motivations for fake positive and fake negative reviews differ, we construct separate fake-positive and fake-negative models to detect them. Multiple ablation experiments conducted on the three datasets show that the proposed BMTBA model outperforms the baseline model (BERT-TextCNN-BiGRU-Attention) with a higher classification detection accuracy of 94.68%.
Fake Review Detection
Detecting fake reviews can help customers make better purchasing decisions and maintain a positive online business environment. In recent years, pre-trained language models have significantly improved the performance of natural language processing tasks. These models are able to generate different representation vectors for each word in different contexts, thus addressing the challenge of words having multiple meanings, which traditional word vector methods such as Word2Vec cannot handle, and therefore better capturing the text’s contextual information. In addition, we consider that reviews generally contain rich opinion and sentiment expressions, while most pre-trained language models, including BERT, lack consideration of sentiment knowledge in the pre-training stage. Based on the above considerations, we propose a new fake review detection model based on a pre-trained language model and a convolutional neural network, called BSTC. BSTC combines BERT, SKEP, and TextCNN, where SKEP is a pre-trained language model based on sentiment knowledge enhancement. We conducted a series of experiments on three gold-standard datasets, and the findings illustrate that BSTC outperforms state-of-the-art methods in detecting fake reviews. It achieved the highest accuracy on all three gold-standard datasets—Hotel, Restaurant, and Doctor—with 93.44%, 91.25%, and 92.86%, respectively.
Fake Review Detection
Fake (deceptive) reviews have become a serious problem for online consumers, with the proliferation of online marketplaces leading to an increase in spurious reviews that are often used to lure or discourage potential customers. While sentiment analysis has been introduced to the e-commerce sector, the lack of an effective method to differentiate between authentic and fake reviews is still a major challenge. Existing approaches face issues such as slow convergence and inadequate precision. In order to address these challenges, this paper proposes a new approach that integrates sentiment features into the review detection process. The proposed approach uses a feature extraction method that utilizes a pre-constructed sentiment dictionary, a pre-trained BERT model to extract feature vectors, and a fully connected dense layer to classify reviews as real or fake using the softmax function. The effectiveness of the proposed approach was evaluated on the Yelp dataset, showing a nearly 7% improvement in accuracy compared to existing feature sets and a nearly 4% improvement over existing state-of-the-art methods. The integration of sentiment features has shown promising results in detecting fake reviews, which is crucial for ensuring a fair and trustworthy online marketplace.
Fake Review Detection
The increasing prevalence of fake online reviews jeopardizes firms' profits, consumers' well-being, and the trustworthiness of e-commerce ecosystems. We face the significant challenge of accurately detecting fake reviews. In this paper, we undertake a comprehensive investigation of traditional and state-of-the-art machine learning models for classification, based on textual features, to detect fake online reviews. We examine existing and noteworthy models for fake online review detection in terms of the effectiveness of textual features, the efficiency of sampling methods, and their detection performance. Adopting a quantitative and data-driven approach, we scrutinize both tree-based and transformer-based detection models. Our comparative studies show that transformer-based models (specifically BERT and GPT-3) outperform tree-based models (i.e., Random Forest and XGBoost) in terms of accuracy, precision, and recall metrics. We use real data from online reviews on Yelp.com for implementation. The results demonstrate that our proposed approach can identify fraudulent reviews effectively and efficiently. While synthesizing ChatGPT-3, tree-based, and transformer-based models for fake online review detection is rather new, it is promising; this paper highlights their potential for better detection of fake online reviews.
Fake Review Detection
Fighting fake news is a difficult and challenging task. With an increasing impact on the social and political environment, fake news exerts an unprecedentedly dramatic influence on people’s lives. In response to this phenomenon, initiatives addressing automated fake news detection have gained popularity, generating widespread research interest. However, most approaches targeting English and low-resource languages experience problems when devising such solutions. This study focuses on the progress of such investigations, while highlighting existing solutions, challenges, and observations shared by various research groups. In addition, given the limited amount of automated analyses performed on Romanian fake news, we inspect the applicability of the available approaches in the Romanian context, while identifying future research paths.
Fake Review Detection

This benchmark is from the SciPrompt paper: https://huggingface.co/papers/2410.01946

Emerging NLP encompasses 21 newly developed research fields within the broader category of Computation and Language. We collect 30 examples for each topic, assigning five instances for training and another five for validation. The rest of the examples are used for testing. In total, we collect 210 examples for the training and validation sets (105 each) and 420 for the test set.
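
As a minimal loading sketch (assuming the benchmark is hosted on the Hugging Face Hub with standard train/validation/test splits; the repository id below is a placeholder, while the `Abstracts` and `Class` column names follow the preview above):

```python
from collections import Counter

from datasets import load_dataset

# Placeholder repository id -- substitute the actual Hub path of this dataset.
ds = load_dataset("USER/emerging-nlp")

# Each row pairs a paper abstract with one of the 21 fine-grained topic labels.
example = ds["test"][0]
print(example["Abstracts"][:200])
print(example["Class"])

# Sanity-check the per-topic split sizes described above:
# 5 train + 5 validation per topic (21 x 5 = 105 each); the rest is for testing.
for split in ("train", "validation", "test"):
    print(split, len(ds[split]), Counter(ds[split]["Class"]).most_common(1))
```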

For detailed information regarding the dataset or the SciPrompt framework, please refer to our GitHub repo and the EMNLP paper.

Citation Information

For the use of SciPrompt and Emerging NLP benchmark, please cite:


@inproceedings{you-etal-2024-sciprompt,
    title = "{S}ci{P}rompt: Knowledge-augmented Prompting for Fine-grained Categorization of Scientific Topics",
    author = "You, Zhiwen  and
      Han, Kanyao  and
      Zhu, Haotian  and
      Ludaescher, Bertram  and
      Diesner, Jana",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.350",
    pages = "6087--6104",
}

Contact Information

If you have any questions, please email [email protected].
