{"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-191d8e8b688c45e4897c4993e2a6ef3f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We adress the problem of dueling bandits defined on partially ordered sets, or posets. In this setting, arms may not be comparable, and there may be several (incomparable) optimal arms. We propose an algorithm, UnchainedBandits, that efficiently finds the set of optimal arms of any poset even when pairs of comparable arms cannot be distinguished from pairs of incomparable arms, with a set of minimal assumptions. This algorithm relies on the concept of decoys, which stems from social psychology. For the easier case where the incomparability information may be accessible, we propose a second algorithm, SlicingBandits, which takes advantage of this information and achieves a very significant gain of performance compared to UnchainedBandits. We provide theoretical guarantees and experimental evaluation for both algorithms.", "targets": "Decoy Bandits Dueling on a Poset"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-01f34d9ee6cd4c6b8baa29bd8fc1cd7e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a new model for prediction markets, in which we use risk measures to model agents and introduce a market maker to describe the trading process. This specific choice on modelling tools brings us mathematical convenience. The analysis shows that the whole market effectively approaches a global objective, despite that the market is designed such that each agent only cares about its own goal. Additionally, the market dynamics provides a sensible algorithm for optimising the global objective. An intimate connection between machine learning and our markets is thus established, such that we could 1) analyse a market by applying machine learning methods to the global objective, and 2) solve machine learning problems by setting up and running certain markets.", "targets": "Multi-period Trading Prediction Markets with Connections to Machine Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5dfaaae27d804500abaae8a5321ccea8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recently deep neural networks (DNNs) have been used to learn speaker features. However, the quality of the learned features is not sufficiently good, so a complex back-end model, either neural or probabilistic, has to be used to address the residual uncertainty when applied to speaker verification, just as with raw features. This paper presents a convolutional timedelay deep neural network structure (CT-DNN) for speaker feature learning. Our experimental results on the Fisher database demonstrated that this CT-DNN can produce highquality speaker features: even with a single feature (0.3 seconds including the context), the EER can be as low as 7.68%. 
This effectively confirmed that the speaker trait is largely a deterministic short-time property rather than a long-time distributional pattern, and therefore can be extracted from just dozens of frames.", "targets": "Deep Speaker Feature Learning for Text-independent Speaker Verification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3f2dda70e3e7434abc5de1102c3736f1", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a deep hierarchical recurrent neural network for sequence tagging. Given a sequence of words, our model employs deep gated recurrent units on both character and word levels to encode morphology and context information, and applies a conditional random field layer to predict the tags. Our model is task independent, language independent, and feature engineering free. We further extend our model to multi-task and cross-lingual joint training by sharing the architecture and parameters. Our model achieves state-of-the-art results in multiple languages on several benchmark tasks including POS tagging, chunking, and NER. We also demonstrate that multi-task and cross-lingual joint training can improve the performance in various cases.", "targets": "Multi-Task Cross-Lingual Sequence Tagging from Scratch"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-be75d7112fd44d87958fe3d8aa34f558", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This manuscript presents a trust computation for international relations and its calculus, which is related to Bayesian inference, Dempster-Shafer theory and subjective logic. We propose a method that allows a trust computation that was previously subjective and incomputable. An example case study for the trust computation is the United States of America\u2013Great Britain relations. The method supports decision makers in a government, such as the foreign ministry, defense ministry, or presidential or prime minister office. The Department of Defense (DoD) may use our method to determine whether a nation can be regarded as a friendly, neutral or hostile nation.", "targets": "A Mathematical Trust Algebra for International Nation Relations Computation and Evaluation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d072b2dc576941288943867497088077", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Syllogism is a type of deductive reasoning involving quantified statements. The syllogistic reasoning scheme in the classical Aristotelian framework involves three crisp term sets and four linguistic quantifiers, for which the main support is the linguistic properties of the quantifiers. A number of fuzzy approaches for defining an approximate syllogism have been proposed for which the main support is cardinality calculus. In this paper we analyze fuzzy syllogistic models previously described by Zadeh and Dubois et al. and compare their behavior with that of the classical Aristotelian framework to check which of the 24 classical valid syllogistic reasoning patterns (called moods) are particular crisp cases of these fuzzy approaches. 
This allows us to assess to what extent these approaches can be considered either plausible extensions of the classical crisp syllogism or a basis for a general approach to the problem of approximate syllogism.", "targets": "On the analysis of set-based fuzzy quantified reasoning using classical syllogistics"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-949339e9a46f4bff8f41252d8997dcdc", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The family of temporal difference (TD) methods spans a spectrum from computationally frugal linear methods like TD(\u03bb) to data-efficient least squares methods. Least squares methods make the best use of available data by directly computing the TD solution and thus do not require tuning a typically highly sensitive learning rate parameter, but require quadratic computation and storage. Recent algorithmic developments have yielded several sub-quadratic methods that use an approximation to the least squares TD solution, but incur bias. In this paper, we propose a new family of accelerated gradient TD (ATD) methods that (1) provide similar data efficiency benefits to least-squares methods, at a fraction of the computation and storage, (2) significantly reduce parameter sensitivity compared to linear TD methods, and (3) are asymptotically unbiased. We illustrate these claims with a proof of convergence in expectation and experiments on several benchmark domains and a large-scale industrial energy allocation domain.", "targets": "Accelerated Gradient Temporal Difference Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b0c5e478669d43e0b816d44f9a0ddf6c", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Linear Discriminant Analysis (LDA) on Electronic Health Records (EHR) data is widely used for early detection of diseases. Classical LDA for EHR data classification, however, suffers from two handicaps: the ill-posed estimation of LDA parameters (e.g., covariance matrix), and the \u201clinear inseparability\u201d of EHR data. To handle these two issues, in this paper, we propose a novel classifier FWDA \u2014 Fast Wishart Discriminant Analysis, that makes predictions in an ensemble way. Specifically, FWDA first surrogates the distribution of inverse covariance matrices using a Wishart distribution estimated from the training data, then \u201cweighted-averages\u201d the classification results of multiple LDA classifiers parameterized by the sampled inverse covariance matrices via a Bayesian Voting scheme. The weights for voting are optimally updated to adapt to each new input, so as to enable the nonlinear classification. Theoretical analysis indicates that FWDA possesses a fast convergence rate and a robust performance on high-dimensional data. Extensive experiments on a large-scale EHR dataset show that our approach outperforms state-of-the-art algorithms by a large margin.", "targets": "FWDA: a Fast Wishart Discriminant Analysis with its Application to Electronic Health Records Data Classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d3456466c16647029187eb4068c5ef70", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "We propose a method for learning expressive energy-based policies for continuous states and actions, which has been feasible only in tabular domains before. We apply our method to learning maximum entropy policies, resulting in a new algorithm, called soft Q-learning, that expresses the optimal policy via a Boltzmann distribution. We use the recently proposed amortized Stein variational gradient descent to learn a stochastic sampling network that approximates samples from this distribution. The benefits of the proposed algorithm include improved exploration and compositionality that allows transferring skills between tasks, which we confirm in simulated experiments with swimming and walking robots. We also draw a connection to actor-critic methods, which can be viewed as performing approximate inference on the corresponding energy-based model.", "targets": "Reinforcement Learning with Deep Energy-Based Policies"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-40f122b87c7e402098700f4c7d73f4a8", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Colorization of grayscale images has been a hot topic in computer vision. Previous research mainly focuses on producing a colored image to match the original one. However, since many colors share the same gray value, an input grayscale image could be diversely colored while maintaining its reality. In this paper, we design a novel solution for unsupervised diverse colorization. Specifically, we leverage conditional generative adversarial networks to model the distribution of real-world item colors, in which we develop a fully convolutional generator with multi-layer noise to enhance diversity, with multi-layer condition concatenation to maintain reality, and with stride 1 to keep spatial information. With such a novel network architecture, the model yields highly competitive performance on the open LSUN bedroom dataset. A Turing test with 80 humans further indicates that our generated color schemes are highly convincing.", "targets": "Unsupervised Diverse Colorization via Generative Adversarial Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-19b78eb552004c75879c5a2981b53d58", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper tests the hypothesis that distinctive feature classifiers anchored at phonetic landmarks can be transferred cross-lingually without loss of accuracy. Three consonant voicing classifiers were developed: (1) manually selected acoustic features anchored at a phonetic landmark, (2) MFCCs (either averaged across the segment or anchored at the landmark), and (3) acoustic features computed using a convolutional neural network (CNN). All detectors are trained on English data (TIMIT), and tested on English, Turkish, and Spanish (performance measured using F1 and accuracy). Experiments demonstrate that manual features outperform all MFCC classifiers, while CNN features outperform both. MFCC-based classifiers suffer an overall error rate increase of up to 96.1% when generalized from English to other languages. 
Manual features suffer a relative error rate increase of only up to 35.2%, and CNN features actually perform the best on Turkish and Spanish, demonstrating that features capable of representing long-term spectral dynamics (CNN and landmark-based features) are able to generalize cross-lingually with little or no loss of accuracy.", "targets": "LANDMARK-BASED CONSONANT VOICING DETECTION ON MULTILINGUAL CORPORA"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-cb71fdd1c45e445a89d3804dcea78b83", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present the EpiReader, a novel model for machine comprehension of text. Machine comprehension of unstructured, real-world text is a major research goal for natural language processing. Current tests of machine comprehension pose questions whose answers can be inferred from some supporting text, and evaluate a model\u2019s response to the questions. The EpiReader is an end-to-end neural model comprising two components: the first component proposes a small set of candidate answers after comparing a question to its supporting text, and the second component formulates hypotheses using the proposed candidates and the question, then reranks the hypotheses based on their estimated concordance with the supporting text. We present experiments demonstrating that the EpiReader sets a new state-of-the-art on the CNN and Children\u2019s Book Test machine comprehension benchmarks, outperforming previous neural models by a significant margin.", "targets": "Natural Language Comprehension with the EpiReader"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-63ec1b56dbb04fbc97d9ef4d4176d192", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study the skip-thought model proposed by Kiros et al. (2015) with neighborhood information as weak supervision. More specifically, we propose a skip-thought neighbor model to consider the adjacent sentences as a neighborhood. We train our skip-thought neighbor model on a large corpus with continuous sentences, and then evaluate the trained model on 7 tasks, which include semantic relatedness, paraphrase detection, and classification benchmarks. Both quantitative comparison and qualitative investigation are conducted. We empirically show that our skip-thought neighbor model performs as well as the skip-thought model on evaluation tasks. In addition, we found that incorporating an autoencoder path in our model did not help it perform better, while it hurt the performance of the skip-thought model.", "targets": "Rethinking Skip-thought: A Neighborhood based Approach"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e2ca987482dd43e7938c8e9bb86fa857", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "As a general and thus popular model for autonomous systems, the partially observable Markov decision process (POMDP) can capture uncertainties from different sources like sensing noises, actuation errors, and uncertain environments. However, its comprehensiveness makes planning and control in POMDPs difficult. 
Traditional POMDP planning aims to find the optimal policy that maximizes the expected accumulated reward. But for safety-critical applications, guarantees of system performance described by formal specifications are desired, which motivates us to consider formal methods to synthesize a supervisor for POMDPs. With system specifications given by Probabilistic Computation Tree Logic (PCTL), we propose a supervisory control framework with a type of deterministic finite automata (DFA), za-DFA, as the controller form. While the existing work mainly relies on optimization techniques to learn fixed-size finite state controllers (FSCs), we develop an L\u2217 learning-based algorithm to determine both the state space and the transitions of the za-DFA. Membership queries and different oracles for conjectures are defined. The learning algorithm is sound and complete. A detailed, step-by-step example is given to illustrate the supervisor synthesis algorithm.", "targets": "Supervisor Synthesis of POMDP based on Automata Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d812fa8fc5dc4f1893997e3e8c63f81f", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we propose ELF, an Extensive, Lightweight and Flexible platform for fundamental reinforcement learning research. Using ELF, we implement a highly customizable real-time strategy (RTS) engine with three game environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a miniature version of StarCraft, captures key game dynamics and runs at 40K frames per second (FPS) per core on a Macbook Pro notebook. When coupled with modern reinforcement learning methods, the system can train a full-game bot against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition, our platform is flexible in terms of environment-agent communication topologies, choices of RL methods, changes in game parameters, and can host existing C/C++-based game environments like ALE [4]. Using ELF, we thoroughly explore training parameters and show that a network with Leaky ReLU [16] and Batch Normalization [10] coupled with long-horizon training and a progressive curriculum beats the rule-based built-in AI more than 70% of the time in the full game of Mini-RTS. Strong performance is also achieved on the other two games. In game replays, we show our agents learn interesting strategies. ELF, along with its RL platform, will be open-sourced.", "targets": "ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d60d067c1c4f46ba830fdcccc285672c", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We show a new lower bound on the sample complexity of (\u03b5,\u03b4)-differentially private algorithms that accurately answer statistical queries on high-dimensional databases. The novelty of our bound is that it depends optimally on the parameter \u03b4, which loosely corresponds to the probability that the algorithm fails to be private, and is the first to smoothly interpolate between approximate differential privacy (\u03b4 > 0) and pure differential privacy (\u03b4 = 0). 
Specifically, we consider a database D \u2208 {\u00b11}^{n\u00d7d} and its one-way marginals, which are the d queries of the form \u201cWhat fraction of individual records have the i-th bit set to +1?\u201d We show that in order to answer all of these queries to within error \u00b1\u03b1 (on average) while satisfying (\u03b5,\u03b4)-differential privacy, it is necessary that n \u2265 \u03a9(\u221a(d log(1/\u03b4)) / (\u03b1\u03b5)), which is optimal up to constant factors. To prove our lower bound, we build on the connection between fingerprinting codes and lower bounds in differential privacy (Bun, Ullman, and Vadhan, STOC\u201914). In addition to our lower bound, we give new purely and approximately differentially private algorithms for answering arbitrary statistical queries that improve on the sample complexity of the standard Laplace and Gaussian mechanisms for achieving worst-case accuracy guarantees by a logarithmic factor.", "targets": "Between Pure and Approximate Differential Privacy"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8c934a14c1034bcc90fbfefdee44ac47", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We describe here a methodology to identify a list of ambiguous Malay words that are commonly used in Malay documentation such as Requirement Specifications. We compiled several relevant and appropriate requirement quality attributes and sentence rules from the previous literature and adapted them to produce a set of ambiguity attributes that best suit Malay words. The extracted (potentially) ambiguous Malay words are then mapped onto the constructed ambiguity attributes to confirm their vagueness. The list is then verified by Malay linguist experts. This paper aims to identify a list of potentially ambiguous words in Malay in an attempt to assist writers in avoiding vague words while documenting Malay Requirement Specifications as well as any other related Malay documentation. The result of this study is a list of 120 potentially ambiguous Malay words that could act as guidelines in writing Malay sentences.", "targets": "MAPPING: AN EXPLORATORY STUDY"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-079c8b63d69046cba0e93bc77ab2625d", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The rapid growth of emerging information technologies and application patterns in modern society, e.g., Internet, Internet of Things, Cloud Computing and Tri-network Convergence, has ushered in the era of big data. Big data contains huge value; however, mining knowledge from big data is a tremendously challenging task because of data uncertainty and inconsistency. Attribute reduction (also known as feature selection) not only can be used as an effective preprocessing step, but also exploits data redundancy to reduce uncertainty. 
However, existing solutions are designed either 1) for a single machine, which means the entire dataset must fit in main memory and parallelism is limited; or 2) for the Hadoop platform, which means the data have to be loaded into distributed memory frequently and therefore become inefficient. In this paper, we overcome these shortcomings to achieve the maximum efficiency possible, and propose a unified framework for Parallel Large-scale Attribute Reduction, termed PLAR, for big data analysis. PLAR consists of three components: 1) Granular Computing (GrC)-based initialization: it converts a decision table (i.e. original data representation) into a granularity representation which reduces the amount of space and hence can be easily cached in the distributed memory; 2) model-parallelism: it simultaneously evaluates all feature candidates and makes attribute reduction highly parallelizable; 3) data-parallelism: it computes the significance of an attribute in parallel in a MapReduce style. We implement PLAR with four representative heuristic feature selection algorithms on SPARK, and evaluate them on various huge datasets, including UCI and astronomical datasets, demonstrating our method\u2019s advantages over existing solutions.", "targets": "Parallel Large-Scale Attribute Reduction on Cloud Systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-68e8a229a66e4b5892786e6ec8abdb7d", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Decentralized POMDPs provide an expressive framework for multi-agent sequential decision making. While finite-horizon DEC-POMDPs have enjoyed significant success, progress remains slow for the infinite-horizon case mainly due to the inherent complexity of optimizing stochastic controllers representing agent policies. We present a promising new class of algorithms for the infinite-horizon case, which recasts the optimization problem as inference in a mixture of DBNs. An attractive feature of this approach is the straightforward adoption of existing inference techniques in DBNs for solving DEC-POMDPs and supporting richer representations such as factored or continuous states and actions. We also derive the Expectation Maximization (EM) algorithm to optimize the joint policy represented as DBNs. Experiments on benchmark domains show that EM compares favorably against state-of-the-art solvers.", "targets": "Anytime Planning for Decentralized POMDPs using Expectation Maximization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bdc75fc77cfe40a8ab6a3baf79e412af", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Vector space models have become popular in distributional semantics, despite the challenges they face in capturing various semantic phenomena. We propose a novel probabilistic framework which draws on both formal semantics and recent advances in machine learning. In particular, we separate predicates from the entities they refer to, allowing us to perform Bayesian inference based on logical forms. We describe an implementation of this framework using a combination of Restricted Boltzmann Machines and feedforward neural networks. 
Finally, we demonstrate the feasibility of this approach by training it on a parsed corpus and evaluating it on established similarity datasets.", "targets": "Functional Distributional Semantics"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d37c0a84a36146c48ce8fdb879700633", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The AGM theory of belief revision has become an important paradigm for investigating rational belief changes. Unfortunately, researchers working in this paradigm have restricted much of their attention to rather simple representations of belief states, namely logically closed sets of propositional sentences. In our opinion, this has resulted in a too abstract categorisation of belief change operations: expansion, revision, or contraction. Occasionally, probabilistic belief changes have also been considered in the AGM paradigm, and it is widely accepted that the probabilistic version of expansion is conditioning. However, we argue that it may be more correct to view conditioning and expansion as two essentially different kinds of belief change, and that what we call constraining is a better candidate for being considered probabilistic expansion.", "targets": "Probabilistic Belief Change: Expansion, Conditioning and Constraining"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-add3e88a4a504e17956f0e8511c56a2d", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Although traditionally used in the machine translation field, the encoder-decoder framework has been recently applied to the generation of video and image descriptions. The combination of Convolutional and Recurrent Neural Networks in these models has proven to outperform the previous state of the art, obtaining more accurate video descriptions. In this work we propose pushing this model further by introducing two contributions into the encoding stage. First, producing richer image representations by combining object and location information from Convolutional Neural Networks, and second, introducing Bidirectional Recurrent Neural Networks for capturing both forward and backward temporal relationships in the input frames.", "targets": "Video Description using Bidirectional Recurrent Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-cec10bf29ceb42a09ad00a1d7206690c", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a new zero-shot Event Detection method by Multi-modal Distributional Semantic embedding of videos. Our model embeds object and action concepts as well as other available modalities from videos into a distributional semantic space. 
To our knowledge, this is the first Zero-Shot event detection model that is built on top of distributional semantics and extends it in the following directions: (a) semantic embedding of multimodal information in videos (with a focus on the visual modalities), (b) automatically determining relevance of concepts/attributes to a free text query, which could be useful for other applications, and (c) retrieving videos by free text event query (e.g., \u201cchanging a vehicle tire\u201d) based on their content. We embed videos into a distributional semantic space and then measure the similarity between videos and the event query in a free text form. We validated our method on the large TRECVID MED (Multimedia Event Detection) challenge. Using only the event title as a query, our method outperformed the state-of-the-art that uses big descriptions, improving the MAP metric from 12.6% to 13.5% and the ROC-AUC metric from 0.73 to 0.83. It is also an order of magnitude faster.", "targets": "Zero-Shot Event Detection by Multimodal Distributional Semantic Embedding of Videos"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ee84939b383649edb0ea91b5f6ef42a0", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper develops upper and lower bounds for the probability of Boolean functions by treating multiple occurrences of variables as independent and assigning them new individual probabilities. We call this approach dissociation and give an exact characterization of optimal oblivious bounds, i.e. when the new probabilities are chosen independent of the probabilities of all other variables. Our motivation comes from the weighted model counting problem (or, equivalently, the problem of computing the probability of a Boolean function), which is #P-hard in general. By performing several dissociations, one can transform a Boolean formula whose probability is difficult to compute, into one whose probability is easy to compute, and which is guaranteed to provide an upper or lower bound on the probability of the original formula by choosing appropriate probabilities for the dissociated variables. Our new bounds shed light on the connection between previous relaxation-based and model-based approximations and unify them as concrete choices in a larger design space. We also show how our theory allows a standard relational database management system (DBMS) to both upper and lower bound hard probabilistic queries in guaranteed polynomial time.", "targets": "Oblivious Bounds on the Probability of Boolean Functions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dda9ddc2550c4f299b48d9042c8a2cea", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In the fashion industry, order scheduling focuses on the assignment of production orders to appropriate production lines. In reality, before a new order can be put into production, a series of activities known as pre-production events need to be completed. In addition, in the real production process, owing to various uncertainties, the daily production quantity of each order is not always as expected. 
In this research, by considering the pre-production events and the uncertainties in the daily production quantity, robust order scheduling problems in the fashion industry are investigated with the aid of a multi-objective evolutionary algorithm (MOEA) called nondominated sorting adaptive differential evolution (NSJADE). The experimental results illustrate that it is of paramount importance to consider pre-production events in order scheduling problems in the fashion industry. We also show that uncertainty in the daily production quantity heavily affects the order scheduling.", "targets": "Robust Order Scheduling in the Fashion Industry: A Multi-Objective Optimization Approach"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c4033d90e4c740b7a0e669147550f111", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We provide a systematic analysis of levels of integration between discrete high-level reasoning and continuous low-level reasoning to address hybrid planning problems in robotics. We identify four distinct strategies for such an integration: (i) low-level checks are done for all possible cases in advance and then this information is used during plan generation, (ii) low-level checks are done exactly when they are needed during the search for a plan, (iii) first all plans are computed and then infeasible ones are filtered, and (iv) by means of replanning, after finding a plan, low-level checks identify whether it is infeasible or not; if it is infeasible, a new plan is computed considering the results of previous low-level checks. We perform experiments on hybrid planning problems in robotic manipulation and legged locomotion domains considering these four methods of integration, as well as some of their combinations. We analyze the usefulness of levels of integration in these domains, both from the point of view of computational efficiency (in time and space) and from the point of view of plan quality relative to its feasibility. We discuss advantages and disadvantages of each strategy in the light of experimental results and provide some guidelines on choosing proper strategies for a given domain.", "targets": "Levels of Integration between Low-Level Reasoning and Task Planning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e02ffe11163d4ee0b6e87e65f0e3bc5a", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Neural machine translation (NMT) aims at solving machine translation (MT) problems with purely neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are of shallow topology and there is still a performance gap between the single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on a deep Long Short-Term Memory (LSTM) network, together with an interleaved bidirectional way of stacking them. Fast-forward connections play an essential role in propagating the gradients and building the deep topology of depth 16. On the WMT\u201914 English-to-French task, we achieved BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. 
This is the first time that a single NMT model has achieved state-of-the-art performance, outperforming the best conventional model by 0.7 BLEU points. Even without the attention mechanism, we can still achieve BLEU=36.3. After special handling of unknown words and model ensembling, we obtained the best score on this task, BLEU=40.4. Our models are also verified on the more difficult WMT\u201914 English-to-German task.", "targets": "Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f67492689bf54838824bb58eab8c6b4e", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "State-space models are successfully used in many areas of science, engineering and economics to model time series and dynamical systems. We present a fully Bayesian approach to inference and learning (i.e. state estimation and system identification) in nonlinear nonparametric state-space models. We place a Gaussian process prior over the state transition dynamics, resulting in a flexible model able to capture complex dynamical phenomena. To enable efficient inference, we marginalize over the transition dynamics function and infer directly the joint smoothing distribution using specially tailored Particle Markov Chain Monte Carlo samplers. Once a sample from the smoothing distribution is computed, the state transition predictive distribution can be formulated analytically. Our approach preserves the full nonparametric expressivity of the model and can make use of sparse Gaussian processes to greatly reduce computational complexity.", "targets": "Bayesian Inference and Learning in Gaussian Process State-Space Models with Particle MCMC"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9569434b45f04920af90a346a072e114", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this study, we introduce an ensemble-based approach for online machine learning. The ensemble of base classifiers in our approach is obtained by learning Na\u00efve Bayes classifiers on different training sets which are generated by projecting the original training set to a lower-dimensional space. We propose a mechanism to learn sequences of data using the data chunks paradigm. The experiments conducted on a number of UCI datasets and one synthetic dataset demonstrate that the proposed approach performs significantly better than some well-known online learning algorithms.", "targets": "An ensemble-based online learning algorithm for streaming data"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-973a9c421b184a0b81a8bf0c9065a331", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recent works on word representations mostly rely on predictive models. Distributed word representations (aka word embeddings) are trained to optimally predict the contexts in which the corresponding words tend to appear. Such models have succeeded in capturing word similarities as well as semantic and syntactic regularities. Instead, we aim at reviving interest in a model based on counts. 
We present a systematic study of the use of the Hellinger distance to extract semantic representations from the word co-occurrence statistics of large text corpora. We show that this distance gives good performance on word similarity and analogy tasks, with a proper type and size of context, and a dimensionality reduction based on a stochastic low-rank approximation. Besides being both simple and intuitive, this method also provides an encoding function which can be used to infer unseen words or phrases. This is a clear advantage over predictive models, which must be trained on these new words.", "targets": "Rehabilitation of Count-based Models for Word Vector Representations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d510f8f7d6ec49f19ef67f769dbc568f", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we implicitly incorporate morpheme information into word embeddings. Based on the strategy by which we utilize the morpheme information, three models are proposed. To test the performance of our models, we conduct word similarity and syntactic analogy tasks. The results demonstrate the effectiveness of our methods. Our models beat the comparative baselines on both tasks to a great extent. On the gold standard Wordsim-353 and RG-65, our models outperform CBOW by approximately 5 and 7 percent, respectively. In addition, a 7 percent advantage is also achieved by our models on the syntactic analogy task. According to the parameter analysis, our models can increase the semantic information in the corpus, and our performance on the smallest corpus is similar to that of CBOW on a corpus five times larger. This property of our methods may have positive effects on NLP research on corpus-limited languages.", "targets": "Implicitly Incorporating Morphological Information into Word Embedding"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2479aee0207845029a47031141ee29fb", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "I propose a framework for an agent to change its probabilistic beliefs when a new piece of propositional information \u03b1 is observed. Traditionally, belief change occurs by either a revision process or an update process, depending on whether the agent is informed with \u03b1 in a static world or, respectively, whether \u03b1 is a \u2018signal\u2019 from the environment due to an event occurring. Boutilier suggested a unified model of qualitative belief change, which \u201ccombines aspects of revision and update, providing a more realistic characterization of belief change.\u201d In this paper, I propose a unified model of quantitative belief change, where an agent\u2019s beliefs are represented as a probability distribution over possible worlds. As does Boutilier, I take a dynamical systems perspective. The proposed approach is evaluated against several rationality postulates, and some properties of the approach are worked out. Information acquired can be due to evolution of the world or revelation about the world. That is, one may notice via some \u2018signal\u2019 generated by the changing environment that the environment has changed, or, one may be informed by an independent agent in a static environment that some \u2018fact\u2019 holds. 
In the present work, I deal with belief change of agents who handle uncertainty by maintaining a probability distribution over possible situations. The agents in this framework also have models for nondeterministic events and noisy observations. Noisy observation models can model imperfect sensory equipment for receiving environmental signals, but they can also model untrustworthy informants in a static world. In this paper, I present the work of Boutilier (1998) as background, because it has several connections with and was the seed for the present work. However, I do not intend simply to give a probabilistic version of his Generalized Update Semantics. Whereas Boutilier (1998) presents a model for unifying qualitative belief revision and update, I build on his work to present a unified model of belief revision and update in a stochastic (probabilistic) setting. I also take a dynamical systems perspective, like him. Due to my quantitative approach, an agent can maintain a probability distribution over the worlds it believes possible, using an expectation semantics of change. This is in contrast to Boutilier\u2019s \u201cgeneralized update\u201d approach, which takes a most-plausible event and most-plausible world approach. Finally, my proposal requires a trade-off factor to mix the changes in probability distribution over possible worlds brought about due to the probabilistic belief revision process and, respectively, the probabilistic belief update process. Boutilier\u2019s model has revision and update more tightly coupled. For this reason, his approach is better called \u201cunified\u201d while mine is called \u201chybrid\u201d. The belief change community does not study probabilistic belief update; it is studied almost exclusively in frameworks employing Bayesian conditioning \u2013 for modeling events and actions in dynamical domains (e.g., DBNs, MDPs, POMDPs) (Koller and Friedman, 2009; Poole and Mackworth, 2010, e.g.). The part of my approach responsible for updating stays within the Bayesian framework, but combines the essential elements of belief update with unobservable events and belief update as partially observable Markov decision process (POMDP) state estimation. On the other hand, there is plenty of literature on probabilistic belief revision (Voorbraak, 1999; Grove and Halpern, 1998; Kern-Isberner, 2008; Yue and Liu, 2008, e.g.). The subject is both deep and broad. There is no one accepted approach and to argue which is the best is not the focus of this paper. I shall choose one reasonable method for probabilistic belief revision suitable to the task at hand. In the first section, Boutilier\u2019s \u2018generalized update\u2019 is reviewed. Then, in the next section, I introduce stochastic update and stochastic revision, culminating in the \u2018hybrid stochastic belief change\u2019 (HSBC) approach. The final section presents an example inspired by Boutilier\u2019s article (1998) and analyses the results. Some proofs of propositions are omitted to save space; they are available on request. Boutilier\u2019s Generalized Update: I use Boutilier\u2019s notation and descriptions, except that I am more comfortable with \u03b1 and \u03b2 to represent sentences, instead of A and B. 
It is assumed that an agent has a deductively closed belief set K, a set of sentences drawn from some logical language reflecting the agent\u2019s beliefs about the current state of the world. For ease of presentation, I assume a logically finite, classical propositional language, denoted L (LCPL in Boutilier (1998)), and consequence operation Cn. The belief set K will often be generated by some finite knowledge base KB (i.e., K = Cn(KB)). The identically true and false propositions are denoted \u22a4 and \u22a5, respectively. Given a set of possible worlds W (or valuations over L) and \u03b1 \u2208 L, the set of \u03b1-worlds, that is, the elements of W satisfying \u03b1, is denoted by ||\u03b1||. The set of worlds satisfying all sentences in a set K is denoted ||K||.", "targets": "On Stochastic Belief Revision and Update and their Combination"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-10c9071f442f48dfb69993492584097c", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Detecting a small number of outliers from a set of data observations is always challenging. This problem is more difficult in the setting of multiple network samples, where computing the anomalous degree of a network sample is generally not sufficient. In fact, explaining why the network is exceptional, expressed in the form of a subnetwork, is also equally important. In this paper, we develop a novel algorithm to address these two key problems. We treat each network sample as a potential outlier and identify subnetworks that mostly discriminate it from nearby regular samples. The algorithm is developed in the framework of network regression combined with the constraints on both network topology and L1-norm shrinkage to perform subnetwork discovery. Our method thus goes beyond subspace/subgraph discovery and we show that it converges to a global optimum. Evaluation on various real-world network datasets demonstrates that our algorithm not only outperforms baselines in both network and high-dimensional settings, but also discovers highly relevant and interpretable local subnetworks, further enhancing our understanding of anomalous networks.", "targets": "Outlier Detection from Network Data with Subnetwork Interpretation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-921d9509f4904984b1e10daaab677fde", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The Gumbel trick is a method to sample from a discrete probability distribution, or to estimate its normalizing partition function. The method relies on repeatedly applying a random perturbation to the distribution in a particular way, each time solving for the most likely configuration. We derive an entire family of related methods, of which the Gumbel trick is one member, and show that the new methods have superior properties in several settings with minimal additional computational cost. In particular, for the Gumbel trick to yield computational benefits for discrete graphical models, Gumbel perturbations on all configurations are typically replaced with so-called low-rank perturbations. 
We show how a subfamily of our new methods adapts to this setting, proving new upper and lower bounds on the log partition function and deriving a family of sequential samplers for the Gibbs distribution. Finally, we balance the discussion by showing how the simpler analytical form of the Gumbel trick enables additional theoretical results.", "targets": "Lost Relatives of the Gumbel Trick"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-88f380c99a7242fb833b0b966b9c69e7", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we investigate whether an a priori disambiguation of word senses is strictly necessary or whether the meaning of a word in context can be disambiguated through composition alone. We evaluate the performance of off-the-shelf single-vector and multi-sense vector models on a benchmark phrase similarity task and a novel task for word-sense discrimination. We find that single-sense vector models perform as well as or better than multi-sense vector models despite arguably less clean elementary representations. Our findings furthermore show that simple composition functions such as pointwise addition are able to recover sense-specific information from a single-sense vector model remarkably well.", "targets": "One Representation per Word \u2014 Does it make Sense for Composition?"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-abcebdf6e6bb4559ab3ed74f45d85dae", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "While there has been substantial progress in factoid question-answering (QA), answering complex questions remains challenging, typically requiring both a large body of knowledge and inference techniques. Open Information Extraction (Open IE) provides a way to generate semi-structured knowledge for QA, but to date such knowledge has only been used to answer simple questions with retrieval-based methods. We overcome this limitation by presenting a method for reasoning with Open IE knowledge, allowing more complex questions to be handled. Using a recently proposed support graph optimization framework for QA, we develop a new inference model for Open IE, in particular one that can work effectively with multiple short facts, noise, and the relational structure of tuples. Our model significantly outperforms a state-of-the-art structured solver on complex questions of varying difficulty, while also removing the reliance on manually curated knowledge.", "targets": "Answering Complex Questions Using Open Information Extraction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ba35066d206c457bb526b7383e8fd8d3", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Constraint-based causal discovery from limited data is a notoriously difficult challenge due to the many borderline independence test decisions. Several approaches to improve the reliability of the predictions by exploiting redundancy in the independence information have been proposed recently. Though promising, existing approaches can still be greatly improved in terms of accuracy and scalability. 
We present a novel method that reduces the combinatorial explosion of the search space by using a more coarse-grained representation of causal information, drastically reducing computation time. Additionally, we propose a method to score causal predictions based on their confidence. Crucially, our implementation also allows one to easily combine observational and interventional data and to incorporate various types of available background knowledge. We prove soundness and asymptotic consistency of our method and demonstrate that it can outperform the state-of-the-art on synthetic data, achieving a speedup of several orders of magnitude. We illustrate its practical feasibility by applying it to a challenging protein data set.", "targets": "Ancestral Causal Inference"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-844c9ebbeffc48059e5b53999223dbef", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study embedded Binarized Neural Networks (eBNNs) with the aim of allowing current binarized neural networks (BNNs) in the literature to perform feedforward inference efficiently on small embedded devices. We focus on minimizing the required memory footprint, given that these devices often have memory as small as tens of kilobytes (KB). Beyond minimizing the memory required to store weights, as in a BNN, we show that it is essential to minimize the memory used for temporaries which hold intermediate results between layers in feedforward inference. To accomplish this, eBNN reorders the computation of inference while preserving the original BNN structure, and uses just a single floating-point temporary for the entire neural network. All intermediate results from a layer are stored as binary values, as opposed to the floating-points used in current BNN implementations, leading to a 32x reduction in required temporary space. We provide empirical evidence that our proposed eBNN approach allows efficient inference (10s of ms) on devices with severely limited memory (10s of KB). For example, eBNN achieves 95% accuracy on the MNIST dataset running on an Intel Curie with only 15 KB of usable memory with an inference runtime of under 50 ms per sample. To ease the development of applications in embedded contexts, we make available our source code, which allows users to train and discover eBNN models for the learning task at hand that fit within the memory constraint of the target device.", "targets": "Embedded Binarized Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d856f0c5548f4323b0e1eeb0dcb94b53", "definition": "In this task, you are given a part of an article. Your task is to generate a headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper formulates a novel problem on graphs: find the minimal subset of edges in a fully connected graph, such that the resulting graph contains all spanning trees for a set of specified subgraphs. This formulation is motivated by an unsupervised grammar induction problem from computational linguistics. 
We present a reduction to some known problems and algorithms from graph theory, provide computational complexity results, and describe an approximation algorithm.", "targets": "Matroids Hitting Sets and Unsupervised Dependency Grammar Induction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2b03e8d43fbf40a2821152c44deb1859", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Inspired by biological vision systems, the over-complete local features with huge cardinality have been increasingly used for face recognition during the last decades. Accordingly, feature selection has become more and more important and plays a critical role in face data description and recognition. In this paper, we propose a trainable feature selection algorithm based on the regularized framework for face recognition. By enforcing a sparsity penalty term on the minimum squared error (MSE) criterion, we cast the feature selection problem into a combinatorial sparse approximation problem, which can be solved by greedy methods or convex relaxation methods. Moreover, based on the same framework, we propose a sparse Ho-Kashyap (HK) procedure to simultaneously obtain the optimal sparse solution and the corresponding margin vector of the MSE criterion. The proposed methods are used for selecting the most informative Gabor features of face images for recognition and the experimental results on benchmark face databases demonstrate the effectiveness of the", "targets": "Feature Selection via Sparse Approximation for Face Recognition"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7c99258a91cc49f0b8487ecf9d013ab2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The Aviation Safety Reporting System collects voluntarily submitted reports on aviation safety incidents to facilitate research work aiming to reduce such incidents. To effectively reduce these incidents, it is vital to accurately identify why these incidents occurred. More precisely, given a set of possible causes, or shaping factors, this task of cause identification involves identifying all and only those shaping factors that are responsible for the incidents described in a report. We investigate two approaches to cause identification. Both approaches exploit information provided by a semantic lexicon, which is automatically constructed via Thelen and Riloff\u2019s Basilisk framework augmented with our linguistic and algorithmic modifications. The first approach labels a report using a simple heuristic, which looks for the words and phrases acquired during the semantic lexicon learning process in the report. The second approach recasts cause identification as a text classification problem, employing supervised and transductive text classification algorithms to learn models from incident reports labeled with shaping factors and using the models to label unseen reports.
Our experiments show that both the heuristic-based approach and the learning-based approach (when given sufficient training data) outperform the baseline system significantly.", "targets": "Cause Identification from Aviation Safety Incident Reports via Weakly Supervised Semantic Lexicon Construction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-62ffdc104a164f2cb0966d7a2d99efc0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Matrix factorization (MF) and Autoencoder (AE) are among the most successful approaches to unsupervised learning. While MF-based models have been extensively exploited in the graph modeling and link prediction literature, the AE family has not gained much attention. In this paper we investigate both MF and AE\u2019s application to the link prediction problem in sparse graphs. We show the connection between AE and MF from the perspective of multiview learning, and further propose MF+AE: a model training MF and AE jointly with shared parameters. We apply dropout to training both the MF and AE parts, and show that it can significantly prevent overfitting by acting as an adaptive regularization. We conduct experiments on six real-world sparse graph datasets, and show that MF+AE consistently outperforms the competing methods, especially on datasets that demonstrate strong non-cohesive structures.", "targets": "Dropout Training of Matrix Factorization and Autoencoder for Link Prediction in Sparse Graphs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-34abd95b88b445fc9e23ea2f9778988f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We develop a new model for Interactive Question Answering (IQA), using Gated Recurrent Unit recurrent networks (GRUs) as encoders for statements and questions, and another GRU as a decoder for outputs. Distinct from previous work, our approach employs context-dependent word-level attention for more accurate statement representations and question-guided sentence-level attention for better context modeling. Employing these mechanisms, our model accurately understands when it can output an answer or when it requires generating a supplementary question for additional input. When available, the user\u2019s feedback is encoded and directly applied to update sentence-level attention to infer the answer. Extensive experiments on QA and IQA datasets demonstrate quantitatively the effectiveness of our model with significant improvement over conventional QA models.", "targets": "A CONTEXT-AWARE ATTENTION NETWORK FOR INTERACTIVE QUESTION ANSWERING"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bd5de377b5ef404faf3704e08cb98acc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The field of Distributed Constraint Optimization has gained momentum in recent years, thanks to its ability to address various applications related to multi-agent cooperation. Nevertheless, solving Distributed Constraint Optimization Problems (DCOPs) optimally is NP-hard. Therefore, in large-scale, complex applications, incomplete DCOP algorithms are necessary.
Current incomplete DCOP algorithms suffer from one or more of the following limitations: they (a) find local minima without providing quality guarantees; (b) provide loose quality assessment; or (c) are unable to benefit from the structure of the problem, such as domain-dependent knowledge and hard constraints. Therefore, capitalizing on strategies from the centralized constraint solving community, we propose a Distributed Large Neighborhood Search (D-LNS) framework to solve DCOPs. The proposed framework (with its novel repair phase) provides guarantees on solution quality, refining upper and lower bounds during the iterative process, and can exploit domain-dependent structures. Our experimental results show that D-LNS outperforms other incomplete DCOP algorithms on both structured and unstructured problem instances.", "targets": "Solving DCOPs with Distributed Large Neighborhood Search"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-464cb6c27f7040e5ada67176b17fd7f4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.", "targets": "Deep Speaker: an End-to-End Neural Speaker Embedding System"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0c737af2fe464dfb9f43baf2d971b906", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Shannon\u2019s information entropy measures the uncertainty of an event\u2019s outcome. If learning about a system reflects a decrease in uncertainty, then a plausible intuition is that learning should be accompanied by a decrease in the entropy of the organism\u2019s actions and/or perceptual states. To address whether this intuition is valid, I examined an artificial organism \u2013 a simple robot \u2013 that learned to navigate in an arena and analyzed the entropy of the outcome variables action, state, and reward. Entropy did indeed decrease in the initial stages of learning, but two factors complicated the scenario: (1) the introduction of new options discovered during the learning process and (2) the shifting patterns of perceptual and environmental states resulting from changes to the robot\u2019s learned movement strategies. These factors led to a subsequent increase in entropy as the agent learned.
I end with a discussion of the utility of information-based characterizations of learning.", "targets": "Does Learning Imply a Decrease in the Entropy of Behavior?"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-18633a974e0743ed909d0ae9d54d92ca", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study the problem of structured output learning from a regression perspective. We first provide a general formulation of the kernel dependency estimation (KDE) approach to this problem using operator-valued kernels. Our formulation overcomes the two main limitations of the original KDE approach, namely the decoupling between outputs in the image space and the inability to use a joint feature space. We then propose a covariance-based operator-valued kernel that allows us to take into account the structure of the kernel feature space. This kernel operates on the output space and only encodes the interactions between the outputs without any reference to the input space. To address this issue, we introduce a variant of our KDE method based on the conditional covariance operator that, in addition to the correlation between the outputs, takes into account the effects of the input variables. Finally, we evaluate the performance of our KDE approach on three structured output problems, and compare it to the state-of-the-art kernel-based structured output regression methods.", "targets": "A Generalized Kernel Approach to Structured Output Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c00e6a074c234485892fb3dd2c62fb7b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "From the perspective of sustainable urban planning, it is necessary to investigate cities in a holistic way and to accept surprises in the response of urban environments to a particular set of strategies. For example, the process of inner-city densification may limit air pollution, carbon emissions, and energy use through reduced transportation; on the other hand, the resulting street canyons could lead to local levels of pollution that could be higher than in a low-density urban setting. The holistic approach to sustainable urban planning implies using different models in an integrated way that is capable of simulating the urban system. As the interconnection of such models is not a trivial task, one of the key elements that may be applied is the description of the urban geometric properties in an \u201cinteroperable\u201d way. Focusing on air quality as one of the most pronounced urban problems, the geometric aspects of a city may be described by objects such as those defined in CityGML, so that an appropriate air quality model can be applied for estimating the quality of the urban air on the basis of atmospheric flow and chemistry equations. It is generally admitted that an ontology-based approach can provide a generic and robust way to interconnect different models. However, a direct approach, which consists in establishing correspondences between concepts, is not sufficient in the present situation. One has to take into account, among other things, the computations involved in the correspondences between concepts.
In this paper we first present theoretical background and motivations for the interconnection of 3D city models and other models related to sustainable development and urban planning. Then we present a practical experiment based on the interconnection of CityGML with an air quality model. Our approach is based on the creation of an ontology of air quality models and on the extension of an ontology of urban planning process (OUPP) that acts as an ontology mediator.", "targets": "Ontologies for the Integration of Air Quality Models and 3D City Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-818d6dca97b64b32b33e3d0d6e002985", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study a semi-supervised learning method based on the similarity graph and Regularized Laplacian. We give a convenient optimization formulation of the Regularized Laplacian method and establish its various properties. In particular, we show that the kernel of the method can be interpreted in terms of discrete and continuous time random walks and possesses several important properties of proximity measures. Both optimization and linear algebra methods can be used for efficient computation of the classification functions. We demonstrate on numerical examples that the Regularized Laplacian method is competitive with other state-of-the-art semi-supervised learning methods. Key-words: Semi-supervised learning, Graph-based learning, Regularized Laplacian, Proximity measure, Wikipedia article classification
", "targets": "Semi-supervised Learning with Regularized Laplacian"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d6b9052a731b40c7ae878109d93b63dc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Slot Filling (SF) aims to extract the values of certain types of attributes (or slots, such as person:cities of residence) for a given entity from a large collection of source documents. In this paper we propose an effective DNN architecture for SF with the following new strategies: (1) take a regularized dependency graph instead of a raw sentence as input to the DNN, to compress the wide contexts between query and candidate filler; (2) incorporate two attention mechanisms: local attention learned from query and candidate filler, and global attention learned from external knowledge bases, to guide the model to better select indicative contexts to determine slot type. Experiments show that this framework outperforms the state of the art on both relation extraction (16% absolute F-score gain) and slot filling validation for each individual system (up to 8.5% absolute F-score gain).", "targets": "Improving Slot Filling Performance with Attentive Neural Networks on Dependency Structures"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8b56d0a230964ae9a29a4913fcfc0723", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Conventional dependency parsers rely on a statistical model and a transition system or graph algorithm to enforce tree-structured outputs during training and inference. In this work we formalize dependency parsing as the problem of selecting the head (a.k.a. parent) of each word in a sentence. Our model, which we call DENSE (as shorthand for Dependency Neural Selection), employs bidirectional recurrent neural networks for the head selection task. Without enforcing any structural constraints during training, DENSE generates (at inference time) trees for the overwhelming majority of sentences (95% on an English dataset), while remaining non-tree outputs can be adjusted with a maximum spanning tree algorithm. We evaluate DENSE on four languages (English, Chinese, Czech, and German) with varying degrees of non-projectivity. Despite the simplicity of our approach, experiments show that the resulting parsers are on par with or outperform the state of the art.", "targets": "Dependency Parsing as Head Selection"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6c51b494ad164c9c872cba92a43fc445", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a probabilistic video model, the Video Pixel Network (VPN), that estimates the discrete joint distribution of the raw pixel values in a video. The model and the neural architecture reflect the time, space and color structure of video tensors and encode it as a four-dimensional dependency chain.
The VPN approaches the best possible performance on the Moving MNIST benchmark, a leap over the previous state of the art, and the generated videos show only minor deviations from the ground truth. The VPN also produces detailed samples on the action-conditional Robotic Pushing benchmark and generalizes to the motion of novel objects.", "targets": "Video Pixel Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7ea5a452459d4758a18d0622db9b2d0f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce utility-directed procedures for mediating the flow of potentially distracting alerts and communications to computer users. We present models and inference procedures that balance the context-sensitive costs of deferring alerts with the cost of interruption. We describe the challenge of reasoning about such costs under uncertainty via an analysis of user activity and the content of notifications. After introducing principles of attention-sensitive alerting, we focus on the problem of guiding alerts about email messages. We dwell on the problem of inferring the expected criticality of email and discuss work on the PRIORITIES system, centering on prioritizing email by criticality and modulating the communication of notifications to users about the presence and nature of incoming email.", "targets": "Attention-Sensitive Alerting"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-239ad8397412402e8a9fbafb2e2c7720", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we investigate the cross-media retrieval between images and text, i.e., using image to search text (I2T) and using text to search images (T2I). Existing cross-media retrieval methods usually learn one couple of projections, by which the original features of images and text can be projected into a common latent space to measure the content similarity. However, using the same projections for the two different retrieval tasks (I2T and T2I) may lead to a tradeoff between their respective performances, rather than their best performances. Different from previous works, we propose a modality-dependent cross-media retrieval (MDCR) model, where two couples of projections are learned for different cross-media retrieval tasks instead of one couple of projections. Specifically, by jointly optimizing the correlation between images and text and the linear regression from one modal space (image or text) to the semantic space, two couples of mappings are learned to project images and text from their original feature spaces into two common latent subspaces (one for I2T and the other for T2I). Extensive experiments show the superiority of the proposed MDCR compared with other methods. In particular, based on the 4,096-dimensional convolutional neural network (CNN) visual feature and the 100-dimensional LDA textual feature, the mAP of the proposed method achieves 41.5%, which is a new state-of-the-art performance on the Wikipedia dataset.", "targets": "A Modality-dependent Cross-media Retrieval"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-af2951272f23498ca304c2cb6ddf12ea", "definition": "In this task, you are given a part of an article.
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a technique for learning representations of parser states in transition-based dependency parsers. Our primary innovation is a new control structure for sequence-to-sequence neural networks\u2014 the stack LSTM. Like the conventional stack data structures used in transition-based parsing, elements can be pushed to or popped from the top of the stack in constant time, but, in addition, an LSTM maintains a continuous space embedding of the stack contents. This lets us formulate an efficient parsing model that captures three facets of a parser\u2019s state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of actions taken by the parser, and (iii) the complete contents of the stack of partially built tree fragments, including their internal structures. Standard backpropagation techniques are used for training and yield state-of-the-art parsing performance.", "targets": "Transition-Based Dependency Parsing with Stack Long Short-Term Memory"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a8f1e045fa7e47d48dd2275182833a99", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper proposes a framework dedicated to the construction of what we call the discrete elastic inner product, allowing one to embed sets of non-uniformly sampled multivariate time series or sequences of varying lengths into inner product space structures. This framework is based on a recursive definition that covers the case of multiple embedded time elastic dimensions. We prove that such inner products exist in our general framework and show how a simple instance of this inner product class operates on some prospective applications, while generalizing the Euclidean inner product. Classification experiments on time series and symbolic sequence datasets demonstrate the benefits that we can expect by embedding time series or sequences into elastic inner spaces rather than into classical Euclidean spaces. These experiments show good accuracy when compared to the Euclidean distance or even dynamic programming algorithms while maintaining a linear algorithmic complexity at the exploitation stage, although a quadratic indexing phase beforehand is required.", "targets": "Discrete Elastic Inner Vector Spaces with Application to Time Series and Sequence Mining"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dce2bcd7beed4379af01a8d2ea92aeb3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Robotic commands in natural language usually contain various spatial descriptions that are semantically similar but syntactically different. Mapping such syntactic variants into semantic concepts that can be understood by robots is challenging due to the high flexibility of natural language expressions. To tackle this problem, we collect robotic commands for navigation and manipulation tasks using crowdsourcing. We further define a robot language and use a generative machine translation model to translate robotic commands from natural language to robot language.
The main purpose of this paper is to simulate the interaction process between humans and robots using crowdsourcing platforms, and investigate the possibility of translating natural language to robot language with paraphrases.", "targets": "Learning Lexical Entries for Robotic Commands using Crowdsourcing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-337548099326456f8f19045e9c2e5e35", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We used MetaMap and YTEX as a basis for the construction of two separate systems to participate in the 2013 ShARe/CLEF eHealth Task 1 [9], the recognition of clinical concepts. No modifications were directly made to these systems, but output concepts were filtered using stop concepts, stop concept text and UMLS semantic type. Concept boundaries were also adjusted using a small collection of rules to increase precision on the strict task. Overall, MetaMap had better performance than YTEX on the strict task, primarily due to a 20% performance improvement in precision. In the relaxed task, YTEX had better performance in both precision and recall, giving it an overall F-Score 4.6% higher than MetaMap on the test data. Our results also indicated a 1.3% higher accuracy for YTEX in UMLS CUI mapping.", "targets": "Evaluation of YTEX and MetaMap for clinical concept recognition"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0e5785abbbce450086c1a19dac7627e9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "When approximating binary similarity using the Hamming distance between short binary hashes, we show that even if the similarity is symmetric, we can have shorter and more accurate hashes by using two distinct code maps. That is, by approximating the similarity between x and x\u2032 as the Hamming distance between f(x) and g(x\u2032), for two distinct binary codes f, g, rather than as the Hamming distance between f(x) and f(x\u2032).", "targets": "The Power of Asymmetry in Binary Hashing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9a99451c17d54622a5f058d794e590ee", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The Particle Swarm Optimization Policy (PSO-P) has recently been introduced and proven to produce remarkable results on interacting with academic reinforcement learning benchmarks in an off-policy, batch-based setting.
To further investigate the properties and feasibility for real-world applications, this paper investigates PSO-P on the so-called Industrial Benchmark (IB), a novel reinforcement learning (RL) benchmark that aims at being realistic by including a variety of aspects found in industrial applications, like continuous state and action spaces, a high dimensional, partially observable state space, delayed effects, and complex stochasticity. The experimental results of PSO-P on IB are compared to results of closed-form control policies derived from the model-based Recurrent Control Neural Network (RCNN) and the model-free Neural Fitted Q-Iteration (NFQ). Experiments show that PSO-P is not only of interest for academic benchmarks, but also for real-world industrial applications, since it also yielded the best performing policy in our IB setting. Compared to other well-established RL techniques, PSO-P produced outstanding results in performance and robustness, requiring only a relatively low amount of effort in finding adequate parameters or making complex design decisions.", "targets": "Batch Reinforcement Learning on the Industrial Benchmark: First Experiences"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-001094e72c21467c9854e5d83b7cb8a9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose zoneout, a novel method for regularizing RNNs. At each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like dropout, zoneout uses random noise to train a pseudo-ensemble, improving generalization. But by preserving instead of dropping hidden units, gradient information and state information are more readily propagated through time, as in feedforward stochastic depth networks. We perform an empirical investigation of various RNN regularizers, and find encouraging results: zoneout gives significant performance improvements across tasks, yielding state-of-the-art results in character-level language modeling on the Penn Treebank dataset and competitive results on word-level Penn Treebank and permuted sequential MNIST classification tasks.", "targets": "Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1fdb059df3b74d2e91d84910fc9fac98", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Deep learning models, which learn high-level feature representations from raw data, have become popular for machine learning and artificial intelligence tasks that involve images, audio, and other forms of complex data. A number of software \u201cframeworks\u201d have been developed to expedite the process of designing and training deep neural networks, such as Caffe [11], Torch [4], and Theano [1]. Currently, these frameworks can harness multiple GPUs on the same machine, but are unable to use GPUs that are distributed across multiple machines; because even average-sized deep networks can take days to train on a single GPU when faced with 100s of GBs to TBs of data, distributed GPUs present a prime opportunity for scaling up deep learning. However, the limited inter-machine bandwidth available on commodity Ethernet networks presents a bottleneck to distributed GPU training, and prevents its trivial realization.
To investigate how existing software frameworks can be adapted to efficiently support distributed GPUs, we propose Poseidon, a scalable system architecture for distributed inter-machine communication in existing deep learning frameworks. In order to assess Poseidon\u2019s effectiveness, we integrate Poseidon into the Caffe [11] framework and evaluate its performance at training convolutional neural networks for object recognition in images. Poseidon features three key contributions that improve the training speed of deep neural networks on clusters: (i) a three-level hybrid architecture that allows Poseidon to support both CPU-only clusters as well as GPU-equipped clusters, (ii) a distributed wait-free backpropagation (DWBP) algorithm to improve GPU utilization and to balance communication, and (iii) a dedicated structure-aware communication protocol (SACP) to minimize communication overheads. We empirically show that Poseidon converges to the same objective value as a single machine, and achieves state-of-the-art training speedup across multiple models and well-established datasets, using a commodity GPU cluster of 8 nodes (e.g. 4.5\u00d7 speedup on AlexNet, 4\u00d7 on GoogLeNet, 4\u00d7 on CIFAR-10). On the much larger ImageNet 22K dataset, Poseidon with 8 nodes achieves better speedup and competitive accuracy to recent CPU-based distributed deep learning systems such as Adam [2] and Le et al. [16], which use 10s to 1000s of nodes.", "targets": "Poseidon: A System Architecture for Efficient GPU-based Deep Learning on Multiple Machines"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b67935c1df5b483da68cf226bac9d656", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time. The approach is demonstrated for text (where the data are discrete) and online handwriting (where the data are real-valued). It is then extended to handwriting synthesis by allowing the network to condition its predictions on a text sequence. The resulting system is able to generate highly realistic cursive handwriting in a wide variety of styles.", "targets": "Generating Sequences With Recurrent Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-002a1bf58151426eb9a8e6e630f69d75", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study the connection between the highly non-convex loss function of a simple model of the fully-connected feed-forward neural network and the Hamiltonian of the spherical spin-glass model under the assumptions of: i) variable independence, ii) redundancy in network parametrization, and iii) uniformity. These assumptions enable us to explain the complexity of the fully decoupled neural network through the prism of the results from random matrix theory. We show that for large-size decoupled networks the lowest critical values of the random loss function are located in a well-defined narrow band lower-bounded by the global minimum. Furthermore, they form a layered structure. We show that the number of local minima outside the narrow band diminishes exponentially with the size of the network.
We empirically demonstrate that the mathematical model exhibits similar behavior to the computer simulations, despite the presence of high dependencies in real networks. We conjecture that both simulated annealing and SGD converge to the band containing the largest number of critical points, and that all critical points found there are local minima and correspond to the same high learning quality measured by the test error. This emphasizes a major difference between large- and small-size networks, where for the latter poor-quality local minima have non-zero probability of being recovered. Simultaneously we prove that recovering the global minimum becomes harder as the network size increases and that it is in practice irrelevant as the global minimum often leads to overfitting.", "targets": "The Loss Surface of Multilayer Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-619696e14ce54c26a6aba2b8ad329ac1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we extend the deep long short-term memory (DLSTM) recurrent neural networks by introducing gated direct connections between memory cells in adjacent layers. These direct links, called highway connections, enable unimpeded information flow across different layers and thus alleviate the gradient vanishing problem when building deeper LSTMs. We further introduce the latency-controlled bidirectional LSTMs (BLSTMs) which can exploit the whole history while keeping the latency under control. Efficient algorithms are proposed to train these novel networks using both frame and sequence discriminative criteria. Experiments on the AMI distant speech recognition (DSR) task indicate that we can train deeper LSTMs and achieve better improvement from sequence training with highway LSTMs (HLSTMs). Our novel model obtains 43.9/47.7% WER on AMI (SDM) dev and eval sets, outperforming all previous works. It beats the strong DNN and DLSTM baselines with 15.7% and 5.3% relative improvement, respectively.", "targets": "HIGHWAY LONG SHORT-TERM MEMORY RNNS FOR DISTANT SPEECH RECOGNITION"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-86fe51e8f61240519b6c55867aa6fce2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we correct an upper bound, presented in [4], on the generalisation error of classifiers learned through multiple kernel learning. The bound in [4] uses Rademacher complexity and has an additive dependence on the logarithm of the number of kernels and the margin achieved by the classifier. However, there are some errors in parts of the proof which are corrected in this paper. Unfortunately, the final result turns out to be a risk bound which has a multiplicative dependence on the logarithm of the number of kernels and the margin achieved by the classifier.", "targets": "A Note on Improved Loss Bounds for Multiple Kernel Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-52ad377eccfd4615935ac2fef102e8ff", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Skip connections made the training of very deep neural networks possible and have become an indispensable component in a variety of neural architectures. A satisfactory explanation for their success remains elusive. Here, we present an explanation for the benefits of skip connections in training very deep neural networks. We argue that skip connections help break symmetries inherent in the loss landscapes of deep networks, leading to drastically simplified landscapes. In particular, skip connections between adjacent layers in a multilayer network break the permutation symmetry of nodes in a given layer, and the recently proposed DenseNet architecture, where each layer projects skip connections to every layer above it, also breaks the rescaling symmetry of connectivity matrices between different layers. This hypothesis is supported by evidence from a toy model with binary weights and from experiments with fully-connected networks suggesting (i) that skip connections do not necessarily improve training unless they help break symmetries and (ii) that alternative ways of breaking the symmetries also lead to significant performance improvements in training deep networks, hence there is nothing special about skip connections in this respect. We find, however, that skip connections confer additional benefits over and above symmetry-breaking, such as the ability to deal effectively with the vanishing gradients problem.", "targets": "Skip Connections as Effective Symmetry-Breaking"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-397bdd63ae954ac681acd5fc0c59442e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recent progress in randomized motion planners has led to the development of a new class of sampling-based algorithms that provide asymptotic optimality guarantees, notably the RRT\u2217 and the PRM\u2217 algorithms. Careful analysis reveals that the so-called \u201crewiring\u201d step in these algorithms can be interpreted as a local policy iteration (PI) step (i.e., a local policy evaluation step followed by a local policy improvement step) so that asymptotically, as the number of samples tends to infinity, both algorithms converge to the optimal path almost surely (with probability 1). Policy iteration, along with value iteration (VI), are common methods for solving dynamic programming (DP) problems. Based on this observation, recently, the RRT algorithm has been proposed, which performs, during each iteration, Bellman updates (aka \u201cbackups\u201d) on those vertices of the graph that have the potential of being part of the optimal path (i.e., the \u201cpromising\u201d vertices). The RRT algorithm thus utilizes dynamic programming ideas and implements them incrementally on randomly generated graphs to obtain high-quality solutions. In this work, and based on this key insight, we explore a different class of dynamic programming algorithms for solving shortest-path problems on random graphs generated by iterative sampling methods. This class of algorithms utilizes policy iteration instead of value iteration, and is thus better suited for massive parallelization. Contrary to the RRT\u2217 algorithm, the policy improvement during the rewiring step is not performed only locally but rather on a set of vertices that are classified as \u201cpromising\u201d during the current iteration.
This tends to speed up the whole process. The resulting algorithm, aptly named Policy Iteration-RRT (PI-RRT), is the first of a new class of DP-inspired algorithms for randomized motion planning that utilize PI methods.", "targets": "Incremental Sampling-based Motion Planners Using Policy Iteration Methods"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-65085d065ea84a489bc76a7659a78f5d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Academic researchers often need to deal with a large collection of research papers in the literature. This problem may be even worse for postgraduate students who are new to a field and may not know where to start. To address this problem, we have developed an online catalog of research papers where the papers have been automatically categorized by a topic model. The catalog contains 7719 papers from the proceedings of two artificial intelligence conferences from 2000 to 2015. Rather than the commonly used Latent Dirichlet Allocation, we use a recently proposed method called hierarchical latent tree analysis for topic modeling. The resulting topic model contains a hierarchy of topics so that users can browse the topics from the top level to the bottom level. The topic model contains a manageable number of general topics at the top level and allows thousands of fine-grained topics at the bottom level. It can also detect topics that have emerged recently.", "targets": "Topic Browsing for Research Papers with Hierarchical Latent Tree Analysis"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-93fb814c935040cb99340cf9e2da52f5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present WIKIREADING, a large-scale natural language understanding task and publicly-available dataset with 18 million instances. The task is to predict textual values from the structured knowledge base Wikidata by reading the text of the corresponding Wikipedia articles. The task contains a rich variety of challenging classification and extraction sub-tasks, making it well-suited for end-to-end models such as deep neural networks (DNNs). We compare various state-of-the-art DNN-based architectures for document classification, information extraction, and question answering. We find that models supporting a rich answer space, such as word or character sequences, perform best. Our best-performing model, a word-level sequence-to-sequence model with a mechanism to copy out-of-vocabulary words, obtains an accuracy of 71.8%.", "targets": "WIKIREADING: A Novel Large-scale Language Understanding Task over Wikipedia"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ab9c809b82ad4b009ac9de83d16585f2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recently, there have been several promising methods to generate realistic imagery from deep convolutional networks. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms).
However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on coarse sketches and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch-based image synthesis system which allows users to scribble over the sketch to indicate the preferred color for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of the user. The network is feed-forward, which allows users to see the effect of their edits in real time. We compare to recent work on sketch-to-image synthesis and show that our approach can generate more realistic, more diverse, and more controllable outputs. The architecture is also effective at user-guided colorization of grayscale images.", "targets": "Scribbler: Controlling Deep Image Synthesis with Sketch and Color"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8506f887f27f435b816703c7697f1745", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Lossy image compression algorithms are pervasively used to reduce the size of images transmitted over the web and recorded on data storage media. However, we pay for their high compression rate with visual artifacts degrading the user experience. Deep convolutional neural networks have become a widespread tool to address high-level computer vision tasks very successfully. Recently, they have found their way into the areas of low-level computer vision and image processing to solve regression problems mostly with relatively shallow networks. We present a novel 12-layer deep convolutional network for image compression artifact suppression with hierarchical skip connections and a multi-scale loss function. We achieve a boost of up to 1.79 dB in PSNR over ordinary JPEG and an improvement of up to 0.36 dB over the best previous ConvNet result. We show that a network trained for a specific quality factor (QF) is resilient to the QF used to compress the input image\u2014a single network trained for QF 60 provides a PSNR gain of more than 1.5 dB over the wide QF range from 40 to 76.", "targets": "CAS-CNN: A Deep Convolutional Neural Network for Image Compression Artifact Suppression"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-707d120900bf4b528dfac0e241d70023", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The distinction between strong negation and default negation has been useful in answer set programming. We present an alternative account of strong negation, which lets us view strong negation in terms of the functional stable model semantics by Bartholomew and Lee. More specifically, we show that, under complete interpretations, minimizing both positive and negative literals in the traditional answer set semantics is essentially the same as ensuring the uniqueness of Boolean function values under the functional stable model semantics. The same account lets us view Lifschitz\u2019s two-valued logic programs as a special case of the functional stable model semantics.
In addition, we show how non-Boolean intensional functions can be eliminated in favor of Boolean intensional functions, and furthermore can be represented using strong negation, which provides a way to compute the functional stable model semantics using existing ASP solvers. We also note that similar results hold with the functional stable model semantics by Cabalar.", "targets": "A Functional View of Strong Negation in Answer Set Programming"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-06f7b0b89e454da1b32f4f5afbf91451", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We tackle the problem of inferring node labels in a partially labeled graph where each node in the graph has multiple label types and each label type has a large number of possible labels. Our primary example, and the focus of this paper, is the joint inference of label types such as hometown, current city, and employers, for users connected by a social network. Standard label propagation fails to consider the properties of the label types and the interactions between them. Our proposed method, called EDGEEXPLAIN, explicitly models these, while still enabling scalable inference under a distributed message-passing architecture. On a billion-node subset of the Facebook social network, EDGEEXPLAIN significantly outperforms label propagation for several label types, with lifts of up to 120% for recall@1 and 60% for recall@3.", "targets": "Joint Inference of Multiple Label Types in Large Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-775dd81d3f3249d1a91caf1ec106bbb0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we present REG, a graph-based approach for studying a fundamental problem of Natural Language Processing (NLP): automatic text summarization. The algorithm maps a document to a graph, then computes the weight of its sentences. We have applied this approach to summarize documents in three languages.", "targets": "Un re\u0301sumeur a\u0300 base de graphes, inde\u0301pe\u0301ndant de la langue"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-503a2cbf26c442d1ac89d5805c206630", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Neural machine translation has shown very promising results lately. Most NMT models follow the encoder-decoder framework. To make encoder-decoder models more flexible, the attention mechanism was introduced to machine translation and also other tasks like speech recognition and image captioning. We observe that the quality of translation by attention-based encoder-decoder can be significantly damaged when the alignment is incorrect. We attribute these problems to the lack of distortion and fertility models. Aiming to resolve these problems, we propose new variations of attention-based encoder-decoder and compare them with other models on machine translation.
Our proposed method achieved an improvement of 2 BLEU points over the original attention-based encoder-decoder.", "targets": "Implicit Distortion and Fertility Models for Attention-based Encoder-Decoder NMT Model"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1c096accd05842719c5a6820fe250a4f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Common statistical practice has shown that the full power of Bayesian methods is not realized until hierarchical priors are used, as these allow for greater \u201crobustness\u201d and the ability to \u201cshare statistical strength.\u201d Yet it is an ongoing challenge to provide a learning-theoretically sound formalism of such notions that: offers practical guidance concerning when and how best to utilize hierarchical models; provides insights into what makes for a good hierarchical prior; and, when the form of the prior has been chosen, can guide the choice of hyperparameter settings. We present a set of analytical tools for understanding hierarchical priors in both the online and batch learning settings. We provide regret bounds under log-loss, which show how certain hierarchical models compare, in retrospect, to the best single model in the model class. We also show how to convert a Bayesian log-loss regret bound into a Bayesian risk bound for any bounded loss, a result which may be of independent interest. Risk and regret bounds for Student\u2019s t and hierarchical Gaussian priors allow us to formalize the concepts of \u201crobustness\u201d and \u201csharing statistical strength.\u201d Priors for feature selection are investigated as well. Our results suggest that the learning-theoretic benefits of using hierarchical priors can often come at little cost on practical problems.", "targets": "Risk and Regret of Hierarchical Bayesian Learners"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5a3867e1930149e1a3073e73d104936d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In state-of-the-art Neural Machine Translation, an attention mechanism is used during decoding to enhance the translation. At every step, the decoder uses this mechanism to focus on different parts of the source sentence to gather the most useful information before outputting its target word. Recently, the effectiveness of the attention mechanism has also been explored for multimodal tasks, where it becomes possible to focus both on sentence parts and image regions. Approaches to pool two modalities usually include element-wise product, sum or concatenation. In this paper, we evaluate the more advanced Multimodal Compact Bilinear pooling method, which takes the outer product of two vectors to combine the attention features for the two modalities. This has been previously investigated for visual question answering. We try out this approach for multimodal image caption translation and show improvements compared to basic combination methods.", "targets": "MULTIMODAL NEURAL MACHINE TRANSLATION"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-01ad07e55c694cf8b07544e099b8f556", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Let us envision a new class of IT systems, the \u201cSupport Systems for Knowledge Works\u201d or SSKW. An SSKW can be defined as a system built for providing comprehensive support to human knowledge-workers while performing instances of complex knowledge-works of a particular type within a particular domain of professional activities. To get an idea of what an SSKW-enabled work environment can be like, let us look into a hypothetical scenario that depicts the interaction between a physician and a patient-care SSKW during the activity of diagnosing a patient. The patient-care task is practiced by health-care professionals, typically within organizational setups like hospitals. An instance of the task, known as a case, is carried out by a group of professionals (physicians, surgeons, nurses, laboratory technicians etc.) led by a physician (often known as the lead physician for the case) with the primary goal of restoring an ailing patient to a state of health. However, the performance also serves various secondary goals achieved through capture and reuse of information about the case. The overall task is usually divided into subtasks or activities such as examination, identification of possible diseases, clinical tests, diagnosis, treatment, follow-up etc. The actions taken during these activities and their results have complex interrelationships. The patient-care SSKW realizes an integrated IT-based system platform which supports all the constituent activities in ways consistent with their interrelationships. Our hypothetical scenario depicts a particular activity by the lead physician (shall be referred to as LP hereafter), i.e., diagnosing a patient P with the help of a patient-care SSKW. Making a diagnosis results in identifying a particular disease based on available evidence (e.g., symptoms, signs and medical history of the patient, results of various clinical tests conducted) for which the patient will be treated. Such a scenario is described below. For diagnosing P, LP opens the case in SSKW and the following interactions take place:", "targets": "Enhancing Support for Knowledge Works: A relatively unexplored vista of computing research"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9e192a9d66d74d27a818c8e535a4abe0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Despite the successes in capturing continuous distributions, the application of generative adversarial networks (GANs) to discrete settings, like natural language tasks, is rather restricted. The fundamental reason is the difficulty of backpropagation through discrete random variables combined with the inherent instability of the GAN training objective. To address these problems, we propose Maximum-Likelihood Augmented Discrete Generative Adversarial Networks. Instead of directly optimizing the GAN objective, we derive a novel and low-variance objective using the discriminator\u2019s output that corresponds to the log-likelihood. Compared with the original, the new objective is proved to be consistent in theory and beneficial in practice.
The experimental results on various discrete datasets demonstrate the effectiveness of the proposed approach.", "targets": "Maximum-Likelihood Augmented Discrete Generative Adversarial Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5dabf77d143949c89740249eda0ba663", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We develop a streaming (one-pass, bounded-memory) word embedding algorithm based on the canonical skip-gram with negative sampling algorithm implemented in word2vec. We compare our streaming algorithm to word2vec empirically by measuring the cosine similarity between word pairs under each algorithm and by applying each algorithm in the downstream task of hashtag prediction on a two-month interval of the Twitter sample stream. We then discuss the results of these experiments, concluding they provide partial validation of our approach as a streaming replacement for word2vec. Finally, we discuss potential failure modes and suggest directions for future work.", "targets": "Streaming Word Embeddings with the Space-Saving Algorithm"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-118cdaaa22884c5f9d778516d4396248", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a simple, scalable, fully generative model for transition-based dependency parsing with high accuracy. The model, parameterized by Hierarchical Pitman-Yor Processes, overcomes the limitations of previous generative models by allowing fast and accurate inference. We propose an efficient decoding algorithm based on particle filtering that can adapt the beam size to the uncertainty in the model while jointly predicting POS tags and parse trees. The UAS of the parser is on par with that of a greedy discriminative baseline. As a language model, it obtains better perplexity than an n-gram model by performing semi-supervised learning over a large unlabelled corpus. We show that the model is able to generate locally and syntactically coherent sentences, opening the door to further applications in language generation.", "targets": "A Bayesian Model for Generative Transition-based Dependency Parsing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-70e96eff7f67406292efb506f8d2a870", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper studies the trade-off between two different kinds of pure exploration: breadth versus depth. The most biased coin problem asks how many total coin flips are required to identify a \u201cheavy\u201d coin from an infinite bag containing both \u201cheavy\u201d coins with mean \u03b81 \u2208 (0, 1), and \u201clight\u201d coins with mean \u03b80 \u2208 (0, \u03b81), where heavy coins are drawn from the bag with probability \u03b1 \u2208 (0, 1/2). The key difficulty of this problem lies in distinguishing whether the two kinds of coins have very similar means, or whether heavy coins are just extremely rare. This problem has applications in crowdsourcing, anomaly detection, and radio spectrum search. Chandrasekaran and Karp (2014) recently introduced a solution to this problem, but it required perfect knowledge of \u03b80, \u03b81, \u03b1.
In contrast, we derive algorithms that are adaptive to partial or absent knowledge of the problem parameters. Moreover, our techniques generalize beyond coins to more general instances of infinitely many armed bandit problems. We also prove lower bounds that show our algorithm\u2019s upper bounds are tight up to log factors, and on the way characterize the sample complexity of differentiating between a single parametric distribution and a mixture of two such distributions. As a result, these bounds have surprising implications both for solutions to the most biased coin problem and for anomaly detection when only partial information about the parameters is known.", "targets": "On the Detection of Mixture Distributions with applications to the Most Biased Coin Problem"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4da0f039d55f4c418565628c678405d2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Artificial Intelligence (AI) is an effective science which employs strong enough approaches, methods, and techniques to solve unsolvable real-world problems. Because of its unstoppable rise towards the future, there are also some discussions about its ethics and safety. Shaping an AI-friendly environment for people and a people-friendly environment for AI can be a possible answer for finding a shared context of values for both humans and robots. In this context, the objective of this paper is to address the ethical issues of AI and explore the moral dilemmas that arise from ethical algorithms, from pre-set or acquired values. In addition, the paper will also focus on the subject of AI safety. Overall, the paper will briefly analyze the concerns and potential solutions to the ethical issues presented and increase readers\u2019 awareness of AI safety as another related research interest.", "targets": "Ethical Artificial Intelligence - An Open Question"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-61cebf6c0b754d72806d3ca29e994fb0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Non-maximum suppression (NMS) is used in virtually all state-of-the-art object detection pipelines. While essential object detection ingredients such as features, classifiers, and proposal methods have been extensively researched, surprisingly little work has aimed to systematically address NMS. The de-facto standard for NMS is based on greedy clustering with a fixed distance threshold, which forces a trade-off between recall and precision. We propose a convnet designed to perform NMS of a given set of detections. We report experiments on a synthetic setup, and results on crowded pedestrian detection scenes. Our approach overcomes the intrinsic limitations of greedy NMS, obtaining better recall and precision.", "targets": "A CONVNET FOR NON-MAXIMUM SUPPRESSION"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0c385306784948f5ae57d6abc81feec4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "(To appear in Theory and Practice of Logic Programming (TPLP)) ESmodels is designed and implemented as an experiment platform to investigate the semantics, language, related reasoning algorithms, and possible applications of epistemic specifications. We first give the epistemic specification language of ESmodels and its semantics. The language employs only one modal operator K, but we prove that it is able to represent luxuriant modal operators by presenting transformation rules. Then, we describe basic algorithms and optimization approaches used in ESmodels. After that, we discuss possible applications of ESmodels in conformant planning and constraint satisfaction. Finally, we conclude with perspectives.", "targets": "ESmodels: An Epistemic Specification Solver"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6130dad97f7a429181f65055befa7c8c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present DEFEXT, an easy-to-use semi-supervised Definition Extraction Tool. DEFEXT is designed to extract from a target corpus those textual fragments where a term is explicitly mentioned together with its core features, i.e. its definition. It works on the back of a Conditional Random Fields-based sequential labeling algorithm and a bootstrapping approach. Bootstrapping enables the model to gradually become more aware of the idiosyncrasies of the target corpus. In this paper we describe the main components of the toolkit as well as experimental results stemming from both automatic and manual evaluation. We release DEFEXT as open source along with the necessary files to run it on any Unix machine. We also provide access to training and test data for immediate use.", "targets": "DEFEXT: A Semi Supervised Definition Extraction Tool"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4ded55397a204932b8bbae216bfd8cf4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Graphs are a useful abstraction of image content. Not only can graphs represent details about individual objects in a scene, but they can also capture the interactions between pairs of objects. We present a method for training a convolutional neural network such that it takes in an input image and produces a full graph. This is done end-to-end in a single stage with the use of associative embeddings. The network learns to simultaneously identify all of the elements that make up a graph and piece them together. We benchmark on the Visual Genome dataset, and report a Recall@50 of 9.7% compared to the prior state-of-the-art at 3.4%, a nearly threefold improvement on the challenging task of scene graph generation.", "targets": "Pixels to Graphs by Associative Embedding"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-01c881ac4a7b429d90c2b67f034c117b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we model the trajectory of sea vessels and provide a service that predicts in near-real time the position of any given vessel in 4\u2019, 10\u2019, 20\u2019 and 40\u2019 time intervals.
We explore the necessary tradeoffs between accuracy, performance and resource utilization, given the large volume and update rates of input data. We start with building models based on well-established machine learning algorithms using static datasets and multi-scan training approaches and identify the best candidate to be used in implementing a single-pass predictive approach, under real-time constraints. The results are measured in terms of accuracy and performance and are compared against the baseline kinematic equations. Results show that it is possible to efficiently model the trajectory of multiple vessels using a single model, which is trained and evaluated using an adequately large, static dataset, thus achieving a significant gain in terms of resource usage while not compromising accuracy.", "targets": "Employing traditional machine learning algorithms for big data streams analysis: the case of object trajectory prediction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fb2056a239514997a462550215f4e0c7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this study, the problem of shallow parsing of Hindi-English code-mixed social media text (CSMT) has been addressed. We have annotated the data, developed a language identifier, a normalizer, a part-of-speech tagger and a shallow parser. To the best of our knowledge, we are the first to attempt shallow parsing on CSMT. The pipeline developed has been made available to the research community with the goal of enabling better text analysis of Hindi English CSMT. The pipeline is accessible at 1.", "targets": "Shallow Parsing Pipeline for Hindi-English Code-Mixed Social Media Text"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-774a099de40d4e57a845a84ed14a5859", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce a novel validation framework to measure the true robustness of learning models for real-world applications by creating source-inclusive and source-exclusive partitions in a dataset via clustering. We develop a robustness metric derived from source-aware lower and upper bounds of model accuracy even when data source labels are not readily available. We clearly demonstrate that even on a well-explored dataset like MNIST, challenging training scenarios can be constructed under the proposed assessment framework for two separate yet equally important applications: i) more rigorous learning model comparison and ii) dataset adequacy evaluation. In addition, our findings not only promise a more complete identification of trade-offs between model complexity, accuracy and robustness but can also help researchers optimize their efforts in data collection by identifying the less robust and more challenging class labels.", "targets": "Clustering-based Source-aware Assessment of True Robustness for Learning Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-786c909d74f148abab2b9f613ded821d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We tackle a task where an agent learns to navigate in a 2D maze-like environment called XWORLD.
In each session, the agent perceives a sequence of raw-pixel frames, a natural language command issued by a teacher, and a set of rewards. The agent learns the teacher\u2019s language from scratch in a grounded and compositional manner, such that after training it is able to correctly execute zero-shot commands: 1) the combination of words in the command never appeared before, and/or 2) the command contains new object concepts that are learned from another task but never learned from navigation. Our deep framework for the agent is trained end to end: it learns simultaneously the visual representations of the environment, the syntax and semantics of the language, and the action module that outputs actions. The zero-shot learning capability of our framework results from its compositionality and modularity with parameter tying. We visualize the intermediate outputs of the framework, demonstrating that the agent truly understands how to solve the problem. We believe that our results provide some preliminary insights on how to train an agent with similar abilities in a 3D environment.", "targets": "A Deep Compositional Framework for Human-like Language Acquisition in Virtual Environment"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2f75d54056ee401fa17d15ef873163da", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The payload of communications satellites must go through a series of tests to assert their ability to survive in space. Each test requires some equipment of the payload to be active, which has an impact on the temperature of the payload. Sequencing these tests in a way that ensures the thermal stability of the payload and minimizes the overall duration of the test campaign is a very important objective for satellite manufacturers. The problem can be decomposed into two sub-problems corresponding to two objectives: First, the number of distinct configurations necessary to run the tests must be minimized. This can be modeled as packing the tests into configurations, and we introduce a set of implied constraints to improve the lower bound of the model. Second, tests must be sequenced so that the number of times an equipment unit has to be switched on or off is minimized. We model this aspect using the constraint Switch, where a buffer with limited capacity represents the currently active equipment units, and we introduce an improvement of the propagation algorithm for this constraint. We then introduce a search strategy in which we sequentially solve the sub-problems (packing and sequencing). Experiments conducted on real and random instances show the respective interest of our contributions.", "targets": "Constraint Programming for Planning Test Campaigns of Communications Satellites"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-38b52bd3318242159db68b841413a07d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Many intelligent user interfaces employ application and user models to determine the user's preferences, goals and likely future actions. Such models require application analysis, adaptation and expansion. Building and maintaining such models adds a substantial amount of time and labour to the application development cycle.
We present a system that observes the interface of an unmodified application and records users' interactions with the application. From a history of such observations we build a coarse state space of observed interface states and actions between them. To refine the space, we hypothesize substates based upon the histories that led users to a given state. We evaluate the information gain of possible state splits, varying the length of the histories considered in such splits. In this way, we automatically produce a stochastic dynamic model of the application and of how it is used. To evaluate our approach, we present models derived from real-world application usage data.", "targets": "Building a Stochastic Dynamic Model of Application Use"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9cada47c194e4129bf154c28885b8e2d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Collaborative filtering or recommender systems use a database about user preferences to predict additional topics or products a new user might like. In this paper we describe several algorithms designed for this task, including techniques based on correlation coefficients, vector-based similarity calculations, and statistical Bayesian methods. We compare the predictive accuracy of the various methods in a set of representative problem domains. We use two basic classes of evaluation metrics. The first characterizes accuracy over a set of individual predictions in terms of average absolute deviation. The second estimates the utility of a ranked list of suggested items. This metric uses an estimate of the probability that a user will see a recommendation in an ordered list. Experiments were run for datasets associated with 3 application areas, 4 experimental protocols, and the 2 evaluation metrics for the various algorithms. Results indicate that for a wide range of conditions, Bayesian networks with decision trees at each node and correlation methods outperform Bayesian-clustering and vector-similarity methods. Between correlation and Bayesian networks, the preferred method depends on the nature of the dataset, nature of the application (ranked versus one-by-one presentation), and the availability of votes with which to make predictions. Other considerations include the size of the database, speed of predictions, and learning time.", "targets": "Empirical Analysis of Predictive Algorithms for Collaborative Filtering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bd4fae7320eb447582d379c33f404a86", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A common phenomenon in modern recommendation systems is the use of feedback from one user to infer the \u2018value\u2019 of an item to other users. This results in an exploration vs. exploitation trade-off, in which items of possibly low value have to be presented to users in order to ascertain their value. Existing approaches to solving this problem focus on the case where the number of items is small, or the items admit some underlying structure \u2013 it is unclear, however, if good recommendation is possible when dealing with content-rich settings with unstructured content.
We consider this problem under a simple natural model, wherein the number of items and the number of item-views are of the same order, and an \u2018access-graph\u2019 constrains which user is allowed to see which item. Our main insight is that the presence of the access-graph in fact makes good recommendation possible \u2013 however this requires the exploration policy to be designed to take advantage of the access-graph. Our results demonstrate the importance of \u2018serendipity\u2019 in exploration, and how higher graph-expansion translates to a higher quality of recommendations; it also suggests a reason why in some settings, simple policies like Twitter\u2019s \u2018Latest-First\u2019 policy achieve good performance. From a technical perspective, our model presents a way to study exploration-exploitation tradeoffs in settings where the number of \u2018trials\u2019 and \u2018strategies\u2019 are large (potentially infinite), and more importantly, of the same order. Our algorithms admit competitive-ratio guarantees which hold for the worst-case user, under both finite-population and infinite-horizon settings, and are parametrized in terms of properties of the underlying graph. Conversely, we also demonstrate that improperly-designed policies can be highly sub-optimal, and that in many settings, our results are order-wise optimal.", "targets": "Online Collaborative Filtering on Graphs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6b24b25669a54ee5a550bcc3044c09ed", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Existing algorithms for subgroup discovery with numerical targets do not optimize the error or target variable dispersion of the groups they find. This often leads to unreliable or inconsistent statements about the data, rendering practical applications, especially in scientific domains, futile. Therefore, we here extend the optimistic estimator framework for optimal subgroup discovery to a new class of objective functions: we show how tight estimators can be computed efficiently for all functions that are determined by subgroup size (non-decreasing dependence), the subgroup median value, and a dispersion measure around the median (non-increasing dependence). In the important special case when dispersion is measured using the mean absolute deviation from the median, this novel approach yields a linear time algorithm. Empirical evaluation on a wide range of datasets shows that, when used within branch-and-bound search, this approach is highly efficient and indeed discovers subgroups with much smaller errors.", "targets": "Identifying Consistent Statements about Numerical Data with Dispersion-Corrected Subgroup Discovery"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4e3609fe661e4228a4d59f95f90af888", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Vertex Separation Minimization Problem (VSMP) consists of finding a layout of a graph G = (V,E) which minimizes the maximum vertex cut or separation of a layout. It is an NP-complete problem in general for which metaheuristic techniques can be applied to find a near-optimal solution. VSMP has applications in VLSI design, graph drawing and computer language compiler design.
VSMP is polynomially solvable for grids, trees, permutation graphs and cographs. Construction heuristics play a very important role in the metaheuristic techniques as they are responsible for generating initial solutions which lead to fast convergence. In this paper, we have proposed three construction heuristics H1, H2 and H3 and performed experiments on Grids, Small graphs, Trees and Harwell Boeing graphs, totaling 248 instances of graphs. Experiments reveal that H1, H2 and H3 are able to achieve the best results for 88.71%, 43.5% and 37.1% of the total instances respectively, while the best construction heuristic in the literature achieves the best solution for 39.9% of the total instances. We have also compared the results with the state-of-the-art metaheuristic GVNS and observed that the proposed construction heuristics improve the results for some of the input instances. It was found that GVNS obtained the best results for 82.9% of all input instances and the heuristic H1 obtained the best results for 82.3% of all input instances.", "targets": "Polynomial Time Efficient Construction Heuristics for Vertex Separation Minimization Problem"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-72d0a52dac9544bc992c110889df2ba6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Learning tasks such as those involving genomic data often pose a serious challenge: the number of input features can be orders of magnitude larger than the number of training examples, making it difficult to avoid overfitting, even when using the known regularization techniques. We focus here on tasks in which the input is a description of the genetic variation specific to a patient, the single nucleotide polymorphisms (SNPs), yielding millions of ternary inputs. Improving the ability of deep learning to handle such datasets could have an important impact in medical research, more specifically in precision medicine, where high-dimensional data regarding a particular patient is used to make predictions of interest. Even though the amount of data for such tasks is increasing, this mismatch between the number of examples and the number of inputs remains a concern. Naive implementations of classifier neural networks involve a huge number of free parameters in their first layer (number of input features times number of hidden units): each input feature is associated with as many parameters as there are hidden units. We propose a novel neural network parametrization which considerably reduces the number of free parameters. It is based on the idea that we can first learn or provide a distributed representation for each input feature (e.g. for each position in the genome where variations are observed in data), and then learn (with another neural network called the parameter prediction network) how to map a feature\u2019s distributed representation (based on the feature\u2019s identity not its value) to the vector of parameters specific to that feature in the classifier neural network (the weights which link the value of the feature to each of the hidden units). This approach views the problem of producing the parameters associated with each feature as a multi-task learning problem.
We show experimentally on a population stratification task of interest to medical studies that the proposed approach can significantly reduce both the number of parameters and the error rate of the classifier.", "targets": "DIET NETWORKS: THIN PARAMETERS FOR FAT GENOMICS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-18597cbdd7ba4d61bde5f470279ead81", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Let F be a set of Boolean functions. We present an algorithm for learning F\u2228 := {\u2228f\u2208S f | S \u2286 F} from membership queries. Our algorithm asks at most |F| \u00b7 OPT(F\u2228) membership queries, where OPT(F\u2228) is the minimum worst-case number of membership queries for learning F\u2228. When F is a set of halfspaces over a constant dimension space or a set of variable inequalities, our algorithm runs in polynomial time. The problem we address has practical importance in the field of program synthesis, where the goal is to synthesize a program that meets some requirements. Program synthesis has become popular especially in settings aiming to help end users. In such settings, the requirements are not provided upfront and the synthesizer can only learn them by posing membership queries to the end user. Our work enables such synthesizers to learn the exact requirements while bounding the number of membership queries.", "targets": "Learning Disjunctions of Predicates"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2002f174e2e04ac6a4c0121b76b48a41", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Humans comprehend the meanings and relations of discourses heavily relying on their semantic memory that encodes general knowledge about concepts and facts. Inspired by this, we propose a neural recognizer for implicit discourse relation analysis, which builds upon a semantic memory that stores knowledge in a distributed fashion. We refer to this recognizer as SeMDER. Starting from word embeddings of discourse arguments, SeMDER employs a shallow encoder to generate a distributed surface representation for a discourse. A semantic encoder with attention to the semantic memory matrix is further established over surface representations. It is able to retrieve a deep semantic meaning representation for the discourse from the memory. Using the surface and semantic representations as input, SeMDER finally predicts implicit discourse relations via a neural recognizer. Experiments on the benchmark data set show that SeMDER benefits from the semantic memory and achieves substantial improvements of 2.56% on average over current state-of-the-art baselines in terms of F1-score.", "targets": "Neural Discourse Relation Recognition with Semantic Memory"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f6167917d9a149c294c860aacf8e38b5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Pretraining is widely used in deep neural networks and one of the most famous pretraining models is the Deep Belief Network (DBN). The optimization formulas are different during the pretraining process for different pretraining models.
In this paper, we pretrained deep neural networks with different pretraining models and hence investigated the difference between DBN and the Stacked Denoising Autoencoder (SDA) when used as the pretraining model. The experimental results show that DBN yields a better initial model; however, the model converges to a relatively worse model after the finetuning process. Yet after being pretrained by SDA for a second time, the model converges to a better model when finetuned.", "targets": "Multi-pretrained Deep Neural Network"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-191902fac8294c32bbf30e6fda5d418a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "With a weighting scheme proportional to t, a traditional stochastic gradient descent (SGD) algorithm achieves a high-probability convergence rate of O(\u03ba/T) for strongly convex functions, instead of O(\u03ba ln(T)/T). We also prove that an accelerated SGD algorithm achieves a rate of O(\u03ba/T).", "targets": "Stochastic gradient descent algorithms for strongly convex functions at O(1/T ) convergence rates"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c362c9df88474f3a85b1e279627c1128", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper contains an analysis and extension of exploiter-based knowledge extraction methods, which allow the generation of new knowledge based on the basic knowledge. The main achievement of the paper is the proof of useful features of some universal exploiters, which allow extending the set of basic classes and the set of basic relations by a finite set of new classes of objects and relations among them, enabling the creation of a complete lattice. The proposed approach makes it possible to compute the quantity of new classes that can be generated using it and the quantity of different types that each obtained class describes; to construct a defined hierarchy of classes with a determined subsumption relation; and to avoid some problems of inheritance while restoring basic knowledge within the database more efficiently.", "targets": "Object-Oriented Knowledge Extraction using Universal Exploiters"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c6ade02f06364d30a6d9979382618e7e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending on a different part of the sentence. We also propose a self-attention mechanism and a special regularization term for the model. As a side effect, the embedding comes with an easy way of visualizing what specific parts of the sentence are encoded into the embedding. We evaluate our model on 3 different tasks: author profiling, sentiment classification and textual entailment.
Results show that our model yields a significant performance gain compared to other sentence embedding methods in all of the 3 tasks.", "targets": "A STRUCTURED SELF-ATTENTIVE SENTENCE EMBEDDING"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-596a744c074c4af594ca9c281a56f20c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Contemporary research on computational processing of linguistic metaphors is divided into two main branches: metaphor recognition and metaphor interpretation. We take a different line of research and present an automated method for generating conceptual metaphors from linguistic data. Given the generated conceptual metaphors, we find corresponding linguistic metaphors in corpora. In this paper, we describe our approach and its evaluation using English and Russian data.", "targets": "Generating Conceptual Metaphors from Proposition Stores"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-cf4f1587aec7427c8700d6bcb07ef20d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recently, attempts have been made to remove Gaussian mixture models (GMM) from the training process of deep neural network-based hidden Markov models (HMM/DNN). For the GMM-free training of an HMM/DNN hybrid we have to solve two problems, namely the initial alignment of the frame-level state labels and the creation of context-dependent states. Although flat-start training via iteratively realigning and retraining the DNN using a frame-level error function is viable, it is quite cumbersome. Here, we propose to use a sequence-discriminative training criterion for flat start. While sequence-discriminative training is routinely applied only in the final phase of model training, we show that with proper caution it is also suitable for getting an alignment of context-independent DNN models. For the construction of tied states we apply a recently proposed KL-divergence-based state clustering method, hence our whole training process is GMM-free. In the experimental evaluation we found that the sequence-discriminative flat start training method is not only significantly faster than the straightforward approach of iterative retraining and realignment, but the word error rates attained are slightly better as well.", "targets": "GMM-Free Flat Start Sequence-Discriminative DNN Training"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ec44d46c0532483092a2d81e15a6bae5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider the question of extending propositional logic to a logic of plausible reasoning, and posit four requirements that any such extension should satisfy. Each is a requirement that some property of classical propositional logic be preserved in the extended logic; as such, the requirements are simpler and less problematic than those used in Cox\u2019s Theorem and its variants. As with Cox\u2019s Theorem, our requirements imply that the extended logic must be isomorphic to (finite-set) probability theory.
We also obtain specific numerical values for the probabilities, recovering the classical definition of probability as a theorem, with truth assignments that satisfy the premise playing the role of the \u201cpossible cases.\u201d", "targets": "From Propositional Logic to Plausible Reasoning: A Uniqueness Theorem"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-882605f965784277b31edb392b8ad700", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "There are many declarative frameworks that allow us to implement code formatters relatively easily for any specific language, but constructing them is cumbersome. The first problem is that \u201ceverybody\u201d wants to format their code differently, leading to either many formatter variants or a ridiculous number of configuration options. Second, the size of each implementation scales with a language\u2019s grammar size, leading to hundreds of rules. In this paper, we solve the formatter construction problem using a novel approach, one that automatically derives formatters for any given language without intervention from a language expert. We introduce a code formatter called CODEBUFF that uses machine learning to abstract formatting rules from a representative corpus, using a carefully designed feature set. Our experiments on Java, SQL, and ANTLR grammars show that CODEBUFF is efficient, has excellent accuracy, and is grammar invariant for a given language. It also generalizes to a 4th language tested during manuscript preparation.", "targets": "Technical Report: Towards a Universal Code Formatter through Machine Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4b0a7b961ade425f90a6142bd2bae971", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we present a novel iterative multiphase clustering technique for efficiently clustering high-dimensional data points. For this purpose we implement clustering feature (CF) tree on a real data set and a Gaussian density distribution constraint on the resultant CF tree. The post processing by the application of Gaussian density distribution function on the micro-clusters leads to refinement of the previously formed clusters thus improving their quality. This algorithm also succeeds in overcoming the inherent drawbacks of conventional hierarchical methods of clustering like the inability to undo the change made to the dendrogram of the data points. Moreover, the constraint measure applied in the algorithm makes this clustering technique suitable for need driven data analysis. We demonstrate the veracity of our claim by evaluating our algorithm against other similar clustering algorithms.", "targets": "Using Gaussian Measures for Efficient Constraint Based Clustering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b3d5c02b9a9a48eaa918a6c44212c628", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "How do news sources tackle controversial issues? In this work, we take a data-driven approach to understand how controversy interplays with emotional expression and biased language in the news.
We begin by introducing a new dataset of controversial and non-controversial terms collected using crowdsourcing. Then, focusing on 15 major U.S. news outlets, we compare millions of articles discussing controversial and non-controversial issues over a span of 7 months. We find that in general, when it comes to controversial issues, the use of negative affect and biased language is prevalent, while the use of strong emotion is tempered. We also observe many differences across news sources. Using these findings, we show that we can indicate to what extent an issue is controversial, by comparing it with other issues in terms of how they are portrayed across different media.", "targets": "Controversy and Sentiment in Online News"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fa4098943dbe40eb9eff7555c3e13c6f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Due to the intractable nature of exact lifted inference, research has recently focused on the discovery of accurate and efficient approximate inference algorithms in Statistical Relational Models (SRMs), such as Lifted First-Order Belief Propagation (FOBP). FOBP simulates propositional factor graph belief propagation without constructing the ground factor graph by identifying and lifting over redundant message computations. In this work, we propose a generalization of FOBP called Lifted Generalized Belief Propagation, in which both the region structure and the message structure can be lifted. This approach allows more of the inference to be performed intra-region (in the exact inference step of BP), thereby allowing simulation of propagation on a graph structure with larger region scopes and fewer edges, while still maintaining tractability. We demonstrate that the resulting algorithm converges in fewer iterations to more accurate results on a variety of SRMs.", "targets": "Lifted Region-Based Belief Propagation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6a340965ef7f43f5a7bfbe789a3071c8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we describe a methodology to infer Bullish or Bearish sentiment towards companies/brands. More specifically, our approach leverages affective lexica and word embeddings in combination with convolutional neural networks to infer the sentiment of financial news headlines towards a target company. Such architecture was used and evaluated in the context of the SemEval 2017 challenge (task 5, subtask 2), in which it obtained the best performance.", "targets": "Fortia-FBK at SemEval-2017 Task 5: Bullish or Bearish? Inferring Sentiment towards Brands from Financial News Headlines"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-060be754058148df86792f075b2e02d1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In the last few years, there have been major changes and evolution in the classification of data. As the application areas of technology increase, the size of data also increases. Classification of data becomes difficult because of the unbounded size and imbalanced nature of the data. The class imbalance problem has become a major issue in data mining.
An imbalance problem occurs when one of the classes has more samples than the other classes. Most algorithms focus on classifying the majority samples while ignoring or misclassifying the minority samples. The minority samples are those that occur rarely but are very important. The different methods available for the classification of imbalanced data sets are divided into three main categories: the algorithmic approach, the data-preprocessing approach and the feature selection approach. Each of these techniques has its own advantages and disadvantages. In this paper, a systematic study of each approach is presented, which gives the right direction for research on the class imbalance problem.", "targets": "Class Imbalance Problem in Data Mining: Review"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7387a5cd905244019aff1e101ee22894", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the \u201clong tail\u201d of this distribution requires enormous amounts of data. Representations of rare words trained directly on end-tasks are usually poor, requiring us to pre-train embeddings on external data, or treat all rare words as out-of-vocabulary words with a unique representation. We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data with a network trained against the end task. We show that this improves results against baselines where embeddings are trained on the end task in a reading comprehension task, a recognizing textual entailment task, and in language modelling.", "targets": "Learning to Compute Word Embeddings On the Fly"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5bd07d37c056460dbbfc77dd9d2da280", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a system for online monitoring of maritime activity over streaming positions from numerous vessels sailing at sea. It employs an online tracking module for detecting important changes in the evolving trajectory of each vessel across time, and thus can incrementally retain concise, yet reliable summaries of its recent movement. In addition, thanks to its complex event recognition module, this system can also offer instant notification to marine authorities regarding emergency situations, such as risk of collisions, suspicious moves in protected zones, or package picking at open sea. Not only did our extensive tests validate the performance, efficiency, and robustness of the system against scalable volumes of real-world and synthetically enlarged datasets, but its deployment against online feeds from vessels has also confirmed its capabilities for effective, real-time maritime surveillance.", "targets": "Online Event Recognition from Moving Vessel Trajectories"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3a2c99b998a040f4a841faaa15214ddc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Prosody affects the naturalness and intelligibility of speech.
However, automatic prosody prediction from text for Chinese speech synthesis is still a great challenge and the traditional conditional random fields (CRF)-based method always heavily relies on feature engineering. In this paper, we propose to use neural networks to predict prosodic boundary labels directly from Chinese characters without any feature engineering. Experimental results show that stacking feed-forward and bidirectional long short-term memory (BLSTM) recurrent network layers achieves superior performance over the CRF-based method. The embedding features learned from raw text further enhance the performance.", "targets": "AUTOMATIC PROSODY PREDICTION FOR CHINESE SPEECH SYNTHESIS USING BLSTM-RNN AND EMBEDDING FEATURES"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-97315894fc9d41b7988da13c80f11446", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Replacing a portion of current light-duty vehicles (LDV) with plug-in hybrid electric vehicles (PHEVs) offers the possibility to reduce the dependence on petroleum fuels together with environmental and economic benefits. The charging activity of PHEVs will certainly introduce new load to the power grid. In the framework of the development of a smarter grid, the primary focus of the present study is to propose a model for the electrical daily demand in the presence of PHEV charging. Expected PHEV demand is modeled by the PHEV charging time and the starting time of charge according to real-world data. A normal distribution for starting time of charge is assumed. Several distributions for charging time are considered: uniform distribution, Gaussian with positive support, Rician distribution and a non-uniform distribution coming from driving patterns in real-world data. We generate daily demand profiles by using real-world residential profiles throughout 2014 in the presence of different expected PHEV demand models. Support vector machines (SVMs), a set of supervised machine learning models, are employed in order to find the best model to fit the data. SVMs with radial basis function (RBF) and polynomial kernels were tested. Model performances are evaluated by means of mean squared error (MSE) and mean absolute percentage error (MAPE). Best results are obtained with the RBF kernel: maximum (worst) values for MSE and MAPE were about 2.89 10 and 0.023, respectively. Keywords\u2014Energy demand, plug-in hybrid electric vehicle (PHEV), smart grids, support vector machines.", "targets": "Modeling Electrical Daily Demand in Presence of PHEVs in Smart Grids with Supervised Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b839eb50ec7142378927ab6219205811", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Multimedia or spoken content presents more attractive information than plain text content, but it\u2019s more difficult to display on a screen and be selected by a user. As a result, accessing large collections of the former is much more difficult and time-consuming than the latter for humans. It\u2019s highly attractive to develop a machine which can automatically understand spoken content and summarize the key information for humans to browse over. In this endeavor, we propose a new task of machine comprehension of spoken content.
We define the initial goal as the listening comprehension test of TOEFL, a challenging academic English examination for English learners whose native language is not English. We further propose an Attention-based Multi-hop Recurrent Neural Network (AMRNN) architecture for this task, achieving encouraging results in the initial tests. Initial results also have shown that word-level attention is probably more robust than sentence-level attention for this task with ASR errors.", "targets": "Towards Machine Comprehension of Spoken Content: Initial TOEFL Listening Comprehension Test by Machine"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-878ce6f6ca1344e7a131c1ebfb0a9833", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Language models for agglutinative languages have always been hindered in the past due to the myriad of agglutinations possible for any given word through various affixes. We propose a method to diminish the problem of out-of-vocabulary words by introducing an embedding derived from syllables and morphemes which leverages the agglutinative property. Our model outperforms character-level embedding in perplexity by 16.87 with 9.50M parameters. The proposed method achieves state-of-the-art performance over existing input prediction methods in terms of Key Stroke Saving and has been commercialized.", "targets": "Syllable-level Neural Language Model for Agglutinative Language"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-da3b7c005f214116acc59ad82c9a9c51", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The efficiency of inference in both the Hugin and, most notably, the Shafer-Shenoy architectures can be improved by exploiting the independence relations induced by the incoming messages of a clique. That is, the message to be sent from a clique can be computed via a factorization of the clique potential in the form of a junction tree. In this paper we show that by exploiting such nested junction trees in the computation of messages both space and time costs of the conventional propagation methods may be reduced. The paper presents a structured way of exploiting the nested junction trees technique to achieve such reductions. The usefulness of the method is emphasized through a thorough empirical evaluation involving ten large real-world Bayesian networks and the Hugin inference algorithm.", "targets": "Nested Junction Trees"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-43bc67e07e68491ca8ec1f08fd512d5b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Mobile edge computing (a.k.a. fog computing) has recently emerged to enable in-situ processing of delay-sensitive applications at the edge of mobile networks. Providing grid power supply in support of mobile edge computing, however, is costly and even infeasible (in certain rugged or under-developed areas), thus mandating on-site renewable energy as a major or even sole power supply in increasingly many scenarios.
Nonetheless, the high intermittency and unpredictability of renewable energy make it very challenging to deliver a high quality of service to users in energy harvesting mobile edge computing systems. In this paper, we address the challenge of incorporating renewables into mobile edge computing and propose an efficient reinforcement learning-based resource management algorithm, which learns on-the-fly the optimal policy of dynamic workload offloading (to the centralized cloud) and edge server provisioning to minimize the long-term system cost (including both service delay and operational cost). Our online learning algorithm uses a decomposition of the (offline) value iteration and (online) reinforcement learning, thus achieving a significant improvement of learning rate and runtime performance when compared to standard reinforcement learning algorithms such as Q-learning. We prove the convergence of the proposed algorithm and analytically show that the learned policy has a simple monotone structure amenable to practical implementation. Our simulation results validate the efficacy of our algorithm, which significantly improves the edge computing performance compared to fixed or myopic optimization schemes and conventional reinforcement learning algorithms.", "targets": "Online Learning for Offloading and Autoscaling in Energy Harvesting Mobile Edge Computing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9cc4f43463ac4b5ca705ba350a4afa38", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "There is a vast amount of unstructured Arabic information on the Web; this data is always organized in semi-structured text and cannot be used directly. This research proposes a semi-supervised technique that extracts binary relations between two Arabic named entities from the Web. Several works have been performed for relation extraction from Latin texts and, as far as we know, there isn\u2019t any work for Arabic text using a semi-supervised technique. The goal of this research is to extract a large list or table of named entities and relations in a specific domain. A small set of a handful of instance relations is required as input from the user. The system exploits summaries from the Google search engine as source text. These instances are used to extract patterns. The output is a set of new entities and their relations. The results from four experiments show that precision and recall vary according to relation type. Precision ranges from 0.61 to 0.75 while recall ranges from 0.71 to 0.83. The best result is obtained for the (player, club) relationship, 0.72 and 0.83 for precision and recall respectively.", "targets": "EXTRACTING ARABIC RELATIONS FROM THE WEB"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-93ea1e1e77bb46a493b872c5ddb38c9d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We discuss relations between Residual Networks (ResNet), Recurrent Neural Networks (RNNs) and the primate visual cortex.
We begin with the observation that a shallow RNN is exactly equivalent to a very deep ResNet with weight sharing among the layers. A direct implementation of such an RNN, although having orders of magnitude fewer parameters, leads to a performance similar to the corresponding ResNet. We propose 1) a generalization of both RNN and ResNet architectures and 2) the conjecture that a class of moderately deep RNNs is a biologically-plausible model of the ventral stream in visual cortex. We demonstrate the effectiveness of the architectures by testing them on the CIFAR-10 dataset. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 1231216. arXiv:1604.03640v1 [cs.LG] 13 Apr 2016", "targets": "Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-cb7e158a340941dcb5e13a44e3a4027c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Despite recent advances, the remaining bottlenecks in deep generative models are the necessity of extensive training and difficulties with generalization from a small number of training examples. Both problems may be addressed by conditional generative models that are trained to adapt the generative distribution to additional input data. So far this idea was explored only under certain limitations such as restricting the input data to be a single object or multiple objects representing the same concept. In this work we develop a new class of deep generative model called generative matching networks which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks and the ideas from meta-learning. By conditioning on the additional input dataset, generative matching networks may instantly learn new concepts that were not available during training but conform to a similar generative process, without explicit limitations on the number of additional input objects or the number of concepts they represent. Our experiments on the Omniglot dataset demonstrate that generative matching networks can significantly improve predictive performance on the fly as more additional data is available to the model and also adapt the latent space which is beneficial in the context of feature extraction.", "targets": "GENERATIVE MATCHING NETWORKS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a42b973c58784a34acf1a75c20f80ddb", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We develop a general duality between neural networks and compositional kernels, striving towards a better understanding of deep learning. We show that initial representations generated by common random initializations are sufficiently rich to express all functions in the dual kernel space. Hence, though the training objective is hard to optimize in the worst case, the initial weights form a good starting point for optimization. Our dual view also reveals a pragmatic and aesthetic perspective of neural networks and underscores their expressive power. arXiv:1602.05897v1 [cs.LG] 18 Feb 2016", "targets": "Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-24e4652415cf46f98768850aa6034679", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This research applies ideas from argumentation theory in the context of semantic wikis, aiming to provide support for structured large-scale argumentation between human agents. The implemented prototype is exemplified by modelling the MMR vaccine controversy.", "targets": "Using Semantic Wikis for Structured Argument in Medical Domain"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-21d6f08bfcd64a36a2c2aff6bab3db18", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose k^2-means, a new clustering method which efficiently copes with large numbers of clusters and achieves low energy solutions. k^2-means builds upon the standard k-means (Lloyd\u2019s algorithm) and combines a new strategy to accelerate the convergence with a new low time complexity divisive initialization. The accelerated convergence is achieved through only looking at the k_n nearest clusters and using triangle inequality bounds in the assignment step while the divisive initialization employs an optimal 2-clustering along a direction. The worst-case time complexity per iteration of our k^2-means is O(nk_nd + k^2d), where d is the dimension of the n data points and k is the number of clusters and usually n >> k >> k_n. Compared to k-means\u2019 O(nkd) complexity, our k^2-means complexity is significantly lower, at the expense of slightly increasing the memory complexity by O(nk_n + k^2). In our extensive experiments k^2-means is order(s) of magnitude faster than standard methods in computing accurate clusterings on several standard datasets and settings with hundreds of clusters and high dimensional data. Moreover, the proposed divisive initialization generally leads to clustering energies comparable to those achieved with the standard k-means++ initialization, while being significantly faster.", "targets": "k^2-means for fast and accurate large scale clustering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-34f125e15f064521949404bf9d41dba9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We investigate evaluation metrics for end-to-end dialogue systems where supervised labels, such as task completion, are not available. Recent works in end-to-end dialogue systems have adopted metrics from machine translation and text summarization to compare a model\u2019s generated response to a single target response. We show that these metrics correlate very weakly or not at all with human judgements of the response quality in both technical and non-technical domains.
We provide quantitative and qualitative results highlighting specific weaknesses in existing metrics, and provide recommendations for future development of better automatic evaluation metrics for dialogue systems.", "targets": "How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e7b7b95bd04845e28a6ed131b6a7bcd4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents results of our experiments using the Ubuntu Dialog Corpus \u2013 the largest publicly available multi-turn dialog corpus. First, we use an in-house implementation of previously reported models to do an independent evaluation using the same data. Second, we evaluate the performances of various LSTMs, Bi-LSTMs and CNNs on the dataset. Third, we create an ensemble by averaging predictions of multiple models. The ensemble further improves the performance and it achieves a state-of-the-art result for this dataset. Finally, we discuss our future plans using this corpus.", "targets": "Improved Deep Learning Baselines for Ubuntu Corpus Dialogs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4d5948af680e40739e884a95f40c590c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Eliminating the negative effect of highly non-stationary environmental noise is a long-standing research topic for speech recognition but remains an important challenge nowadays. To address this issue, traditional unsupervised signal processing methods seem to have touched the ceiling. However, data-driven supervised approaches, particularly the ones designed with deep learning, have recently emerged as potential alternatives. In this light, we comprehensively summarise the recently developed and most representative deep learning approaches to deal with this problem in this article, with the aim of providing guidelines for those who are going deeply into the field of environmentally robust speech recognition. To better introduce these approaches, we categorise them into single- and multi-channel techniques, each of which is specifically described at the front-end, the back-end, and the joint framework of speech recognition systems. Meanwhile, we describe the pros and cons of these approaches as well as the relationships among them, which can probably benefit future research.", "targets": "Deep Learning for Environmentally Robust Speech Recognition: An Overview of Recent Developments"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5b7e62e30ed149f8803f00fd8e4e9b80", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recent works have shown that synthetic parallel data automatically generated by translation models can be effective for various neural machine translation (NMT) issues. In this study, we build NMT systems using only synthetic parallel data. As an efficient alternative to real parallel data, we also present a new type of synthetic parallel corpus.
The proposed pseudo parallel data are distinct from previous works in that ground truth and synthetic examples are mixed on both sides of sentence pairs. Experiments on Czech-German and French-German translations demonstrate the efficacy of the proposed pseudo parallel corpus, which shows not only enhanced results for bidirectional translation tasks but also substantial improvement with the aid of a ground truth real parallel corpus.", "targets": "Building a Neural Machine Translation System Using Only Synthetic Parallel Data"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e10d0734c1c7464ba7e844697cfcfc59", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we present a system to visualize RDF knowledge graphs. These graphs are obtained from a knowledge extraction system designed by GEOLSemantics. This extraction is performed using natural language processing and trigger detection. The user can visualize subgraphs by selecting some ontology features like concepts or individuals. The system is also multilingual, with the use of the annotated ontology in English, French, Arabic and Chinese.", "targets": "RDF Knowledge Graph Visualization From a Knowledge Extraction System"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6efbcd3f295e4fd5ad2611958678e029", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Deep learning is a branch of artificial intelligence employing deep neural network architectures that has significantly advanced the state-of-the-art in computer vision, speech recognition, natural language processing and other domains. In November 2015, Google released TensorFlow, an open source deep learning software library for defining, training and deploying machine learning models. In this paper, we review TensorFlow and put it in the context of modern deep learning concepts and software. We discuss its basic computational paradigms and distributed execution model, its programming interface as well as accompanying visualization toolkits. We then compare TensorFlow to alternative libraries such as Theano, Torch or Caffe on a qualitative as well as quantitative basis and finally comment on observed use-cases of TensorFlow in academia and industry.", "targets": "A Tour of TensorFlow"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9f2836218d22426898975da1cd84fc7a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Each year, millions of motor vehicle traffic accidents all over the world cause a large number of fatalities, injuries and significant material loss. Automated Driving (AD) has the potential to drastically reduce such accidents. In this work, we focus on the technical challenges that arise from AD in urban environments. We present the overall architecture of an AD system and describe in detail the perception and planning modules. The AD system, built on a modified Acura RLX, was demonstrated on a course at GoMentum Station in California. We demonstrated autonomous handling of 4 scenarios: traffic lights, cross-traffic at intersections, construction zones and pedestrians.
The AD vehicle displayed safe behavior and performed consistently in repeated demonstrations with slight variations in conditions. Overall, we completed 44 runs, encompassing 110km of automated driving with only 3 cases where the driver intervened in the control of the vehicle, mostly due to errors in GPS positioning. Our demonstration showed that robust and consistent behavior in urban scenarios is possible, yet more investigation is necessary for full-scale rollout on public roads.", "targets": "Towards Full Automated Drive in Urban Environments: A Demonstration in GoMentum Station, California"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2c6bf4b5d6e14c29ac8ad8482ae4bf82", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Bio-inspired optimization algorithms have been gaining more popularity recently. One of the most important of these algorithms is particle swarm optimization (PSO). PSO is based on the collective intelligence of a swarm of particles. Each particle explores a part of the search space looking for the optimal position and adjusts its position according to two factors; the first is its own experience and the second is the collective experience of the whole swarm. PSO has been successfully used to solve many optimization problems. In this work we use PSO to improve the performance of a well-known representation method of time series data which is the symbolic aggregate approximation (SAX). As with other time series representation methods, SAX results in loss of information when applied to represent time series. In this paper we use PSO to propose a new weighted minimum distance (WMD) for SAX to remedy this problem. Unlike the original minimum distance, the new distance assigns different weights to different segments of the time series according to their information content. This weighted minimum distance enhances the performance of SAX as we show through experiments using different time series datasets.", "targets": "Particle Swarm Optimization of Information-Content Weighting of Symbolic Aggregate Approximation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-10c53693d5694de8adff70a944f2e5b7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "N-tuple networks have been successfully used as position evaluation functions for board games such as Othello or Connect Four. The effectiveness of such networks depends on their architecture, which is determined by the placement of constituent n-tuples, sequences of board locations, providing input to the network. The most popular method of placing n-tuples consists in randomly generating a small number of long, snake-shaped board location sequences. In comparison, we show that learning n-tuple networks is significantly more effective if they involve a large number of systematically placed, short, straight n-tuples. Moreover, we demonstrate that in order to obtain the best performance and the steepest learning curve for Othello it is enough to use n-tuples of size just 2, yielding a network consisting of only 288 weights.
The best such network evolved in this study has been evaluated in the online Othello League, obtaining the performance of nearly 96% \u2014 more than any other player to date.", "targets": "Systematic N-tuple Networks for Position Evaluation: Exceeding 90% in the Othello League"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7f2419c4f60040839b12cbc034f178cd", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a neural network architecture to predict a point in color space from the sequence of characters in the color\u2019s name. Using large scale color\u2013name pairs obtained from an online color design forum, we evaluate our model on a \u201ccolor Turing test\u201d and find that, given a name, the colors predicted by our model are preferred by annotators to color names created by humans. Our datasets and demo system are available online at http://colorlab.us.", "targets": "Character Sequence Models for Colorful Words"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a1e25f13e5694cf9aee847f285d8a3a5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Despite the remarkable progress recently made in distant speech recognition, state-of-the-art technology still suffers from a lack of robustness, especially when adverse acoustic conditions characterized by non-stationary noises and reverberation are met. A prominent limitation of current systems lies in the lack of matching and communication between the various technologies involved in the distant speech recognition process. The speech enhancement and speech recognition modules are, for instance, often trained independently. Moreover, the speech enhancement normally helps the speech recognizer, but the output of the latter is not commonly used, in turn, to improve the speech enhancement. To address both concerns, we propose a novel architecture based on a network of deep neural networks, where all the components are jointly trained and better cooperate with each other thanks to a full communication scheme between them. Experiments, conducted using different datasets, tasks and acoustic conditions, revealed that the proposed framework can outperform other competitive solutions, including recent joint training approaches.", "targets": "A NETWORK OF DEEP NEURAL NETWORKS FOR DISTANT SPEECH RECOGNITION"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-de2f6f8d2f834264bb1310fbc16ce996", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In Computer Vision, the problem of identifying or classifying the objects present in an image is called Object Categorization. It is a challenging problem, especially when the images have clutter background, occlusions or different lighting conditions. Many vision features have been proposed which aid object categorization even in such adverse conditions. Past research has shown that employing multiple features rather than any single feature leads to better recognition. The Multiple Kernel Learning (MKL) framework has been developed for learning an optimal combination of features for object categorization.
Existing MKL methods use a linear combination of base kernels, which may not be optimal for object categorization. Real-world object categorization may need to consider complex (non-linear) combinations of kernels and not only linear combinations. Evolving non-linear functions of base kernels using Genetic Programming is proposed in this report. Experimental results show that the non-linear kernel generated using genetic programming gives good accuracy compared to a linear combination of kernels.", "targets": "Finding Optimal Combination of Kernels using Genetic Programming"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-899a286ce110410fb8f2976c685e0907", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this work we present a method for using Deep Q-Networks (DQNs) in multi-objective tasks. Deep Q-Networks provide remarkable performance in single objective tasks learning from high-level visual perception. However, in many scenarios (e.g. in robotics), the agent needs to pursue multiple objectives simultaneously. We propose an architecture in which separate DQNs are used to control the agent\u2019s behaviour with respect to particular objectives. In this architecture we use signal suppression, known from the (Brooks) subsumption architecture, to combine outputs of several DQNs into a single action. Our architecture enables the decomposition of the agent\u2019s behaviour into controllable and replaceable sub-behaviours learned by distinct modules. To evaluate our solution we used a game-like simulator in which an agent provided with high-level visual input pursues multiple objectives in a 2D world. Our solution provides benefits of modularity, while its performance is comparable to the monolithic approach.", "targets": "Multi-Objective Deep Q-Learning with Subsumption Architecture"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f13b803a37514caf8c2e64167ef46331", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Nowadays ontologies attract growing interest in Data Fusion applications. As a matter of fact, ontologies are seen as a semantic tool for describing and reasoning about sensor data, objects, relations and general domain theories. In addition, uncertainty is perhaps one of the most important characteristics of the data and information handled by Data Fusion. However, the fundamental nature of ontologies implies that they describe only asserted and veracious facts of the world. Different probabilistic, fuzzy and evidential approaches already exist to fill this gap; this paper recaps the most popular tools. However, none of the tools exactly meets our purposes. Therefore, we constructed a Dempster-Shafer ontology that can be imported into any specific domain ontology and that enables us to instantiate it in an uncertain manner. We also developed a Java application that enables reasoning about these uncertain ontological instances.", "targets": "Uncertainty in Ontologies: Dempster-Shafer Theory for Data Fusion Applications"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-96a8f4dd2f9042f586f14595628befe3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Scientists often run experiments to distinguish competing theories. This requires patience, rigor, and ingenuity\u2014there is often a large space of possible experiments one could run. But we need not comb this space by hand\u2014if we represent our theories as formal models and explicitly declare the space of experiments, we can automate the search for good experiments, looking for those with high expected information gain. Here, we present a general and principled approach to experiment design based on probabilistic programming languages (PPLs). PPLs offer a clean separation between declaring problems and solving them, which means that the scientist can automate experiment design by simply declaring her model and experiment spaces in the PPL without having to worry about the details of calculating information gain. We demonstrate our system in two case studies drawn from cognitive psychology, where we use it to design optimal experiments in the domains of sequence prediction and categorization. We find strong empirical validation that our automatically designed experiments were indeed optimal. We conclude by discussing a number of interesting questions for future research.", "targets": "Practical optimal experiment design with probabilistic programs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ee1d059d4fde4af2acf1c12f2beaac0d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Speech analysis has been taken to a new level with the discovery of Reverse Speech (RS). RS is the discovery of hidden messages, referred to as reversals, in normal speech. Works are in progress for exploiting the relevance of RS in different real-world applications such as investigation, the medical field, etc. In this paper we present an innovative method for preparing a reliable Software Requirement Specification (SRS) document with the help of reverse speech. As the SRS acts as the backbone for the successful completion of any project, a reliable method is needed to overcome the inconsistencies. Using RS, such a reliable method for SRS documentation was developed. Keywords\u2014 Reverse Speech, Software Requirement Specification (SRS), Speech Enhancement, Speech Recognition.", "targets": "Software Requirement Specification Using Reverse Speech Technology"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-90d598abf88843c48841672870daced6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We simulate the training of a set of state-of-the-art neural networks, the Maxout networks (Goodfellow et al., 2013a), on three benchmark datasets: the MNIST, CIFAR10 and SVHN, with three distinct arithmetics: floating point, fixed point and dynamic fixed point. For each of those datasets and for each of those arithmetics, we assess the impact of the precision of the computations on the final error of the training. We find that very low precision computation is sufficient not just for running trained networks but also for training them.
For example, almost state-of-the-art results were obtained on most datasets with around 10 bits for computing activations and gradients, and 12 bits for storing updated parameters.", "targets": "LOW PRECISION ARITHMETIC FOR DEEP LEARNING"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a8d1f8d57c9f4b08a7ad994ba13cb4e3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Wit is a quintessential form of rich interhuman interaction, and is often grounded in a specific situation (e.g., a comment in response to an event). In this work, we attempt to build computational models that can produce witty descriptions for a given image. Inspired by a cognitive account of humor appreciation, we employ linguistic wordplay, specifically puns. We compare our approach against meaningful baseline approaches via human studies. In a Turing test style evaluation, people find our model\u2019s description for an image to be wittier than a human\u2019s witty description 55% of the time!", "targets": "Punny Captions: Witty Wordplay in Image Descriptions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-082dd847a0c048f997da6441c7bd0cbe", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper addresses how a recursive neural network model can automatically leave out useless information and emphasize important evidence, in other words, how to perform \u201cweight tuning\u201d for higher-level representation acquisition. We propose two models, Weighted Neural Network (WNN) and Binary-Expectation Neural Network (BENN), which automatically control how much one specific unit contributes to the higher-level representation. The proposed model can be viewed as incorporating a more powerful compositional function for embedding acquisition in recursive neural networks. Experimental results demonstrate the significant improvement over standard neural models.", "targets": "Feature Weight Tuning for Recursive Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-202c077790794c83a955566676982a75", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose the Gaussian Error Linear Unit (GELU), a high-performing neural network activation function. The GELU nonlinearity is the expected transformation of a stochastic process which randomly applies the identity or zero map, combining the intuitions of dropout and zoneout while respecting neuron values. This connection suggests a new probabilistic understanding of nonlinearities. We perform an empirical evaluation of the GELU nonlinearity against the ReLU and ELU activations and find performance improvements across all tasks.", "targets": "Bridging Nonlinearities and Stochastic Regularizers with Gaussian Error Linear Units"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-02db4c4791c64027a10c58e77942d1ac", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We argue that optimization plays a crucial role in the generalization of deep learning models through implicit regularization.
We do this by demonstrating that generalization ability is not controlled by network size but rather by some other implicit control. We then demonstrate how changing the empirical optimization procedure can improve generalization, even if actual optimization quality is not affected. We do so by studying the geometry of the parameter space of deep networks, and devising an optimization algorithm attuned to this geometry.", "targets": "Geometry of Optimization and Implicit Regularization in Deep Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d0de5041b24446768cff747c68e8b80e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We develop new representations for the L\u00e9vy measures of the beta and gamma processes. These representations are manifested in terms of an infinite sum of well-behaved (proper) beta and gamma distributions. Further, we demonstrate how these infinite sums may be truncated in practice, and explicitly characterize truncation errors. We also perform an analysis of the characteristics of posterior distributions, based on the proposed decompositions. The decompositions provide new insights into the beta and gamma processes (and their generalizations), and we demonstrate how the proposed representation unifies some properties of the two. This paper is meant to provide a rigorous foundation for and new perspectives on L\u00e9vy processes, as these are of increasing importance in machine learning.", "targets": "L\u00e9vy Measure Decompositions for the Beta and Gamma Processes"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f17c31bb37694959bd6e80bb06f43c59", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Modelling problems containing a mixture of Boolean and numerical variables is a long-standing interest of Artificial Intelligence. However, performing inference and learning in hybrid domains is a particularly daunting task. The ability to model this kind of domain is crucial in \u201clearning to design\u201d tasks, that is, learning applications where the goal is to learn from examples how to perform automatic de novo design of novel objects. In this paper we present Structured Learning Modulo Theories, a max-margin approach for learning in hybrid domains based on Satisfiability Modulo Theories, which allows us to combine Boolean reasoning and optimization over continuous linear arithmetical constraints. We validate our method on artificial and real-world scenarios.", "targets": "Structured Learning Modulo Theories"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3e2bc2449dcb44c183ae1db177c21dc3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We investigate the pertinence of methods from algebraic topology for text data analysis. These methods enable the development of mathematically-principled isometric-invariant mappings from a set of vectors to a document embedding, which is stable with respect to the geometry of the document in the selected metric space.
In this work, we evaluate the utility of these topology-based document representations in traditional NLP tasks, specifically document clustering and sentiment classification. We find that the embeddings do not benefit text analysis. In fact, performance is worse than simple techniques like tf-idf, indicating that the geometry of the document does not provide enough variability for classification on the basis of topic or sentiment in the chosen datasets.", "targets": "Does the Geometry of Word Embeddings Help Document Classification? A Case Study on Persistent Homology Based Representations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bda3792257fa48d0ad3b58f9b41f5d4e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Microarrays have made it possible to simultaneously monitor the expression profiles of thousands of genes under various experimental conditions. They are used to identify the co-expressed genes in specific cells or tissues that are actively used to make proteins. This method is used to analyse gene expression, an important task in bioinformatics research. Cluster analysis of gene expression data has proved to be a useful tool for identifying co-expressed genes, biologically relevant groupings of genes and samples. In this paper we applied K-Means with Automatic Generations of Merge Factor for ISODATA (AGMFI). Though AGMFI has been applied for clustering of Gene Expression Data, the proposed Enhanced Automatic Generations of Merge Factor for ISODATA (EAGMFI) algorithm overcomes the drawbacks of AGMFI in terms of specifying the optimal number of clusters and initialization of good cluster centroids. Experimental results on Gene Expression Data show that the proposed EAGMFI algorithm could identify compact clusters and perform well in terms of the Silhouette Coefficient cluster measure.", "targets": "Performance Analysis of Enhanced Clustering Algorithm for Gene Expression Data"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-79ac5a82201645fe8ee129523ff9ccca", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we present algorithms that perform gradient ascent of the average reward in a partially observable Markov decision process (POMDP). These algorithms are based on GPOMDP, an algorithm introduced in a companion paper (Baxter & Bartlett, 2001), which computes biased estimates of the performance gradient in POMDPs. The algorithm\u2019s chief advantages are that it uses only one free parameter \u03b2 \u2208 [0, 1), which has a natural interpretation in terms of bias-variance trade-off, it requires no knowledge of the underlying state, and it can be applied to infinite state, control and observation spaces. We show how the gradient estimates produced by GPOMDP can be used to perform gradient ascent, both with a traditional stochastic-gradient algorithm, and with an algorithm based on conjugate-gradients that utilizes gradient information to bracket maxima in line searches.
Experimental results are presented illustrating both the theoretical results of Baxter and Bartlett (2001) on a toy problem, and practical aspects of the algorithms on a number of more realistic problems.", "targets": "Experiments with Infinite-Horizon, Policy-Gradient Estimation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b897672ecf784c0c800967cf87d510ac", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The choice of architecture of an artificial neural network (ANN) is still a challenging task that users face every time. It greatly affects the accuracy of the built network. In fact there is no optimal method that is applicable to various implementations at the same time. In this paper we propose a method to construct ANNs based on clustering, which resolves the problems of random and ad hoc approaches for multilayer ANN architecture. Our method can be applied to regression problems. Experimental results obtained with different datasets reveal the efficiency of our method.", "targets": "Towards a constructive multilayer perceptron for regression task using non-parametric clustering. A case study of Photo-Z redshift reconstruction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-58f8ce7c39944b4d988455d99de10622", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Unsupervised models of dependency parsing typically require large amounts of clean, unlabeled data plus gold-standard part-of-speech tags. Adding indirect supervision (e.g. language universals and rules) can help, but we show that obtaining small amounts of direct supervision\u2014here, partial dependency annotations\u2014provides a strong balance between zero and full supervision. We adapt the unsupervised ConvexMST dependency parser to learn from partial dependencies expressed in the Graph Fragment Language. With less than 24 hours of total annotation, we obtain 7% and 17% absolute improvement in unlabeled dependency scores for English and Spanish, respectively, compared to the same parser using only universal grammar constraints.", "targets": "Fill it up: Exploiting partial dependency annotations in a minimum spanning tree parser"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b0f6c8a6a515433db7ca1ced3015f157", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "One of the benefits of belief networks and influence diagrams is that so much knowledge is captured in the graphical structure. In particular, statements of conditional irrelevance (or independence) can be verified in time linear in the size of the graph. To resolve a particular inference query or decision problem, only some of the possible states and probability distributions must be specified, the \"requisite information.\" This paper presents a new, simple, and efficient \"Bayes-ball\" algorithm which is well-suited to both new students of belief networks and state of the art implementations.
The Bayes-ball algorithm determines irrelevant sets and requisite information more efficiently than existing methods, and is linear in the size of the graph for belief networks and influence diagrams.", "targets": "Bayes-Ball: The Rational Pastime (for Determining Irrelevance and Requisite Information in Belief Networks and Influence Diagrams)"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-03faaf76750142eabfd9e4d2b1804e47", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We describe a novel non-parametric statistical hypothesis test of relative dependence between a source variable and two candidate target variables. Such a test enables us to determine whether one source variable is significantly more dependent on a first target variable or a second. Dependence is measured via the Hilbert-Schmidt Independence Criterion (HSIC), resulting in a pair of empirical dependence measures (source-target 1, source-target 2). We test whether the first dependence measure is significantly larger than the second. Modeling the covariance between these HSIC statistics leads to a provably more powerful test than the construction of independent HSIC statistics by subsampling. The resulting test is consistent and unbiased, and (being based on U-statistics) has favorable convergence properties. The test can be computed in quadratic time, matching the computational complexity of standard empirical HSIC estimators. The effectiveness of the test is demonstrated on several real-world problems: we identify language groups from a multilingual corpus, and we prove that tumor location is more dependent on gene expression than chromosomal imbalances. Source code is available for download at https://github.com/wbounliphone/reldep.", "targets": "A low variance consistent test of relative dependency"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-197633d2289b492486ca6ee2a26f3d00", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The increase in the use of microblogging came along with the rapid growth of short linguistic data. On the other hand deep learning is considered to be the new frontier to extract meaningful information out of large amounts of raw data in an automated manner. In this study, we engaged these two emerging fields to come up with a robust language identifier on demand, namely Language Identification Engine (LIDE). As a result, we achieved 95.12% accuracy on the Discriminating between Similar Languages (DSL) Shared Task 2015 dataset, which is comparable to the maximum reported accuracy of 95.54% achieved so far.", "targets": "LIDE: Language Identification from Text Documents"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b7f85b9e18ce409ba612669981b31f6e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We use some of the largest order statistics of the random projections of a reference signal to construct a binary embedding that is adapted to signals correlated with that signal.
The embedding is characterized from the analytical standpoint and shown to provide improved performance on tasks such as classification in a reduced-dimensionality space. Keywords\u2014Binary Embeddings, Random projections", "targets": "Binary adaptive embeddings from order statistics of random projections"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0e2930c0767949afbab2554294698afa", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Query-based video summarization is the task of creating a brief visual trailer, which captures the parts of the video (or a collection of videos) that are most relevant to the user-issued query. In this paper, we propose an unsupervised label propagation approach for this task. Our approach effectively captures the multimodal semantics of queries and videos using state-of-the-art deep neural networks and creates a summary that is both semantically coherent and visually attractive. We describe the theoretical framework of our graph-based approach and empirically evaluate its effectiveness in creating relevant and attractive trailers. Finally, we showcase example video trailers generated by our system.", "targets": "Semantic Video Trailers"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-87e6ea65df144dcfbcfbb1d64ccbd8c8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider the hashing mechanism for constructing binary embeddings, that involves pseudo-random projections followed by nonlinear (sign function) mappings. The pseudo-random projection is described by a matrix, where not all entries are independent random variables but instead a fixed \u201cbudget of randomness\u201d is distributed across the matrix. Such matrices can be efficiently stored in sub-quadratic or even linear space, provide reduction in randomness usage (i.e. number of required random values), and very often lead to computational speed ups. We prove several theoretical results showing that projections via various structured matrices followed by nonlinear mappings accurately preserve the angular distance between input high-dimensional vectors. To the best of our knowledge, these results are the first that give theoretical ground for the use of general structured matrices in the nonlinear setting. In particular, they generalize previous extensions of the Johnson-Lindenstrauss lemma and prove the plausibility of the approach that was so far only heuristically confirmed for some special structured matrices. Consequently, we show that many structured matrices can be used as an efficient information compression mechanism. Our findings build a better understanding of certain deep architectures, which contain randomly weighted and untrained layers, and yet achieve high performance on different learning tasks. We empirically verify our theoretical findings and show the dependence of learning via structured hashed projections on the performance of a neural network as well as a nearest neighbor classifier.", "targets": "Binary embeddings with structured hashed projections"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ee2e8fdd5c0b4e56b41f7d8f7757cba7", "definition": "In this task, you are given a part of an article.
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose the concept of Action-Related Place (ARPlace) as a powerful and flexible representation of task-related place in the context of mobile manipulation. ARPlace represents robot base locations not as a single position, but rather as a collection of positions, each with an associated probability that the manipulation action will succeed when located there. ARPlaces are generated using a predictive model that is acquired through experience-based learning, and take into account the uncertainty the robot has about its own location and the location of the object to be manipulated. When executing the task, rather than choosing one specific goal position based only on the initial knowledge about the task context, the robot instantiates an ARPlace, and bases its decisions on this ARPlace, which is updated as new information about the task becomes available. To show the advantages of this least-commitment approach, we present a transformational planner that reasons about ARPlaces in order to optimize symbolic plans. Our empirical evaluation demonstrates that using ARPlaces leads to more robust and efficient mobile manipulation in the face of state estimation uncertainty on our simulated robot.", "targets": "Learning and Reasoning with Action-Related Places for Robust Mobile Manipulation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-cfb43c18389c4728bde10c41ac2b7a61", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The paper studies the problem of recovering a spectrally sparse object from a small number of time domain samples. Specifically, the object of interest with ambient dimension n is assumed to be a mixture of r complex multi-dimensional sinusoids, while the underlying frequencies can assume any value in the unit disk. Conventional compressed sensing paradigms suffer from the basis mismatch issue when imposing a discrete dictionary on the Fourier representation. To address this problem, we develop a novel nonparametric algorithm, called enhanced matrix completion (EMaC), based on structured matrix completion. The algorithm starts by arranging the data into a low-rank enhanced form with multi-fold Hankel structure, then attempts recovery via nuclear norm minimization. Under mild incoherence conditions, EMaC allows perfect recovery as soon as the number of samples exceeds the order of O(r log n). We also show that, in many instances, accurate completion of a low-rank multi-fold Hankel matrix is possible when the number of observed entries is proportional to the information theoretical limits (except for a logarithmic gap). The robustness of EMaC against bounded noise and its applicability to super resolution are further demonstrated by numerical experiments.", "targets": "Spectral Compressed Sensing via Structured Matrix Completion"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-44e7e7c831914db2848f2ab15bf9e4a2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We discuss the problem of constructing inference procedures which can manipulate uncertainties measured in ordinal scales and fulfil the property of strict monotonicity of conclusions.
The class of A-valuations of plausibility is considered where operations based only on information about linear ordering of plausibility values are used. In this class the modus ponens generating function fulfilling the property of strict monotonicity of conclusions is introduced. 1 STABILITY OF DECISIONS IN INFERENCE PROCEDURES Human judgements about plausibility, truth, certainty values of premises, rules and facts are usually qualitative and measured in ordinal scales. Representation of these judgements by numbers from the interval L = [0, 1] or L = [0, 100] and using over these numbers quantitative operations such as multiplication, addition and so on is not always correct. Let's consider an example. Let R1 and R2 be two rules of some expert system: R1: If A1 then H1, pv(R1), (1) R2: If A2 then H2, pv(R2), (2) where pv(R1) and pv(R2) are the plausibility, certainty, truth values of rules measured in some linearly ordered scale L, for example L = [0, 1]. Often plausibilities of conclusions are calculated by: pv(H1) = pv(R1) * pv(A1), (3) pv(H2) = pv(R2) * pv(A2), (4) where pv(A1) and pv(A2) are the plausibilities of premises and * is some T-norm, for example the multiplication operation (Godo, Lopez de Mantaras et al. 1988; Hall 1990; Trillas, Valverde 1985; Valverde, Trillas 1985; Forsyth 1984). Generally the plausibility of a conclusion can be calculated by means of a modus ponens generating function mpgf: pv(H1) = mpgf(pv(A1), pv(R1)). Let in (1)-(4) the qualitative information about plausibility values be the following: pv(A1) < pv(A2) < pv(R2) < pv(R1), (5) that is, the plausibility values of premises are less than the plausibility values of rules, the plausibility value of A1 is less than the plausibility value of A2 and the plausibility value of rule R2 is less than the plausibility value of rule R1. Let these plausibility values be interpreted as the following quantitative values from L = (0, 1]: pv(A1) = 0.3 < pv(A2) = 0.4 < pv(R2) = 0.6 <", "targets": "Modus Ponens Generating Function in the Class of A-valuations of Plausibility"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5b3e5442b78f497dbee9dbbe2e0ebf5a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Verbs play a critical role in the meaning of sentences, but these ubiquitous words have received little attention in recent distributional semantics research. We introduce SimVerb-3500, an evaluation resource that provides human ratings for the similarity of 3,500 verb pairs. SimVerb-3500 covers all normed verb types from the USF free-association database, providing at least three examples for every VerbNet class. This broad coverage facilitates detailed analyses of how syntactic and semantic phenomena together influence human understanding of verb meaning. Further, with significantly larger development and test sets than existing benchmarks, SimVerb-3500 enables more robust evaluation of representation learning architectures and promotes the development of methods tailored to verbs.
We hope that SimVerb-3500 will enable a richer understanding of the diversity and complexity of verb semantics and guide the development of systems that can effectively represent and interpret this meaning.", "targets": "SimVerb-3500: A Large-Scale Evaluation Set of Verb Similarity"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-361d8e77433842a6bced5e6a72533415", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study online boosting, the task of converting any weak online learner into a strong online learner. Based on a novel and natural definition of weak online learnability, we develop two online boosting algorithms. The first algorithm is an online version of boost-by-majority. By proving a matching lower bound, we show that this algorithm is essentially optimal in terms of the number of weak learners and the sample complexity needed to achieve a specified accuracy. This optimal algorithm is not adaptive, however. Using tools from online loss minimization, we derive an adaptive online boosting algorithm that is also parameter-free, but not optimal. Both algorithms work with base learners that can handle example importance weights directly, as well as by rejection sampling examples with probability defined by the booster. Results are complemented with an experimental study.", "targets": "Optimal and Adaptive Algorithms for Online Boosting"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2b28d616bf174952a1eb13c0d2869904", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural network models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of multimodal data for human machine translation and image description generation. Our systems obtained the best results for both tasks according to the automatic evaluation metrics BLEU and METEOR.", "targets": "Does Multimodality Help Human and Machine for Translation and Image Captioning?"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-693e482e4120406ba17df2b50114926b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a system that produces sentential descriptions of video: who did what to whom, and where and how they did it. Action class is rendered as a verb, participant objects as noun phrases, properties of those objects as adjectival modifiers in those noun phrases, spatial relations between those participants as prepositional phrases, and characteristics of the event as prepositional-phrase adjuncts and adverbial modifiers. 
Extracting the information needed to render these linguistic entities requires an approach to event recognition that recovers object tracks, the track-to-role assignments, and changing body posture.", "targets": "Video In Sentences Out"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d9ea70d4c7dd408ca0625be85aaec691", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Extracting per-frame features using convolutional neural networks for real-time processing of video data is currently mainly performed on powerful GPU-accelerated workstations and compute clusters. However, there are many applications such as smart surveillance cameras that require or would benefit from on-site processing. To this end, we propose and evaluate a novel algorithm for change-based evaluation of CNNs for video data recorded with a static camera setting, exploiting the spatio-temporal sparsity of pixel changes. We achieve an average speed-up of 8.6\u00d7 over a cuDNN baseline on a realistic benchmark with a negligible accuracy loss of less than 0.1% and no retraining of the network. The resulting energy efficiency is 10\u00d7 higher than per-frame evaluation and reaches an equivalent of 328GOp/s/W on the Tegra X1 platform.", "targets": "CBinfer: Change-Based Inference for Convolutional Neural Networks on Video Data"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3ae9d6bb878748d8845c61f84966d53b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Non-technical losses (NTL) such as electricity theft cause significant harm to our economies, as in some countries they may range up to 40% of the total electricity distributed. Detecting NTLs requires costly on-site inspections. Accurate prediction of NTLs for customers using machine learning is therefore crucial. To date, related research largely ignores that the two classes of regular and non-regular customers are highly imbalanced, that NTL proportions may change, and mostly considers small data sets, often not allowing the results to be deployed in production. In this paper, we present a comprehensive approach to assess three NTL detection models for different NTL proportions in large real world data sets of 100Ks of customers: Boolean rules, fuzzy logic and Support Vector Machine. This work has resulted in appreciable results that are about to be deployed in a leading industry solution. We believe that the considerations and observations made in this contribution are necessary for future smart meter research in order to report their effectiveness on imbalanced and large real world data sets.", "targets": "Large-Scale Detection of Non-Technical Losses in Imbalanced Data Sets"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dde6666cf7d14a5a8344d1a75d07bf4f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We investigate the problem of sentence-level supporting argument detection from relevant documents for user-specified claims. A dataset containing claims and associated citation articles is collected from online debate website idebate.org. 
We then manually label sentence-level supporting arguments from the documents along with their types as STUDY, FACTUAL, OPINION, or REASONING. We further characterize arguments of different types, and explore whether leveraging type information can facilitate the supporting arguments detection task. Experimental results show that a LambdaMART (Burges, 2010) ranker that uses features informed by argument types yields better performance than the same ranker trained without type information.", "targets": "Understanding and Detecting Supporting Arguments of Diverse Types"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-972c681f770a43db80faee8533a36003", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study propagation of the RegularGcc global constraint. This ensures that each row of a matrix of decision variables satisfies a Regular constraint, and each column satisfies a Gcc constraint. On the negative side, we prove that propagation is NP-hard even under some strong restrictions (e.g. just 3 values, just 4 states in the automaton, or just 5 columns to the matrix). On the positive side, we identify two cases where propagation is fixed parameter tractable. In addition, we show how to improve propagation over a simple decomposition into separate Regular and Gcc constraints by identifying some necessary but insufficient conditions for a solution. We enforce these conditions with some additional weighted row automata. Experimental results demonstrate the potential of these methods on some standard benchmark problems.", "targets": "The RegularGcc Matrix Constraint"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-cdac41f58a8f4267ac715a088163a95f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In neural machine translation (NMT), generation of a target word depends on both source and target contexts. We find that source contexts have a direct impact on the adequacy of a translation while target contexts affect the fluency. Intuitively, generation of a content word should rely more on the source context and generation of a functional word should rely more on the target context. Due to lack of effective control on the influence from source and target contexts, conventional NMT tends to yield fluent but inadequate translations. To address this problem, we propose to use context gates to dynamically control the ratios at which source and target contexts contribute to the generation of target words. In this way, we can enhance the adequacy of NMT while keeping the fluency unchanged. Experiments show that our approach significantly improves upon a standard attention-based NMT system by +2.3 BLEU points.", "targets": "Context Gates for Neural Machine Translation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0a6850bc26464d5fb43bbc132ee8ae8e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Automatic question generation aims to generate questions from a text passage where the generated questions can be answered by certain sub-spans of the given passage. Traditional methods mainly use rigid heuristic rules to transform a sentence into related questions. 
In this work, we propose to apply the neural encoder-decoder model to generate meaningful and diverse questions from natural language sentences. The encoder reads the input text and the answer position, to produce an answer-aware input representation, which is fed to the decoder to generate an answer-focused question. We conduct a preliminary study on neural question generation from text with the SQuAD dataset, and the experiment results show that our method can produce fluent and diverse questions.", "targets": "Neural Question Generation from Text: A Preliminary Study"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-339c225e4fb748f0989d9822979d037f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Most conventional sentence similarity methods only focus on similar parts of two input sentences, and simply ignore the dissimilar parts, which usually give us some clues and semantic meanings about the sentences. In this work, we propose a model to take into account both the similarities and dissimilarities by decomposing and composing lexical semantics over sentences. The model represents each word as a vector, and calculates a semantic matching vector for each word based on all words in the other sentence. Then, each word vector is decomposed into a similar component and a dissimilar component based on the semantic matching vector. After this, a two-channel CNN model is employed to capture features by composing the similar and dissimilar components. Finally, a similarity score is estimated over the composed feature vectors. Experimental results show that our model gets the state-of-the-art performance on the answer sentence selection task, and achieves a comparable result on the paraphrase identification task.", "targets": "Sentence Similarity Learning by Lexical Decomposition and Composition"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6d56764a9d664dc1836f335245a9158e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "To model time-varying nonlinear temporal dynamics in sequential data, a recurrent network capable of varying and adjusting the recurrence depth between input intervals is examined. The recurrence depth is extended by several intermediate hidden state units, and the weight parameters involved in determining these units are dynamically calculated. The motivation behind the paper lies in overcoming a deficiency in Recurrent Highway Networks and improving their performances, which are currently at the forefront of RNNs: 1) Determining the appropriate recurrent depth in RHN for different tasks is a huge burden, and just setting it to a large number is computationally wasteful with possible repercussions in terms of performance degradation and high latency. Expanding on the idea of adaptive computation time (ACT), with the use of an elastic gate in the form of a rectified exponentially decreasing function taking as arguments the previous hidden state and input, the proposed model is able to evaluate the appropriate recurrent depth for each input. The rectified gating function enables the most significant intermediate hidden state updates to come early such that significant performance gain is achieved early. 
2) Updating the weights from those of the previous intermediate layer offers a richer representation than the use of shared weights across all intermediate recurrence layers. The weight update procedure is just an expansion of the idea underlying hypernetworks. To substantiate the effectiveness of the proposed network, we conducted three experiments: regression on synthetic data, human activity recognition, and language modeling on the Penn Treebank dataset. The proposed networks showed better performance than other state-of-the-art recurrent networks in all three experiments.", "targets": "Early Improving Recurrent Elastic Highway Network"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-de884ae9d2f642eea2512138c6548453", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The zero-shot paradigm exploits vector-based word representations extracted from text corpora with unsupervised methods to learn general mapping functions from other feature spaces onto word space, where the words associated to the nearest neighbours of the mapped vectors are used as their linguistic labels. We show that the neighbourhoods of the mapped elements are strongly polluted by hubs, vectors that tend to be near a high proportion of items, pushing their correct labels down the neighbour list. After illustrating the problem empirically, we propose a simple method to correct it by taking the proximity distribution of potential neighbours across many mapped vectors into account. We show that this correction leads to consistent improvements in realistic zero-shot experiments in the cross-lingual, image labeling and image retrieval domains.", "targets": "IMPROVING ZERO-SHOT LEARNING BY MITIGATING THE HUBNESS PROBLEM"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9f6b6bed5517440a959ca45f24bde517", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider a transfer-learning problem by using the parameter transfer approach, where a suitable parameter of feature mapping is learned through one task and applied to another objective task. Then, we introduce the notion of the local stability of parametric feature mapping and parameter transfer learnability, and thereby derive a learning bound for parameter transfer algorithms. As an application of parameter transfer learning, we discuss the performance of sparse coding in self-taught learning. Although self-taught learning algorithms with plentiful unlabeled data often show excellent empirical performance, they have received little theoretical analysis. In this paper, we also provide the first theoretical learning bound for self-taught learning.", "targets": "Learning Bound for Parameter Transfer Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f1e9b5ef18d84a1da23a81ae21b932b4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Human intelligence lies in the algorithm, the nature of algorithms lies in classification, and classification is equal to outlier detection. 
This paper is based on its earlier unpublished edition (2009), which discussed an application of the \u05e4 (pe) algorithm to outlier detection for time series data. The \u05e4 algorithm, also named the RDD algorithm, originated from the study of general AI. Combined with it, designed modules can be used to realize various kinds of tasks. A primary framework concerned with the mind through the \u05e4 algorithm has been constructed in prior works. In this concise paper, we omit the background and minor descriptions of the early edition, and directly discuss the main contents, including the longest k\u2013turn subsequence problem, curve-type outliers, future directions and related comments. In the section \u201cPast Present\u201d, we keep all prior conclusions, though a few might be out of date.", "targets": "\u05e4 Algorithm: its past present, future present and comments"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f360bfa34c7949f28278144647eac9c5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Information discounting plays an important role in the theory of belief functions and, generally, in information fusion. Nevertheless, neither classical uniform discounting nor contextual discounting can model certain use cases, notably temporal discounting. In this article, new contextual discounting schemes, conservative, proportional and optimistic, are proposed. Some properties of these discounting operations are examined. Classical discounting is shown to be a special case of these schemes. Two motivating cases are discussed: modelling of source reliability and application to temporal discounting.", "targets": "Conservative, Proportional and Optimistic Contextual Discounting in the Belief Functions Theory"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-03af2870434544089a82a1d0c1129a30", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We report on our system for the shared task on discrimination of similar languages (DSL 2016). The system uses only byte representations in a deep residual network (ResNet). The system, named ResIdent, is trained only on the data released with the task (closed training). We obtain 84.88% accuracy on subtask A, 68.80% accuracy on subtask B1, and 69.80% accuracy on subtask B2. A large difference in accuracy on development data can be observed with relatively minor changes in our network\u2019s architecture and hyperparameters. We therefore expect fine-tuning of these parameters to yield higher accuracies.", "targets": "Byte-based Language Identification with Deep Convolutional Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3c9860fb24b142beb9b71d501bf25604", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recent studies have shown that deep neural networks (DNN) are vulnerable to adversarial samples: maliciously-perturbed samples crafted to yield incorrect model outputs. Such attacks can severely undermine DNN systems, particularly in security-sensitive settings. 
It was observed that an adversary could easily generate adversarial samples by making a small perturbation on irrelevant feature dimensions that are unnecessary for the current classification task. To overcome this problem, we introduce a defensive mechanism called DeepMask. By identifying and removing unnecessary features in a DNN model, DeepMask limits the capacity an attacker can use to generate adversarial samples and therefore increases the robustness against such inputs. Compared with other defensive approaches, DeepMask is easy to implement and computationally efficient. Experimental results show that DeepMask can increase the performance of state-of-the-art DNN models against adversarial samples.", "targets": "DEEPMASK: MASKING DNN MODELS FOR ROBUSTNESS AGAINST ADVERSARIAL SAMPLES"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4b0334d6d105432b8ee1048031131698", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In a recent paper, we have shown that Plan Recognition over STRIPS can be formulated and solved using Classical Planning heuristics and algorithms (Ramirez and Geffner 2009). In this work, we show that this formulation subsumes the standard formulation of Plan Recognition over libraries through a compilation of libraries into STRIPS theories. The libraries correspond to AND/OR graphs that may be cyclic and where children of AND nodes may be partially ordered. These libraries include Context-Free Grammars as a special case, where the Plan Recognition problem becomes a parsing with missing tokens problem. Plan Recognition over the standard libraries becomes Planning problems that can be easily solved by any modern planner, while recognition over more complex libraries, including Context\u2013Free Grammars (CFGs), illustrates limitations of current Planning heuristics and suggests improvements that may be relevant in other Planning problems too.", "targets": "Heuristics for Planning, Plan Recognition and Parsing (Written: June 2009, Published: May 2016)"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-82363befa41e45808ac49de2c2defdbe", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 1MB (461x smaller than AlexNet). 
The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet", "targets": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b1fe68b718524c1b839d6d52084dd231", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Continuous-time Bayesian networks is a natural structured representation language for multicomponent stochastic processes that evolve continuously over time. Despite the compact representation, inference in such models is intractable even in relatively simple structured networks. Here we introduce a mean field variational approximation in which we use a product of inhomogeneous Markov processes to approximate a distribution over trajectories. This variational approach leads to a globally consistent distribution, which can be efficiently queried. Additionally, it provides a lower bound on the probability of observations, thus making it attractive for learning tasks. We provide the theoretical foundations for the approximation, an efficient implementation that exploits the wide range of highly optimized ordinary differential equations (ODE) solvers, experimentally explore characterizations of processes for which this approximation is suitable, and show applications to a large-scale real-world inference problem.", "targets": "Mean Field Variational Approximation for Continuous-Time Bayesian Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-768cd49df94c41b6a2b54a52899aa20d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In most practical problems of classifier learning, the training data suffers from label noise. Hence, it is important to understand how robust a learning algorithm is to such label noise. This paper presents some theoretical analysis to show that many popular decision tree algorithms are robust to symmetric label noise under large sample size. We also present some sample complexity results which provide some bounds on the sample size for the robustness to hold with a high probability. Through extensive simulations we illustrate this robustness.", "targets": "On the Robustness of Decision Tree Learning under Label Noise"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-eef3f5334a624f36b4f7efc4d12d2eda", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recurrent neural network (RNN) based character-level language models (CLMs) are extremely useful for modeling unseen words by nature. However, their performance is generally much worse than that of word-level language models (WLMs), since CLMs need to consider a longer history of tokens to properly predict the next one. We address this problem by proposing hierarchical RNN architectures, which consist of multiple modules with different clock rates. Despite the multi-clock structures, the input and output layers operate with the character-level clock, which allows the existing RNN CLM training approaches to be directly applicable without any modifications. 
Our CLM models show better perplexity than Kneser-Ney (KN) 5-gram WLMs on the One Billion Word Benchmark with only 2% of the parameters. Also, we present real-time character-level end-to-end speech recognition examples on the Wall Street Journal (WSJ) corpus, where replacing traditional mono-clock RNN CLMs with the proposed models results in better recognition accuracies even though the number of parameters is reduced to 30%.", "targets": "Character-Level Language Modeling with Hierarchical Recurrent Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dc3fbb7aaf664fde9848150df8a1bd04", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "With ever-increasing computational demand for deep learning, it is critical to investigate the implications of the numeric representation and precision of DNN model weights and activations on computational efficiency. In this work, we explore unconventional narrow-precision floating-point representations as they relate to inference accuracy and efficiency to steer the improved design of future DNN platforms. We show that inference using these custom numeric representations on production-grade DNNs, including GoogLeNet and VGG, achieves an average speedup of 7.6\u00d7 with less than 1% degradation in inference accuracy relative to a state-of-the-art baseline platform representing the most sophisticated hardware using single-precision floating point. To facilitate the use of such customized precision, we also present a novel technique that drastically reduces the time required to derive the optimal precision configuration.", "targets": "DEEP NEURAL NETWORKS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9bc5bae4e5124f6ab12d51597008d647", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We describe a dynamic programming algorithm for computing the marginal distribution of discrete probabilistic programs. This algorithm takes a functional interpreter for an arbitrary probabilistic programming language and turns it into an efficient marginalizer. Because direct caching of sub-distributions is impossible in the presence of recursion, we build a graph of dependencies between sub-distributions. This factored sum-product network makes (potentially cyclic) dependencies between subproblems explicit, and corresponds to a system of equations for the marginal distribution. We solve these equations by fixed-point iteration in topological order. We illustrate this algorithm on examples used in teaching probabilistic models, computational cognitive science research, and game theory.", "targets": "A Dynamic Programming Algorithm for Inference in Recursive Probabilistic Programs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-53079f1ffd5d4b87b730ac6b7bdc0c97", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Ontology-based data access is concerned with querying incomplete data sources in the presence of domain-specific knowledge provided by an ontology. A central notion in this setting is that of an ontology-mediated query, which is a database query coupled with an ontology. 
In this paper, we study several classes of ontology-mediated queries, where the database queries are given as some form of conjunctive query and the ontologies are formulated in description logics or other relevant fragments of first-order logic, such as the guarded fragment and the unary-negation fragment. The contributions of the paper are three-fold. First, we characterize the expressive power of ontology-mediated queries in terms of fragments of disjunctive datalog. Second, we establish intimate connections between ontology-mediated queries and constraint satisfaction problems (CSPs) and their logical generalization, MMSNP formulas. Third, we exploit these connections to obtain new results regarding (i) first-order rewritability and datalog-rewritability of ontology-mediated queries, (ii) P/NP dichotomies for ontology-mediated queries, and (iii) the query containment problem for ontology-mediated queries.", "targets": "Ontology-based Data Access: A Study through Disjunctive Datalog, CSP, and MMSNP"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-28ba2c1cee404b4e9ea719a918c823ce", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper discusses models for dialogue state tracking using recurrent neural networks (RNN). We present experiments on the standard dialogue state tracking (DST) dataset, DSTC2 [7]. On the one hand, RNN models became the state-of-the-art models in DST; on the other hand, most state-of-the-art DST models are only turn-based and require dataset-specific preprocessing (e.g. DSTC2-specific) in order to achieve such results. We implemented two architectures which can be used in an incremental setting and require almost no preprocessing. We compare their performance to the benchmarks on DSTC2 and discuss their properties. With only trivial preprocessing, the performance of our models is close to the state-of-the-art results.", "targets": "Recurrent Neural Networks for Dialogue State Tracking"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c09912aef1984f77af50529f61941a4d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize scenes well, they also learn an intermediate representation not aligned across modalities, which is undesirable for cross-modal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality.", "targets": "Cross-Modal Scene Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4df1c491c41346db94ad787bb7ffb119", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "The paper analyzes the interaction between humans and computers in terms of response time in solving the image-based CAPTCHA. In particular, the analysis focuses on the attitude of the different Internet users in easily solving four different types of image-based CAPTCHAs which include facial expressions like: animated character, old woman, surprised face, worried face. To pursue this goal, an experiment is carried out involving 100 Internet users in solving the four types of CAPTCHAs, differentiated by age, Internet experience, and education level. The response times are collected for each user. Then, association rules are extracted from the user data to evaluate, by statistical analysis, the dependence of the response time in solving the CAPTCHA on age, education level and experience in internet usage. The results implicitly capture the users\u2019 psychological states, showing in which states the users are more sensitive. This proves to be a novel and meaningful analysis relative to the state of the art.", "targets": "Analysis of the Human-Computer Interaction on the Example of Image-based CAPTCHA by Association Rule Mining"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-063121feb6eb4f0eaf05fa1ae71fcbc7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recent price-of-anarchy analyses of games of complete information suggest that coarse correlated equilibria, which characterize outcomes resulting from no-regret learning dynamics, have near-optimal welfare. This work provides two main technical results that lift this conclusion to games of incomplete information, a.k.a., Bayesian games. First, near-optimal welfare in Bayesian games follows directly from the smoothness-based proof of near-optimal welfare in the same game when the private information is public. Second, no-regret learning dynamics converge to Bayesian coarse correlated equilibrium in these incomplete information games. These results are enabled by interpretation of a Bayesian game as a stochastic game of complete information.", "targets": "No-Regret Learning in Repeated Bayesian Games"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2ea1a3fec7774a70b67603cb7d3e8e7a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The introduction of loopy belief propagation (LBP) revitalized the application of graphical models in many domains. Many recent works present improvements on the basic LBP algorithm in an attempt to overcome convergence and local optima problems. Notable among these are convexified free energy approximations that lead to inference procedures with provable convergence and quality properties. However, empirically LBP still outperforms most of its convex variants in a variety of settings, as we also demonstrate here. Motivated by this fact we seek convexified free energies that directly approximate the Bethe free energy. We show that the proposed approximations compare favorably with state-of-the-art convex free energy approximations.", "targets": "Convexifying the Bethe Free Energy"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c3d3ec9d1add4304a61667fc9dec1cc4", "definition": "In this task, you are given a part of an article. 
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We show that the herding procedure of Welling (2009b) takes exactly the form of a standard convex optimization algorithm\u2014namely a conditional gradient algorithm minimizing a quadratic moment discrepancy. This link enables us to invoke convergence results from convex optimization and to consider faster alternatives for the task of approximating integrals in a reproducing kernel Hilbert space. We study the behavior of the different variants through numerical simulations. Our experiments shed more light on the learning bias of herding: they indicate that while we can improve over herding on the task of approximating integrals, the original herding algorithm more often approaches the maximum entropy distribution.", "targets": "On the Equivalence between Herding and Conditional Gradient Algorithms"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fa30a59569b145998d9e13bd7955904f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "One of the key issues in both natural language understanding and generation is the appropriate processing of Multiword Expressions (MWEs). MWEs pose a huge problem to precise language processing due to their idiosyncratic nature and diversity in lexical, syntactical and semantic properties. The semantics of a MWE cannot be obtained by combining the semantics of its constituents. Therefore, the formalism of semantic clustering is often viewed as an instrument for extracting MWEs, especially for resource-constrained languages like Bengali. The present semantic clustering approach contributes to locating clusters of the synonymous noun tokens present in the document. These clusters in turn help measure the similarity between the constituent words of a potential candidate phrase using a vector space model and judge the suitability of this phrase to be a MWE. In this experiment, we apply the semantic clustering approach to noun-noun bigram MWEs, though it can be extended to any type of MWE. In parallel, the well-known statistical models, namely Point-wise Mutual Information (PMI), Log Likelihood Ratio (LLR), Significance function are also employed to extract MWEs from the Bengali corpus. The comparative evaluation shows that the semantic clustering approach outperforms all other competing statistical models. As a byproduct of this experiment, we have started developing a standard lexicon in Bengali that serves as a productive Bengali linguistic thesaurus.", "targets": "Identifying Bengali Multiword Expressions using Semantic Clustering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bb684bb84bd7499899ad62af1751bf0f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Agents of general intelligence deployed in real-world scenarios must adapt to ever-changing environmental conditions. While such adaptive agents may leverage engineered knowledge, they will require the capacity to construct and evaluate knowledge themselves from their own experience in a bottom-up, constructivist fashion. This position paper builds on the idea of encoding knowledge as temporally extended predictions through the use of general value functions. 
Prior work has focused on learning predictions about externally derived signals about a task or environment (e.g. battery level, joint position). Here we advocate that the agent should also predict internally generated signals regarding its own learning process\u2014for example, an agent\u2019s confidence in its learned predictions. Finally, we suggest how such information would be beneficial in creating an introspective agent that is able to learn to make good decisions in a complex, changing world. Predictive Knowledge. The ability to autonomously construct knowledge directly from experience produced by an agent interacting with the world is a key requirement for general intelligence. One particularly promising form of knowledge that is grounded in experience is predictive knowledge\u2014here defined as a collection of multi-step predictions about observable outcomes that are contingent on different ways of behaving. Much like scientific knowledge, predictive knowledge can be maintained and updated by making a prediction, executing a procedure, and observing the outcome and updating the prediction\u2014a process completely independent of human intervention. Experience-grounded predictions are a powerful resource to guide decision making in environments which are too complex or dynamic to be exhaustively anticipated by an engineer [1,2]. A value function from the field of reinforcement learning is one way of representing predictive knowledge. Value functions are a learned or computed mapping from state to the long-term expectation of future reward. Sutton et al. recently introduced a generalization of value functions that makes it possible to specify general predictive questions [1]. These general value functions (GVFs) specify a prediction target as the expected discounted sum of future signals of interest (cumulants) observed while the agent selects actions according to some decision making policy. Temporal discounting is also generalized in GVFs from the conventional exponential weighting of future cumulants to an arbitrary, state-conditional weighting of future cumulants. This enables GVFs to specify a rich", "targets": "Introspective Agents: Confidence Measures for General Value Functions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9b2639ba0e7e4acca572da73b3804408", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Development of a proper names pronunciation lexicon is usually a manual effort which cannot be avoided. Grapheme to phoneme (G2P) conversion modules, in the literature, are usually rule-based and work best for non-proper names in a particular language. Proper names are foreign to a G2P module. We follow an optimization approach to enable automatic construction of a proper names pronunciation lexicon. The idea is to construct a small orthogonal set of words (basis) which can span the set of names in a given database. We propose two algorithms for the construction of this basis. The transcription lexicon of all the proper names in a database can be produced by the manual transcription of only the small set of basis words. We first construct a cost function and show that the minimization of the cost function results in a basis. We derive conditions for convergence of this cost function and validate them experimentally on a very large proper name database. 
Experiments show the transcription can be achieved by transcribing a small set of basis words. The algorithms proposed are generic and independent of language; however, performance is better if the proper names have the same origin, namely, the same language or geographical region.", "targets": "Basis Identification for Automatic Creation of Pronunciation Lexicon for Proper Names"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-31b2f35ceef847b8b2ee9b3eee52c6ee", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Type-level word embeddings use the same set of parameters to represent all instances of a word regardless of its context, ignoring the inherent lexical ambiguity in language. Instead, we embed semantic concepts (or synsets) as defined in WordNet and represent a word token in a particular context by estimating a distribution over relevant semantic concepts. We use the new, context-sensitive embeddings in a model for predicting prepositional phrase (PP) attachments and jointly learn the concept embeddings and model parameters. We show that using context-sensitive embeddings improves the accuracy of the PP attachment model by 5.4% absolute points, which amounts to a 34.4% relative reduction in errors.", "targets": "Using Ontology-Grounded Token Embeddings To Predict Prepositional Phrase Attachments"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-708b5701e72544cb92f61e526f8bd850", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet\u2019s URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.", "targets": "SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a543509e031a4324959d6ff9e8c04cc3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce a new multi-modal task for computer systems, posed as a combined vision-language comprehension challenge: identifying the most suitable text describing a scene, given several similar options. 
Accomplishing the task entails demonstrating comprehension beyond just recognizing \u201ckeywords\u201d (or key-phrases) and their corresponding visual concepts. Instead, it requires an alignment between the representations of the two modalities that achieves a visually-grounded \u201cunderstanding\u201d of various linguistic elements and their dependencies. This new task also admits an easy-to-compute and well-studied metric: the accuracy in detecting the true target among the decoys. The paper makes several contributions: an effective and extensible mechanism for generating decoys from (human-created) image captions; an instance of applying this mechanism, yielding a large-scale machine comprehension dataset (based on the COCO images and captions) that we make publicly available; human evaluation results on this dataset, informing a performance upper-bound; and several baseline and competitive learning approaches that illustrate the utility of the proposed task and dataset in advancing both image and language comprehension. We also show that, in a multi-task learning setting, the performance on the proposed task is positively correlated with the end-to-end task of image captioning.", "targets": "Understanding Image and Text Simultaneously: a Dual Vision-Language Machine Comprehension Task"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-34daf77114af408b9d276f2c9a496399", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We envision a machine learning service provider facing a continuous stream of problems with the same input domain, but with output domains that may differ. Clients present the provider with problems implicitly, by labeling a few example inputs, and then ask the provider to train models which reasonably extend their labelings to novel inputs. The provider wants to avoid constraining its users to a set of common labels, so it does not assume any particular correspondence between labels for a new task and labels for previously encountered tasks. To perform well in this setting, the provider needs a representation of the input domain which, in expectation, permits effective models for new problems to be learned efficiently from a small number of examples. While this bears a resemblance to settings considered in previous work on multitask and lifelong learning, our non-assumption of inter-task label correspondence leads to a novel algorithm: Lifelong Learner of Discriminative Representations (LLDR), which explicitly minimizes a proxy for the intra-task small-sample generalization error. We examine the relative benefits of our approach on a diverse set of real-world datasets in three significant scenarios: representation learning, multitask learning and lifelong learning.", "targets": "Lifelong Learning of Discriminative Representations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6934f98acd1a4a5eae6112b98da0800f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The Rao-Blackwell theorem is utilized to analyze and improve the scalability of inference in large probabilistic models that exhibit symmetries. A novel marginal density estimator is introduced and shown both analytically and empirically to outperform standard estimators by several orders of magnitude. 
The developed theory and algorithms apply to a broad class of probabilistic models including statistical relational models considered not susceptible to lifted probabilistic inference. Introduction Many successful applications of artificial intelligence research are based on large probabilistic models. Examples include Markov logic networks (Richardson and Domingos 2006), conditional random fields (Lafferty, McCallum, and Pereira 2001) and, more recently, deep learning architectures (Hinton, Osindero, and Teh 2006; Bengio and LeCun 2007; Poon and Domingos 2011). Especially the models one encounters in the statistical relational learning (SRL) literature often have joint distributions spanning millions of variables and features. Indeed, these models are so large that, at first sight, inference and learning seem daunting. For many of these models, however, scalable approximate and, to a lesser extent, exact inference algorithms do exist. Most notably, there has been a strong focus on lifted inference algorithms, that is, algorithms that group indistinguishable variables and features during inference. For an overview we refer the reader to (Kersting 2012). Lifted algorithms facilitate efficient inference in numerous large probabilistic models for which inference is NP-hard in principle. We are concerned with the estimation of marginal probabilities based on a finite number of sample points. We show that the feasibility of inference and learning in large and highly symmetric probabilistic models can be explained with the Rao-Blackwell theorem from the field of statistics. The theory and algorithms do not directly depend on the syntactical nature of the relational models such as arity of predicates and number of variables per formula but only on the given automorphism group of the probabilistic model, and are applicable to classes of probabilistic models much broader than the class of statistical relational models. Consider an experiment where a coin is flipped n times. While a frequentist would assume the flips to be i.i.d., a Bayesian typically makes the weaker assumption of exchangeability \u2013 that the probability of an outcome sequence only depends on the number of \u201cheads\u201d in the sequence and not on their order. Under the non-i.i.d. assumption, a possible corresponding graphical model is the fully connected graph with n nodes and high treewidth. The actual number of parameters required to specify the distribution, however, is only n+1, one for each sequence with 0 \u2264 k \u2264 n \u201cheads.\u201d Bruno de Finetti was the first to realize that such a sequence of random variables can be (re-)parameterized as a unique mixture of n+1 independent urn processes (de Finetti 1938). It is this notion of a parameterization as a mixture of urn processes that is at the heart of our work. A direct application of de Finetti\u2019s results, however, is often impossible since not all variables are exchangeable in realistic probabilistic models. Motivated by the intuition of exchangeability, we show that arbitrary model symmetries allow us to re-parameterize the distribution as a mixture of independent urn processes where each urn consists of isomorphic joint assignments. Most importantly, we develop a novel Rao-Blackwellized estimator that implicitly estimates the fewer parameters of the simpler mixture model and, based on these, computes the marginal densities. 
We identify situations in which the application of the Rao-Blackwell estimator is tractable. In particular, we show that the Rao-Blackwell estimator is always linear-time computable for single-variable marginal density estimation. By invoking the Rao-Blackwell theorem, we show that the mean squared error of the novel estimator is at least as small as that of the standard estimator and strictly smaller under non-trivial symmetries of the probabilistic model. Moreover, we prove that for estimates based on sample points drawn from a Markov chain M, the bias of the Rao-Blackwell estimator is governed by the mixing time of the quotient Markov chain whose convergence behavior is superior to that of M. We present empirical results verifying that the Rao-Blackwell estimator always outperforms the standard estimator by up to several orders of magnitude, irrespective of the model structure. Indeed, we show that the results of the novel estimator resemble those typically observed in lifted inference papers. For the first time such a performance is shown for an SRL model with a transitivity formula.", "targets": "Symmetry-Aware Marginal Density Estimation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-617d79144c3f40eabc5e08af8aa068b5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In classical machine learning, regression is treated as a black box process of identifying a suitable function from a hypothesis set without attempting to gain insight into the mechanism connecting inputs and outputs. In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal as it allows one to understand and generalize results. This paper proposes a novel type of function learning network, called equation learner (EQL), that can learn analytical expressions and is able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient-based training. Due to sparsity regularization, concise interpretable expressions can be obtained. Often the true underlying source expression is identified. INTRODUCTION The quality of a model is typically measured by its ability to generalize from a training set to previously unseen data from the same distribution. In regression tasks generalization essentially boils down to interpolation if the training data is sufficiently dense. As long as models are selected correctly, i. e. in a way to not overfit the data, the regression problem is well understood and can \u2013 at least conceptually \u2013 be considered solved. However, when working with data from real-world devices, e. g. controlling a robotic arm, interpolation might not be sufficient. It could happen that future data lies outside of the training domain, e. g. when the arm is temporarily operated outside of its specifications. For the sake of robustness and safety it is desirable in such a case to have a regression model that continues to make good predictions, or at least does not fail catastrophically. This setting, which we call extrapolation generalization, is the topic of the present paper. We are particularly interested in regression tasks for systems that can be described by real-valued analytic expressions, e. g. mechanical systems such as a pendulum or a robotic arm. 
These are typically governed by a highly nonlinear function but it is nevertheless possible, in principle, to infer their behavior on an extrapolation domain from their behavior elsewhere. We make two main contributions: 1) a new type of network that can learn analytical expressions and is able to extrapolate to unseen domains and 2) a model selection strategy tailored to the extrapolation setting. The following section describes the setting of regression and extrapolation. Afterwards we introduce our method and discuss the architecture, its training, and its relation to prior art. We present our results in the Section Experimental evaluation and close with conclusions. REGRESSION AND EXTRAPOLATION We consider a multivariate regression problem with a training set {(x1, y1), ..., (xN, yN)} with x \u2208 R^n, y \u2208 R^m. Because our main interest lies in extrapolation in the context of learning the dynamics of physical systems, we assume the data originates from an unknown analytical function (or system of functions), \u03c6 : R^n \u2192 R^m, with additive zero-mean noise, \u03be, i.e. y = \u03c6(x) + \u03be and E\u03be = 0. The function \u03c6 may, for instance, reflect a system of ordinary differential equations that govern the movements of a robot arm or the like. The general task is to learn a function \u03c8 : R^n \u2192 R^m that approximates the true functional relation as well as possible in the squared loss sense, i.e. achieves minimal expected error E\u2016\u03c8(x) \u2212 \u03c6(x)\u2016^2. In practice, we only have particular examples of the function values available and measure the quality of predicting in terms of the empirical error on", "targets": "EXTRAPOLATION AND LEARNING EQUATIONS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-42426d3ccd814aed8c97e834908ce0d0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "Rare diseases are very difficult to identify among a large number of other possible diagnoses. Better availability of patient data and improvements in machine learning algorithms empower us to tackle this problem computationally. In this paper, we target one such rare disease \u2013 cardiac amyloidosis. We aim to automate the process of identifying potential cardiac amyloidosis patients with the help of machine learning algorithms and also to learn the most predictive factors. With the help of experienced cardiologists, we prepared a gold standard with 73 positive (cardiac amyloidosis) and 197 negative instances. We achieved a high average cross-validation F1 score of 0.98 using an ensemble machine learning classifier. Some of the predictive variables were: Age and Diagnosis of cardiac arrest, chest pain, congestive heart failure, hypertension, primary open-angle glaucoma, and shoulder arthritis. Further studies are needed to validate the accuracy of the system across an entire health system and its generalizability to other diseases.", "targets": "A Bootstrap Machine Learning Approach to Identify Rare Disease Patients from Electronic Health Records"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-46539e08d9a3400a81d4fcb26b8df68c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Hidden Markov Models (HMMs) are learning methods for pattern recognition. The probabilistic HMMs have been one of the most used techniques based on the Bayesian model. First-order probabilistic HMMs were adapted to the theory of belief functions such that Bayesian probabilities were replaced with mass functions. In this paper, we present a second-order Hidden Markov Model using belief functions. Previous works on belief HMMs have focused on first-order HMMs. We extend them to the second-order model.", "targets": "Second-order Belief Hidden Markov Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7d015deaaa1147d39a8d5b412dcd3cbe", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Reasoning and inference are central to human and artificial intelligence. Modeling inference in human language is very challenging. With the availability of large annotated data (Bowman et al., 2015), it has recently become feasible to train neural network based inference models, which have been shown to be very effective. In this paper, we present a new state-of-the-art result, achieving an accuracy of 88.6% on the Stanford natural language inference dataset. Unlike the previous top models that use very complicated network architectures, we first demonstrate that carefully designing sequential inference models based on chain LSTMs can outperform all previous models. Based on this, we further show that by explicitly considering recursive architectures in both local inference modeling and inference composition, we achieve additional improvement. 
In particular, incorporating syntactic parsing information contributes to our best result\u2014it further improves the performance even when added to the already very strong model.", "targets": "Enhanced LSTM for Natural Language Inference"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1869881353fd4ccbbdcaf8f1724994fa", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The Kinect skeleton tracker is able to achieve considerable human body tracking performance in a convenient and low-cost manner. However, the tracker often captures unnatural human poses, such as discontinuous and vibrating motions, when self-occlusions occur. A majority of approaches tackle this problem by using multiple Kinect sensors in a workspace. Combination of the measurements from different sensors is then conducted in a Kalman filter framework, or an optimization problem is formulated for sensor fusion. However, these methods usually require heuristics to measure the reliability of the measurements observed from each Kinect sensor. In this paper, we developed a method to improve the Kinect skeleton using a single Kinect sensor, in which a supervised learning technique was employed to correct unnatural tracking motions. Specifically, deep recurrent neural networks were used for improving the joint positions and velocities of the Kinect skeleton, and three methods were proposed to integrate the refined positions and velocities for further enhancement. Moreover, we suggested a novel measure to evaluate the naturalness of captured motions. We evaluated the proposed approach by comparison with the ground truth obtained using a commercial optical marker-based motion capture system.", "targets": "Tracking Human-like Natural Motion Using Deep Recurrent Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5aa229a0810a41bd92c036fc12464a65", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a sequence labeling framework with a secondary training objective, learning to predict surrounding words for every word in the dataset. This language modeling objective incentivises the framework to learn general-purpose patterns of semantic and syntactic composition, which are also useful for improving accuracy on different sequence labeling tasks. The architecture was evaluated on 8 datasets, covering the tasks of error detection in learner texts, named entity recognition, chunking and POS-tagging. The novel language modeling objective provided consistent performance improvements on every benchmark, without requiring any additional annotated or unannotated data.", "targets": "Semi-supervised Multitask Learning for Sequence Labeling"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ef3fe689762a4169819e5d39b2e5a391", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider the problem of sparse variable selection in nonparametric additive models, with prior knowledge of the structure among the covariates used to encourage those variables within a group to be selected jointly. 
Previous works either study group sparsity in the parametric setting (e.g., group lasso), or address the problem in the nonparametric setting without exploiting the structural information (e.g., sparse additive models). In this paper, we present a new method, called group sparse additive models (GroupSpAM), which can handle group sparsity in additive models. We generalize the \u21131/\u21132 norm to Hilbert spaces as the sparsity-inducing penalty in GroupSpAM. Moreover, we derive a novel thresholding condition for identifying the functional sparsity at the group level, and propose an efficient block coordinate descent algorithm for constructing the estimate. We demonstrate by simulation that GroupSpAM substantially outperforms the competing methods in terms of support recovery and prediction accuracy in additive models, and also conduct a comparative experiment on a real breast cancer dataset.", "targets": "Group Sparse Additive Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d7528a61f13a49ffaab0d374de6a0808", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Understanding open-domain text is one of the primary challenges in natural language processing (NLP). Machine comprehension benchmarks evaluate the system\u2019s ability to understand text based on the text content only. In this work, we investigate machine comprehension on MCTest, a question answering (QA) benchmark. Prior work is mainly based on feature engineering approaches. We come up with a neural network framework, named hierarchical attention-based convolutional neural network (HABCNN), to address this task without any manually designed features. Specifically, we explore HABCNN for this task by two routes: one through traditional joint modeling of document, question and answer, and the other through textual entailment. HABCNN employs an attention mechanism to detect key phrases, key sentences and key snippets that are relevant to answering the question. Experiments show that HABCNN outperforms prior deep learning approaches by a large margin.", "targets": "Attention-Based Convolutional Neural Network for Machine Comprehension"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ff6d79f015cf45aba7557dd6ddfb90d3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Model interpretation is one of the key aspects of the model evaluation process. The explanation of the relationship between model variables and outputs is relatively easy for statistical models, such as linear regressions, thanks to the availability of model parameters and their statistical significance. For \u201cblack box\u201d models, such as random forest, this information is hidden inside the model structure. This work presents an approach for computing feature contributions for random forest classification models. It allows for the determination of the influence of each variable on the model prediction for an individual instance. By analysing feature contributions for a training dataset, the most significant variables can be determined, and their typical contributions towards predictions made for individual classes, i.e., class-specific feature contribution \u201cpatterns\u201d, are discovered. 
These patterns represent a standard behaviour of the model and allow for an additional assessment of the model reliability for new data. Interpretation of feature contributions for two UCI benchmark datasets shows the potential of the proposed methodology. The robustness of results is demonstrated through an extensive analysis of feature contributions calculated for a large number of generated random forest models.", "targets": "Interpreting random forest classification models using a feature contribution method"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-307fab2d9131487694f4d5a40f48dd8a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a general-purpose tagger based on convolutional neural networks (CNN), used for both composing word vectors and encoding context information. The CNN tagger is robust across different tagging tasks: without task-specific tuning of hyper-parameters, it achieves state-of-the-art results in part-of-speech tagging, morphological tagging and supertagging. The CNN tagger is also robust against the out-of-vocabulary problem; it performs well on artificially unnormalized texts.", "targets": "A General-Purpose Tagger with Convolutional Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0a1c71da809e444f9d002b745186dbfc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we examine the benefit of performing named entity recognition (NER) and co-reference resolution on an English and a Greek corpus used for text segmentation. The aim here is to examine whether the combination of text segmentation and information extraction can be beneficial for the identification of the various topics that appear in a document. NER was performed manually on the English corpus and was compared with the output produced by publicly available annotation tools, while an existing tool was used for the Greek corpus. The produced annotations from both corpora were manually corrected and enriched to cover four types of named entities. Co-reference resolution, i.e., substitution of every reference to the same instance with the same named entity identifier, was subsequently performed. The evaluation, using five text segmentation algorithms for the English corpus and four for the Greek corpus, leads to the conclusion that the benefit highly depends on the segment\u2019s topic, the number of named entity instances appearing in it, as well as the segment\u2019s length.", "targets": "Text Segmentation using Named Entity Recognition and Co-reference Resolution in English and Greek Texts"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7346ba6d53b74f8988deb9c7dc69d563", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Algorithms that generate computer game content require game design knowledge. We present an approach to automatically learn game design knowledge for level design from gameplay videos. 
We further demonstrate how the acquired design knowledge can be used to generate sections of game levels. Our approach involves parsing video of people playing a game to detect the appearance of patterns of sprites and utilizing machine learning to build a probabilistic model of sprite placement. We show how rich game design information can be automatically parsed from gameplay videos and represented as a set of generative probabilistic models. We use Super Mario Bros. as a proof of concept. We evaluate our approach on a measure of playability and stylistic similarity to the original levels as represented in the gameplay videos.", "targets": "Toward Game Level Generation from Gameplay Videos"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-58e682c56afb479fb9b03497c8efeb2b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recently, bidirectional recurrent neural network language models (bi-RNNLMs) have been shown to outperform standard, unidirectional, recurrent neural network language models (uni-RNNLMs) on a range of speech recognition tasks. This indicates that future word context information beyond the word history can be useful. However, bi-RNNLMs pose a number of challenges as they make use of the complete previous and future word context information. This impacts both training efficiency and their use within a lattice rescoring framework. In this paper these issues are addressed by proposing a novel neural network structure, succeeding word RNNLMs (su-RNNLMs). Instead of using a recurrent unit to capture the complete future word contexts, a feedforward unit is used to model a finite number of succeeding (future) words. This model can be trained much more efficiently than bi-RNNLMs and can also be used for lattice rescoring. Experimental results on a meeting transcription task (AMI) show that the proposed model consistently outperforms uni-RNNLMs and yields only a slight degradation compared to bi-RNNLMs in N-best rescoring. Additionally, performance improvements can be obtained using lattice rescoring and subsequent confusion network decoding.", "targets": "FUTURE WORD CONTEXTS IN NEURAL NETWORK LANGUAGE MODELS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6e08d7ab7a9a42348e91167872657c87", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In the last two decades, modal and description logics have been applied to numerous areas of computer science, including knowledge representation, formal verification, database theory, distributed computing and, more recently, semantic web and ontologies. For this reason, the problem of automated reasoning in modal and description logics has been thoroughly investigated. In particular, many approaches have been proposed for efficiently handling the satisfiability of the core normal modal logic Km, and of its notational variant, the description logic ALC. Although simple in structure, Km/ALC is computationally very hard to reason on, its satisfiability being PSpace-complete. In this paper we start exploring the idea of performing automated reasoning tasks in modal and description logics by encoding them into SAT, so that they can be handled by state-of-the-art SAT tools; as with most previous approaches, we begin our investigation from the satisfiability in Km. 
We propose an efficient encoding, and we test it on an extensive set of benchmarks, comparing the approach with the main state-of-the-art tools available. Although the encoding is necessarily worst-case exponential, from our experiments we notice that, in practice, this approach can handle most or all of the problems which are within the reach of the other approaches, with performances which are comparable with, or even better than, those of the current state-of-the-art tools. 1. Motivations and Goals In the last two decades, modal and description logics have provided an essential framework for many applications in numerous areas of computer science, including artificial intelligence, formal verification, database theory, distributed computing and, more recently, semantic web and ontologies. For this reason, the problem of automated reasoning in modal and description logics has been thoroughly investigated (e.g., Fitting, 1983; Ladner, 1977; Baader & Hollunder, 1991; Halpern & Moses, 1992; Baader, Franconi, Hollunder, Nebel, & Profitlich, 1994; Massacci, 2000). In particular, the research in modal and description logics followed two parallel routes until the seminal work of Schild (1991), which proved that the core modal logic Km and the core description logic ALC are notational variants of each other. Since then, analogous results have been produced for a number of other logics, so that nowadays the two research lines have mostly merged into one research flow. Many approaches have been proposed for efficiently reasoning in modal and description logics, starting from the problem of checking the satisfiability in the core normal modal logic Km and in its notational variant, the description logic ALC (hereafter simply \u201cKm\u201d). We classify them as follows. \u2022 The \u201cclassic\u201d tableau-based approach (Fitting, 1983; Baader & Hollunder, 1991; Massacci, 2000) is based on the construction of propositional tableau branches, which are recursively expanded on demand by generating successor nodes in a candidate Kripke model. Kris (Baader & Hollunder, 1991; Baader et al., 1994), Crack (Franconi, 1998), and LWB (Balsiger, Heuerding, & Schwendimann, 1998) were among the main representative tools of this approach. \u2022 The DPLL-based approach (Giunchiglia & Sebastiani, 1996, 2000) differs from the previous one mostly in the fact that a Davis-Putnam-Logemann-Loveland (DPLL) procedure, which treats the modal subformulas as propositions, is used instead of the classic propositional tableaux procedure at each nesting level of the modal operators. KSAT (Giunchiglia & Sebastiani, 1996), ESAT (Giunchiglia, Giunchiglia, & Tacchella, 2002) and *SAT (Tacchella, 1999) are the representative tools of this approach. These two approaches merged into the \u201cmodern\u201d tableaux-based approach, which has been extended to work with more expressive description logics and to provide more sophisticated reasoning functions. Among the tools employing this approach, we recall FaCT/FaCT++ and DLP (Horrocks & Patel-Schneider, 1999), and Racer (Haarslev & Moeller, 2001). \u2022 In the translational approach (Hustadt & Schmidt, 1999; Areces, Gennari, Heguiabehere, & de Rijke, 2000) the modal formula is encoded into first-order logic (FOL), and the encoded formula can be decided efficiently by a FOL theorem prover (Areces et al., 2000). Mspass (Hustadt, Schmidt, & Weidenbach, 1999) is the most representative tool of this approach. 
\u2022 The CSP-based approach (Brand, Gennari, & de Rijke, 2003) differs from the tableaux-based and DPLL-based ones mostly in the fact that a CSP (Constraint Satisfaction Problem) engine is used instead of tableaux/DPLL. KCSP is the only representative tool of this approach. \u2022 In the Inverse-method approach (Voronkov, 1999, 2001), a search procedure is based on the \u201cinverted\u201d version of a sequent calculus (which can be seen as a modalized version of propositional resolution). KK (Voronkov, 1999) is the only representative tool of this approach. \u2022 In the Automata-theoretic approach, (a symbolic representation based on BDDs \u2013 Binary Decision Diagrams \u2013 of) a tree automaton accepting all the tree models of the input formula is implicitly built and checked for emptiness (Pan, Sattler, & Vardi, 2002; Pan & Vardi, 2003). KBDD (Pan & Vardi, 2003) is the only representative tool of this approach. 1. Notice that there is no universal agreement on the terminology \u201ctableaux-based\u201d and \u201cDPLL-based\u201d. E.g., tools like FaCT, DLP, and Racer are most often called \u201ctableau-based\u201d, although they use a DPLL-like algorithm instead of propositional tableaux for handling the propositional component of reasoning (Horrocks, 1998; Patel-Schneider, 1998; Horrocks & Patel-Schneider, 1999; Haarslev & Moeller, 2001).", "targets": "Automated Reasoning in Modal and Description Logics via SAT Encoding: the Case Study of Km/ALC-Satisfiability"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-495050437549449daff7013838f5bad9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Due to its ability to combine multiple base clusterings into a probably better and more robust clustering, the ensemble clustering technique has been attracting increasing attention in recent years. Despite this significant success, one limitation of most of the existing ensemble clustering methods is that they generally treat all base clusterings equally regardless of their reliability, which makes them vulnerable to low-quality base clusterings. Although some efforts have been made to (globally) evaluate and weight the base clusterings, these methods tend to view each base clustering as an individual and neglect the local diversity of clusters inside the same base clustering. It remains an open problem how to evaluate the reliability of clusters and exploit the local diversity in the ensemble to enhance the consensus performance, without access to data features or specific assumptions on data distribution. To address this, in this paper, we propose a novel ensemble clustering approach based on ensemble-driven cluster uncertainty estimation and a local weighting strategy. In particular, the uncertainty of each cluster is estimated by considering the cluster labels in the entire ensemble via an entropic criterion. A novel ensemble-driven cluster validity measure is introduced, and a locally weighted co-association matrix is presented to serve as a summary for the ensemble of diverse clusters. With the local diversity in ensembles exploited, two novel consensus functions are further proposed. 
Extensive experiments on a variety of real-world datasets demonstrate the superiority of the proposed approach over the state-of-the-art.", "targets": "Locally Weighted Ensemble Clustering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2f2243639b62431e929c2ac043b34579", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Learning effective configurations in computer systems without hand-crafting models for every parameter is a long-standing problem. This paper investigates the use of deep reinforcement learning for runtime parameters of cloud databases under latency constraints. Cloud services serve up to thousands of concurrent requests per second and can adjust critical parameters by leveraging performance metrics. In this work, we use continuous deep reinforcement learning to learn optimal cache expirations for HTTP caching in content delivery networks. To this end, we introduce a technique for asynchronous experience management called delayed experience injection, which facilitates delayed reward and next-state computation in concurrent environments where measurements are not immediately available. Evaluation results show that our approach based on normalized advantage functions and asynchronous CPU-only training outperforms a statistical estimator.", "targets": "Learning Runtime Parameters in Computer Systems with Delayed Experience Injection"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a2ba660cd7b943849d352b8d507d7538", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "As a well-known NP-hard problem, the Three-Index Assignment Problem (AP3) has attracted considerable research effort on developing heuristics. However, existing heuristics either obtain less competitive solutions or consume too much running time. In this paper, a new heuristic named Approximate Muscle guided Beam Search (AMBS) is developed to achieve a good trade-off between solution quality and running time. By combining the approximate muscle with beam search, the solution space size can be significantly decreased, and thus the time for searching for a solution can be sharply reduced. Extensive experimental results on the benchmark indicate that the new algorithm is able to obtain solutions with competitive quality and that it can be employed on large-scale instances. This work not only proposes a new efficient heuristic, but also provides a promising method for improving the efficiency of beam search.", "targets": "Approximate Muscle Guided Beam Search for Three-Index Assignment Problem"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-73a9eda755074f239e7393e374a261c2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Pattern classification systems are commonly used in adversarial applications, like biometric authentication, network intrusion detection, and spam filtering, in which data can be purposely manipulated by humans to undermine their operation. 
As this adversarial scenario is not taken into account by classical design methods, pattern classification systems may exhibit vulnerabilities whose exploitation may severely affect their performance, and consequently limit their practical utility. Extending pattern classification theory and design methods to adversarial settings is thus a novel and very relevant research direction, which has not yet been pursued in a systematic way. In this paper, we address one of the main open issues: evaluating, at the design phase, the security of pattern classifiers, namely, the performance degradation under potential attacks they may incur during operation. We propose a framework for the empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature, and give examples of its use in three real applications. Reported results show that security evaluation can provide a more complete understanding of the classifier\u2019s behavior in adversarial environments, and lead to better design choices.", "targets": "Security Evaluation of Pattern Classifiers under Attack"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bfb05f4c26ac41ef8733a1375bb7ae8a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A key part of any evolutionary algorithm is fitness evaluation. When fitness evaluations are corrupted by noise, as happens in many real-world problems as a consequence of various types of uncertainty, a strategy is needed in order to cope with this. Resampling is one of the most common strategies, whereby each solution is evaluated many times in order to reduce the variance of the fitness estimates. When evaluating the performance of a noisy optimisation algorithm, a key consideration is the stopping condition for the algorithm. A frequently used stopping condition in runtime analysis, known as \u201cFirst Hitting Time\u201d, is to stop the algorithm as soon as it encounters the optimal solution. However, this is unrealistic for real-world problems, as if the optimal solution were already known, there would be no need to search for it. This paper argues that the use of First Hitting Time, despite being a commonly used approach, is significantly flawed and overestimates the quality of many algorithms in real-world cases, where the optimum is not known in advance and has to be genuinely searched for. A better alternative is to measure the quality of the solution an algorithm returns after a fixed evaluation budget, i.e., to focus on final solution quality. This paper argues that focussing on final solution quality is more realistic and demonstrates cases where the results produced by each algorithm evaluation method lead to very different conclusions regarding the quality of each noisy optimisation algorithm.", "targets": "Evaluating Noisy Optimisation Algorithms: First Hitting Time is Problematic"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-51c269a98d5f4947b926709f1f587716", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Item Response Theory (IRT) allows for measuring the ability of machine learning models as compared to a human population. However, it is difficult to create a large dataset to train the ability of deep neural network models (DNNs). 
We propose Crowd-Informed Fine-Tuning (CIFT) as a new training process, where a pre-trained model is fine-tuned with a specialized supplemental training set obtained via IRT model-fitting on a large set of crowdsourced response patterns. With CIFT we can leverage the specialized set of data obtained through IRT to inform parameter tuning in DNNs. We experiment with two loss functions in CIFT to represent (i) memorization of fine-tuning items and (ii) learning a probability distribution over potential labels that is similar to the crowdsourced distribution over labels, to simulate crowd knowledge. Our results show that CIFT improves ability for a state-of-the-art DNN model for Recognizing Textual Entailment (RTE) tasks and is generalizable to a large-scale RTE test set.", "targets": "CIFT: Crowd-Informed Fine-Tuning to Improve Machine Learning Ability"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-893b13f8bdb94adda9cac7d7888d69c5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We have developed and trained a convolutional neural network to automatically and simultaneously segment the optic disc, fovea and blood vessels. Fundus images were normalized before segmentation was performed to enforce consistency in background lighting and contrast. For every effective point in the fundus image, our algorithm extracted three channels of input from the point\u2019s neighbourhood and forwarded the response across the 7-layer network. The output layer consists of four neurons, representing background, optic disc, fovea and blood vessels. On average, our segmentation correctly classified 92.68% of the ground truths (on the testing set from the DRIVE database). The highest accuracy achieved on a single image was 94.54%, the lowest 88.85%. A single convolutional neural network can be used not just to segment blood vessels, but also the optic disc and fovea, with good accuracy.", "targets": "Segmentation of optic disc, fovea and retinal vasculature using a single convolutional neural network"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3b8883ec8e4047338f7d7c1aa9be8790", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "It is natural and efficient to use Natural Language (NL) for transferring knowledge from a human to a robot. Recently, research on using NL to support human-robot cooperation (HRC) has received increasing attention in several domains, such as robotic daily assistance, robotic health caregiving, intelligent manufacturing, autonomous navigation and robot social accompaniment. However, a high-level review that can reveal the realization process and the latest methodologies of using NL to facilitate HRC is missing. In this review, a comprehensive summary of the methodology development of natural-language-facilitated human-robot cooperation (NLC) is made. We first analyzed the driving forces behind NLC developments. Then, following their temporal order of realization, we reviewed the three main steps of NLC: human NL understanding, knowledge representation, and knowledge-world mapping. 
Finally, based on our review and perspectives, potential research trends in NLC are discussed.", "targets": "Methodologies realizing natural-language-facilitated human-robot cooperation: A review"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9889796ba8504e44aaf33f6a36bd55db", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Kernel-based approaches for sequence classification have been successfully applied to a variety of domains, including text categorization, image classification, speech analysis, biological sequence analysis, time series and music classification, where they show some of the most accurate results. Typical kernel functions for sequences in these domains (e.g., bag-of-words, mismatch, or subsequence kernels) are restricted to discrete univariate (i.e. one-dimensional) string data, such as sequences of words in text analysis, codeword sequences in image analysis, or nucleotide or amino acid sequences in DNA and protein sequence analysis. However, original sequence data are often of a real-valued multivariate nature, i.e. they are not univariate and discrete as required by typical k-mer based sequence kernel functions. In this work, we consider the problem of multivariate sequence classification (e.g., classification of multivariate music sequences, or multidimensional protein sequence representations). To this end, we extend univariate kernel functions typically used in sequence domains and propose an efficient multivariate similarity kernel method (MVDFQ-SK) based on (1) a direct feature quantization (DFQ) of each sequence dimension in the original real-valued multivariate sequences and (2) applying novel multivariate discrete kernel measures on these multivariate discrete DFQ sequence representations to more accurately capture similarity relationships among sequences and improve classification performance. Experiments using the proposed MVDFQ-SK kernel method show excellent classification performance on three challenging music classification tasks as well as on protein sequence classification, with significant 25-40% improvements over univariate kernel methods and existing state-of-the-art sequence classification methods.", "targets": "Efficient multivariate kernels for sequence classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fd3d0d9cee5641faa80687e1ec5261e6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Contextual bandit learning is an increasingly popular approach to optimizing recommender systems via user feedback, but can be slow to converge in practice due to the need for exploring a large feature space. In this paper, we propose a coarse-to-fine hierarchical approach for encoding prior knowledge that drastically reduces the amount of exploration required. Intuitively, user preferences can be reasonably embedded in a coarse low-dimensional feature space that can be explored efficiently, requiring exploration in the high-dimensional space only as necessary. We introduce a bandit algorithm that explores within this coarse-to-fine spectrum, and prove performance guarantees that depend on how well the coarse space captures the user\u2019s preferences. 
We demonstrate substantial improvement over conventional bandit algorithms through extensive simulation as well as a live user study in the setting of personalized news recommendation.", "targets": "Hierarchical Exploration for Accelerating Contextual Bandits"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bee7571f11fa4ed9a1c0df43bd8b9d53", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "While the optimization problem behind deep neural networks is highly non-convex, it is frequently observed in practice that training deep networks seems possible without getting stuck in suboptimal points. It has been argued that this is the case as all local minima are close to being globally optimal. We show that this is (almost) true: in fact, almost all local minima are globally optimal for a fully connected network with squared loss and analytic activation function, given that the number of hidden units of one layer of the network is larger than the number of training points and the network structure from this layer on is pyramidal.", "targets": "The Loss Surface of Deep and Wide Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-41b3474fe93b4b24bcc8ec26ff18673f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "To appear in Theory and Practice of Logic Programming (TPLP). GNU Prolog is a general-purpose implementation of the Prolog language, which distinguishes itself from most other systems by being, above all else, a native-code compiler which produces standalone executables which don\u2019t rely on any byte-code emulator or meta-interpreter. Other aspects which stand out include the explicit organization of the Prolog system as a multipass compiler, where intermediate representations are materialized, in Unix compiler tradition. GNU Prolog also includes an extensible and high-performance finite domain constraint solver, integrated with the Prolog language but implemented using independent lower-level mechanisms. This article discusses the main issues involved in designing and implementing GNU Prolog: requirements, system organization, performance and portability issues, as well as its position with respect to other Prolog system implementations and the ISO standardization initiative.", "targets": "On the Implementation of GNU Prolog"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7526ccc63b684d6bba542f800bc95b70", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We formulate and study a fundamental search and detection problem, Schedule Optimization, motivated by a variety of real-world applications, ranging from monitoring content changes on the web, social networks, and user activities to detecting failures on large systems with many individual machines. We consider a large system consisting of many nodes, where each node has its own rate of generating new events, or items. 
A monitoring application can probe a small number of nodes at each step, and our goal is to compute a probing schedule that minimizes the expected number of undiscovered items in the system, or equivalently, minimizes the expected time to discover a new item in the system. We study the Schedule Optimization problem both for deterministic and randomized memoryless algorithms. We provide lower bounds on the cost of an optimal schedule and construct close-to-optimal schedules with rigorous mathematical guarantees. Finally, we present an adaptive algorithm that starts with no prior information on the system and converges to the optimal memoryless algorithms by adapting to observed data.", "targets": "Optimizing Static and Adaptive Probing Schedules for Rapid Event Detection"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-570df8b3885849448bc315c1a91906dd", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "User-machine interaction is important for spoken content retrieval. For text content retrieval, the user can easily scan through and select from a list of retrieved items. This is impossible for spoken content retrieval, because the retrieved items are difficult to show on screen. Besides, due to the high degree of uncertainty in speech recognition, the retrieval results can be very noisy. One way to counter such difficulties is through user-machine interaction. The machine can take different actions to interact with the user to obtain better retrieval results before showing them to the user. The suitable actions depend on the retrieval status, for example requesting extra information from the user, returning a list of topics for the user to select from, etc. In our previous work, some hand-crafted states estimated from the present retrieval results are used to determine the proper actions. In this paper, we propose to use Deep-Q-Learning techniques instead to determine the machine actions for interactive spoken content retrieval. Deep-Q-Learning bypasses the need for estimation of the hand-crafted states, and directly determines the best action based on the present retrieval status, even without any human knowledge. It is shown to achieve significantly better performance compared with the previous hand-crafted states.", "targets": "Interactive Spoken Content Retrieval by Deep Reinforcement Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f6e29dc59f9245b2ac79f2cdcecd99ea", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The article presents a project for the quantitative parameterization of all texts by I.
Franko, which can be realized by compiling a frequency dictionary of all of the writer\u2019s works, and only with the use of modern computer tools. The spheres of application, stages, methodology, principles and specifics of compiling a frequency dictionary of the language of the second half of the 19th and early 20th centuries, in which I. Franko wrote, are indicated. The relation of I. Franko\u2019s frequency dictionary to the dictionary of the writer\u2019s language and to the text corpus is described.", "targets": "Соломія Бук, Андрій Ровенчак"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-61ddb76475b34061ac5ab48b476b0ce5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this study, we introduce a new approach for learning language models by training them to estimate word-context pointwise mutual information (PMI), and then deriving the desired conditional probabilities from PMI at test time. Specifically, we show that with minor modifications to word2vec\u2019s algorithm, we get principled language models that are closely related to the well-established Noise Contrastive Estimation (NCE) based language models. A compelling aspect of our approach is that our models are trained with the same simple negative sampling objective function that is commonly used in word2vec to learn word embeddings.", "targets": "A Simple Language Model based on PMI Matrix Approximations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-624c8194edd4432e80be4a6045104dda", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "Dataset bias remains a significant barrier towards solving real-world computer vision tasks. Though deep convolutional networks have proven to be a competitive approach for image classification, a question remains: have these models solved the dataset bias problem? In general, training or fine-tuning a state-of-the-art deep model on a new domain requires a significant amount of data, which for many applications is simply not available. Transfer of models directly to new domains without adaptation has historically led to poor recognition performance. In this paper, we pose the following question: is a single image dataset, much larger than previously explored for adaptation, comprehensive enough to learn general deep models that may be effectively applied to new image domains? In other words, are deep CNNs trained on large amounts of labeled data as susceptible to dataset bias as previous methods have been shown to be? We show that a generic supervised deep CNN model trained on a large dataset reduces, but does not remove, dataset bias. Furthermore, we propose several methods for adaptation with deep models that are able to operate with little (one example per category) or no labeled domain-specific data. Our experiments show that adaptation of deep models on benchmark visual domain adaptation datasets can provide a significant performance boost.", "targets": "One-Shot Adaptation of Supervised Deep Convolutional Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-945fcd2ac9bf49389cec457a29365a22", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In practice, a ranking of objects with respect to a given set of criteria is of considerable importance. However, due to lack of knowledge, information, or time pressure, decision makers might not be able to provide a (crisp) ranking of objects from the top to the bottom. Instead, some objects might be ranked equally, or better than other objects only to some degree. In such cases, a generalization of crisp rankings to fuzzy rankings can be more useful. The aim of the article is to introduce the notion of a fuzzy ranking and to discuss several of its properties, namely orderings, similarity and indecisiveness. The proposed approach can be used both for group decision making and for multiple criteria decision making when uncertainty is involved.", "targets": "Fuzzy Rankings: Properties and Applications"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9902526cfaf1483abf5bce13f6b0cb53", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Participants in recent discussions of AI-related issues ranging from intelligence explosion to technological unemployment have made diverse claims about the nature, pace, and drivers of progress in AI. However, these theories are rarely specified in enough detail to enable systematic evaluation of their assumptions or to extrapolate progress quantitatively, as is often done with some success in other technological domains. 
After reviewing the relevant literature and justifying the need for more rigorous modeling of AI progress, this paper contributes to that research program by suggesting ways to account for the relationship between hardware speed increases and algorithmic improvements in AI, the role of human inputs in enabling AI capabilities, and the relationships between different sub-fields of AI. It then outlines ways of tailoring AI progress models to generate insights on the specific issue of technological unemployment, and outlines future directions for research on AI progress.", "targets": "Modeling Progress in AI"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1bb3c9d8542647718972fa96555d0fa5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Indian languages have a long history among the world\u2019s natural languages. Panini was the first to define a grammar for the Sanskrit language, with about 4000 rules, in the fifth century. These rules contain uncertainty information. Computer processing of the Sanskrit language is not possible with such uncertain information. In this paper, fuzzy logic and fuzzy reasoning are proposed to eliminate the uncertain information for reasoning with Sanskrit grammar. Sanskrit language processing is also discussed in this paper.", "targets": "Fuzzy Modeling and Natural Language Processing for Panini\u2019s Sanskrit Grammar"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-26ec77117f6940acb5a0f311be8b37f0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The neutrosophic set has the ability to handle uncertain, incomplete, inconsistent and indeterminate information in a more accurate way. In this paper, we proposed a neutrosophic recommender system to predict diseases based on neutrosophic sets, which includes a single-criterion neutrosophic recommender system (SC-NRS) and a multi-criterion neutrosophic recommender system (MC-NRS). Further, we investigated some algebraic operations of the neutrosophic recommender system, such as union, complement, intersection, probabilistic sum, bold sum, bold intersection, bounded difference, symmetric difference, convex linear sum of min and max operators, Cartesian product, associativity, commutativity and distributivity. Based on these operations, we studied algebraic structures such as lattices, Kleene algebra, de Morgan algebra, Brouwerian algebra, BCK algebra, Stone algebra and MV algebra. In addition, we introduced several types of similarity measures based on these algebraic operations and studied some of their theoretic properties. Moreover, we derived a prediction formula using the proposed algebraic similarity measure. We also proposed a new algorithm for medical diagnosis based on the neutrosophic recommender system. Finally, to check the validity of the proposed methodology, we conducted experiments on the datasets Heart, RHC, Breast cancer, Diabetes and DMD. At the end, we presented the MSE and computational time by comparing the proposed algorithm with the relevant ones, such as ICSM, DSM, CARE, CFMD, as well as other variants, namely Variant 67, Variant 69, and Variant 71, both in tabular and graphical form to analyze the efficiency and accuracy. 
Finally, we analyzed the strength of all 8 algorithms using the ANOVA statistical tool.", "targets": "A Neutrosophic Recommender System for Medical Diagnosis Based on Algebraic Neutrosophic Measures"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-93e16fb5792d489bbf7708ddada29833", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Convolutional Neural Networks (CNNs) are extensively used in image and video recognition, natural language processing and other machine learning applications. The success of CNNs in these areas corresponds with a significant increase in the number of parameters and computation costs. Recent approaches towards reducing these overheads involve pruning and compressing the weights of various layers without hurting the overall CNN performance. However, using model compression to generate sparse CNNs mostly reduces parameters from the fully connected layers and may not significantly reduce the final computation costs. In this paper, we present a compression technique for CNNs, where we prune the filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole planes in the network, together with their connecting convolution kernels, the computational costs are reduced significantly. In contrast to other techniques proposed for pruning networks, this approach does not result in sparse connectivity patterns. Hence, our techniques do not need the support of sparse convolution libraries and can work with the most efficient BLAS operations for matrix multiplications. In our results, we show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% while regaining close to the original accuracy by retraining the networks.", "targets": "Pruning Filters for Efficient ConvNets"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5c4b20d7f3504494b9bab707a31f605b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The paper addresses the problem of learning a regression model parameterized by a fixed-rank positive semidefinite matrix. The focus is on the nonlinear nature of the search space and on scalability to high-dimensional problems. The mathematical developments rely on the theory of gradient descent algorithms adapted to the Riemannian geometry that underlies the set of fixed-rank positive semidefinite matrices. In contrast with previous contributions in the literature, no restrictions are imposed on the range space of the learned matrix. The resulting algorithms maintain a linear complexity in the problem size and enjoy important invariance properties. We apply the proposed algorithms to the problem of learning a distance function parameterized by a positive semidefinite matrix. Good performance is observed on classical benchmarks.", "targets": "Regression on fixed-rank positive semidefinite matrices: a Riemannian approach"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3f7cf499998f44d8baa3b8a9c33f148f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "\"Background subtraction\" is an old technique for finding moving objects in a video sequence, for example, cars driving on a freeway. The idea is that subtracting the current image from a time-averaged background image will leave only non-stationary objects. It is, however, a crude approximation to the task of classifying each pixel of the current image; it fails with slow-moving objects and does not distinguish shadows from moving objects. The basic idea of this paper is that we can classify each pixel using a model of how that pixel looks when it is part of different classes. We learn a mixture-of-Gaussians classification model for each pixel using an unsupervised technique: an efficient, incremental version of EM. Unlike the standard image-averaging approach, this automatically updates the mixture component for each class according to the likelihood of membership; hence slow-moving objects are handled perfectly. Our approach also identifies and eliminates shadows much more effectively than other techniques such as thresholding. Application of this method as part of the Roadwatch traffic surveillance project is expected to result in significant improvements in vehicle identification and tracking.", "targets": "Image Segmentation in Video Sequences: A Probabilistic Approach"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-75dfc79490324f1e8afbe6662295ecb3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study the problem of identifying helpful product reviews in this paper. We observe that the evidence-conclusion discourse relations, also known as arguments, often appear in product reviews, and we hypothesise that some argument-based features, e.g. the percentage of argumentative sentences, the evidence-conclusion ratios, are good indicators of helpful reviews. To validate this hypothesis, we manually annotate arguments in 110 hotel reviews, and investigate the effectiveness of several combinations of argument-based features. Experiments suggest that, when being used together with the argument-based features, the state-of-the-art baseline features can enjoy a performance boost (in terms of F1) of 11.01% on average.", "targets": "Using Argument-based Features to Predict and Analyse Review Helpfulness"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2baf2f7080c6417499839ccdc8fdf520", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Currently, a criminal\u2019s profile (CP) is obtained from an investigator\u2019s or forensic psychologist\u2019s interpretation, linking crime scene characteristics and an offender\u2019s behavior to his or her characteristics and psychological profile. This paper seeks an efficient and systematic discovery of non-obvious and valuable patterns between variables from a large database of solved cases via a probabilistic network (PN) modeling approach. The PN structure can be used to extract behavioral patterns and to gain insight into what factors influence these behaviors.
Thus, when a new case is being investigated and the profile variables are unknown because the offender has yet to be identified, the observed crime scene variables are used to infer the unknown variables based on their connections in the structure and the corresponding numerical (probabilistic) weights. The objective is to produce a more systematic and empirical approach to profiling, and to use the resulting PN model as a decision tool. Keywords: modeling, criminal profiling, criminal behavior, probabilistic network, Bayes rule", "targets": "Modeling of Human Criminal Behavior using Probabilistic Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c384ccf5f49a47e7b9e8b9b09b91025c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper explores the real-time summarization of scheduled events such as soccer games from torrential flows of Twitter streams. We propose and evaluate an approach that substantially shrinks the stream of tweets in real time, and consists of two steps: (i) sub-event detection, which determines if something new has occurred, and (ii) tweet selection, which picks a representative tweet to describe each sub-event. We compare the summaries generated in three languages for all the soccer games in Copa America 2011 to reference live reports offered by Yahoo! Sports journalists. We show that simple text analysis methods which do not involve external knowledge lead to summaries that cover 84% of the sub-events on average, and 100% of key types of sub-events (such as goals in soccer). Our approach should be straightforwardly applicable to other kinds of scheduled events such as other sports, award ceremonies, keynote talks, TV shows, etc.", "targets": "Towards Real-Time Summarization of Scheduled Events from Twitter Streams"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-007ca849493643ed96005456339bfec2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Massive public resume data emerging on the WWW indicates individual-related characteristics in terms of profile and career experiences. Resume Analysis (RA) provides opportunities for many applications, such as talent seeking and evaluation. Existing RA studies based on statistical analysis have primarily focused on talent recruitment by identifying explicit attributes. However, they failed to discover the implicit semantic information, i.e., individual career progress patterns and social relations, which are vital to a comprehensive understanding of career development. Besides, how to visualize them for better human cognition is also challenging. To tackle these issues, we propose a visual analytics system, ResumeVis, to mine and visualize resume data. Firstly, a text-mining based approach is presented to extract semantic information. Then, a set of visualizations is devised to represent the semantic information in multiple perspectives. By interactive exploration on ResumeVis performed by domain experts, the following tasks can be accomplished: to trace individuals' evolving career trajectories; to mine latent social relations among individuals; and to grasp the full picture of massive resumes\u2019 collective mobility.
Case studies with over 2500 online officer resumes demonstrate the effectiveness of our system. We provide a demonstration video.", "targets": "ResumeVis: A Visual Analytics System to Discover Semantic Information in Semi-structured Resume Data"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-79ec45225d974474b2b579904dff6912", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Selecting the right web links for a website is important because appropriate links not only can provide high attractiveness but can also increase the website\u2019s revenue. In this work, we first show that web links have an intrinsic multilevel feedback structure. For example, consider a 2-level feedback web link: the 1st level feedback provides the Click-Through Rate (CTR) and the 2nd level feedback provides the potential revenue, which collectively produce the compound 2-level revenue. We consider the context-free links selection problem of selecting links for a homepage so as to maximize the total compound 2-level revenue while keeping the total 1st level feedback above a preset threshold. We further generalize the problem to links with n (n\u22652)-level feedback structure. To the best of our knowledge, we are the first to model the links selection problem as a constrained multi-armed bandit problem and design an effective links selection algorithm by learning the links\u2019 multi-level structure with provable sub-linear regret and violation bounds. We uncover the multi-level feedback structures of web links in two real-world datasets. We also conduct extensive experiments on the datasets to compare our proposed LExp algorithm with two state-of-the-art context-free bandit algorithms and show that the LExp algorithm is the most effective in links selection while satisfying the constraint.", "targets": "Multi-level Feedback Web Links Selection Problem: Learning and Optimization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-99a00d57de4b4044a65d7061a3cc2478", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Bacterial Foraging Optimization (BFO) is one of the metaheuristic algorithms most widely used to solve optimization problems. BFO imitates the foraging behavior of bacterial groups such as E. coli. The main aim of the algorithm is to eliminate those bacteria that have weak foraging methods and to maintain those bacteria that have strong foraging methods. To this end, each bacterium communicates with other bacteria by sending signals, so that a bacterium changes its position in the next step if the prior factors have been satisfied. In fact, the process of the algorithm allows bacteria to follow nutrients toward the optimum. In this paper, BFO is used to solve the Quadratic Assignment Problem (QAP) and the multiobjective QAP (mQAP), using updating mechanisms including mutation, crossover, and a local search.", "targets": "Bacteria Foraging Algorithm with Genetic Operators for the Solution of QAP and mQAP"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8001e579f4924d738e6658402c9d3c89", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Multitask learning algorithms are typically designed assuming some fixed, a priori known latent structure shared by all the tasks. However, it is usually unclear what type of latent task structure is the most appropriate for a given multitask learning problem. Ideally, the \u201cright\u201d latent task structure should be learned in a data-driven manner. We present a flexible, nonparametric Bayesian model that posits a mixture of factor analyzers structure on the tasks. The nonparametric aspect makes the model expressive enough to subsume many existing models of latent task structures (e.g., mean-regularized tasks, clustered tasks, low-rank or linear/non-linear subspace assumptions on tasks, etc.). Moreover, it can also learn more general task structures, addressing the shortcomings of such models. We present a variational inference algorithm for our model. Experimental results on synthetic and real-world datasets, on both regression and classification problems, demonstrate the effectiveness of the proposed method.", "targets": "Flexible Modeling of Latent Task Structures in Multitask Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-37f7531d39a842109d966f9ac45fdcc3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider the problem of detecting an epidemic in a population where individual diagnoses are extremely noisy. The motivation for this problem is the plethora of examples (influenza strains in humans, or computer viruses in smartphones, etc.) where reliable diagnoses are scarce, but noisy data plentiful. In the flu/phone-virus examples, exceedingly few infected people/phones are professionally diagnosed (only a small fraction go to a doctor) but less reliable secondary signatures (e.g., people staying home, or greater-than-typical upload activity) are more readily available. These secondary data are often plagued by unreliability: many people with the flu do not stay home, and many people that stay home do not have the flu. This paper identifies the precise regime where knowledge of the contact network enables finding the needle in the haystack: we provide a distributed, efficient and robust algorithm that can correctly identify the existence of a spreading epidemic from highly unreliable local data. Our algorithm requires only local-neighbor knowledge of this graph, and in a broad array of settings that we describe, succeeds even when false negatives and false positives make up an overwhelming fraction of the data available. Our results show it succeeds in the presence of partial information about the contact network, and also when there is not a single \u201cpatient zero,\u201d but rather many (hundreds, in our examples) initial patient zeroes, spread across the graph.", "targets": "Localized epidemic detection in networks with overwhelming noise"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2980ceba15a546f9aeb1f8e65159d734", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Rough set theory, a mathematical tool to deal with vague concepts, originally described the indiscernibility of elements by equivalence relations.
Covering rough sets are a natural extension of classical rough sets by relaxing the partitions arising from equivalence relations to covers. Recently, some topological concepts such as neighborhood have been applied to covering rough sets. In this paper, we further investigate covering rough sets based on neighborhoods through approximation operations. We show that the upper approximation based on neighborhoods can be defined equivalently without using neighborhoods. To analyze the covers themselves, we introduce unary and composition operations on covers. A notion of homomorphism is provided to relate two covering approximation spaces. We also examine the properties of approximations preserved by the operations and homomorphisms, respectively.", "targets": "Covering rough sets based on neighborhoods"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b191664f6c1e46658a434c09bbf1e4c5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Automated Theorem Proving (ATP) is an established branch of Artificial Intelligence. The purpose of ATP is to design a system which can automatically figure out an algorithm either to prove or disprove a mathematical claim, on the basis of a set of given premises, using a set of fundamental postulates and following the method of logical inference. In this paper, we propose GraATP, a generalized framework for automated theorem proving in plane geometry. Our proposed method translates the geometric entities into nodes of a graph and the relations between them into edges of that graph. The automated system searches for different ways to reach the conclusion for a claim via graph traversal, by which the validity of the geometric theorem is examined.", "targets": "GraATP: A Graph Theoretic Approach for Automated Theorem Proving in Plane Geometry"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-46e32004ef884698bd6919ffac6cac06", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Domain adaptation, and transfer learning more generally, seeks to remedy the problem created when training and testing datasets are generated by different distributions. In this work, we introduce a new unsupervised domain adaptation algorithm for when there are multiple sources available to a learner. Our technique assigns a rough labeling to the target samples, then uses it to learn a transformation that aligns the two datasets before final classification. In this article we give a convenient implementation of our method, show several experiments using it, and compare it to other methods commonly used in the field.", "targets": "Multi-Source Domain Adaptation Using Approximate Label Matching"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1bb90619b64c4516a365a36a6e4d47e3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Most existing Neural Machine Translation models use groups of characters or whole words as their unit of input and output. We propose a model with a hierarchical char2word encoder that takes individual characters as both input and output.
We first argue that this hierarchical representation of the character encoder reduces computational complexity, and show that it improves translation performance. Secondly, by qualitatively studying attention plots from the decoder we find that the model learns to compress common words into a single embedding whereas rare words, such as names and places, are represented character by character.", "targets": "NEURAL MACHINE TRANSLATION WITH CHARACTERS AND HIERARCHICAL ENCODING"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-81aa10499b25465fb230e3bacd17bb02", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Despite outstanding success in vision amongst other domains, many of the recent deep learning approaches have evident drawbacks for robots. This manuscript surveys recent work in the literature that pertains to applying deep learning systems to the robotics domain, either as a means of estimation or as a tool to resolve motor commands directly from raw percepts. These recent advances are only one piece of the puzzle. We suggest that deep learning as a tool alone is insufficient in building a unified framework to acquire general intelligence. For this reason, we complement our survey with insights from cognitive development and refer to ideas from classical control theory, producing an integrated direction for a lifelong learning architecture.", "targets": "Towards Lifelong Self-Supervision: A Deep Learning Direction for Robotics"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d9f3d414d59544e89bd042d0120c9d71", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A number of visual question answering approaches have been proposed recently, aiming at understanding the visual scenes by answering the natural language questions. While image question answering has drawn significant attention, video question answering is largely unexplored. Video-QA is different from Image-QA since the information and the events are scattered among multiple frames. In order to better utilize the temporal structure of the videos and the phrasal structures of the answers, we propose two mechanisms, re-watching and re-reading, and combine them into the forgettable-watcher model. Then we propose a TGIF-QA dataset for video question answering with the help of automatic question generation. Finally, we evaluate the models on our dataset. The experimental results show the effectiveness of our proposed models.", "targets": "The Forgettable-Watcher Model for Video Question Answering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4d35058a3d6a4a2cab73e9f9948f57ec", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider the problem of proper learning a Boolean halfspace with integer weights {0, 1, ..., t} from membership queries only. The best known algorithm for this problem is an adaptive algorithm that asks n^{O(t^5)} membership queries, where the best lower bound for the number of membership queries is n^t [4]. In this paper we close this gap and give an adaptive proper learning algorithm with two rounds that asks n^{O(t)} membership queries.
We also give a non-adaptive proper learning algorithm that asks n^{O(t^3)} membership queries.", "targets": "Learning Boolean Halfspaces with Small Weights from Membership Queries"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-031ed59f7a7844f4bc4498808acff441", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present DataGrad, a general back-propagation style training procedure for deep neural architectures that uses a deep Jacobian-based regularization penalty. It can be viewed as a deep extension of the layerwise contractive auto-encoder penalty. More importantly, it unifies previous proposals for adversarial training of deep neural nets \u2013 this list includes directly modifying the gradient, training on a mix of original and adversarial examples, using contractive penalties, and approximately optimizing constrained adversarial objective functions. In an experiment using a Deep Sparse Rectifier Network, we find that the deep Jacobian regularization of DataGrad (which also has L1 and L2 flavors of regularization) outperforms traditional L1 and L2 regularization both on the original dataset and on adversarial examples.", "targets": "Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-40fb5964c1804bc4b8a95f02902c4904", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Syntax-Guided Synthesis (SyGuS) is the computational problem of finding an implementation f that meets both a semantic constraint given by a logical formula \u03c6 in a background theory T, and a syntactic constraint given by a grammar G, which specifies the allowed set of candidate implementations. Such a synthesis problem can be formally defined in SyGuS-IF, a language that is built on top of SMT-LIB. The Syntax-Guided Synthesis Competition (SyGuS-Comp) is an effort to facilitate, bring together and accelerate research and development of efficient solvers for SyGuS by providing a platform for evaluating different synthesis techniques on a comprehensive set of benchmarks. In this year\u2019s competition we added a new track devoted to programming by examples. This track consisted of two categories, one using the theory of bit-vectors and one using the theory of strings. This paper presents and analyses the results of SyGuS-Comp\u201916.", "targets": "SyGuS-Comp 2016: Results and Analysis"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-18b5f1ea70ef4cc19ea347a864c3c38a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In computing, spell checking is the process of detecting and sometimes providing spelling suggestions for incorrectly spelled words in a text. Basically, a spell checker is a computer program that uses a dictionary of words to perform spell checking. The bigger the dictionary, the higher the error detection rate.
Because spell checkers are based on regular dictionaries, they suffer from the data sparseness problem, as they cannot capture a large vocabulary of words including proper names, domain-specific terms, technical jargon, special acronyms, and terminologies. As a result, they exhibit a low error detection rate and often fail to catch major errors in the text. This paper proposes a new context-sensitive spelling correction method for detecting and correcting non-word and real-word errors in digital text documents. The approach hinges on data statistics from the Google Web 1T 5-gram data set, which consists of a big volume of n-gram word sequences extracted from the World Wide Web. Fundamentally, the proposed method comprises an error detector that detects misspellings, a candidate spellings generator based on a character 2-gram model that generates correction suggestions, and an error corrector that performs contextual error correction. Experiments conducted on a set of text documents from different domains and containing misspellings showed an outstanding spelling error correction rate and a drastic reduction of both non-word and real-word errors. In a further study, the proposed algorithm is to be parallelized so as to lower the computational cost of the error detection and correction processes.", "targets": "Context-sensitive Spelling Correction Using Google Web 1T 5-Gram Information"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4dba947acef94c33961e9c31886aaca5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "LTL synthesis \u2013 the construction of a function to satisfy a logical specification formulated in Linear Temporal Logic \u2013 is a 2EXPTIME-complete problem with relevant applications in controller synthesis and a myriad of artificial intelligence applications. In this research note we consider De Giacomo and Vardi\u2019s variant of the synthesis problem for LTL formulas interpreted over finite rather than infinite traces. Rather surprisingly, given the existing claims on complexity, we establish that LTL synthesis is EXPTIME-complete for the finite interpretation, and not 2EXPTIME-complete as previously reported. Our result coincides nicely with the planning perspective, where non-deterministic planning with full observability is EXPTIME-complete and partial observability increases the complexity to 2EXPTIME-complete; a recent related result for LTL synthesis shows that in the finite case with partial observability, the problem is 2EXPTIME-complete.", "targets": "Finite LTL Synthesis is EXPTIME-complete"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3583ccd47b4a4d53bd2779c8695c6558", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Nowadays, neural networks play an important role in the task of relation classification. By designing different neural architectures, researchers have improved the performance to a large extent, compared with traditional methods. However, existing neural networks for relation classification usually have shallow architectures (e.g., one-layer convolutional neural networks or recurrent networks). They may fail to explore the potential representation space at different abstraction levels.
In this paper, we propose deep recurrent neural networks (DRNNs) to tackle this challenge. Further, we propose a data augmentation method by leveraging the directionality of relations. We evaluate our DRNNs on the SemEval-2010 Task 8, and achieve an F1-score of 85.81%, outperforming state-of-the-art recorded results.", "targets": "Improved Relation Classification by Deep Recurrent Neural Networks with Data Augmentation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5d4ff9a3a4fa488487700d021ce51d62", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The generation of political event data has remained much the same since the mid-1990s, both in terms of data acquisition and the process of coding text into data. Since the 1990s, however, there have been significant improvements in open-source natural language processing software and in the availability of digitized news content. This paper presents a new, next-generation event dataset, named Phoenix, that builds on these and other advances. This dataset includes improvements in the underlying news collection process and event coding software, along with the creation of a general processing pipeline necessary to produce daily-updated data. This paper provides face validity checks by briefly examining the data for the conflict in Syria, and a comparison between Phoenix and the Integrated Crisis Early Warning System data. Moving Event Data Forward: Automated coding of political event data, or the record of who-did-what-to-whom within the context of political actions, has existed for roughly two decades. The approach has remained largely the same during this time, with the underlying coding procedures not updating to reflect changes in natural language processing (NLP) technology. These NLP technologies have now advanced to such a level, and with accompanying open-source software implementations, that their inclusion in the event-data coding process comes as an obvious advancement. When combined with changes in how news content is obtained, the ability to store and process large amounts of text, and enhancements based on two decades' worth of event-data experience, it becomes clear that political event data is ready for a next-generation dataset. In this chapter, I provide the technical details for creating such a next-generation dataset. The technical details lead to a pipeline for the production of the Phoenix event dataset. The Phoenix dataset is a daily-updated, near-real-time political event dataset. The coding process makes use of open-source NLP software, an abundance of online news content, and other technical advances made possible by open-source software. This enables a dataset that is transparent and replicable, while providing a more accurate coding process than previously possible. Additionally, the dataset\u2019s near-real-time nature also enables many applications that were previously impossible with batch-updated datasets, such as monitoring of ongoing events. Thus, this dataset", "targets": "Creating a Real-Time, Reproducible Event Dataset"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c5161b9824264450a4b7603519d83dad", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Multiclass prediction is the problem of classifying an object into a relevant target class. We consider the problem of learning a multiclass predictor that uses only a few features, and in particular, the number of used features should increase sub-linearly with the number of possible classes. This implies that features should be shared by several classes. We describe and analyze the ShareBoost algorithm for learning a multiclass predictor that uses few shared features. We prove that ShareBoost efficiently finds a predictor that uses few shared features (if such a predictor exists) and that it has a small generalization error. We also describe how to use ShareBoost for learning a non-linear predictor that has a fast evaluation time. In a series of experiments with natural data sets we demonstrate the benefits of ShareBoost and evaluate its success relative to other state-of-the-art approaches.", "targets": "ShareBoost: Efficient Multiclass Learning with Feature Sharing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c617bfa1faf546c4bb033c8426447e8c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Quantifying the degree of spatial dependence for linguistic variables is a key task for analyzing dialectal variation. However, existing approaches have important drawbacks. First, they make unjustified assumptions about the nature of spatial variation: some assume that the geographical distribution of linguistic variables is Gaussian, while others assume that linguistic variation is aligned to pre-defined geopolitical units such as states or counties. Second, they are not applicable to all types of linguistic data: some approaches apply only to frequencies, others to boolean indicators of whether a linguistic variable is present. We present a new method for measuring geographical language variation, which solves both of these problems. Our approach builds on reproducing kernel Hilbert space (RKHS) representations for nonparametric statistics, and takes the form of a test statistic that is computed from pairs of individual geotagged observations without aggregation into predefined geographical bins. We compare this test with prior work using synthetic data as well as a diverse set of real datasets: a corpus of Dutch tweets, a Dutch syntactic atlas, and a dataset of letters to the editor in North American newspapers. Our proposed test is shown to support robust inferences across a broad range of scenarios and types of data.", "targets": "A Kernel Independence Test for Geographical Language Variation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-91f1d6bb74c744d087dec4264c32f596", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This position paper advocates a communications-inspired approach to the design of machine learning systems on energy-constrained embedded \u2018always-on\u2019 platforms.
The communications-inspired approach has two versions: 1) a deterministic version, where existing low-power communication IC design methods are repurposed, and 2) a stochastic version, referred to as Shannon-inspired statistical information processing, employing information-based metrics, statistical error compensation (SEC), and retraining-based methods to implement ML systems on stochastic circuit/device fabrics operating at the limits of energy efficiency. The communications-inspired approach has the potential to fully leverage the opportunities afforded by ML algorithms and applications in order to address the challenges inherent in their deployment on energy-constrained platforms.", "targets": "Energy-efficient Machine Learning in Silicon: A Communications-inspired Approach"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2192e5d24a3a4afd94f0228193e1b967", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In security-sensitive applications, the success of machine learning depends on a thorough vetting of its resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker\u2019s knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier performance under evasion attacks, and allows him to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.", "targets": "Evasion attacks against machine learning at test time"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-79240c6c277c48cd8240fb8d8f36f592", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Denoising autoencoders (DAs) are typically applied to relatively large datasets for unsupervised learning of representative data encodings; they rely on the idea of making the learned representations robust to partial corruption of the input pattern, and perform learning using stochastic gradient descent with relatively large datasets. In this paper, we present a fully Bayesian DA architecture that allows for the application of DAs even when data is scarce. Our novel approach formulates the signal encoding problem from a nonparametric Bayesian perspective, considering a Gaussian process prior over the latent input encodings generated given the (corrupt) input observations. Subsequently, the decoder modules of our model are formulated as large-margin regression models, treated under the Bayesian inference paradigm, by exploiting the maximum entropy discrimination (MED) framework.
We exhibit the effectiveness of our approach using several datasets, dealing with both classification and transfer learning applications.", "targets": "Maximum Entropy Discrimination Denoising Autoencoders"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-01043a0481ec4d47b87fc5156282ecf6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The subpath planning problem is a branch of the path planning problem, which has widespread applications in automated manufacturing processes as well as vehicle and robot navigation. This problem is to find the shortest path or tour that travels a set of given subpaths. The current approaches for dealing with the subpath planning problem are all based on meta-heuristics. It is well-known that meta-heuristic based approaches have several deficiencies. To address them, we propose a novel approximation algorithm in the O(n^3) time complexity class, which guarantees to solve any subpath planning problem instance with the fixed ratio bound of 2. Besides the formal proofs of the claims, our empirical evaluation shows that our approximation method performs much better than a state-of-the-art method, in both solution quality and execution time. Note to Practitioners\u2014In some real-world applications such as robot and vehicle navigation in structured and industrial environments, as well as some manufacturing processes such as electronic printing and polishing, it is required for the agent to travel a set of predefined paths. Automating this process includes three steps: 1) capturing the environment of the actual problem and formulating it as a subpath planning problem; 2) solving the subpath planning problem to find a near-optimal path or tour; and 3) commanding the robot to follow the output. The most challenging phase is the second one, which this paper tackles. To design an effective automation for the aforementioned applications, it is essential to make use of methods with low computational cost but near-optimal outputs in the second phase. Since the length of the final output has a direct effect on the cost of performing the task, it is desirable to incorporate methods with low complexity that can guarantee a bound on the difference between the length of the optimal path and the output. Current approaches for solving the subpath planning problem are all meta-heuristic based. These methods do not provide such a bound. Moreover, they are usually very time-consuming. They may find promising results for some problem instances, but there is no guarantee that they always exhibit such good behaviour. In this paper, in order to avoid the issues of metaheuristic methods, we present an approximation algorithm which provides an appropriate bound for the optimality of its solution. To gauge the performance of the proposed method, we conducted a set of experiments, the results of which show that our proposed method finds shorter paths in less time in comparison with a state-of-the-art method.", "targets": "An Approximation Approach for Solving the Subpath Planning Problem"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f5a6012c387045849d299600693c37c7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "In many settings, we have multiple data sets (also called views) that capture different and overlapping aspects of the same phenomenon. We are often interested in finding patterns that are unique to one or to a subset of the views. For example, we might have one set of molecular observations and one set of physiological observations on the same group of individuals, and we want to quantify molecular patterns that are uncorrelated with physiology. Despite being a common problem, this is highly challenging when the correlations come from complex distributions. In this paper, we develop the general framework of Rich Component Analysis (RCA) to model settings where the observations from different views are driven by different sets of latent components, and each component can be a complex, high-dimensional distribution. We introduce algorithms based on cumulant extraction that provably learn each of the components without having to model the other components. We show how to integrate RCA with stochastic gradient descent into a meta-algorithm for learning general models, and demonstrate substantial improvement in accuracy on several synthetic and real datasets in both supervised and unsupervised tasks. Our method makes it possible to learn latent variable models when we don\u2019t have samples from the true model but only samples after complex perturbations.", "targets": "Rich Component Analysis"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b690024170b04e42aca0c37f635b5513", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we address the problem of data description using a Bayesian framework. The goal of data description is to draw a boundary around objects of a certain class of interest to discriminate that class from the rest of the feature space. Data description is also known as one-class learning and has a wide range of applications. The proposed approach uses a Bayesian framework to precisely compute the class boundary and therefore can utilize domain information in the form of prior knowledge in the framework. It can also operate in the kernel space and therefore recognize arbitrary boundary shapes. Moreover, the proposed method can utilize unlabeled data in order to improve the accuracy of discrimination. We evaluate our method using various real-world datasets and compare it with other state-of-the-art approaches to data description. Experiments show promising results and improved performance over other data description and one-class learning algorithms.", "targets": "A Bayesian Approach to the Data Description Problem"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fbcf44aee7ec463abff8447bb8be3076", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Mixed Integer Optimization has been a topic of active research in the past decades. It has been used to solve statistical problems of classification and regression involving massive data. However, there is an inherent degree of vagueness present in huge real-life data. This impreciseness is handled by Fuzzy Sets. In this paper, the Fuzzy Mixed Integer Optimization Method (FMIOM) is used to find a solution to the regression problem. The methodology exploits the discrete character of the problem.
In this way, large-scale problems are solved within practical limits. The data points are separated into different polyhedral regions and each region has its own distinct regression coefficients. In this attempt, the attention of the Statistics and Data Mining community is drawn to the fact that Integer Optimization can be used to revisit different statistical problems. Computational experiments with generated and real data sets show that FMIOM is comparable to and often outperforms current leading methods. The results illustrate the potential for a significant impact of Fuzzy Integer Optimization methods on Computational Statistics and Data Mining. Keywords: Mixed Integer Optimization; Fuzzy Sets; Regression; Polyhedral Regions", "targets": "Fuzzy Mixed Integer Optimization Model for Regression Approach"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-37e049ec0e4d46d0ac5ab7741c063dd3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "When faced with complex choices, users refine their own preference criteria as they explore the catalogue of options. In this paper we propose an approach to preference elicitation suited for this scenario. We extend Coactive Learning, which iteratively collects manipulative feedback, to optionally query example critiques. User critiques are integrated into the learning model by dynamically extending the feature space. Our formulation natively supports constructive learning tasks, where the option catalogue is generated on-the-fly. We present an upper bound on the average regret suffered by the learner. Our empirical analysis highlights the promise of", "targets": "Coactive Critiquing: Elicitation of Preferences and Features"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c5ba7a132fcf4ff3abc3a3a217e42d07", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A hierarchical clustering method is stable if small perturbations on the data set produce small perturbations in the result. These perturbations are measured using the Gromov-Hausdorff metric. We study the problem of stability of linkage-based hierarchical clustering methods. We show that, under some basic conditions, standard linkage-based methods are semi-stable. This means that they are stable if the input data is close enough to an ultrametric space. We prove that, apart from exotic examples, introducing any unchaining condition in the algorithm always produces unstable methods.", "targets": "GROMOV-HAUSDORFF STABILITY OF LINKAGE-BASED HIERARCHICAL CLUSTERING METHODS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1f98ee92fc5f46e29a8eebdff14e9ffd", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce a compact graph-theoretic representation for multi-party game theory.
Our main result is a provably correct and efficient algorithm for computing approximate Nash equilibria in one-stage games represented by trees or sparse graphs.", "targets": "Graphical Models for Game Theory"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ad6d0ae3396441218265febb85158da4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Several large cloze-style context-question-answer datasets have been introduced recently: the CNN and Daily Mail news data and the Children\u2019s Book Test. Thanks to the size of these datasets, the associated text comprehension task is well suited for deep-learning techniques that currently seem to outperform all alternative approaches. We present a new, simple model that uses attention to directly pick the answer from the context as opposed to computing the answer using a blended representation of words in the document as is usual in similar models. This makes the model particularly suitable for question-answering problems where the answer is a single word from the document. Our model outperforms models previously proposed for these tasks by a large margin.", "targets": "Text Understanding with the Attention Sum Reader Network"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0ed3daa6e0fb4eacb7d6a935dae459c7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The ICDM Challenge 2013 is to apply machine learning to the problem of hotel ranking, aiming to maximize purchases according to given hotel characteristics, location attractiveness of hotels, users' aggregated purchase history and competitive online travel agency (OTA) information for each potential hotel choice. This paper describes the solution of team \u201cbinghsu & MLRush & BrickMover\u201d. We conduct simple feature engineering work, and each individual team member trains different models. Afterwards, we use a listwise ensemble method to combine each model\u2019s output. Besides describing the effective models and features, we discuss the lessons we learned while using deep learning in this competition.", "targets": "Combination of Diverse Ranking Models for Personalized Expedia Hotel Searches"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-94556b76a7a9428abc3f518b91162203", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The rapid advancement of machine learning techniques has re-energized research into general artificial intelligence. While the idea of domain-agnostic meta-learning is appealing, this emerging field must come to terms with its relationship to human cognition and the statistics and structure of the tasks humans perform. The position of this article is that only by aligning our agents\u2019 abilities and environments with those of humans do we stand a chance at developing general artificial intelligence (GAI).", "targets": "Minimally Naturalistic Artificial Intelligence"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2c2401e97fe841818edb136988e8dbf3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Normalized graph cut (NGC) has become a popular research topic due to its wide applications in a large variety of areas like machine learning and very large scale integration (VLSI) circuit design. Most traditional NGC methods are based on pairwise relationships (similarities). However, in real-world applications relationships among the vertices (objects) may be more complex than pairwise, and are typically represented as hyperedges in hypergraphs. Thus, normalized hypergraph cut (NHC) has attracted more and more attention. Existing NHC methods cannot achieve satisfactory performance in real applications. In this paper, we propose a novel relaxation approach, which is called relaxed NHC (RNHC), to solve the NHC problem. Our model is defined as an optimization problem on the Stiefel manifold. To solve this problem, we resort to the Cayley transformation to devise a feasible learning algorithm. Experimental results on a set of large hypergraph benchmarks for clustering and partitioning in the VLSI domain show that RNHC can outperform the state-of-the-art methods.", "targets": "A New Relaxation Approach to Normalized Hypergraph Cut"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-505c1ec3a4de477f83df50183b125d57", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study the problem of identifying the best action among a set of possible options when the value of each action is given by a mapping from a number of noisy micro-observables in the so-called fixed confidence setting. Our main motivation is the application to minimax game search, which has been a major topic of interest in artificial intelligence. In this paper we introduce an abstract setting to clearly describe the essential properties of the problem. While previous work only considered a two-move game tree search problem, our abstract setting can be applied to general minimax games where the depth can be non-uniform and arbitrary, and transpositions are allowed. We introduce a new algorithm (LUCB-micro) for the abstract setting, and give its lower and upper sample complexity results. Our bounds recover some previous results, which were only available in more limited settings, while they also shed further light on how the structure of minimax problems influences sample complexity.", "targets": "Structured Best Arm Identification with Fixed Confidence"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9041e4e341ff49ecb742d9f19341d75e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Opinion mining aims at extracting useful subjective information from reliable amounts of text. Opinion holder recognition is a task that has not yet been considered for the Arabic language. This task essentially requires a deep understanding of clause structures. Unfortunately, the lack of a robust, publicly available, Arabic parser further complicates the research. This paper presents leading research on opinion holder extraction in Arabic news, independent of any lexical parsers. We investigate constructing a comprehensive feature set to compensate for the lack of parsing structural outcomes.
The proposed feature set is tuned from previous English works coupled with our proposed semantic field and named entities features. Our feature analysis is based on Conditional Random Fields (CRF) and semi-supervised pattern recognition techniques. Different research models are evaluated via cross-validation experiments, achieving an F-measure of 54.03. We publicly release our own research outcome corpus and lexicon for the opinion mining community to encourage further research.", "targets": "A MACHINE LEARNING APPROACH FOR OPINION HOLDER EXTRACTION IN ARABIC LANGUAGE"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-266b9eb0f82e4e00a9538f26c1c4c814", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Plagiarism is one of the growing issues in academia and is always a concern in universities and other academic institutions. The situation is becoming even worse with the availability of ample resources on the web. This paper focuses on creating an effective and fast tool for plagiarism detection for text based electronic assignments. Our plagiarism detection tool named AntiPlag is developed using the tri-gram sequence matching technique. Three sets of text based assignments were tested by AntiPlag and the results were compared against an existing commercial plagiarism detection tool. AntiPlag showed better results in terms of false positives compared to the commercial tool due to the pre-processing steps performed in AntiPlag. In addition, to improve the detection latency, AntiPlag applies a data clustering technique making it four times faster than the commercial tool considered. AntiPlag could be used to isolate plagiarized text based assignments from non-plagiarized assignments easily. Therefore, we present AntiPlag, a fast and effective tool for plagiarism detection on text based electronic assignments.", "targets": "AntiPlag: Plagiarism Detection on Electronic Submissions of Text Based Assignments"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-240710ada1514d109403627fb6c5e0af", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Most past work on social network link fraud detection tries to separate genuine users from fraudsters, implicitly assuming that there is only one type of fraudulent behavior. But is this assumption true? And, in either case, what are the characteristics of such fraudulent behaviors? In this work, we set up honeypots (\u201cdummy\u201d social network accounts) and buy fake followers (after careful IRB approval). We report the signs of such behaviors including oddities in local network connectivity, account attributes, and similarities and differences across fraud providers. Most valuably, we discover and characterize several types of fraud behaviors. We discuss how to leverage our insights in practice by engineering strongly performing entropy-based features and demonstrating high classification accuracy. 
Our contributions are (a) instrumentation: we detail our experimental setup and carefully engineered data collection process to scrape Twitter data while respecting API rate-limits, (b) observations on fraud multimodality: we analyze our honeypot fraudster ecosystem and give surprising insights into the multifaceted behaviors of these fraudster types, and (c) features: we propose novel features that give strong (>0.95 precision/recall) discriminative power on ground-truth Twitter data.", "targets": "The Many Faces of Link Fraud"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-39ea820350b3401795f2a9ccff5ac52f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider the adaptive shortest-path routing problem in wireless networks under unknown and stochastically varying link states. In this problem, we aim to optimize the quality of communication between a source and a destination through adaptive path selection. Due to the randomness and uncertainties in the network dynamics, the quality of each link varies over time according to a stochastic process with unknown distributions. After a path is selected for communication, the aggregated quality of all links on this path (e.g., total path delay) is observed. The quality of each individual link is not observable. We formulate this problem as a multi-armed bandit with dependent arms. We show that by exploiting arm dependencies, a regret polynomial with network size can be achieved while maintaining the optimal logarithmic order with time. This is in sharp contrast with the exponential regret order with network size offered by a direct application of the classic MAB policies that ignore arm dependencies. Furthermore, our results are obtained under a general model of link-quality distributions (including heavy-tailed distributions) and find applications in cognitive radio and ad hoc networks with unknown and dynamic communication environments.", "targets": "Adaptive Shortest-Path Routing under Unknown and Stochastically Varying Link States"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-62b73e0358664ac2b460ac6f150731f4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Domain knowledge is crucial for effective performance in autonomous control systems. Typically, human effort is required to encode this knowledge into a control algorithm. In this paper, we present an approach to language grounding which automatically interprets text in the context of a complex control application, such as a game, and uses domain knowledge extracted from the text to improve control performance. Both text analysis and control strategies are learned jointly using only a feedback signal inherent to the application. To effectively leverage textual information, our method automatically extracts the text segment most relevant to the current game state, and labels it with a task-centric predicate structure. This labeled text is then used to bias an action selection policy for the game, guiding it towards promising regions of the action space. We encode our model for text analysis and game playing in a multi-layer neural network, representing linguistic decisions via latent variables in the hidden layers, and game action quality via the output layer. 
Operating within the Monte-Carlo Search framework, we estimate model parameters using feedback from simulated games. We apply our approach to the complex strategy game Civilization II using the official game manual as the text guide. Our results show that a linguistically-informed game-playing agent significantly outperforms its language-unaware counterpart, yielding a 34% absolute improvement and winning over 65% of games when playing against the built-in AI of Civilization.", "targets": "Learning to Win by Reading Manuals in a Monte-Carlo Framework"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-849f075d54a1476a815744f9ac7653f3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We address a challenging fine-grain classification problem: recognizing a font style from an image of text. In this task, it is very easy to generate lots of rendered font examples but very hard to obtain real-world labeled images. This real-to-synthetic domain gap caused poor generalization to new real data in previous methods (Chen et al., 2014). In this paper, we refer to Convolutional Neural Networks, and use an adaptation technique based on a Stacked Convolutional AutoEncoder that exploits unlabeled real-world images combined with synthetic data. The proposed method achieves an accuracy higher than 80% (top-5) on a real-world dataset.", "targets": "REAL-WORLD FONT RECOGNITION USING DEEP NET-"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-02e822b61fac4514a437ffe587223d9b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recently, triggered by the impressive results in TV games and the game of Go by Google DeepMind, end-to-end reinforcement learning (RL) is attracting attention. Although little is known, the author\u2019s group has propounded this framework for around 20 years and already has shown a variety of functions that emerge in a neural network (NN) through RL. In this paper, they are introduced again at this juncture. The \u201cFunction Modularization\u201d approach has subconsciously become deeply entrenched. The inputs and outputs for a learning system can be raw sensor signals and motor commands. \u201cState space\u201d or \u201caction space\u201d generally used in RL show the existence of functional modules. That has limited reinforcement learning to learning only for the action-planning module. In order to extend reinforcement learning to learning of the entire function on a huge degree of freedom of a massively parallel learning system and to explain or develop human-like intelligence, the author has believed that end-to-end RL from sensors to motors using a recurrent NN (RNN) becomes an essential key. Especially in the higher functions, since their inputs or outputs are difficult to decide, this approach is very effective by being free from the need to decide them. The functions that emerge through RL using an NN, we have confirmed, cover a broad range from real robot learning with raw camera pixel inputs to acquisition of dynamic functions in an RNN. 
Those are (1) image recognition, (2) color constancy (optical illusion), (3) sensor motion (active recognition), (4) hand-eye coordination and hand reaching movement, (5) explanation of brain activities, (6) communication, (7) knowledge transfer, (8) memory, (9) selective attention, (10) prediction, and (11) exploration. End-to-end RL enables the emergence of very flexible, comprehensive functions that consider many things in parallel, although it is difficult to clearly delineate the boundary of each function.", "targets": "Functions that Emerge through End-to-end Reinforcement Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-617dfc145f40436893d6cb79491050a1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The ability to perform pixel-wise semantic segmentation in real-time is of paramount importance in practical mobile applications. Recent deep neural networks aimed at this task have the disadvantage of requiring a large number of floating point operations and have long run-times that hinder their usability. In this paper, we propose a novel deep neural network architecture named ENet (efficient neural network), created specifically for tasks requiring low latency operation. ENet is up to 18\u00d7 faster, requires 75\u00d7 fewer FLOPs, has 79\u00d7 fewer parameters, and provides accuracy similar to or better than existing models. We have tested it on CamVid, Cityscapes and SUN datasets and report on comparisons with existing state-of-the-art methods, and the trade-offs between accuracy and processing time of a network. We present performance measurements of the proposed architecture on embedded systems and suggest possible software improvements that could make ENet even faster.", "targets": "ENET: A DEEP NEURAL NETWORK ARCHITECTURE FOR REAL-TIME SEMANTIC SEGMENTATION"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-59e59f8107a747429708559a7a35bfe5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We discuss methodological issues related to the evaluation of unsupervised binary code construction methods for nearest neighbor search. These issues have been widely ignored in the literature. These coding methods attempt to preserve either Euclidean distance or angular (cosine) distance in the binary embedding space. We explain why when comparing a method whose goal is preserving cosine similarity to one designed for preserving Euclidean distance, the original features should be normalized by mapping them to the unit hypersphere before learning the binary mapping functions. To compare a method whose goal is to preserve Euclidean distance to one that preserves cosine similarity, the original feature data must be mapped to a higher dimension by including a bias term in binary mapping functions. These conditions ensure a fair comparison between different binary code methods for the task of nearest neighbor search. Our experiments show that under these conditions the very simple methods (e.g. 
MDSH and OK-means).", "targets": "Comparing apples to apples in the evaluation of binary coding methods"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a3c3c647cc7f43aca7ad32d49f2e7aaa", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce a method for constructing skills capable of solving tasks drawn from a distribution of parameterized reinforcement learning problems. The method draws example tasks from a distribution of interest and uses the corresponding learned policies to estimate the topology of the lower-dimensional piecewise-smooth manifold on which the skill policies lie. This manifold models how policy parameters change as task parameters vary. The method identifies the number of charts that compose the manifold and then applies non-linear regression in each chart to construct a parameterized skill by predicting policy parameters from task parameters. We evaluate our method on an underactuated simulated robotic arm tasked with learning to accurately throw darts at a parameterized target location.", "targets": "Learning Parameterized Skills"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f1c760d4bc354caf9d636abd48e34c5c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The commonly used Q-learning algorithm combined with function approximation induces systematic overestimations of state-action values. These systematic errors might cause instability, poor performance and sometimes divergence of learning. In this work, we present the AVERAGED TARGET DQN (ADQN) algorithm, an adaptation to the DQN class of algorithms which uses a weighted average over past learned networks to reduce generalization noise variance. As a consequence, this leads to reduced overestimations, more stable learning process and improved performance. Additionally, we analyze ADQN variance reduction along trajectories and demonstrate the performance of ADQN on a toy Gridworld problem, as well as on several of the Atari 2600 games from the Arcade Learning Environment.", "targets": "Deep Reinforcement Learning with Averaged Target DQN"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7488c483258440298aa4ca7a8fca8483", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Studying characters plays a vital role in computationally representing and interpreting narratives. Unlike previous work, which has focused on inferring character roles, we focus on the problem of modeling their relationships. Rather than assuming a fixed relationship for a character pair, we hypothesize that relationships are dynamic and temporally evolve with the progress of the narrative, and formulate the problem of relationship modeling as a structured prediction problem. We propose a semisupervised framework to learn relationship sequences from fully as well as partially labeled data. We present a Markovian model capable of accumulating historical beliefs about the relationship and status changes. We use a set of rich linguistic and semantically motivated features that incorporate world knowledge to investigate the textual content of narrative. 
We empirically demonstrate that such a framework outperforms competitive baselines.", "targets": "Modeling Dynamic Relationships Between Characters in Literary Novels"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-84d3201b2c494b96a7ec7bb71a18e456", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "To coordinate with other agents in its environment, an agent needs models of what the other agents are trying to do. When communication is impossible or expensive, this information must be acquired indirectly via plan recognition. Typical approaches to plan recognition start with a specification of the possible plans the other agents may be following, and develop special techniques for discriminating among the possibilities. Perhaps more desirable would be a uniform procedure for mapping plans to general structures supporting inference based on uncertain and incomplete observations. In this paper, we describe a set of methods for converting plans represented in a flexible procedural language to observation models represented as probabilistic belief networks.", "targets": "The Automated Mapping of Plans for Plan Recognition"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-927c2320ca6e4ec7977cb9db4315cd0a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This work proposes a novel support vector machine (SVM) based robust automatic speech recognition (ASR) front-end that operates on an ensemble of the subband components of high-dimensional acoustic waveforms. The key issues of selecting the appropriate SVM kernels for classification in frequency subbands and the combination of individual subband classifiers using ensemble methods are addressed. The proposed front-end is compared with state-of-the-art ASR front-ends in terms of robustness to additive noise and linear filtering. Experiments performed on the TIMIT phoneme classification task demonstrate the benefits of the proposed subband-based SVM front-end: it outperforms the standard cepstral front-end in the presence of noise and linear filtering for signal-to-noise ratio (SNR) below 12 dB. A combination of the proposed front-end with a conventional front-end such as MFCC yields further improvements over the individual front ends across the full range of noise levels.", "targets": "A Subband-Based SVM Front-End for Robust ASR"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8049792c34e14d97bbf46bf531661476", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We analyze online [6] and mini-batch [20] k-means variants. Both scale up the widely used Lloyd\u2019s algorithm via stochastic approximation, and have become popular for large-scale clustering and unsupervised feature learning. We show, for the first time, that they have global convergence towards \u201clocal optima\u201d at rate O(1/t) under general conditions. In addition, we show that if the dataset is clusterable, with suitable initialization, mini-batch k-means converges to an optimal k-means solution at rate O(1/t) with high probability. 
The k-means objective is non-convex and non-differentiable: we exploit ideas from non-convex gradient-based optimization by providing a novel characterization of the trajectory of the k-means algorithm on its solution space, and circumvent its non-differentiability via geometric insights about the k-means update.", "targets": "Convergence rate of stochastic k-means"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7254dbcfa26c4d83b1d6e8fdcc9f5956", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We investigate the use of temporally abstract actions, or macro-actions, in the solution of Markov decision processes. Unlike current models that combine both primitive actions and macro-actions and leave the state space unchanged, we propose a hierarchical model (using an abstract MDP) that works with macro-actions only, and that significantly reduces the size of the state space. This is achieved by treating macro-actions as local policies that act in certain regions of state space, and by restricting states in the abstract MDP to those at the boundaries of regions. The abstract MDP approximates the original and can be solved more efficiently. We discuss several ways in which macro-actions can be generated to ensure good solution quality. Finally, we consider ways in which macro-actions can be reused to solve multiple, related MDPs; and we show that this can justify the computational overhead of macro-action generation.", "targets": "Hierarchical Solution of Markov Decision Processes using Macro-actions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1aa34cff49fa4908bbf2d2dab9267c16", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Many real-world applications require robust algorithms to learn point processes based on a type of incomplete data \u2014 the so-called short doubly-censored (SDC) event sequences. We study this critical problem of quantitative asynchronous event sequence analysis under the framework of Hawkes processes by leveraging the idea of data synthesis. Given SDC event sequences observed in a variety of time intervals, we propose a sampling-stitching data synthesis method \u2014 sampling predecessors and successors for each SDC event sequence from potential candidates and stitching them together to synthesize long training sequences. The rationality and the feasibility of our method are discussed in terms of arguments based on likelihood. Experiments on both synthetic and real-world data demonstrate that the proposed data synthesis method indeed improves learning results for both time-invariant and time-varying Hawkes processes.", "targets": "Learning Hawkes Processes from Short Doubly-Censored Event Sequences"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5832db8e13f94799b4ba558ec85e95e5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recent work has proposed several generative neural models for constituency parsing that achieve state-of-the-art results. 
Since direct search in these generative models is difficult, they have primarily been used to rescore candidate outputs from base parsers in which decoding is more straightforward. We first present an algorithm for direct search in these generative models. We then demonstrate that the rescoring results are at least partly due to implicit model combination rather than reranking effects. Finally, we show that explicit model combination can improve performance even further, resulting in new state-of-the-art numbers on the PTB of 94.25 F1 when training only on gold data and 94.66 F1 when using external data.", "targets": "Improving Neural Parsing by Disentangling Model Combination and Reranking Effects"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-db639f97c7d14ce5b117c98235a2be03", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Parkinson\u2019s disease (PD) is one of the major public health problems in the world. It is a well-known fact that around one million people suffer from Parkinson\u2019s disease in the United States whereas the number of people suffering from Parkinson\u2019s disease worldwide is around 5 million. Thus, it is important to predict Parkinson\u2019s disease in early stages so that an early plan for the necessary treatment can be made. People are mostly familiar with the motor symptoms of Parkinson\u2019s disease; however, an increasing amount of research is being done to predict Parkinson\u2019s disease from non-motor symptoms that precede the motor ones. If early and reliable prediction is possible, then a patient can get proper treatment at the right time. Non-motor symptoms considered are Rapid Eye Movement (REM) sleep Behaviour Disorder (RBD) and olfactory loss. Developing machine learning models that can help us in predicting the disease can play a vital role in early prediction. In this paper we extend a work which used the non-motor features such as RBD and olfactory loss. Along with this, the extended work also uses important biomarkers. In this paper we try to model this classifier using different machine learning models that have not been used before. We developed automated diagnostic models using Multilayer Perceptron, BayesNet, Random Forest and Boosted Logistic Regression. It has been observed that Boosted Logistic Regression provides the best performance with an impressive accuracy of 97.159%, with an area under the ROC curve of 98.9%. Thus, it is concluded that these models can be used for early prediction of Parkinson\u2019s disease. Keywords\u2014Improved Accuracy, Prediction of Parkinson\u2019s Disease, Non Motor Features, Biomarkers, Machine Learning Techniques, Boosted Logistic Regression, BayesNet, Multilayer Perceptron,", "targets": "An Improved Approach for Prediction of Parkinson\u2019s Disease using Machine Learning Techniques"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ac10e645ad684298af2b1c8fa19a6ba6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Program authorship attribution has implications for the privacy of programmers who wish to contribute code anonymously. 
While previous work has shown that complete files that are individually authored can be attributed, we show here for the first time that accounts belonging to open source contributors containing short, incomplete, and typically uncompilable fragments can also be effectively attributed. We propose a technique for authorship attribution of contributor accounts containing small source code samples, such as those that can be obtained from version control systems or other direct comparison of sequential versions. We show that while application of previous methods to individual small source code samples yields an accuracy of about 73% for 106 programmers as a baseline, by ensembling and averaging the classification probabilities of a sufficiently large set of samples belonging to the same author we achieve 99% accuracy for assigning the set of samples to the correct author. Through these results, we demonstrate that attribution is an important threat to privacy for programmers even in real-world collaborative environments such as GitHub. Additionally, we propose the use of calibration curves to identify samples by unknown and previously unencountered authors in the open world setting. We show that we can also use these calibration curves in the case that we do not have linking information and thus are forced to classify individual samples directly. This is because the calibration curves allow us to identify which samples are more likely to have been correctly attributed. Using such a curve can help an analyst choose a cut-off point which will prevent most misclassifications, at the cost of causing the rejection of some of the more dubious correct attributions.", "targets": "Git Blame Who?: Stylistic Authorship Attribution of Small, Incomplete Source Code Fragments"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ce02e08394684c038e9e5505e42d7c58", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Software design is crucial to successful software development, yet is a demanding multi-objective problem for software engineers. In an attempt to assist the software designer, interactive (i.e. human in-the-loop) meta-heuristic search techniques such as evolutionary computing have been applied and show promising results. Recent investigations have also shown that Ant Colony Optimization (ACO) can outperform evolutionary computing as a potential search engine for interactive software design. With a limited computational budget, ACO produces superior candidate design solutions in a smaller number of iterations. Building on these findings, we propose a novel interactive ACO (iACO) approach to assist the designer in early lifecycle software design, in which the search is steered jointly by subjective designer evaluation as well as machine fitness functions relating the structural integrity and surrogate elegance of software designs. Results show that iACO is speedy, responsive and highly effective in enabling interactive, dynamic multi-objective search in early lifecycle software design. Study participants rate the iACO search experience as compelling. Results of machine learning of fitness measure weightings indicate that software design elegance does indeed play a significant role in designer evaluation of candidate software design. 
We conclude that the evenness of the number of attributes and methods among classes (NAC) is a significant surrogate elegance measure, which in turn suggests that this evenness of distribution, when combined with structural integrity, is an implicit but crucial component of effective early lifecycle software design.", "targets": "Interactive Ant Colony Optimization (iACO) for Early Lifecycle Software Design"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-feb14178636c4a138dc8e3df349d806b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "An evaluation of distributed word representation is generally conducted using a word similarity task and/or a word analogy task. There are many datasets readily available for these tasks in English. However, evaluating distributed representation in languages that do not have such resources (e.g., Japanese) is difficult. Therefore, as a first step toward evaluating distributed representations in Japanese, we constructed a Japanese word similarity dataset. To the best of our knowledge, our dataset is the first resource that can be used to evaluate distributed representations in Japanese. Moreover, our dataset contains various parts of speech and includes rare words in addition to common words.", "targets": "Construction of a Japanese Word Similarity Dataset"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4f5b114cabd643ce806d9a8a80eb768a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "When you need to enable deep learning on low-cost embedded SoCs, is it better to port an existing deep learning framework or should you build one from scratch? In this paper, we share our practical experiences of building an embedded inference engine using ARM Compute Library (ACL). The results show that, contrary to conventional wisdom, for simple models, it takes much less development time to build an inference engine from scratch compared to porting existing frameworks. In addition, by utilizing ACL, we managed to build an inference engine that outperforms TensorFlow by 25%. Our conclusion is that, on embedded devices, we most likely will use very simple deep learning models for inference, and with well-developed building blocks such as ACL, it may be better in both performance and development time to build the engine from scratch.", "targets": "Enabling Embedded Inference Engine with ARM Compute Library"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-331665ed7bf54e829f9fbab78b4d0876", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce a globally normalized transition-based neural network model that achieves state-of-the-art part-of-speech tagging, dependency parsing and sentence compression results. Our model is a simple feed-forward neural network that operates on a task-specific transition system, yet achieves comparable or better accuracies than recurrent models. 
The key insight is based on a novel proof illustrating the label bias problem and showing that globally normalized models can be strictly more expressive than locally normalized models.", "targets": "Globally Normalized Transition-Based Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3ca0b32d199c4910907512abb7a06762", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The large-scale Q&A archives accumulated in community-based question answering (CQA) services are an important information and knowledge resource on the web. The question and answer matching task has received much attention for its ability to reuse knowledge stored in these systems: it can be useful in enhancing user experience with recurrent questions. In this paper, a Word Embedding based Correlation (WEC) model is proposed by integrating the advantages of both the translation model and word embedding. Given a random pair of words, WEC can score their co-occurrence probability in Q&A pairs, while it can also leverage the continuity and smoothness of continuous space word representation to deal with new pairs of words that are rare in the training parallel text. An experimental study on the Yahoo! Answers dataset and the Baidu Zhidao dataset shows this new method\u2019s promising", "targets": "Word Embedding Based Correlation Model for Question/Answer Matching"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6e79e43701bb4dea86151fdff95d305c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We describe a general framework for online adaptation of optimization hyperparameters by \u2018hot swapping\u2019 their values during learning. We investigate this approach in the context of adaptive learning rate selection using an explore-exploit strategy from the multi-armed bandit literature. Experiments on a benchmark neural network show that the hot swapping approach leads to consistently better solutions compared to well-known alternatives such as AdaDelta and stochastic gradient with exhaustive hyperparameter search.", "targets": "OPTIMIZATION HYPERPARAMETERS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fd3b0ae144e84d06bae97a1621cc080f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "So-called combined approaches answer a conjunctive query over a description logic ontology in three steps: first, they materialise certain consequences of the ontology and the data; second, they evaluate the query over the data; and third, they filter the result of the second phase to eliminate unsound answers. Such approaches were developed for various members of the DL-Lite and the EL families of languages, but none of them can handle ontologies containing nominals. In our work, we bridge this gap and present a combined query answering approach for ELHO\u22a5\u2014a logic that contains all features of the OWL 2 EL standard apart from transitive roles and complex role inclusions. This extension is nontrivial because nominals require equality reasoning, which introduces complexity into the first and the third step. 
Our empirical evaluation suggests that our technique is suitable for practical application, and so it provides a practical basis for conjunctive query answering in a large fragment of OWL 2 EL.", "targets": "Introducing Nominals to the Combined Query Answering Approaches for EL"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c4a726047b6c40038399ca2623167ddf", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Despite the prevalence of collaborative filtering in recommendation systems, there has been little theoretical development on why and how well it works, especially in the \u201conline\u201d setting, where items are recommended to users over time. We address this theoretical gap by introducing a model for online recommendation systems, cast item recommendation under the model as a learning problem, and analyze the performance of a cosine-similarity collaborative filtering method. In our model, each of n users either likes or dislikes each of m items. We assume there to be k types of users, and all the users of a given type share a common string of probabilities determining the chance of liking each item. At each time step, we recommend an item to each user, where a key distinction from related bandit literature is that once a user consumes an item (e.g., watches a movie), then that item cannot be recommended to the same user again. The goal is to maximize the number of likable items recommended to users over time. Our main result establishes that after nearly log(km) initial learning time steps, a simple collaborative filtering algorithm achieves essentially optimal performance without knowing k. The algorithm has an exploitation step that uses cosine similarity and two types of exploration steps, one to explore the space of items (standard in the literature) and the other to explore similarity between users (novel to this work).", "targets": "A Latent Source Model for Online Collaborative Filtering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-45a95bd033174b5b93b23742f4c80e63", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Speech feature extraction has been a key focus in robust speech recognition research; it significantly affects the recognition performance. In this paper, we first study a set of different feature extraction methods such as linear predictive coding (LPC), mel frequency cepstral coefficient (MFCC) and perceptual linear prediction (PLP) with several feature normalization techniques like RASTA filtering and cepstral mean subtraction (CMS). Based on this, a comparative evaluation of these features is performed on the task of text-independent speaker identification using a combination of Gaussian mixture models (GMM) and linear and non-linear kernels based on support vector machines (SVM).", "targets": "On the Use of Different Feature Extraction Methods for Linear and Non Linear kernels"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3cf3b6ff1e824caea849084a71f83a35", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The paper analyzes dynamic epistemic logic from a topological perspective. 
The main contribution consists of a framework in which dynamic epistemic logic satisfies the requirements for being a topological dynamical system, thus interfacing discrete dynamic logics with continuous mappings of dynamical systems. The setting is based on a notion of logical convergence, demonstratively equivalent with convergence in the Stone topology. Presented is a flexible, parametrized family of metrics inducing the latter, used as an analytical aid. We show that maps induced by action model transformations are continuous with respect to the Stone topology, and present results on the recurrent behavior of said maps.", "targets": "Convergence, Continuity and Recurrence in Dynamic Epistemic Logic"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-393c88577c3c46598f1045bc5d784b5b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Cross-document coreference, the problem of resolving entity mentions across multi-document collections, is crucial to automated knowledge base construction and data mining tasks. However, the scarcity of large labeled data sets has hindered supervised machine learning research for this task. In this paper we develop and demonstrate an approach based on \u201cdistantly-labeling\u201d a data set from which we can train a discriminative cross-document coreference model. In particular we build a dataset of more than a million people mentions extracted from 3.5 years of New York Times articles, leverage Wikipedia for distant labeling with a generative model (and measure the reliability of such labeling); then we train and evaluate a conditional random field coreference model that has factors on cross-document entities as well as mention-pairs. This coreference model obtains high accuracy in resolving mentions and entities that are not present in the training data, indicating applicability to non-Wikipedia data. Given the large amount of data, our work is also an exercise demonstrating the scalability of our approach.", "targets": "Distantly Labeling Data for Large Scale Cross-Document Coreference"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9143184e439647edb267c1baac771c15", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We explore the use of segments learnt using Byte Pair Encoding (referred to as BPE units) as basic units for statistical machine translation between related languages and compare it with orthographic syllables, which are currently the best performing basic units for this translation task. BPE identifies the most frequent character sequences as basic units, while orthographic syllables are linguistically motivated pseudo-syllables. We show that BPE units outperform orthographic syllables as units of translation, showing up to 11% increase in BLEU scores. In addition, BPE can be applied to any writing system, while orthographic syllables can be used only for languages whose writing systems use vowel representations. We show that BPE units outperform word- and morpheme-level units for translation involving languages like Urdu and Japanese, whose writing systems do not use vowels (either completely or partially). 
Across many language pairs, spanning multiple language families and types of writing systems, we show that translation with BPE segments outperforms orthographic syllables, especially for morphologically rich languages.", "targets": "Learning variable length units for SMT between related languages via Byte Pair Encoding"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-324df8a3f8da4f869d0b800919d511cc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "One of the main challenges in Grid systems is designing an adaptive, scalable, and model-independent method for job scheduling to achieve a desirable degree of load balancing and system efficiency. Centralized job scheduling methods have some drawbacks, such as a single point of failure and lack of scalability. Moreover, decentralized methods require a coordination mechanism with limited communications. In this paper, we propose a multi-agent approach to job scheduling in Grid, named Centralized Learning Distributed Scheduling (CLDS), by utilizing the reinforcement learning framework. The CLDS is a model-free approach that uses the information of jobs and their completion time to estimate the efficiency of resources. In this method, there are a learner agent and several scheduler agents that perform the task of learning and job scheduling with the use of a coordination strategy that maintains the communication cost at a limited level. We evaluated the efficiency of the CLDS method by designing and performing a set of experiments on a simulated Grid system under different system scales and loads. The results show that the CLDS can effectively balance the load of the system even in large-scale and heavily loaded Grids, while maintaining its adaptive performance and scalability.", "targets": "A centralized reinforcement learning method for multi-agent job scheduling in Grid"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3114dcccaa3646fcb79535600ced2486", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A fundamental challenge in developing semantic parsers is the paucity of strong supervision in the form of language utterances annotated with logical form. In this paper, we propose to exploit structural regularities in language in different domains, and train semantic parsers over multiple knowledge-bases (KBs), while sharing information across datasets. We find that we can substantially improve parsing accuracy by training a single sequence-to-sequence model over multiple KBs, when providing an encoding of the domain at decoding time. Our model achieves state-of-the-art performance on the OVERNIGHT dataset (containing eight domains), and improves performance over a single-KB baseline from 75.6% to 79.6%, while obtaining a 7x reduction in the number of model parameters.", "targets": "Neural Semantic Parsing over Multiple Knowledge-bases"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fcff62c80b2f49e494bb4dcd06283236", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper concerns the probabilistic evaluation of the effects of actions in the presence of unmeasured variables. 
We show that the identification of the causal effect between a singleton variable X and a set of variables Y can be accomplished systematically, in time polynomial in the number of variables in the graph. When the causal effect is identifiable, a closed-form expression can be obtained for the probability that the action will achieve a specified goal, or a set of goals.", "targets": "Testing Identifiability of Causal Effects"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fbe8e9ff297c4d3c81d1ba1befe03aeb", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Reactive (memoryless) policies are sufficient in completely observable Markov decision processes (MDPs), but some kind of memory is usually necessary for optimal control of a partially observable MDP. Policies with finite memory can be represented as finite-state automata. In this paper, we extend Baird and Moore's VAPS algorithm to the problem of learning general finite-state automata. Because it performs stochastic gradient descent, this algorithm can be shown to converge to a locally optimal finite-state controller. We provide the details of the algorithm and then consider the question of under what conditions stochastic gradient descent will outperform exact gradient descent. We conclude with empirical results comparing the performance of stochastic and exact gradient descent, and showing the ability of our algorithm to extract the useful information contained in the sequence of past observations to compensate for the lack of observability at each time-step.", "targets": "Learning Finite-State Controllers for Partially Observable Environments"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9cc313135c4343779eb3181b90258d3a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The objective of machine learning is to extract useful information from data, while privacy is preserved by concealing information. Thus it seems hard to reconcile these competing interests. However, they frequently must be balanced when mining sensitive data. For example, medical research represents an important application where it is necessary both to extract useful information and protect patient privacy. One way to resolve the conflict is to extract general characteristics of whole populations without disclosing the private information of individuals. In this paper, we consider differential privacy, one of the most popular and powerful definitions of privacy. We explore the interplay between machine learning and differential privacy, namely privacy-preserving machine learning algorithms and learning-based data release mechanisms. We also describe some theoretical results that address what can be learned differentially privately and upper bounds of loss functions for differentially", "targets": "Differential Privacy and Machine Learning: a Survey and Review"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b66d049484834f4f98a166a8ad860e20", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "There are two main approaches to the distributed representation of words: low-dimensional deep learning embeddings and high-dimensional distributional models, in which each dimension corresponds to a context word. In this paper, we combine these two approaches by learning embeddings based on distributional-model vectors \u2013 as opposed to one-hot vectors as is standardly done in deep learning. We show that the combined approach has better performance on a word relatedness judgment task.", "targets": "Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-677ef07a21a54ff08f0a1a13b9b53c38", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper studies single-image depth perception in the wild, i.e., recovering depth from a single image taken in unconstrained settings. We introduce a new dataset \u201cDepth in the Wild\u201d consisting of images in the wild annotated with relative depth between pairs of random points. We also propose a new algorithm that learns to estimate metric depth using annotations of relative depth. Compared to the state of the art, our algorithm is simpler and performs better. Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild.", "targets": "Single-Image Depth Perception in the Wild"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-03e4a2096b71434d9f5e4a98b5d6f86f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Deep Neural Network architectures with external memory components allow the model to perform inference and capture long-term dependencies, by storing information explicitly. In this paper, we generalize Key-Value Memory Networks to a multimodal setting, introducing a novel key-addressing mechanism to deal with sequence-to-sequence models. The advantages of the framework are demonstrated on the task of video captioning, i.e., generating natural language descriptions for videos. Conditioning on the previous time-step attention distributions for the key-value memory slots, we introduce a temporal structure in the memory addressing schema. The proposed model naturally decomposes the problem of video captioning into vision and language segments, dealing with them as key-value pairs. More specifically, we learn a semantic embedding (v) corresponding to each frame (k) in the video, thereby creating (k, v) memory slots. This allows us to exploit the temporal dependencies at multiple hierarchies (in the recurrent key-addressing; and in the language decoder). Exploiting this flexibility of the framework, we additionally capture spatial dependencies while mapping from the visual to semantic embedding. 
Extensive experiments on the Youtube2Text dataset demonstrate the usefulness of recurrent key-addressing, while achieving competitive scores on the BLEU@4 and METEOR metrics against state-of-the-art models.", "targets": "Recurrent Memory Addressing for describing videos"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-332eea059df84cba917a835087070c40", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents a novel approach for enhancing the multiple sets of acoustic patterns automatically discovered from a given corpus. In previous work it was proposed that different HMM configurations (number of states per model, number of distinct models) for the acoustic patterns form a two-dimensional space. Multiple sets of acoustic patterns automatically discovered with the HMM configurations properly located on different points over this two-dimensional space were shown to be complementary to one another, jointly capturing the characteristics of the given corpus. By representing the given corpus as sequences of acoustic patterns on different HMM sets, the pattern indices in these sequences can be relabeled considering the context consistency across the different sequences. Good improvements were observed in preliminary experiments of pattern spoken term detection (STD) performed on both TIMIT and Mandarin Broadcast News with such enhanced patterns.", "targets": "ENHANCING AUTOMATICALLY DISCOVERED MULTI-LEVEL ACOUSTIC PATTERNS CONSIDERING CONTEXT CONSISTENCY WITH APPLICATIONS IN SPOKEN TERM DETECTION"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a1e9097d07e6490fba9c69ad3ced1dc6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we consider the problem of predicting demographics of geographic units given geotagged Tweets that are composed within these units. Traditional survey methods that offer demographics estimates are usually limited in terms of geographic resolution, geographic boundaries, and time intervals. Thus, it would be highly useful to develop computational methods that can complement traditional survey methods by offering demographics estimates at finer geographic resolutions, with flexible geographic boundaries (i.e. not confined to administrative boundaries), and at different time intervals. While prior work has focused on predicting demographics and health statistics at relatively coarse geographic resolutions such as the county-level or state-level, we introduce an approach to predict demographics at finer geographic resolutions such as the blockgroup-level. For the task of predicting gender and race/ethnicity counts at the blockgroup-level, an approach adapted from prior work to our problem achieves an average correlation of 0.389 (gender) and 0.569 (race) on a held-out test dataset. Our approach outperforms this prior approach with an average correlation of 0.671 (gender) and 0.692 (race).", "targets": "Predicting Demographics of High-Resolution Geographies with Geotagged Tweets"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f1ddf0fb9d0949cc9216263c9c2cfa64", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "We propose a statistical model applicable to character level language modeling and show that it is a good fit for both program source code and English text. The model is parameterized by a program from a domain-specific language (DSL) that allows expressing non-trivial data dependencies. Learning is done in two phases: (i) we synthesize a program from the DSL, essentially learning a good representation for the data, and (ii) we learn parameters from the training data \u2013 the process is done via counting, as in simple language models such as n-gram. Our experiments show that the precision of our model is comparable to that of neural networks while sharing a number of advantages with n-gram models such as fast query time and the capability to quickly add and remove training data samples. Further, the model is parameterized by a program that can be manually inspected, understood and updated, addressing a major problem of neural networks.", "targets": "PROGRAM SYNTHESIS FOR CHARACTER LEVEL LANGUAGE MODELING"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-adde2cc6c63945a79101600380050ac1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We seek decision rules for prediction-time cost reduction, where complete data is available for training, but during prediction-time, each feature can only be acquired for an additional cost. We propose a novel random forest algorithm to minimize prediction error for a user-specified average feature acquisition budget. While random forests yield strong generalization performance, they do not explicitly account for feature costs and furthermore require low correlation among trees, which amplifies costs. Our random forest grows trees with low acquisition cost and high strength based on greedy minimax cost-weighted-impurity splits. Theoretically, we establish near-optimal acquisition cost guarantees for our algorithm. Empirically, on a number of benchmark datasets we demonstrate superior accuracy-cost curves against state-of-the-art prediction-time algorithms.", "targets": "Feature-Budgeted Random Forest"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-44f3e11b850a4762a09681a4d7a84a6a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Subjective questions such as \u2018does neymar dive\u2019, or \u2018is clinton lying\u2019, or \u2018is trump a fascist\u2019, are popular queries to web search engines, as can be seen by autocompletion suggestions on Google, Yahoo and Bing. In the era of cognitive computing, beyond search, they could be handled as hypotheses issued for evaluation. Our vision is to leverage unstructured data and metadata of the rich user-generated multimedia that is often shared as material evidence in favor of or against hypotheses in social media platforms. In this paper we present two preliminary experiments along those lines and discuss challenges for a cognitive computing system that collects material evidence from user-generated multimedia towards aggregating it into some form of collective decision on the hypothesis. Keywords-Material evidence; User-generated multimedia; Social media hypothesis management; Cognitive computing. In: Proc.
of the 1st Workshop on Multimedia Support for Decision-Making Processes, at IEEE Intl. Symposium on Multimedia (ISM\u201916), San Jose, CA, 2016.", "targets": "Show me the material evidence \u2014 Initial experiments on evaluating hypotheses from user-generated multimedia data"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-138ecd7cd85d41e19fa8d785126dd8ab", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper describes our solution to the multi-modal learning challenge of ICML. This solution comprises constructing three-level representations in three consecutive stages and choosing correct tag words with a data-specific strategy. Firstly, we use typical methods to obtain level-1 representations. Each image is represented using MPEG-7 and gist descriptors with additional features released by the contest organizers. The corresponding word tags are represented by a bag-of-words model with a dictionary of 4000 words. Secondly, we learn the level-2 representations using two stacked RBMs for each modality. Thirdly, we propose a bimodal auto-encoder to learn the similarities/dissimilarities between the pairwise image-tags as level-3 representations. Finally, during the test phase, based on one observation of the dataset, we come up with a data-specific strategy to choose the correct tag words, leading to a leap in overall performance. Our final average accuracy on the private test set is 100%, which ranked first place in this challenge.", "targets": "Constructing Hierarchical Image-tags Bimodal Representations for Word Tags Alternative Choice"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5a7a0815f9c44f319e0a5900aa76316b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present Exponentiated Gradient LINUCB, an algorithm for contextual multi-armed bandits. This algorithm uses Exponentiated Gradient to find the optimal exploration of the LINUCB. Within a deliberately designed offline simulation framework we conduct evaluations with real online event log data. The experimental results demonstrate that our algorithm outperforms surveyed algorithms.", "targets": "Exponentiated Gradient LINUCB for Contextual Multi-Armed Bandits"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c9a0d8a9616d4d15b5affc2d0841824a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Organ transplants can improve the life expectancy and quality of life for the recipient but carry the risk of serious post-operative complications, such as septic shock and organ rejection. The probability of a successful transplant depends in a very subtle fashion on compatibility between the donor and the recipient \u2013 but current medical practice is short of domain knowledge regarding the complex nature of recipient-donor compatibility. Hence a data-driven approach for learning compatibility has the potential for significant improvements in match quality. This paper proposes a novel system (ConfidentMatch) that is trained using data from electronic health records.
ConfidentMatch predicts the success of an organ transplant (in terms of the 3-year survival rates) on the basis of clinical and demographic traits of the donor and recipient. ConfidentMatch captures the heterogeneity of the donor and recipient traits by optimally dividing the feature space into clusters and constructing different optimal predictive models for each cluster. The system controls the complexity of the learned predictive model in a way that allows for more granular and confident predictions for a larger number of potential recipient-donor pairs, thereby ensuring that predictions are \u201cpersonalized\u201d and tailored to individual characteristics at the finest possible granularity. Experiments conducted on the UNOS heart transplant dataset show the superiority of the prognostic value of ConfidentMatch over other competing benchmarks; ConfidentMatch can provide predictions of success with 95% confidence for 5,489 patients of a total population of 9,620 patients, which corresponds to 410 more patients than the most competitive benchmark algorithm (DeepBoost).", "targets": "Personalized Donor-Recipient Matching for Organ Transplantation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-173244dff343471c940d214d53e04777", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper investigates two feature-scoring criteria that make use of estimated class probabilities: one method proposed by Shen et al. (2008) and a complementary approach proposed below. We develop a theoretical framework to analyze each criterion and show that both estimate the spread (across all values of a given feature) of the probability that an example belongs to the positive class. Based on our analysis, we predict when each scoring technique will be advantageous over the other and give empirical results validating our predictions.", "targets": "Feature Selection via Probabilistic Outputs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9f3c08437edd4842a8b40117e357be23", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Standard belief change assumes an underlying logic containing full classical propositional logic. However, there are good reasons for considering belief change in less expressive logics as well. In this paper we build on recent investigations by Delgrande on contraction for Horn logic. We show that the standard basic form of contraction, partial meet, is too strong in the Horn case. This result stands in contrast to Delgrande\u2019s conjecture that orderly maxichoice is the appropriate form of contraction for Horn logic. We then define a more appropriate notion of basic contraction for the Horn case, influenced by the convexity property holding for full propositional logic and which we refer to as infra contraction. The main contribution of this work is a result which shows that the construction method for Horn contraction for belief sets based on our infra remainder sets corresponds exactly to Hansson\u2019s classical kernel contraction for belief sets, when restricted to Horn logic. This result is obtained via a detour through contraction for belief bases.
We prove that kernel contraction for belief bases produces precisely the same results as the belief base version of infra contraction. The use of belief bases to obtain this result provides evidence for the conjecture that Horn belief change is best viewed as a \u2018hybrid\u2019 version of belief set change and belief base change. One of the consequences of the link with base contraction is the provision of a representation result for Horn contraction for belief sets in which a version of the Core-retainment postulate features.", "targets": "On the Link between Partial Meet, Kernel, and Infra Contraction and its Application to Horn Logic"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4a289445d79645e380a912a091b6cc65", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study sequential prediction of real-valued, arbitrary and unknown sequences under the squared error loss as well as the best parametric predictor out of a large, continuous class of predictors. Inspired by recent results from computational learning theory, we refrain from any statistical assumptions and define the performance with respect to the class of general parametric predictors. In particular, we present generic lower and upper bounds on this relative performance by transforming the prediction task into a parameter learning problem. We first introduce the lower bounds on this relative performance in the mixture of experts framework, where we show that for any sequential algorithm, there always exists a sequence for which the performance of the sequential algorithm is lower bounded by zero. We then introduce a sequential learning algorithm to predict such arbitrary and unknown sequences, and calculate upper bounds on its total squared prediction error for every bounded sequence. We further show that in some scenarios we achieve matching lower and upper bounds, demonstrating that our algorithms are optimal in a strong minimax sense such that their performances cannot be improved further. As an interesting result we also prove that for the worst case scenario, the performance of randomized algorithms can be achieved by sequential algorithms, so that randomized algorithms do not improve the performance.", "targets": "A Unified Approach to Universal Prediction: Generalized Upper and Lower Bounds"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-079ef59de16749da80f17d00ce0755ac", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In natural speech, the speaker does not pause between words, yet a human listener somehow perceives this continuous stream of phonemes as a series of distinct words. The detection of boundaries between spoken words is an instance of a general capability of the human neocortex to remember and to recognize recurring sequences. This paper describes a computer algorithm that is designed to solve the problem of locating word boundaries in blocks of English text from which the spaces have been removed. This problem avoids the complexities of processing speech but requires similar capabilities for detecting recurring sequences. The algorithm that is described in this paper relies entirely on statistical relationships between letters in the input stream to infer the locations of word boundaries.
The source code for a C++ version of this algorithm is presented in an appendix.", "targets": "A Statistical Learning Algorithm for Word Segmentation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-86d45b3cb2954e7b94a4851ad1a3bb41", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a soft attention based model for the task of action recognition in videos. We use multi-layered Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units which are deep both spatially and temporally. Our model learns to focus selectively on parts of the video frames and classifies videos after taking a few glimpses. The model essentially learns which parts in the frames are relevant for the task at hand and attaches higher importance to them. We evaluate the model on UCF-11 (YouTube Action), HMDB-51 and Hollywood2 datasets and analyze how the model focuses its attention depending on the scene and the action being performed.", "targets": "ACTION RECOGNITION USING VISUAL ATTENTION"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-58dc4e7de9f947eaafe6d69d1f798050", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recent applications of neural language models have led to an increased interest in the automatic generation of natural language. However impressive, the evaluation of neurally generated text has so far remained rather informal and anecdotal. Here, we present an attempt at the systematic assessment of one aspect of the quality of neurally generated text. We focus on a specific aspect of neural language generation: its ability to reproduce authorial writing styles. Using established models for authorship attribution, we empirically assess the stylistic qualities of neurally generated text. In comparison to conventional language models, neural models generate fuzzier text that is relatively harder to attribute correctly. Nevertheless, our results also suggest that neurally generated text offers more valuable perspectives for the augmentation of training data.", "targets": "Assessing the Stylistic Properties of Neurally Generated Text in Authorship Attribution"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0d6c349d9b4940bbaf4861f739f9f5d9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "It is well known that different solution strategies work well for different types of instances of hard combinatorial problems. As a consequence, most solvers for the propositional satisfiability problem (SAT) expose parameters that allow them to be customized to a particular family of instances. In the international SAT competition series, these parameters are ignored: solvers are run using a single default parameter setting (supplied by the authors) for all benchmark instances in a given track. While this competition format rewards solvers with robust default settings, it does not reflect the situation faced by a practitioner who only cares about performance on one particular application and can invest some time into tuning solver parameters for this application. 
The new Configurable SAT Solver Competition (CSSC) compares solvers in this latter setting, scoring each solver by the performance it achieved after a fully automated configuration step. This article describes the CSSC in more detail, and reports the results obtained in its two instantiations so far, CSSC 2013 and 2014.", "targets": "The Configurable SAT Solver Challenge (CSSC)"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fe34e45d3c11488b80e822851f8b2255", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Belief revision is an operation that aims at modifying old beliefs so that they become consistent with new ones. The issue of belief revision has been studied in various formalisms, in particular, in qualitative algebras (QAs) in which the result is a disjunction of belief bases that is not necessarily representable in a QA. This motivates the study of belief revision in formalisms extending QAs, namely, their propositional closures: in such a closure, the result of belief revision belongs to the formalism. Moreover, this makes it possible to define a contraction operator thanks to the Harper identity. Belief revision in the propositional closure of QAs is studied, an algorithm for a family of revision operators is designed, and an open-source implementation is made freely available on the web.", "targets": "Belief revision in the propositional closure of a qualitative algebra"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-abad74fe9d5949e19057fd48eca6745c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In search engines, online marketplaces and other human\u2013computer interfaces, large collectives of individuals sequentially interact with numerous alternatives of varying quality. In these contexts, individual trial and error (exploration) is crucial for uncovering novel high-quality items or solutions, but entails a high cost for individual agents [Frazier et al. 2014]. Self-interested decision makers, we will show, are often better off imitating the choices of individuals who have already incurred the costs of exploration. Although imitation makes sense at the individual level, it deprives the group of additional information that could have been gleaned by individual explorers [Rogers 1988]. Under these grim circumstances, certain non-monetary mechanisms can keep imitation forces in check and allow the collective to reap some of the benefits of the independent collection of information. For example, in simultaneous exploration problems, a natural equilibrium evolves between explorers and imitators [Conlisk 1980; Kameda and Nakanishi 2002]. Further, in some collective exploration settings, barriers to communication such as a sparser communication network among individuals can prove beneficial at the collective level. They encourage people to explore more, thus supplying useful information to the group [Fang et al. 2010; Lazer and Friedman 2007; Mason et al. 2008; Toyokawa et al. 2014]. Diversity is known to be a blessing for groups and collectives, as they can leverage the wealth of information possessed by different individuals [Conradt et al. 2013; Davis-Stober et al. 2014; M\u00fcller-Trede et al.
2017] or take advantage of the complementarities between group members to solve complex problems [Clearwater et al. 1991; Hong and Page 2004]. Could some preference diversity be beneficial in problems where collectives sequentially explore numerous alternatives, and thus, despite reducing the immediate value of social learning, lead to an increase in collective welfare?", "targets": "Diversity of preferences can increase collective welfare in sequential exploration problems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a6a57f98ae3e41899af98da64eeb026f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents the capabilities of using genetic algorithms to find approximations of function extrema which cannot be found using analytic methods. To enhance the effectiveness of the calculations, the algorithm has been parallelized using the OpenMP library. We gained a substantial increase in speed on platforms using multithreaded processors with free access to shared memory. During the analysis we used different modifications of the genetic operators; using them, we obtained varied evolution processes of the potential solutions. The results allow choosing the best among the many methods applied in genetic algorithms, and observing the acceleration on Yorkfield, Bloomfield, Westmere-EX and the most recent Sandy Bridge cores.", "targets": "GENERATING EXTREMA APPROXIMATION OF ANALYTICALLY INCOMPUTABLE FUNCTIONS THROUGH USAGE OF PARALLEL COMPUTER AIDED GENETIC ALGORITHMS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d5ea925d494740f894dea02142b0df9c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The genetic selection of a keywords set, the text frequencies of which are considered as attributes in text classification analysis, has been analyzed. The genetic optimization was performed on a set of words, which is the fraction of the frequency dictionary within given frequency limits. The frequency dictionary was formed on the basis of the analyzed array of English fiction texts. The error of the k nearest neighbors classifier was used as the fitness function minimized by the genetic algorithm. The obtained results show high precision and recall of text classification by authorship categories on the basis of the keyword attributes selected by the genetic algorithm from the frequency dictionary.", "targets": "Genetic Optimization of Keywords Subset in the Classification Analysis of Texts Authorship"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-735622b2f1384fa0b9a2b10b8f2342b5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "An important use of machine learning is to learn what people value. What posts or photos should a user be shown? Which jobs or activities would a person find rewarding? In each case, observations of people\u2019s past choices can inform our inferences about their likes and preferences. If we assume that choices are approximately optimal according to some utility function, we can treat preference inference as Bayesian inverse planning.
That is, given a prior on utility functions and some observed choices, we invert an optimal decision-making process to infer a posterior distribution on utility functions. However, people often deviate from approximate optimality. They have false beliefs, their planning is sub-optimal, and their choices may be temporally inconsistent due to hyperbolic discounting and other biases. We demonstrate how to incorporate these deviations into algorithms for preference inference by constructing generative models of planning for agents who are subject to false beliefs and time inconsistency. We explore the inferences these models make about preferences, beliefs, and biases. We present a behavioral experiment in which human subjects perform preference inference given the same observations of choices as our model. Results show that human subjects (like our model) explain choices in terms of systematic deviations from optimal behavior and suggest that they take such deviations into account when inferring preferences.", "targets": "Learning the Preferences of Ignorant, Inconsistent Agents"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9940fe62ddbb462dab9f35a680c5135a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Probabilistic independence can dramatically simplify the task of eliciting, representing, and computing with probabilities in large domains. A key technique in achieving these benefits is the idea of graphical modeling. We survey existing notions of independence for utility functions in a multi-attribute space, and suggest that these can be used to achieve similar advantages. Our new results concern conditional additive independence, which we show always has a perfect representation as separation in an undirected graph (a Markov network). Conditional additive independencies entail a particular functional form for the utility function that is analogous to a product decomposition of a probability function, and confers analogous benefits. This functional form has been utilized in the Bayesian network and influence diagram literature, but generally without an explanation in terms of independence. The functional form yields a decomposition of the utility function that can greatly speed up expected utility calculations, particularly when the utility graph has a similar topology to the probabilistic network being used.", "targets": "Graphical models for preference and utility"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-eed93479b11942fcab3a7d5105e505c7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper considers the problem of knowledge-based model construction in the presence of uncertainty about the association of domain entities to random variables. Multi-entity Bayesian networks (MEBNs) are defined as a representation for knowledge in domains characterized by uncertainty in the number of relevant entities, their interrelationships, and their association with observables. An MEBN implicitly specifies a probability distribution in terms of a hierarchically structured collection of Bayesian network fragments that together encode a joint probability distribution over arbitrarily many interrelated hypotheses.
Although a finite query-complete model can always be constructed, association uncertainty typically makes exact model construction and evaluation intractable. The objective of hypothesis management is to balance tractability against accuracy. We describe an approach to hypothesis management, present an application to the problem of military situation awareness, and compare our approach to related work in the tracking and fusion literature.", "targets": "Hypothesis Management in Situation-Specific Network Construction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-106d77a62a6348389d320bf42d28379e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents an ontology-based approach for the design of a collaborative business process model (CBP). This CBP is considered as a specification of needs in order to build a collaboration information system (CIS) for a network of organisations. The study is a part of a model driven engineering approach of the CIS in a specific enterprise interoperability framework that will be summarised. An adaptation of the Business Process Modeling Notation (BPMN) is used to represent the CBP model. We develop a knowledge-based system (KbS) which is composed of three main parts: knowledge gathering, knowledge representation and reasoning, and collaborative business process modelling. The first part starts from a high abstraction level where knowledge from business partners is captured. A collaboration ontology is defined in order to provide a structure to store and use the knowledge captured. In parallel, we try to reuse generic existing knowledge about business processes from the MIT Process Handbook repository. This results in a collaboration process ontology that is also described. A set of rules is defined in order to extract knowledge about fragments of the CBP model from the two previous ontologies. These fragments are finally assembled in the third part of the KbS. A prototype of the KbS has been developed in order to implement and support this approach. The prototype is a computer-aided design tool of the CBP. In this paper, we will present the theoretical aspects of each part of this KbS as well as the tools that we developed and used in order to support its functionalities.", "targets": "Knowledge-based system for collaborative process specification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dedad2d8cb994e61ba33df269cd9af63", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Dung\u2019s abstract argumentation framework consists of a set of interacting arguments and a series of semantics for evaluating them. Those semantics partition the powerset of the set of arguments into two classes: extensions and nonextensions. In order to reason with a specific semantics, one needs to take a credulous or skeptical approach, i.e. an argument is eventually accepted if it is accepted in one or all extensions, respectively. In our previous work [1], we have proposed a novel semantics, called counting semantics, which allows for a more fine-grained assessment of arguments by counting the number of their respective attackers and defenders based on argument graph and argument game.
In this paper, we continue our previous work by presenting some supplementaries about how to choose the damping factor for the counting semantics, and its relationships with some existing approaches, such as Dung\u2019s classical semantics and generic gradual valuations. Lastly, an axiomatic perspective on the ranking semantics induced by our counting semantics is presented. Keywords\u2014abstract argumentation; argument game; graded assessment; counting semantics; ranking-based semantics;", "targets": "Some Supplementaries to The Counting Semantics for Abstract Argumentation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3aa96c7df9684eea84e49358d6803a0e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In principle, reinforcement learning and policy search methods can enable robots to learn highly complex and general skills that may allow them to function amid the complexity and diversity of the real world. However, training a policy that generalizes well across a wide range of real-world conditions requires far greater quantity and diversity of experience than is practical to collect with a single robot. Fortunately, it is possible for multiple robots to share their experience with one another, and thereby, learn a policy collectively. In this work, we explore distributed and asynchronous policy learning as a means to achieve generalization and improved training times on challenging, real-world manipulation tasks. We propose a distributed and asynchronous version of Guided Policy Search and use it to demonstrate collective policy learning on a vision-based door opening task using four robots. We show that it achieves better generalization, utilization, and training times than the single robot alternative.", "targets": "Collective Robot Reinforcement Learning with Distributed Asynchronous Guided Policy Search"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-affd8d316c014445b516abc6fc483ed8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Classification involves the learning of the mapping function that associates input samples to the corresponding target labels. There are two major categories of classification problems: Single-label classification and Multi-label classification. Traditional binary and multi-class classifications are subcategories of single-label classification. Several classifiers are developed for binary, multi-class and multi-label classification problems, but there are no classifiers available in the literature capable of performing all three types of classification. In this paper, a novel online universal classifier capable of performing all the three types of classification is proposed. Being a high-speed online classifier, the proposed technique can be applied to streaming data applications. The performance of the developed classifier is evaluated using datasets from binary, multi-class and multi-label problems. The results obtained are compared with state-of-the-art techniques from each of the classification types.
Keywords\u2014Universal, Classification, Binary, Multi-class, Multi-label, Online, Extreme learning machines, Data stream.", "targets": "An Online Universal Classifier for Binary, Multi-class and Multi-label Classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-272a36d6f1034245a5c41666fd7592b5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a novel fully-automated approach towards inducing multilingual taxonomies from Wikipedia. Given an English taxonomy, our approach first leverages the interlanguage links of Wikipedia to automatically construct training datasets for the is-a relation in the target language. Character-level classifiers are trained on the constructed datasets, and used in an optimal path discovery framework to induce high-precision, high-coverage taxonomies in other languages. Through experiments, we demonstrate that our approach significantly outperforms the state-of-the-art, heuristics-heavy approaches for six languages. As a consequence of our work, we release presumably the largest and the most accurate multilingual taxonomic resource spanning over 280 languages.", "targets": "280 Birds with One Stone: Inducing Multilingual Taxonomies from Wikipedia Using Character-level Classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c8407ec657c64e6b846149173d0cea46", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Present incremental learning methods are limited in the ability to achieve reliable credit assignment over a large number of time steps (or events). However, this situation is typical for cases where the dynamical system to be controlled requires relatively frequent control updates in order to maintain stability or robustness, yet has some actions/consequences which must be established over relatively long periods of time. To address this problem, the learning capabilities of a control architecture composed of two Backpropagated Adaptive Critics (BACs) in a two-level hierarchy with continuous actions are explored. The high-level BAC updates less frequently than the low-level BAC and controls the latter to some degree. The response of the low-level to high-level signals can either be determined a priori or it can emerge during learning. A general approach called Response Induction Learning is introduced to address the latter case.", "targets": "Reinforcement Control with Hierarchical Backpropagated Adaptive Critics\u2217"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1e86420250e047849dcc3c8b4742e268", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The main goal of this paper is to describe a new pruning method for solving decision trees and game trees. The pruning method for decision trees suggests a slight variant of decision trees that we call scenario trees. In scenario trees, we do not need a conditional probability for each edge emanating from a chance node. Instead, we require a joint probability for each path from the root node to a leaf node. We compare the pruning method to the traditional rollback method for decision trees and game trees.
For problems that require Bayesian revision of probabilities, a scenario tree representation with the pruning method is more efficient than a decision tree representation with the rollback method. For game trees, the pruning method is more efficient than the rollback method.", "targets": "A New Pruning Method for Solving Decision Trees and Game Trees"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2357b84c9e534caea841eb83bd6f6e2a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Theano is a linear algebra compiler that optimizes a user\u2019s symbolically-specified mathematical computations to produce efficient low-level implementations. In this paper, we present new features and efficiency improvements to Theano, and benchmarks demonstrating Theano\u2019s performance relative to Torch7, a recently introduced machine learning library, and to RNNLM, a C++ library targeted at recurrent neural networks.", "targets": "Theano: new features and speed improvements"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-094fa39d61cf4ad0a6c26b4fee9a9a28", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this work, we build a generic architecture of Convolutional Neural Networks to discover empirical properties of neural networks. Our first contribution is to introduce a state-of-the-art framework that depends upon few hyper-parameters and to study the network as we vary them. It has no max pooling, no biases, only 13 layers, is purely convolutional and yields up to 95.4% and 79.6% accuracy respectively on CIFAR10 and CIFAR100. We show that the nonlinearity of a deep network does not need to be continuous, non-expansive or point-wise, to achieve good performance. We show that increasing the width of our network permits it to be competitive with very deep networks. Our second contribution is an analysis of the contraction and separation properties of this network. Indeed, a 1-nearest neighbor classifier applied on deep features progressively improves with depth, which indicates that the representation is progressively more regular. Besides, we defined and analyzed local support vectors that separate classes locally. All our experiments are reproducible and code is available online, based on TensorFlow.", "targets": "Building a Regular Decision Boundary with Deep Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f956da5f1ee04a47a2fe289a17ea641d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Knowledge representation is a popular research field in IT. As mathematical knowledge is the most formalized, its representation is important and interesting. Mathematical knowledge consists of various mathematical theories. In this paper we consider a deductive system that derives mathematical notions, axioms and theorems. All these notions, axioms and theorems can be considered a small mathematical theory. This theory will be represented as a semantic net. We start with the signature where Set is the support set, \u2208 is the membership predicate.
Using the MathSem program we build the signature. This paper addresses the problem of ad hoc microphone array calibration where only partial information about the distances between microphones is available. We construct a matrix consisting of the pairwise distances and propose to estimate the missing entries based on a novel Euclidean distance matrix completion algorithm by alternating low-rank matrix completion and projection onto the Euclidean distance space. This approach confines the recovered matrix to the EDM cone at each iteration of the matrix completion algorithm. The theoretical guarantees of the calibration performance are obtained considering the random and locally structured missing entries as well as the measurement noise on the known distances. This study elucidates the links between the calibration error and the number of microphones along with the noise level and the ratio of missing distances. Thorough experiments on real data recordings and simulated setups are conducted to demonstrate these theoretical insights. A significant improvement is achieved by the proposed Euclidean distance matrix completion algorithm over the state-of-the-art techniques for ad hoc microphone array calibration.", "targets": "Ad Hoc Microphone Array Calibration: Euclidean Distance Matrix Completion Algorithm and Theoretical Guarantees"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e1a63f17d2474168bd4720a212397fe8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce Discriminative BLEU (\u2206BLEU), a novel metric for intrinsic evaluation of generated text in tasks that admit a diverse range of possible outputs. Reference strings are scored for quality by human raters on a scale of [\u22121, +1] to weight multi-reference BLEU. In tasks involving generation of conversational responses, \u2206BLEU correlates reasonably with human judgments and outperforms sentence-level and IBM BLEU in terms of both Spearman\u2019s \u03c1 and Kendall\u2019s \u03c4.", "targets": "\u2206BLEU: A Discriminative Metric for Generation Tasks with Intrinsically Diverse Targets\u2217"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b81f06c60fa94326afb7c59edba36bc9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recent works on end-to-end neural network-based architectures for machine translation have shown promising results for English-French and English-German translation. Unlike these language pairs, however, in the majority of scenarios, there is a lack of high quality parallel corpora. In this work, we focus on applying neural machine translation to challenging/low-resource languages such as Turkish and low-resource domains such as parallel corpora of Chinese chat messages. In particular, we investigated how to leverage abundant monolingual data for these low-resource translation tasks. Without the use of external alignment tools, we obtained up to a 1.96 BLEU score improvement with our proposed method compared to the previous best result in Turkish-to-English translation on the IWSLT 2014 dataset.
On Chinese-to-English translation using the OpenMT 2015 dataset, we were able to obtain up to a 1.59 BLEU score improvement over phrase-based and hierarchical phrase-based baselines.", "targets": "On Using Monolingual Corpora in Neural Machine Translation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-438c1183deb74e049f12e0e5c9489889", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Energy-based models are popular in machine learning due to the elegance of their formulation and their relationship to statistical physics. Among these, the Restricted Boltzmann Machine (RBM) has been the prototype for some recent advancements in the unsupervised training of deep neural networks. However, the contrastive divergence training algorithm, so often used for such models, has a number of drawbacks and inelegancies both in theory and in practice. Here, we investigate the performance of Minimum Probability Flow learning for training RBMs. This approach reconceptualizes the nature of the dynamics defined over a model, rather than thinking about Gibbs sampling, and derives a simple, tractable, and elegant objective function using a Taylor expansion, allowing one to learn the parameters of any distribution over visible states. In the paper, we expound the Minimum Probability Flow learning algorithm under various dynamics. We empirically analyze its performance on these dynamics and demonstrate that MPF algorithms outperform CD on various RBM configurations.", "targets": "UNDERSTANDING MINIMUM PROBABILITY FLOW FOR RBMS UNDER VARIOUS KINDS OF DYNAMICS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9e7cba2ac51b4e798c3f153f53f3a396", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "High precision assembly of mechanical parts requires accuracy exceeding the robot precision. Conventional part mating methods used in current manufacturing require tedious tuning of numerous parameters before deployment. We show how the robot can successfully perform a tight clearance peg-in-hole task through training a recurrent neural network with reinforcement learning. In addition to saving the manual effort, the proposed technique also shows robustness against position and angle errors for the peg-in-hole task. The neural network learns to take the optimal action by observing the robot sensors to estimate the system state. The advantages of our proposed method are validated experimentally on a 7-axis articulated robot arm.", "targets": "Deep Reinforcement Learning for High Precision Assembly Tasks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0533f0df126c4bf1815dfc19a95f4662", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Artificial Neural Networks (ANNs) have widely been used for the recognition of optically scanned characters, partially emulating human thinking in the domain of Artificial Intelligence. But prior to recognition, it is necessary to segment the character from the text, into sentences, words etc. Segmentation of words into individual letters has been one of the major problems in handwriting recognition.
Despite several successful works all over the world, the development of such tools for specific languages is still an ongoing process, especially in the Indian context. This work explores the application of ANNs as an aid to the segmentation of handwritten characters in Assamese, an important language in the North Eastern part of India. The work explores the performance difference obtained in applying an ANN-based dynamic segmentation algorithm compared to projection-based static segmentation. The algorithm involves, first, training an ANN with individual handwritten characters recorded from different individuals. Handwritten sentences are separated out from the text using a static segmentation method. From the segmented line, individual characters are separated out by first over-segmenting the entire line. Each of the segments thus obtained is next fed to the trained ANN. At the point of segmentation where the ANN recognizes a segment or a combination of several segments as similar to a handwritten character, a segmentation boundary for the character is assumed to exist and segmentation is performed. The segmented character is next compared to the best available match and the segmentation boundary confirmed.", "targets": "ANN-based Innovative Segmentation Method for Handwritten text in Assamese"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8505e94d534f40e6a607584271666583", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Distantly supervised relation extraction has been widely used to find novel relational facts from plain text. To predict the relation between a pair of target entities, existing methods solely rely on those direct sentences containing both entities. In fact, there are also many sentences containing only one of the target entities, which provide rich and useful information for relation extraction. To address this issue, we build inference chains between two target entities via intermediate entities, and propose a path-based neural relation extraction model to encode the relational semantics from both direct sentences and inference chains. Experimental results on real-world datasets show that our model can make full use of those sentences containing only one target entity, and achieves significant and consistent improvements on relation extraction as compared with baselines.", "targets": "Incorporating Relation Paths in Neural Relation Extraction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-413b88af89ab44dfb6b23c357919d216", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study the design of interactive clustering algorithms for data sets satisfying natural stability assumptions. Our algorithms start with any initial clustering and only make local changes in each step; both are desirable features in many applications. We show that in this constrained setting one can still design provably efficient algorithms that produce accurate clusterings. We also show that our algorithms perform well on real-world data.", "targets": "Local algorithms for interactive clustering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b624aa33000148439c482fa4b3967c6e", "definition": "In this task, you are given a part of an article.
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Online sequence prediction is the problem of predicting the next element of a sequence given previous elements. This problem has been extensively studied in the context of individual sequence prediction, where no prior assumptions are made on the origin of the sequence. Individual sequence prediction algorithms work quite well for long sequences, where the algorithm has enough time to learn the temporal structure of the sequence. However, they might give poor predictions for short sequences. A possible remedy is to rely on the general model of prediction with expert advice, where the learner has access to a set of r experts, each of which makes its own predictions on the sequence. It is well known that it is possible to predict almost as well as the best expert if the sequence length is on the order of log(r). But, without firm prior knowledge on the problem, it is not clear how to choose a small set of good experts. In this paper we describe and analyze a new algorithm that learns a good set of experts using a training set of previously observed sequences. We demonstrate the merits of our approach by applying it on the task of click prediction on the web.", "targets": "Learning the Experts for Online Sequence Prediction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d6879b74ddaf4c29bf392690b143cc3b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The aim of this paper is to investigate the interplay between knowledge shared by a group of agents and its coalition ability. We characterize this relation in the standard context of imperfect information concurrent games. We assume that whenever a set of agents form a coalition to achieve a goal, they share their knowledge before acting. Based on this assumption, we propose new semantics for alternating-time temporal logic with imperfect information and perfect recall. It turns out that this semantics is sufficient to preserve all the desirable properties of coalition ability in traditional coalition logics. Meanwhile, we investigate how knowledge sharing within a group of agents contributes to its coalitional ability through the interplay of epistemic and coalition modalities. This work provides a partial answer to the question: which kind of group knowledge is required for a group to achieve their goals in the context of imperfect information.", "targets": "Knowledge Sharing in Coalitions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-69a18977854e4cbbac5f28622517cd52", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We develop a novel bi-directional attention model for dependency parsing, which learns to agree on headword predictions from the forward and backward parsing directions. The parsing procedure for each direction is formulated as sequentially querying the memory component that stores continuous headword embeddings. The proposed parser makes use of soft headword embeddings, allowing the model to implicitly capture high-order parsing history without dramatically increasing the computational complexity.
We conduct experiments on English, Chinese, and 12 other languages from the CoNLL 2006 shared task, showing that the proposed model achieves state-of-the-art unlabeled attachment scores on 6 languages.", "targets": "Bi-directional Attention with Agreement for Dependency Parsing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a879c357c4e04d2ebdc2b37e4b46c7ae", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper covers a number of approaches that leverage Artificial Intelligence algorithms and techniques to aid Unmanned Combat Aerial Vehicle (UCAV) autonomy. An analysis of current approaches to autonomous control is provided, followed by an exploration of how these techniques can be extended and enriched with AI techniques including Artificial Neural Networks (ANN), Ensembling and Reinforcement Learning (RL) to evolve control strategies for UCAVs.", "targets": "Artificial Intelligence Approaches To UCAV Autonomy"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8770c24e3b8b4b63ad29ff37d0b30cd1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recommendation systems are a common demand in daily life, and matrix completion is a widely adopted technique for this task. However, most matrix completion methods lack semantic interpretation and usually result in weak-semantic recommendations. To this end, this paper proposes a Semantic Analysis approach for Recommendation systems (SAR), which applies a two-level hierarchical generative process that assigns semantic properties and categories for user and item. SAR learns semantic representations of users/items merely from user ratings on items, which offers a new path to recommendation by semantic matching with the learned representations. Extensive experiments demonstrate SAR outperforms other state-of-the-art baselines substantially.", "targets": "SAR: A Semantic Analysis Approach for Recommendation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-491c52031eac41da9ae2fa36219153ae", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper investigates the mining of class association rules with a rough set approach. In data mining, an association occurs between two sets of elements when one element set happens together with another. A class association rule set (CARs) is a subset of association rules with classes specified as their consequences. We present an efficient algorithm for mining the finest class rule set inspired from the Apriori algorithm, where the support and confidence are computed based on the elementary set of the lower approximation from rough set theory. Our proposed approach has been shown to be very effective; the rough set approach for class association discovery is much simpler than the classic association method. Data Mining, RST, CAR, ARM, NAR, Bitmap, class association rules, Rough Set Theory", "targets": "Class Association Rules Mining based Rough Set Method"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-144c4db3bff6473cbed47514f1728c32", "definition": "In this task, you are given a part of an article.
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Label propagation is a powerful and flexible semi-supervised learning technique on graphs. Neural network architectures, on the other hand, have proven track records in many supervised learning tasks. In this work, we propose a training objective for neural networks, Neural Graph Machines, for combining the power of neural networks and label propagation. The new objective allows the neural networks to harness both labeled and unlabeled data by: (a) allowing the network to train using labeled data as in the supervised setting, (b) biasing the network to learn similar hidden representations for neighboring nodes on a graph, in the same vein as label propagation. Such architectures with the proposed objective can be trained efficiently using stochastic gradient descent and scaled to large graphs. The proposed method is experimentally validated on a wide range of tasks (multi-label classification on social graphs, news categorization and semantic intent classification) using different architectures (NNs, CNNs, and LSTM RNNs).", "targets": "NEURAL GRAPH MACHINES: LEARNING NEURAL NETWORKS USING GRAPHS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-94788841ecce409dbd8ceeb740e7b455", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model used during training. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models. We achieve this by introducing an auxiliary discriminative network that allows us to rephrase the maximum-likelihood problem as a two-player game, hence establishing a principled connection between VAEs and Generative Adversarial Networks (GANs). We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation. Contrary to competing approaches which combine VAEs with GANs, our approach has a clear theoretical justification, retains most advantages of standard Variational Autoencoders and is easy to implement.", "targets": "Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-efd04e09199e4d08b48ef7f682b50775", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The comparison of heterogeneous samples arises extensively in many applications, especially in the task of image classification. In this paper, we propose a simple but effective coupled neural network, called Deeply Coupled Autoencoder Networks (DCAN), which seeks to build two deep neural networks, coupled with each other in every corresponding layer. 
In DCAN, each deep structure is developed by stacking multiple discriminative coupled auto-encoders, each a denoising auto-encoder trained with a maximum margin criterion consisting of intra-class compactness and inter-class penalty. This single-layer component makes our model simultaneously preserve the local consistency and enhance its discriminative capability. With an increasing number of layers, the coupled networks can gradually narrow the gap between the two views. Extensive experiments on cross-view image classification tasks demonstrate the superiority of our method over state-of-the-art methods.", "targets": "Deeply Coupled Auto-encoder Networks for Cross-view Classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-40c860633eb04f199bd3054f021e8a30", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Based on a new atomic norm, we propose a new convex formulation for sparse matrix factorization problems in which the number of nonzero elements of the factors is assumed fixed and known. The formulation counts sparse PCA with multiple factors, subspace clustering and low-rank sparse bilinear regression as potential applications. We compute slow rates and an upper bound on the statistical dimension (Amelunxen et al., 2013) of the suggested norm for rank 1 matrices, showing that its statistical dimension is an order of magnitude smaller than the usual l1-norm, trace norm and their combinations. Even though our convex formulation is in theory hard and does not lead to provably polynomial time algorithmic schemes, we propose an active set algorithm leveraging the structure of the convex problem to solve it and show promising numerical results.", "targets": "Tight convex relaxations for sparse matrix factorization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-473f1ac8376b47428978adc81b27b8b1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Language in social media is mostly driven by new words and spellings that are constantly entering the lexicon, thereby polluting it and resulting in high deviation from the formal written version. The primary entities of such language are the out-of-vocabulary (OOV) words. In this paper, we study various sociolinguistic properties of the OOV words and propose a classification model to categorize them into at least six categories. We achieve 81.26% accuracy with high precision and recall. We observe that the content features are the most discriminative ones followed by lexical and context features.", "targets": "WASSUP? 
LOL: Characterizing Out-of-Vocabulary Words in Twitter"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e217a70d61f44722929d5a225f011f68", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The Dialog State Tracking Challenge 4 (DSTC 4) differentiates itself from the previous three editions as follows: the number of slot-value pairs present in the ontology is much larger, no spoken language understanding output is given, and utterances are labeled at the subdialog level. This paper describes a novel dialog state tracking method designed to work robustly under these conditions, using elaborate string matching, coreference resolution tailored for dialogs and a few other improvements. The method can correctly identify many values that are not explicitly present in the utterance. On the final evaluation, our method came in first among 7 competing teams and 24 entries. The F1-score achieved by our method was 9 and 7 percentage points higher than that of the runner-up for the utterance-level evaluation and for the subdialog-level evaluation, respectively.", "targets": "Robust Dialog State Tracking for Large Ontologies"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c326160908a941af84402c58a0d9367b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates the encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to that proposed in linguistics.", "targets": "Encoding of phonology in a recurrent neural model of grounded speech"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d9df579ad09c4faf8c028cea05f789a2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Tree-structured neural networks exploit valuable syntactic parse information as they interpret the meanings of sentences. However, they suffer from two key technical problems that make them slow and unwieldy for large-scale NLP tasks: they can only operate on parsed sentences and they do not directly support batched computation. 
We address these issues by introducing the Stack-augmented Parser-Interpreter Neural Network (SPINN), which combines parsing and interpretation within a single tree-sequence hybrid model by integrating tree-structured sentence interpretation into the linear sequential structure of a shift-reduce parser. Our model supports batched computation for a speedup of up to 25x over other tree-structured models, and its integrated parser allows it to operate on unparsed data with little loss of accuracy. We evaluate it on the Stanford NLI entailment task and show that it significantly outperforms other sentence-encoding models.", "targets": "A Fast Unified Model for Parsing and Sentence Understanding"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-771ec19f3da34513b81519b0b7f55bed", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Markov chain Monte Carlo (MCMC) is one of the main workhorses of probabilistic inference, but it is notoriously hard to measure the quality of approximate posterior samples. This challenge is particularly salient in black box inference methods, which can hide details and obscure inference failures. In this work, we extend the recently introduced bidirectional Monte Carlo [GGA15] technique to evaluate MCMC-based posterior inference algorithms. By running annealed importance sampling (AIS) chains both from prior to posterior and vice versa on simulated data, we upper bound in expectation the symmetrized KL divergence between the true posterior distribution and the distribution of approximate samples. We present Bounding Divergences with REverse Annealing (BREAD), a protocol for validating the relevance of simulated data experiments to real datasets, and integrate it into two probabilistic programming languages: WebPPL [GS] and Stan [CGHL+]. As an example of how BREAD can be used to guide the design of inference algorithms, we apply it to study the effectiveness of different model representations in both WebPPL and Stan.", "targets": "Measuring the reliability of MCMC inference with bidirectional Monte Carlo"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-cb8bec27c17a403db27e0827c4ad8daa", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The Support Vector Machine using Privileged Information (SVM+) has been proposed to train a classifier to utilize the additional privileged information that is only available in the training phase but not available in the test phase. In this work, we propose an efficient solution for SVM+ by simply utilizing the squared hinge loss instead of the hinge loss as in the existing SVM+ formulation, which interestingly leads to a dual form with fewer variables and in the same form as the dual of the standard SVM. The proposed algorithm is utilized to leverage the additional web knowledge that is only available during training for the image categorization tasks. 
The extensive experimental results on both Caltech101 and WebQueries datasets show that our proposed method can achieve up to a hundred times speedup with comparable accuracy when compared with the existing SVM+ method.", "targets": "Simple and Efficient Learning using Privileged Information"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-46138a5e9a7641ae8fc0ec58da0032c3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNN, that uses continuous communication for fully cooperative tasks. The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines. In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.", "targets": "Learning Multiagent Communication with Backpropagation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-042b4e172cda4d179df8d06d83fe76c9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A totally semantic measure is presented which is able to calculate a similarity value between concept descriptions, between a concept description and an individual, or between individuals, expressed in an expressive description logic. It is applicable to symbolic descriptions, although it uses a numeric approach for the computation. Considering that Description Logics stand as the theoretic framework for ontological knowledge representation and reasoning, the proposed measure can be effectively used for agglomerative and divisive clustering tasks applied to the semantic web domain.", "targets": "A Semantic Similarity Measure for Expressive Description Logics"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e0cbe898f9694977aed903f939326910", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Independent Component Analysis (ICA) is the problem of learning a square matrix A, given samples of X = AS, where S is a random vector with independent coordinates. Most existing algorithms are provably efficient only when each Si has a finite and moderately valued fourth moment. However, there are practical applications where this assumption need not be true, such as speech and finance. Algorithms have been proposed for heavy-tailed ICA, but they are not practical, using random walks and the full power of the ellipsoid algorithm multiple times. The main contributions of this paper are: (1) A practical algorithm for heavy-tailed ICA that we call HTICA. We provide theoretical guarantees and show that it outperforms other algorithms in some heavy-tailed regimes, both on real and synthetic data. 
Like the current state-of-the-art, the new algorithm is based on the centroid body (a first moment analogue of the covariance matrix). Unlike the state-of-the-art, our algorithm is practically efficient. To achieve this, we use explicit analytic representations of the centroid body, which bypasses the use of the ellipsoid method and random walks. (2) We study how heavy tails affect different ICA algorithms, including HTICA. Somewhat surprisingly, we show that some algorithms that use the covariance matrix or higher moments can successfully solve a range of ICA instances with infinite second moment. We study this theoretically and experimentally, with both synthetic and real-world heavy-tailed data.", "targets": "Heavy-Tailed Analogues of the Covariance Matrix for ICA"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dff8907153f74b5aa0b312e1bfa6da59", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "With the recent proliferation of large-scale learning problems, there has been a lot of interest in distributed machine learning algorithms, particularly those that are based on stochastic gradient descent (SGD) and its variants. However, existing algorithms either suffer from slow convergence due to the inherent variance of stochastic gradients, or have a fast linear convergence rate but at the expense of poorer solution quality. In this paper, we combine their merits by proposing a distributed asynchronous SGD-based algorithm with variance reduction. A constant learning rate can be used, and it is also guaranteed to converge linearly to the optimal solution. Experiments on the Google Cloud Computing Platform demonstrate that the proposed algorithm outperforms state-of-the-art distributed asynchronous algorithms in terms of both wall clock time and solution quality.", "targets": "Fast Distributed Asynchronous SGD with Variance Reduction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2f71949c95684b1ea4d6330edd39f7e5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a protocol for intrusion detection in distributed systems based on a relatively recent theory in immunology called danger theory. Based on danger theory, immune response in natural systems is a result of sensing corruption as well as sensing unknown substances. In contrast, traditional self-nonself discrimination theory states that immune response is only initiated by sensing nonself (unknown) patterns. Danger theory solves many problems that could only be partially explained by the traditional model. Although the traditional model is simpler, such problems result in high false positive rates in immune-inspired intrusion detection systems. We believe using danger theory in a multi-agent environment that computationally emulates the behavior of natural immune systems is effective in reducing false positive rates. We first describe a simplified scenario of immune response in natural systems based on danger theory and then convert it to a computational model as a network protocol. In our protocol, we define several immune signals and model cell signaling via message passing between agents that emulate cells. 
Most messages include application-specific patterns that must be meaningfully extracted from various system properties. We finally provide a few rules of thumb to simplify the task of pattern extraction in most distributed systems. \u201cDo not just declare things to be irreducibly complex...\u201d Richard Dawkins", "targets": "A Danger-Based Approach to Intrusion Detection"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-674cda1aa7784e3b85c5ed0b6707f836", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The paper presents a knowledge representation language Alog which extends ASP with aggregates. The goal is to have a language based on simple syntax and clear intuitive and mathematical semantics. We give some properties of Alog, an algorithm for computing its answer sets, and a comparison with other approaches.", "targets": "Vicious Circle Principle and Logic Programs with Aggregates"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9e1ec38334d146aaa7fe5159485536ef", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recently, deep learning models have been shown to be capable of achieving remarkable performance in sentence and document classification tasks. In this work, we propose a novel framework called AC-BLSTM for modeling sentences and documents, which combines the asymmetric convolution neural network (ACNN) with the Bidirectional Long Short-Term Memory network (BLSTM). Experiment results demonstrate that our model achieves state-of-the-art results on five tasks, including sentiment analysis, question type classification, and subjectivity classification. In order to further improve the performance of AC-BLSTM, we propose a semi-supervised learning framework called G-AC-BLSTM for text classification by combining the generative model with AC-BLSTM.", "targets": "AC-BLSTM: Asymmetric Convolutional Bidirectional LSTM Networks for Text Classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2561053cc9a24d098c6745701cca78f2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Unsupervised dictionary learning has been a key component in state-of-the-art computer vision recognition architectures. While highly effective methods exist for patch-based dictionary learning, these methods may learn redundant features after the pooling stage in a given early vision architecture. In this paper, we offer a novel dictionary learning scheme to efficiently take into account the invariance of learned features after the spatial pooling stage. The algorithm is built on simple clustering, and thus enjoys efficiency and scalability. We discuss the underlying mechanism that justifies the use of clustering algorithms, and empirically show that the algorithm finds better dictionaries than patch-based methods with the same dictionary size.", "targets": "Pooling-Invariant Image Feature Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dbeba065b6184d899d718a18f5c83789", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "Interval temporal logics (ITLs) are logics for reasoning about temporal statements expressed over intervals, i.e., periods of time. The most famous ITL studied so far is Halpern and Shoham\u2019s HS, which is the logic of Allen\u2019s thirteen interval relations. Unfortunately, HS and most of its fragments have an undecidable satisfiability problem. This discouraged research in this area until recently, when a number of non-trivial decidable ITLs were discovered. This paper is a contribution towards the complete classification of all different fragments of HS. We consider different combinations of the interval relations begins (B), after (A), later (L) and their inverses A, B and L. We know from previous works that the combination ABBA is decidable only when finite domains are considered (and undecidable elsewhere), and that ABB is decidable over the natural numbers. We extend these results by showing that decidability of ABB can be further extended to capture the language ABBL, which lies in between ABB and ABBA, and which turns out to be maximal w.r.t. decidability over strongly discrete linear orders (e.g. finite orders, the naturals, the integers). We also prove that the proposed decision procedure is optimal with respect to the EXPSPACE complexity class.", "targets": "Begin, After, and Later: a Maximal Decidable Interval Temporal Logic"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-55960529e7164a23a3f5df29ccac43aa", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Sentiment analysis predicts the presence of positive or negative emotions in a text document. In this paper, we consider higher dimensional extensions of the sentiment concept, which represent a richer set of human emotions. Our approach goes beyond previous work in that our model contains a continuous manifold rather than a finite set of human emotions. We investigate the resulting model, compare it to psychological observations, and explore its predictive capabilities.", "targets": "The Manifold of Human Emotions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e26dde3acae143a396d90259ee0310db", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Consider a weighted or unweighted k-nearest neighbor graph that has been built on n data points drawn randomly according to some density p on R^d. We study the convergence of the shortest path distance in such graphs as the sample size tends to infinity. We prove that for unweighted kNN graphs, this distance converges to an unpleasant distance function on the underlying space whose properties are detrimental to machine learning. We also study the behavior of the shortest path distance in weighted kNN graphs.", "targets": "Shortest path distance in random k-nearest neighbor graphs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bc49b878b0b741a197b0eaf23b4db559", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Statistical topic models efficiently facilitate the exploration of large-scale data sets. 
Many models have been developed and broadly used to summarize the semantic structure in news, science, social media, and digital humanities. However, a common and practical objective in data exploration tasks is not to enumerate all existing topics, but to quickly extract representative ones that broadly cover the content of the corpus, i.e., a few topics that serve as a good summary of the data. Most existing topic models fit exactly the same number of topics as a user specifies, which imposes an unnecessary burden on users who have limited prior knowledge. We instead propose new models that are able to learn fewer but more representative topics for the purpose of data summarization. We propose a reinforced random walk that allows prominent topics to absorb tokens from similar and smaller topics, thus enhancing the diversity among the top topics extracted. With this reinforced random walk as a general process embedded in classical topic models, we obtain diverse topic models that are able to extract the most prominent and diverse topics from data. The inference procedures of these diverse topic models remain as simple and efficient as those of the classical models. Experimental results demonstrate that the diverse topic models not only discover topics that better summarize the data, but also require minimal prior knowledge of the users.", "targets": "Less is More: Learning Prominent and Diverse Topics for Data Summarization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5c5ba9b98ce141bc87f595272a64a291", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Most real-world data can be modeled as heterogeneous information networks (HINs) consisting of vertices of multiple types and their relationships. Search for similar vertices of the same type in large HINs, such as bibliographic networks and business-review networks, is a fundamental problem with broad applications. Although similarity search in HINs has been studied previously, most existing approaches neither explore rich semantic information embedded in the network structures nor take the user\u2019s preference as guidance. In this paper, we re-examine similarity search in HINs and propose a novel embedding-based framework. It models vertices as low-dimensional vectors to explore network structure-embedded similarity. To accommodate user preferences in defining similarity semantics, our proposed framework, ESim, accepts user-defined meta-paths as guidance to learn vertex vectors in a user-preferred embedding space. Moreover, an efficient and parallel sampling-based optimization algorithm has been developed to learn embeddings in large-scale HINs. Extensive experiments on real-world large-scale HINs demonstrate a significant improvement in the effectiveness of ESim over several state-of-the-art algorithms as well as its scalability.", "targets": "Meta-Path Guided Embedding for Similarity Search in Large-Scale Heterogeneous Information Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-34e1997cf25042609fb1c7aa5dedae4a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The proliferation of social media in communication and information dissemination has made it an ideal platform for spreading rumors. 
Automatically debunking rumors at their stage of diffusion is known as early rumor detection, which refers to dealing with sequential posts regarding disputed factual claims with certain variations and highly textual duplication over time. Thus, identifying trending rumors demands an efficient yet flexible model that is able to capture long-range dependencies among postings and produce distinct representations for accurate early detection. However, it is a challenging task to apply conventional classification algorithms to rumor detection in earliness, since they rely on hand-crafted features which require intensive manual efforts in the case of a large amount of posts. This paper presents a deep attention model on the basis of recurrent neural networks (RNN) to selectively learn temporal hidden representations of sequential posts for identifying rumors. The proposed model delves soft attention into the recurrence to simultaneously pool out distinct features with particular focus and produce hidden representations that capture contextual variations of relevant posts over time. Extensive experiments on real datasets collected from social media websites demonstrate that (1) the deep attention based RNN model outperforms state-of-the-art methods that rely on hand-crafted features; (2) the introduction of the soft attention mechanism can effectively distill parts relevant to rumors from original posts in advance; (3) the proposed method detects rumors more quickly and accurately than competitors.", "targets": "Call Attention to Rumors: Deep Attention Based Recurrent Neural Networks for Early Rumor Detection"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c229a5ab14194ae1a541fe8ead184f3a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "As historically acknowledged in the Reasoning about Actions and Change community, intuitiveness of a logical domain description cannot be fully automated. Moreover, like any other logical theory, action theories may also evolve, and thus knowledge engineers need revision methods to help in accommodating new incoming information about the behavior of actions in an adequate manner. The present work is about changing action domain descriptions in multimodal logic. Its contribution is threefold: first we revisit the semantics of action theory contraction proposed in previous work, giving more robust operators that express minimal change based on a notion of distance between Kripke models. Second we give algorithms for syntactical action theory contraction and establish their correctness with respect to our semantics for those action theories that satisfy a principle of modularity investigated in previous work. Since modularity can be ensured for every action theory and, as we show here, needs to be computed at most once during the evolution of a domain description, it does not represent a limitation at all to the method here studied. Finally we state AGM-like postulates for action theory contraction and assess the behavior of our operators with respect to them. 
Moreover, we also address the revision counterpart of action theory change, showing that it benefits from our semantics for contraction.", "targets": "On Action Theory Change"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8cd3e436d35b474780c427ae0db3fb90", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce a novel algorithmic approach to content recommendation based on adaptive clustering of exploration-exploitation (\u201cbandit\u201d) strategies. We provide a sharp regret analysis of this algorithm in a standard stochastic noise setting, demonstrate its scalability properties, and prove its effectiveness on a number of artificial and real-world datasets. Our experiments show a significant increase in prediction performance over state-of-the-art methods for bandit problems.", "targets": "Online Clustering of Bandits"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4dfaac1b3ff74cbebb6f1aa7f9fd0045", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Financial fraud detection is an important problem with a number of design aspects to consider. Issues such as algorithm selection and performance analysis will affect the perceived ability of proposed solutions, so for auditors and researchers to be able to sufficiently detect financial fraud it is necessary that these issues be thoroughly explored. In this paper we will revisit the key performance metrics used for financial fraud detection with a focus on credit card fraud, critiquing the prevailing ideas and offering our own understandings. There are many different performance metrics that have been employed in prior financial fraud detection research. We will analyse several of the popular metrics and compare their effectiveness at measuring the ability of detection mechanisms. We further investigated the performance of a range of computational intelligence techniques when applied to this problem domain, and explored the efficacy of several binary classification methods. Keywords\u2014Financial fraud detection; credit card fraud; data mining; computational intelligence; performance metric", "targets": "Some Experimental Issues in Financial Fraud Detection: An Investigation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8c09b77460834f0abda512e246f558ec", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper investigates the validity of Kleinberg\u2019s axioms for clustering functions with respect to the quite popular clustering algorithm called k-means. We suggest that the reason why this algorithm does not fit Kleinberg\u2019s axiomatic system stems from a mismatch between informal intuitions and the formal formulations of the axioms. While Kleinberg\u2019s axioms have been discussed heavily in the past, we concentrate here on the case predominantly relevant for the k-means algorithm, that is, behavior embedded in Euclidean space. We point out some contradictions and counterintuitive aspects of this axiomatic set within R that were evidently not discussed so far. 
Our results suggest that, without clearly defining what kind of clusters we expect, we will not be able to construct a valid axiomatic system. In particular, we look at the shape of clusters and the gaps between them. Finally, we demonstrate that there exist several ways to reconcile the formulation of the axioms with their intended meaning, and that under this reformulation the axioms cease to be contradictory and the real-world k-means algorithm conforms to this axiomatic system.", "targets": "On the Discrepancy Between Kleinberg\u2019s Clustering Axioms and k-Means Clustering Algorithm Behavior"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-12e394c379534287aa7e41c07df55c12", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents a systematic evaluation of Neural Network (NN) for classification of real-world data. In the field of machine learning, it is often seen that a single parameter, namely \u2018predictive accuracy\u2019, is used for evaluating the performance of a classifier model. However, this parameter might not be considered reliable given a dataset with a very high level of skewness. To demonstrate such behavior, seven different types of datasets have been used to evaluate a Multilayer Perceptron (MLP) using twelve (12) different parameters, which include micro- and macro-level estimation. In the present study, the most common problem of prediction, called \u2018multiclass\u2019 classification, has been considered. The results obtained for the different parameters on each of the datasets demonstrate interesting findings that support the usability of this set of performance evaluation parameters.", "targets": "Reliable Evaluation of Neural Network for Multiclass Classification of Real-world Data"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ecb03bd404e746e4af8c6a18b1417e4e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We showed in this work how the Hassanat distance metric enhances the performance of the nearest neighbour classifiers. The results demonstrate the superiority of this distance metric over the traditional and most-used distances, such as Manhattan distance and Euclidean distance. Moreover, we proved that the Hassanat distance metric is invariant to data scale, noise and outliers. Throughout this work, it is clearly notable that both ENN and IINC performed very well with the distance investigated, as their accuracy increased significantly by 3.3% and 3.1% respectively, with no significant advantage of the ENN over the IINC in terms of accuracy. Correspondingly, it can be noted from our results that there is no optimal algorithm that can solve all real-life problems perfectly; this is supported by the no-free-lunch theorem.", "targets": "ON ENHANCING THE PERFORMANCE OF NEAREST NEIGHBOUR CLASSIFIERS USING HASSANAT DISTANCE METRIC"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3d9ba913cdc64ffa9d06b50c838b6e00", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Major advances in Question Answering technology were needed for IBM Watson to play Jeopardy! 
at championship level \u2013 the show requires rapid-fire answers to challenging natural language questions, broad general knowledge, high precision, and accurate confidence estimates. In addition, Jeopardy! features four types of decision making carrying great strategic importance: (1) Daily Double wagering; (2) Final Jeopardy wagering; (3) selecting the next square when in control of the board; (4) deciding whether to attempt to answer, i.e., \u201cbuzz in.\u201d Using sophisticated strategies for these decisions, that properly account for the game state and future event probabilities, can significantly boost a player\u2019s overall chances to win, when compared with simple \u201crule of thumb\u201d strategies. This article presents our approach to developing Watson\u2019s game-playing strategies, comprising development of a faithful simulation model, and then using learning and Monte Carlo methods within the simulator to optimize Watson\u2019s strategic decision-making. After giving a detailed description of each of our game-strategy algorithms, we then focus in particular on validating the accuracy of the simulator\u2019s predictions, and documenting performance improvements using our methods. Quantitative performance benefits are shown with respect to both simple heuristic strategies and actual human contestant performance in historical episodes. We further extend our analysis of human play to derive a number of valuable and counterintuitive examples illustrating how human contestants may improve their performance on the show.", "targets": "Analysis of Watson\u2019s Strategies for Playing Jeopardy!"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d4a90ac5c6d6491fa61203ddc8ef2553", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The linear layer is one of the most pervasive modules in deep learning representations. However, it requires O(N^2) parameters and O(N^2) operations. These costs can be prohibitive in mobile applications or prevent scaling in many domains. Here, we introduce a deep, differentiable, fully-connected neural network module composed of diagonal matrices of parameters, A and D, and the discrete cosine transform C. The core module, structured as ACDC, has O(N) parameters and incurs O(N log N) operations. We present theoretical results showing how deep cascades of ACDC layers approximate linear layers. ACDC is, however, a stand-alone module and can be used in combination with any other types of module. In our experiments, we show that it can indeed be successfully interleaved with ReLU modules in convolutional neural networks for image recognition. Our experiments also study critical factors in the training of these structured modules, including initialization and depth. Finally, this paper also provides a connection between structured linear transforms used in deep learning and the field of Fourier optics, illustrating how ACDC could in principle be implemented with lenses and diffractive elements.", "targets": "ACDC: A STRUCTURED EFFICIENT LINEAR LAYER"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f6574241ad664c72ba8f41e26902f2e4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Opinions about the 2016 U.S. 
Presidential Candidates have been expressed in millions of tweets that are challenging to analyze automatically. Crowdsourcing the analysis of political tweets effectively is also difficult, due to large inter-rater disagreements when sarcasm is involved. Each tweet is typically analyzed by a fixed number of workers and majority voting. We here propose a crowdsourcing framework that instead uses a dynamic allocation of the number of workers. We explore two dynamic-allocation methods: (1) The number of workers queried to label a tweet is computed offline based on the predicted difficulty of discerning the sentiment of a particular tweet. (2) The number of crowd workers is determined online, during an iterative crowdsourcing process, based on inter-rater agreements between labels. We applied our approach to 1,000 Twitter messages about the four U.S. presidential candidates Clinton, Cruz, Sanders, and Trump, collected during February 2016. We implemented the two proposed methods using decision trees that allocate more crowd efforts to tweets predicted to be sarcastic. We show that our framework outperforms the traditional static allocation scheme. It collects opinion labels from the crowd at a much lower cost while maintaining labeling accuracy.", "targets": "Dynamic Allocation of Crowd Contributions for Sentiment Analysis during the 2016 U.S. Presidential Election"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6220b21842814665a67d37c8ee137556", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The deep Boltzmann machine (DBM) has been an important development in the quest for powerful \u201cdeep\u201d probabilistic models. To date, simultaneous or joint training of all layers of the DBM has been largely unsuccessful with existing training methods. We introduce a simple regularization scheme that encourages the weight vectors associated with each hidden unit to have similar norms. We demonstrate that this regularization can be easily combined with standard stochastic maximum likelihood to yield an effective training strategy for the simultaneous training of all layers of the deep Boltzmann machine.", "targets": "On Training Deep Boltzmann Machines"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-80befd21acfa41f7aeb11ce7b4488dd8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we analyze a generic algorithm scheme for sequential global optimization using Gaussian processes. The upper bounds we derive on the cumulative regret for this generic algorithm improve by an exponential factor the previously known bounds for algorithms like GP-UCB. We also introduce the novel Gaussian Process Mutual Information algorithm (GP-MI), which significantly improves further these upper bounds for the cumulative regret. We confirm the efficiency of this algorithm on synthetic and real tasks against the natural competitor, GP-UCB, and also the Expected Improvement heuristic. Erratum: After the publication of our article, we found an error in the proof of Lemma 1 which invalidates the main theorem. 
It appears that the information given to the algorithm is not sufficient for the main theorem to hold true. The theoretical guarantees would remain valid in a setting where the algorithm observes the instantaneous regret instead of noisy samples of the unknown function. We describe in this page the mistake and its consequences. Let f : X \u2192 R be the unknown function to be optimized, which is a sample from a Gaussian process. Let\u2019s fix x, x1, . . . , xT \u2208 X and the observations yt = f(xt) + \u03b5t, where the noise variables \u03b5t are independent Gaussian noise N(0, \u03c3\u00b2). We define the instantaneous regret rt = f(x*) \u2212 f(xt) and MT = \u2211_{t=1}^{T} rt.", "targets": "Gaussian Process Optimization with Mutual Information"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-df921296060540fa8eb285a40b314b4f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we present the greedy step averaging (GSA) method, a parameter-free stochastic optimization algorithm for a variety of machine learning problems. As a gradient-based optimization method, GSA makes use of the information from the minimizer of a single sample\u2019s loss function, and takes an averaging strategy to calculate a reasonable learning rate sequence. While most existing gradient-based algorithms introduce an increasing number of hyperparameters or try to make a trade-off between computational cost and convergence rate, GSA avoids the manual tuning of the learning rate and brings in no additional hyperparameters or extra cost. We perform exhaustive numerical experiments for logistic and softmax regression to compare our method with other state-of-the-art methods on 16 datasets. Results show that GSA is robust in various scenarios.", "targets": "Greedy Step Averaging: A parameter-free stochastic optimization method"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0b62fe1528b646a885f23d623d98ac0c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Deep neural networks have proved very successful in domains where large training sets are available, but when the number of training samples is small, their performance suffers from overfitting. Prior methods of reducing overfitting, such as weight decay, Dropout and DropConnect, are data-independent. This paper proposes a new method, GraphConnect, that is data-dependent, and is motivated by the observation that data of interest lie close to a manifold. The new method encourages the relationships between the learned decisions to resemble a graph representing the manifold structure. Essentially GraphConnect is designed to learn attributes that are present in data samples, in contrast to weight decay, Dropout and DropConnect, which are simply designed to make it more difficult to fit to random error or noise. Empirical Rademacher complexity is used to connect the generalization error of the neural network to spectral properties of the graph learned from the input data. This framework is used to show that GraphConnect is superior to weight decay. 
Experimental results on several benchmark datasets validate the theoretical analysis, and show that when the number of training samples is small, GraphConnect is able to significantly improve performance over weight decay.", "targets": "GraphConnect: A Regularization Framework for Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5803ff6634574041b0ea6a1694f109a7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The Weighted Constraint Satisfaction Problem (WCSP) framework allows representing and solving problems involving both hard constraints and cost functions. It has been applied to various problems, including resource allocation, bioinformatics, scheduling, etc. To solve such problems, solvers usually rely on branch-and-bound algorithms equipped with local consistency filtering, mostly soft arc consistency. However, these techniques are not well suited to solve problems with very large domains. Motivated by the resolution of an RNA gene localization problem inside large genomic sequences, and in the spirit of bounds consistency for large domains in crisp CSPs, we introduce soft bounds arc consistency (BAC), a new weighted local consistency specifically designed for WCSPs with very large domains. Compared to soft arc consistency, BAC provides significantly improved time and space asymptotic complexity. In this paper, we show how the semantics of cost functions can be exploited to further improve the time complexity of BAC. We also compare, both in theory and in practice, the efficiency of BAC on a WCSP with bounds consistency enforced on a crisp CSP using cost variables. On two different real problems modeled as WCSPs, including our RNA gene localization problem, we observe that maintaining bounds arc consistency outperforms arc consistency and also improves over bounds consistency enforced on a constraint model with cost variables.", "targets": "Bounds Arc Consistency for Weighted CSPs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1b772fa288ee4363a92c90277c63fc1e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Robots will eventually be part of every household. It is thus critical to enable algorithms to learn from and be guided by non-expert users. In this paper, we bring a human in the loop, and enable a human teacher to give feedback to a learning agent in the form of natural language. We argue that a descriptive sentence can provide a much stronger learning signal than a numeric reward in that it can easily point to where the mistakes are and how to correct them. We focus on the problem of image captioning in which the quality of the output can easily be judged by non-experts. We propose a hierarchical phrase-based captioning model trained with policy gradients, and design a feedback network that provides reward to the learner by conditioning on the human-provided feedback. We show that by exploiting descriptive feedback our model learns to perform better than when given independently written human captions.", "targets": "Teaching Machines to Describe Images via Natural Language Feedback"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-24a82da2a2ac4e85bc7a548189b8f1c1", "definition": "In this task, you are given a part of an article. 
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Accurate prediction of suitable discourse connectives (however, furthermore, etc.) is a key component of any system aimed at building coherent and fluent discourses from shorter sentences and passages. As an example, a dialog system might assemble a long and informative answer by sampling passages extracted from different documents retrieved from the web. We formulate the task of discourse connective prediction and release a dataset of 2.9M sentence pairs separated by discourse connectives for this task. Then, we evaluate the hardness of the task for human raters, apply a recently proposed decomposable attention (DA) model to this task and observe that the automatic predictor has a higher F1 than human raters (32 vs. 30). Nevertheless, under specific conditions the raters still outperform the DA model, suggesting that there is headroom for future improvements. Finally, we further demonstrate the usefulness of the connectives dataset by showing that it improves implicit discourse relation prediction when used for model pre-training.", "targets": "Automatic Prediction of Discourse Connectives"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e9f0a382b5704ebd94e1d6a6dc5cf14e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The efficiency of algorithms using secondary structures for probabilistic inference in Bayesian networks can be improved by exploiting independence relations induced by evidence and the direction of the links in the original network. In this paper we present an algorithm that on-line exploits independence relations induced by evidence and the direction of the links in the original network to reduce both time and space costs. Instead of multiplying the conditional probability distributions for the various cliques, we determine on-line which potentials to multiply when a message is to be produced. The performance improvement of the algorithm is emphasized through empirical evaluations involving large real world Bayesian networks, and we compare the method with the HUGIN and Shafer-Shenoy inference algorithms.", "targets": "Lazy Propagation in Junction Trees"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7ee48d5039c947e985f746a3037a0a2c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In many combinatorial problems one may need to model the diversity or similarity of sets of assignments. For example, one may wish to maximise or minimise the number of distinct values in a solution. To formulate problems of this type we can use soft variants of the well-known AllDifferent and AllEqual constraints. We present a taxonomy of six soft global constraints, generated by combining the two latter ones and the two standard cost functions, which are either maximised or minimised. We characterise the complexity of achieving arc and bounds consistency on these constraints, resolving those cases for which NP-hardness was neither proven nor disproven. In particular, we explore in depth the constraint ensuring that at least k pairs of variables have a common value. 
We show that achieving arc consistency is NP-hard; however, bounds consistency can be achieved in polynomial time through dynamic programming. Moreover, we show that the maximum number of pairs of equal variables can be approximated within a factor of 1/2 with a linear-time greedy algorithm. Finally, we provide a fixed-parameter tractable algorithm with respect to the number of values appearing in more than two distinct domains. Interestingly, this taxonomy shows that enforcing equality is harder than enforcing difference.", "targets": "Soft Constraints of Difference and Equality"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3e9b80a8a15f4adf9163f7e6d413a2fe", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Nowadays, the use of the Internet to search for definitions is increasingly important. Wikipedia and Medline have become the most consulted sites on the Web. However, there is an enormous number of definitions that are sometimes inaccessible to users. They may be found on non-encyclopedic sites or in various documents. With this in mind, we developed the search engine Describe, which finds definitions in Spanish (Sierra et al., 2009). One characteristic of this engine is that it groups the search results (definitions related to a term). This article presents the grouping methodology and the evaluation of the results, which are encouraging from a qualitative point of view. Quantitative evaluation, on the other hand, poses constraints, because evaluating semantics is complicated. This article is organized as follows: in Section 2 we introduce definitional contexts (DC), and in Section 3 we present strategies for grouping definitions. The corpus used in our experiments is presented in Section 4. Evaluations with quantitative and qualitative analyses are presented in Chapter 5, before we conclude and give some perspectives.", "targets": "Regroupement s\u00e9mantique de d\u00e9finitions en espagnol"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-aa0134a0621b4e7db53deb557d3ce78c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "For an artificial creative agent, an essential driver of the search for novelty is a value function which is often provided by the system designer or users. We argue that an important barrier for progress in creativity research is the inability of these systems to develop their own notion of value for novelty. We propose a notion of knowledge-driven creativity that circumvents the need for an externally imposed value function, allowing the system to explore based on what it has learned from a set of referential objects. The concept is illustrated by a specific knowledge model provided by a deep generative autoencoder.
Using the described system, we train a knowledge model on a set of digit images and we use the same model to build coherent sets of new digits that do not belong to known", "targets": "Digits that are not: Generating new types through deep neural nets"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e0d7d009b4444c2cba6f24ce6fa018e7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "SNOMED Clinical Terms (SNOMED CT) is one of the most widespread ontologies in the life sciences, with more than 300,000 concepts and relationships, but is distributed with no associated software tools. In this paper we present MySNOM, a web-based SNOMED CT browser. MySNOM allows organizations to browse their own distribution of SNOMED CT under a controlled environment, focuses on navigating using the structure of SNOMED CT, and has diagramming capabilities.", "targets": "Are SNOMED CT Browsers Ready for Institutions? Introducing MySNOM"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-213a040036e74167801102f6cb62071f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Deep CCA is a recently proposed deep neural network extension to the traditional canonical correlation analysis (CCA), and has been successful for multi-view representation learning in several domains. However, stochastic optimization of the deep CCA objective is not straightforward, because it does not decouple over training examples. Previous optimizers for deep CCA are either batch-based algorithms or stochastic optimization using large minibatches, which can have high memory consumption. In this paper, we tackle the problem of stochastic optimization for deep CCA with small minibatches, based on an iterative solution to the CCA objective, and show that we can achieve as good performance as previous optimizers and thus alleviate the memory requirement.", "targets": "Stochastic Optimization for Deep CCA via Nonlinear Orthogonal Iterations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7a7c3c5b9e474aba8d92d7a75ccc63d3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Besides spoken words, speech signals also carry information about speaker gender, age, and emotional state, which can be used in a variety of speech analysis applications. In this paper, a divide-and-conquer strategy for ensemble classification has been proposed to recognize emotions in speech. Intrinsic hierarchy in emotions has been utilized to construct an emotions tree, which assisted in breaking down the emotion recognition task into smaller subtasks. The proposed framework generates predictions in three phases. Firstly, emotions are detected in the input speech signal by classifying it as neutral or emotional. If the speech is classified as emotional, then in the second phase, it is further classified into positive and negative classes. Finally, individual positive or negative emotions are identified based on the outcomes of the previous stages. Several experiments have been performed on a widely used benchmark dataset.
The proposed method was able to achieve improved recognition rates as compared to several other approaches.", "targets": "Divide-and-Conquer based Ensemble to Spot Emotions in Speech using MFCC and Random Forest"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-de8ed1ab765542f3b0cdb15937435f0a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study the stability vis-\u00e0-vis adversarial noise of the matrix factorization algorithm for matrix completion. In particular, our results include: (I) we bound the gap between the solution matrix of the factorization method and the ground truth in terms of root mean square error; (II) we treat the matrix factorization as a subspace fitting problem and analyze the difference between the solution subspace and the ground truth; (III) we analyze the prediction error of individual users based on the subspace stability. We apply these results to the problem of collaborative filtering under manipulator attack, which leads to useful insights and guidelines for collaborative filtering system design.", "targets": "Stability of Matrix Factorization for Collaborative Filtering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7e654e1ee6864df98c16777d454773ba", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Content on the Internet is heterogeneous and arises from various domains like News, Entertainment, Finance and Technology. Understanding such content requires identifying named entities (persons, places and organizations) as one of the key steps. Traditionally, Named Entity Recognition (NER) systems have been built using available annotated datasets (like CoNLL, MUC) and demonstrate excellent performance. However, these models fail to generalize onto other domains like Sports and Finance where conventions and language use can differ significantly. Furthermore, several domains do not have large amounts of annotated labeled data for training robust Named Entity Recognition models. A key step towards addressing this challenge is to adapt models learned on domains where large amounts of annotated training data are available to domains with scarce annotated data. In this paper, we propose methods to effectively adapt models learned on one domain onto other domains using distributed word representations. First we analyze the linguistic variation present across domains to identify key linguistic insights that can boost performance across domains. We propose methods to capture domain-specific semantics of word usage in addition to global semantics. We then demonstrate how to effectively use such domain-specific knowledge to learn NER models that outperform previous baselines in the domain adaptation setting.", "targets": "Domain Adaptation for Named Entity Recognition in Online Media with Word Embeddings"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a3eede37103044a7ab9ae32e0eaec414", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Humans can ground natural language commands to tasks at both abstract and fine-grained levels of specificity. For instance, a human forklift operator can be instructed to perform a high-level action, like \u201cgrab a pallet\u201d or a low-level action like \u201ctilt back a little bit.\u201d While robots are also capable of grounding language commands to tasks, previous methods implicitly assume that all commands and tasks reside at a single, fixed level of abstraction. Additionally, those approaches that do not use abstraction experience inefficient planning and execution times due to the large, intractable state-action spaces, which closely resemble real-world complexity. In this work, by grounding commands to all the tasks or subtasks available in a hierarchical planning framework, we arrive at a model capable of interpreting language at multiple levels of specificity ranging from coarse to more granular. We show that the accuracy of the grounding procedure is improved when simultaneously inferring the degree of abstraction in language used to communicate the task. Leveraging hierarchy also improves efficiency: our proposed approach enables a robot to respond to a command within one second on 90% of our tasks, while baselines take over twenty seconds on half the tasks. Finally, we demonstrate that a real, physical robot can ground commands at multiple levels of abstraction allowing it to efficiently plan different subtasks within the same planning hierarchy.", "targets": "Accurately and Efficiently Interpreting Human-Robot Instructions of Varying Granularities"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0e1527ee31384fefa8a923f1401cf050", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Many real-world machine learning applications involve several learning tasks which are inter-related. For example, in the healthcare domain, we need to learn a predictive model of a certain disease for many hospitals. The models for each hospital may be different because of the inherent differences in the distributions of the patient populations. However, the models are also closely related because of the nature of the learning tasks modeling the same disease. By simultaneously learning all the tasks, the multi-task learning (MTL) paradigm performs inductive knowledge transfer among tasks to improve the generalization performance. When datasets for the learning tasks are stored at different locations, it may not always be feasible to transfer the data to provide a data-centralized computing environment due to various practical issues such as high data volume and privacy. In this paper, we propose a principled MTL framework for distributed and asynchronous optimization to address the aforementioned challenges. In our framework, a gradient update does not wait for collecting the gradient information from all the tasks. Therefore, the proposed method is very efficient when the communication delay is too high for some task nodes. We show that many regularized MTL formulations can benefit from this framework, including the low-rank MTL for shared subspace learning.
Empirical studies on both synthetic and real-world datasets demonstrate the efficiency and effectiveness of the proposed framework.", "targets": "Asynchronous Multi-Task Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9a7d1cfe81dc445ea1813f85c874e9a3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This is a working paper summarizing results of an ongoing research project whose aim is to uniquely characterize the uncertainty measure for the Dempster-Shafer Theory. A set of intuitive axiomatic requirements is presented, some of their implications are shown, and the proof is given of the minimality of the recently proposed measure AU among all measures satisfying the proposed requirements.", "targets": "Toward a Characterization of Uncertainty Measure for the Dempster-Shafer Theory"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8aa7c4c51f6b4b5b9ea890d7d1ff93e3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Estimators of information theoretic measures such as entropy and mutual information are a basic workhorse for many downstream applications in modern data science. State-of-the-art approaches have been either geometric (nearest neighbor (NN) based) or kernel based (with a globally chosen bandwidth). In this paper, we combine both these approaches to design new estimators of entropy and mutual information that outperform state-of-the-art methods. Our estimator uses local bandwidth choices of k-NN distances with a finite k, independent of the sample size. Such a local and data-dependent choice improves performance in practice, but the bandwidth is vanishing at a fast rate, leading to a non-vanishing bias. We show that the asymptotic bias of the proposed estimator is universal; it is independent of the underlying distribution. Hence, it can be precomputed and subtracted from the estimate. As a byproduct, we obtain a unified way of obtaining both kernel and NN estimators. The corresponding theoretical contribution relating the asymptotic geometry of nearest neighbors to order statistics is of independent mathematical interest.", "targets": "Breaking the Bandwidth Barrier: Geometrical Adaptive Entropy Estimation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-29c6a144f3644b2e84024d9bd1c4e0b5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The basic features of some of the most versatile and popular open source frameworks for machine learning (TensorFlow, Deep Learning4j, and H2O) are considered and compared. Their comparative analysis was performed and conclusions were made as to the advantages and disadvantages of these platforms. The performance tests for the de facto standard MNIST data set were carried out on the H2O framework for deep learning algorithms designed for CPU and GPU platforms for single-threaded and multithreaded modes of operation.
", "targets": "Comparative Analysis of Open Source Frameworks for Machine Learning with Use Case in Single-Threaded and Multi-Threaded Modes"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5a65673632194dee84c3c297cd626a79", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study the problem of learning the best Bayesian network structure with respect to a decomposable score such as BDe, BIC or AIC. This problem is known to be NP-hard, which means that solving it becomes quickly infeasible as the number of variables increases. Nevertheless, in this paper we show that it is possible to learn the best Bayesian network structure with over 30 variables, which covers many practically interesting cases. Our algorithm is less complicated and more efficient than the techniques presented earlier. It can be easily parallelized, and offers a possibility for efficient exploration of the best networks consistent with different variable orderings. In the experimental part of the paper we compare the performance of the algorithm to the previous state-of-the-art algorithm. Free source code and an online demo can be found at http://b-course.hiit.fi/bene.", "targets": "A Simple Approach for Finding the Globally Optimal Bayesian Network Structure"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b313b960926245eba5596843820578a1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "State-of-the-art answer set programming (ASP) solvers rely on a program called a grounder to convert non-ground programs containing variables into variable-free, propositional programs. The size of this grounding depends heavily on the size of the non-ground rules, and thus, reducing the size of such rules is a promising approach to improve solving performance. To this end, in this paper we announce lpopt, a tool that decomposes large logic programming rules into smaller rules that are easier to handle for current solvers. The tool is specifically tailored to handle the standard syntax of the ASP language (ASP-Core) and makes it easier for users to write efficient and intuitive ASP programs, which would otherwise often require significant hand-tuning by expert ASP engineers. It is based on an idea proposed by Morak and Woltran (2012) that we extend significantly in order to handle the full ASP syntax, including complex constructs like aggregates, weak constraints, and arithmetic expressions. We present the algorithm, the theoretical foundations on how to treat these constructs, as well as an experimental evaluation showing the viability of our approach.", "targets": "lpopt: A Rule Optimization Tool for Answer Set Programming"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9ca8eaa6cf814a9eb7feb9681a2bc61c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce AI rationalization, an approach for generating explanations of autonomous system behavior as if a human had performed the behavior.
We describe a rationalization technique that uses neural machine translation to translate internal state-action representations of the autonomous agent into natural language. We evaluate our technique in the Frogger game environment. The natural language is collected from human players thinking out loud as they play the game. We motivate the use of rationalization as an approach to explanation generation, show the results of experiments on the accuracy of our rationalization technique, and describe a future research agenda.", "targets": "Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-11e1da6761c345b7ad20be120a49e043", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Cumulative prospect theory (CPT) is known to model human decisions well, with substantial empirical evidence supporting this claim. CPT works by distorting probabilities and is more general than the classic expected utility and coherent risk measures. We bring this idea to a risk-sensitive reinforcement learning (RL) setting and design algorithms for both estimation and control. The estimation scheme that we propose uses the empirical distribution in order to estimate the CPT-value of a random variable. We then use this scheme in the inner loop of policy optimization procedures for a Markov decision process (MDP). We propose both gradient-based as well as gradient-free policy optimization algorithms. The former includes both first-order and second-order methods that are based on the well-known simulation optimization idea of simultaneous perturbation stochastic approximation (SPSA), while the latter is based on a reference distribution that concentrates on the global optima. Using an empirical distribution over the policy space in conjunction with Kullback-Leibler (KL) divergence to the reference distribution, we get a global policy optimization scheme. We provide theoretical convergence guarantees for all the proposed algorithms.", "targets": "Cumulative Prospect Theory Meets Reinforcement Learning: Estimation and Control"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-549c48e7e22e41f585800d8cf7c63a86", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "5", "targets": "Leveraging over priors for boosting control of prosthetic hands"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8e99ced076a24016b60c90df3b5d7d58", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study the problem of adaptive control of a high-dimensional linear quadratic (LQ) system. Previous work established the asymptotic convergence to an optimal controller for various adaptive control schemes. More recently, for the average-cost LQ problem, a regret bound of O(\u221aT) was shown, apart from logarithmic factors. However, this bound scales exponentially with p, the dimension of the state space. In this work we consider the case where the matrices describing the dynamics of the LQ system are sparse and their dimensions are large.
We present an adaptive control scheme that achieves a regret bound of O(p\u221aT), apart from logarithmic factors. In particular, our algorithm has an average cost of (1 + \u01eb) times the optimum cost after T = polylog(p)O(1/\u01eb). This is in comparison to previous work on dense dynamics, where the algorithm requires time that scales exponentially with the dimension in order to achieve a regret of \u01eb times the optimal cost. We believe that our result has prominent applications in the emerging area of computational advertising, in particular targeted online advertising and advertising in social networks.", "targets": "Efficient Reinforcement Learning for High Dimensional Linear Quadratic Systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-587368394f2949fe97b94b4c3ebe0f21", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Along with data on the web increasing dramatically, hashing is becoming more and more popular as a method of approximate nearest neighbor search. Previous supervised hashing methods utilized a similarity/dissimilarity matrix to get semantic information. But the matrix is not easy to construct for a new dataset. Rather than reconstruct the matrix, we proposed a straightforward CNN-based hashing method, i.e., binarizing the activations of a fully connected layer with threshold 0 and taking the binary result as hash codes. This method achieved the best performance on CIFAR-10 and was comparable with the state-of-the-art on MNIST. And our experiments on CIFAR-10 suggested that the signs of activations may carry more information than the relative values of activations between samples, and that the co-adaptation between the feature extractor and hash functions is important for hashing.", "targets": "CNN Based Hashing for Image Retrieval"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d1f2de787f344d129360a76ad2a890b0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Separable Bayesian Networks, or the Influence Model, are dynamic Bayesian Networks in which the conditional probability distribution can be separated into a function of only the marginal distribution of a node\u2019s parents, instead of the joint distributions. We describe the connection between an arbitrary Conditional Probability Table (CPT) and separable systems using linear algebra. We give an alternate proof to [Pfeffer00] on the equivalence of sufficiency and separability. We present a computational method for testing whether a given CPT is separable.", "targets": "Linear Algebra Approach to Separable Bayesian Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b0ddad2a938045888a8eef35d98a6964", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this thesis, we study the problem of recognizing video sequences of fingerspelled letters in American Sign Language (ASL).
Fingerspelling comprises a significant but relatively understudied part of ASL, and recognizing it is challenging for a number of reasons: It involves quick, small motions that are often highly coarticulated; it exhibits significant variation between signers; and there has been a dearth of continuous fingerspelling data collected. In this work, we propose several types of recognition approaches, and explore the signer variation problem. Our best-performing models are segmental (semi-Markov) conditional random fields using deep neural network-based features. In the signer-dependent setting, our recognizers achieve letter error rates of about 8%. The signer-independent setting is much more challenging, but with neural network adaptation we achieve letter error rates of about 17%.", "targets": "American Sign Language fingerspelling recognition from video: Methods for unrestricted recognition and signer-independence"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f266720ef1ab4cd2a76b14133fdbbc15", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Finite chase, or alternatively chase termination, is an important condition to ensure the decidability of existential rule languages. In the past few years, a number of rule languages with finite chase have been studied. In this work, we propose a novel approach for classifying the rule languages with finite chase. Using this approach, a family of decidable rule languages, which extend the existing languages with the finite chase property, is naturally defined. We then study the complexity of these languages. Although all of them are tractable for data complexity, we show that their combined complexity can be arbitrarily high. Furthermore, we prove that all the rule languages with finite chase that extend the weakly acyclic language are of the same expressiveness as the weakly acyclic one, while rule languages with higher combined complexity are in general more succinct than those with lower combined complexity.", "targets": "Existential Rule Languages with Finite Chase: Complexity and Expressiveness"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ba90e1c2121d4cc39c4983a9200018e2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Derivational morphology is a fundamental and complex characteristic of language. In this paper we propose the new task of predicting the derivational form of a given base-form lemma that is appropriate for a given context. We present an encoder\u2013decoder style neural network to produce a derived form character-by-character, based on its corresponding character-level representation of the base form and the context. We demonstrate that our model is able to generate valid context-sensitive derivations from known base forms, but is less accurate under a lexicon-agnostic setting.", "targets": "Context-Aware Prediction of Derivational Word-forms"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3519a685880d42bc9399c0c4698be355", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "The large and growing amounts of online scholarly data present both challenges and opportunities to enhance knowledge discovery. One such challenge is to automatically extract a small set of keyphrases from a document that can accurately describe the document\u2019s content and can facilitate fast information processing. In this paper, we propose PositionRank, an unsupervised model for keyphrase extraction from scholarly documents that incorporates information from all positions of a word\u2019s occurrences into a biased PageRank. Our model obtains remarkable improvements in performance over PageRank models that do not take into account word positions, as well as over strong baselines for this task. Specifically, on several datasets of research papers, PositionRank achieves improvements as high as 29.09%.", "targets": "PositionRank: An Unsupervised Approach to Keyphrase Extraction from Scholarly Documents"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ad6503b23a59475d88b9aa2a41bbe6b2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The generative model has been one of the most common approaches for solving the Dialog State Tracking Problem with the capabilities to model the dialog hypotheses in an explicit manner. The most important task in such Bayesian network models is constructing the most reliable user models by learning and reflecting the training data into the probability distribution of user actions conditional on the networks\u2019 states. This paper provides an overall picture of the learning process in a Bayesian framework with an emphasis on the state-of-the-art theoretical analyses of the Expectation Maximization learning algorithm.", "targets": "The Dialog State Tracking Challenge with Bayesian Approach"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-68393b5ecf4d4f26b9de0ef7f0cb64df", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The problem of sparse rewards is one of the hardest challenges in contemporary reinforcement learning. Hierarchical reinforcement learning (HRL) tackles this problem by using a set of temporally-extended actions, or options, each of which has its own subgoal. These subgoals are normally handcrafted for specific tasks. Here, though, we introduce a generic class of subgoals with broad applicability in the visual domain. Underlying our approach (in common with work using \u201cauxiliary tasks\u201d) is the hypothesis that the ability to control aspects of the environment is an inherently useful skill to have. We incorporate such subgoals in an end-to-end hierarchical reinforcement learning system and test two variants of our algorithm on a number of games from the Atari suite. We highlight the advantage of our approach in one of the hardest games \u2013 Montezuma\u2019s Revenge \u2013 for which the ability to handle sparse rewards is key.
Our agent learns several times faster than the current state-of-the-art HRL agent in this game, reaching a similar level of performance.", "targets": "Feature Control as Intrinsic Motivation for Hierarchical Reinforcement Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-de19b3b9f027427f905b78d09975a2a3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Modeling spike firing assumes that spiking statistics are Poisson, but real data violates this assumption. To capture non-Poissonian features, in order to fix the inevitable inherent irregularity, researchers rescale the time axis with tedious computational overhead instead of searching for another distribution! Spikes or action potentials are precisely-timed changes in the ionic transport through synapses adjusting the synaptic weight, successfully modeled and developed as a memristor. The memristance value is a multiple of the initial resistance. This reminds us of the foundations of quantum mechanics. We try to quantize potential and resistance, as done with energy. After reviewing the Planck curve for blackbody radiation, we propose the quantization equations. We introduce and prove a theorem that quantizes the resistance. Then we define the tyke, showing its basic characteristics. Finally, we give the basic transformations to model spiking and link an energy quantum to a tyke. Investigation shows how this perfectly models neuron spiking, with an over 97% match. All MATLAB code used is provided in the appendix.", "targets": "Spike and Tyke, the Quantized Neuron Model"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-94ec42701a804e0385714ed126e89234", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Quality assurance remains a key topic in human computation research. Prior work indicates that majority voting is effective for low-difficulty tasks, but has limitations for harder tasks. This paper explores two methods of addressing this problem: tournament selection and elimination selection, which exploit 2-, 3- and 4-way comparisons between different answers to human computation tasks. Our experimental results and statistical analyses show that both methods produce the correct answer in a noisy human computation environment more often than majority voting. Furthermore, we find that the use of 4-way comparisons can significantly reduce the cost of quality assurance relative to the use of 2-way comparisons.", "targets": "WHEN MAJORITY VOTING FAILS: COMPARING QUALITY ASSURANCE METHODS FOR NOISY HUMAN COMPUTATION ENVIRONMENT"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4bc568b01a9f429eb7c8c8b438464e02", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In the a posteriori approach to multiobjective optimization, the Pareto front is approximated by a finite set of solutions in the objective space. The quality of the approximation can be measured by different indicators that take into account the approximation\u2019s closeness to the Pareto front and its distribution along the Pareto front. In particular, the averaged Hausdorff indicator prefers an almost uniform distribution.
An observed drawback of multiobjective estimation of distribution algorithms (MEDAs) is that, as is common for randomized metaheuristics, the final population usually is not uniformly distributed along the Pareto front. Therefore, we propose a postprocessing strategy which consists of applying the averaged Hausdorff indicator to the complete archive of generated solutions after optimization in order to select a uniformly distributed subset of nondominated solutions from the archive. In this paper, we put forward a strategy for extracting the above-described subset. The effectiveness of the proposal is contrasted in a series of experiments that involve different MEDAs and filtering techniques.", "targets": "Averaged Hausdorff Approximations of Pareto Fronts based on Multiobjective Estimation of Distribution Algorithms"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-71ffe28cc07d4531b01a7d8ae3ba4cb4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In machine learning, there is a fundamental trade-off between ease of optimization and expressive power. Neural Networks, in particular, have enormous expressive power and yet are notoriously challenging to train. The nature of that optimization challenge changes over the course of learning. Traditionally in deep learning, one makes a static trade-off between the needs of early and late optimization. In this paper, we investigate a novel framework, GradNets, for dynamically adapting architectures during training to get the benefits of both. For example, we can gradually transition from linear to non-linear networks, deterministic to stochastic computation, shallow to deep architectures, or even simple downsampling to fully differentiable attention mechanisms. Benefits include increased accuracy, easier convergence with more complex architectures, solutions to test-time execution of batch normalization, and the ability to train networks of up to 200 layers.", "targets": "GRADNETS: DYNAMIC INTERPOLATION BETWEEN NEURAL ARCHITECTURES"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-989015f2063d48ec80e5ae81928887b0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The vocabulary mismatch problem is a long-standing problem in information retrieval. Semantic matching holds the promise of solving the problem. Recent advances in language technology have given rise to unsupervised neural models for learning representations of words as well as bigger textual units. Such representations enable powerful semantic matching methods. This survey is meant as an introduction to the use of neural models for semantic matching. To remain focused, we limit ourselves to web search. We detail the required background and terminology, a taxonomy grouping the rapidly growing body of work in the area, and then survey work on neural models for semantic matching in the context of three tasks: query suggestion, ad retrieval, and document retrieval. We include a section on resources and best practices that we believe will help readers who are new to the area.
We conclude with an assessment of the state-of-the-art and suggestions for future work.", "targets": "Getting Started with Neural Models for Semantic Matching in Web Search"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-495519577ce44e3db44483bfbd9e0ddd", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This software-project-based paper presents a vision of the near future in which computer interaction is characterised by natural face-to-face conversations with lifelike characters that speak, emote, and gesture. The first step is speech. The dream of a true virtual reality, a complete human-computer interaction system, will not come true unless we give some perception to the machine and make it perceive the outside world as humans communicate with each other. This software project is under development for a \u201clistening and replying machine (Computer) through speech\u201d. The speech interface is developed to convert speech input into a parametric form (Speech-to-Text) for further processing, and to convert the resulting text output to speech via synthesis (Text-to-Speech).", "targets": "Speech_Urmila"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-37ca63b5c86d4fd794e97bca0c592041", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Empirical risk minimization (ERM) is a fundamental learning rule for statistical learning problems where the data is generated according to some unknown distribution P and returns a hypothesis f chosen from a fixed class F with small loss \u2113. In the parametric setting, depending upon (\u2113, F, P), ERM can have slow (1/\u221an) or fast (1/n) rates of convergence of the excess risk as a function of the sample size n. There exist several results that give sufficient conditions for fast rates in terms of joint properties of \u2113, F, and P, such as the margin condition and the Bernstein condition. In the non-statistical prediction with expert advice setting, there is an analogous slow and fast rate phenomenon, and it is entirely characterized in terms of the mixability of the loss \u2113 (there being no role there for F or P). The notion of stochastic mixability builds a bridge between these two models of learning, reducing to classical mixability in a special case. The present paper presents a direct proof of fast rates for ERM in terms of stochastic mixability of (\u2113, F, P), and in so doing provides new insight into the fast-rates phenomenon. The proof exploits an old result of Kemperman on the solution to the general moment problem. We also show a partial converse that suggests a characterization of fast rates for ERM in terms of stochastic mixability is possible.", "targets": "From Stochastic Mixability to Fast Rates"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a02be19c44754ff3be64afbb76f43342", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a novel neural model HyperVec to learn hierarchical embeddings for hypernymy detection and directionality.
While previous embeddings have shown limitations on prototypical hypernyms, HyperVec represents an unsupervised measure where embeddings are learned in a specific order and capture the hypernym\u2013hyponym distributional hierarchy. Moreover, our model is able to generalize over unseen hypernymy pairs, when using only small sets of training data, and by mapping to other languages. Results on benchmark datasets show that HyperVec outperforms both state-of-the-art unsupervised measures and embedding models on hypernymy detection and directionality, and on predicting graded lexical entailment.", "targets": "Hierarchical Embeddings for Hypernymy Detection and Directionality"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4fcda61fa20e4f16913f00d8c0a49e8f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We train a generative convolutional neural network which is able to generate images of objects given object type, viewpoint, and color. We train the network in a supervised manner on a dataset of rendered 3D chair models. Our experiments show that the network does not merely learn all images by heart, but rather finds a meaningful representation of a 3D chair model allowing it to assess the similarity of different chairs, interpolate between given viewpoints to generate the missing ones, or invent new chair styles by interpolating between chairs from the training set. We show that the network can be used to find correspondences between different chairs from the dataset, outperforming existing approaches on this task.", "targets": "Learning to Generate Chairs with Convolutional Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e35c42c5c4274d2a91f43091256158ab", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, the continuity and strong continuity in domain-free information algebras and labeled information algebras are introduced respectively. A more general concept of continuous function which is defined between two domain-free continuous information algebras is presented. It is shown that, with the operations combination and focusing, the set of all continuous functions between two domain-free s-continuous information algebras forms a new s-continuous information algebra. By studying the relationship between domain-free information algebras and labeled information algebras, it is demonstrated that they do correspond to each other on s-compactness.", "targets": "CONTINUITY IN INFORMATION ALGEBRAS: A SURVEY ON THE RELATIONSHIP BETWEEN TWO TYPES OF INFORMATION ALGEBRAS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3c860ab81ba74ec994cd316fb29e6406", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a language complexity analysis of World of Warcraft (WoW) community texts, which we compare to texts from a general corpus of web English. Results from several complexity types are presented, including lexical diversity, density, readability and syntactic complexity. The language of WoW texts is found to be comparable to the general corpus on some complexity measures, yet more specialized on other measures.
Our findings can be used by educators willing to include game-related activities in school curricula.", "targets": "An investigation into language complexity of World-of-Warcraft game-external texts"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-600395fb736b48f895a6567400e18772", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The sparse, hierarchical and modular processing of natural signals is a characteristic that relates to the ability of humans to recognise objects with high accuracy. In this paper, we report a sparse feature processing and encoding method targeted at improving the recognition performance of an automated object recognition system. Randomly distributed selection of localised gradient-enhanced features followed by the application of aggregate functions represents a modular and hierarchical approach to detect the object features. These object features, in combination with a minimum distance classifier, result in object recognition system accuracies of 93% using ALOI, 92% using COIL-100 databases and 69% using PASCAL visual object challenge 2007 database, respectively. Robustness of object recognition performance is tested for variations in noise, object scaling and object shifts. Finally, a comparison with 8 existing object recognition methods indicated an improvement in recognition accuracy of 10% in ALOI, 8% in the case of COIL-100 databases and 10% in PASCAL visual object challenge 2007 database.", "targets": "Sparse distributed localised gradient fused features of objects"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7ed4265567b7442bbaff00eefe84d415", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This is an overview paper written in the style of a research proposal. In recent years we introduced a general framework for large-scale unconstrained optimization \u2013 Sequential Subspace Optimization (SESOP) \u2013 and demonstrated its usefulness for sparsity-based signal/image denoising, deconvolution, compressive sensing, computed tomography, diffraction imaging, and support vector machines. We explored its combination with Parallel Coordinate Descent and Separable Surrogate Function methods, obtaining state of the art results in above-mentioned areas. There are several methods that are faster than plain SESOP under specific conditions: Trust region Newton method for problems with an easily invertible Hessian matrix; Truncated Newton method when fast multiplication by the Hessian is available; Stochastic optimization methods for problems with large stochastic-type data; Multigrid methods for problems with nested multilevel structure. Each of these methods can be further improved by merging with SESOP. One can also accelerate the Augmented Lagrangian method for constrained optimization problems and the Alternating Direction Method of Multipliers for problems with separable objective function and non-separable constraints.", "targets": "Speeding-Up Convergence via Sequential Subspace Optimization: Current State and Future Directions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-cb2a0e74b82f4cf4b047a39b5d8e7435", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "This paper presents a new 3D point cloud classification benchmark data set with over four billion manually labelled points, meant as input for data-hungry (deep) learning methods. We also discuss first submissions to the benchmark that use deep convolutional neural networks (CNNs) as a workhorse, which already show remarkable performance improvements over the state-of-the-art. CNNs have become the de-facto standard for many tasks in computer vision and machine learning like semantic segmentation or object detection in images, but have not yet led to a true breakthrough for 3D point cloud labelling tasks due to a lack of training data. With the massive data set presented in this paper, we aim at closing this data gap to help unleash the full potential of deep learning methods for 3D labelling tasks. Our semantic3D.net data set consists of dense point clouds acquired with static terrestrial laser scanners. It contains 8 semantic classes and covers a wide range of urban outdoor scenes: churches, streets, railroad tracks, squares, villages, soccer fields and castles. We describe our labelling interface and show that our data set provides more dense and complete point clouds with a much higher overall number of labelled points compared to those already available to the research community. We further provide baseline method descriptions and a comparison between methods submitted to our online system. We hope semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations, and first submissions after only a few months indicate that this might indeed be the case.", "targets": "SEMANTIC3D.NET: A NEW LARGE-SCALE POINT CLOUD CLASSIFICATION BENCHMARK"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4f6d6a7fd01a4e2780a50ecda4acc256", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Bound Founded Answer Set Programming (BFASP) is an extension of Answer Set Programming (ASP) that extends stable model semantics to numeric variables. While the theory of BFASP is defined on ground rules, in practice BFASP programs are written as complex non-ground expressions. Flattening of BFASP is a technique used to simplify arbitrary expressions of the language to a small and well-defined set of primitive expressions. In this paper, we first show how we can flatten arbitrary BFASP rule expressions, to give equivalent BFASP programs. Next, we extend the bottom-up grounding technique and magic set transformation used by ASP to BFASP programs. Our implementation shows that for BFASP problems, these techniques can significantly reduce the ground program size, and improve subsequent solving.", "targets": "Grounding Bound Founded Answer Set Programs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e16577736b5349ddbc1c14fbaae27712", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce an online neural sequence-to-sequence model that learns to alternate between encoding and decoding segments of the input as it is read.
By independently tracking the encoding and decoding representations, our algorithm permits exact polynomial marginalization of the latent segmentation during training, and during decoding, beam search is employed to find the best alignment path together with the predicted output sequence. Our model tackles the bottleneck of vanilla encoder-decoders that have to read and memorize the entire input sequence in their fixed-length hidden states before producing any output. It is different from previous attentive models in that, instead of treating the attention weights as the output of a deterministic function, our model assigns attention weights to a sequential latent variable which can be marginalized out and permits online generation. Experiments on abstractive sentence summarization and morphological inflection show significant performance gains over the baseline encoder-decoders.", "targets": "Online Segment to Segment Neural Transduction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-cb8ad0b30e7a409c9d5c287d88c7dbf4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "There is an increasing consensus among researchers that making a computer emotionally intelligent with the ability to decode human affective states would allow a more meaningful and natural way of human-computer interactions (HCIs). One unobtrusive and non-invasive way of recognizing human affective states entails the exploration of how physiological signals vary under different emotional experiences. In particular, this paper explores the correlation between autonomically-mediated changes in multimodal body signals and discrete emotional states. In order to fully exploit the information in each modality, we have provided an innovative classification approach for three specific physiological signals including Electromyogram (EMG), Blood Volume Pressure (BVP) and Galvanic Skin Response (GSR). These signals are analyzed as inputs to an emotion recognition paradigm based on the fusion of a series of weak learners. Our proposed classification approach showed 88.1% recognition accuracy, which outperformed the conventional Support Vector Machine (SVM) classifier with a 17% accuracy improvement. Furthermore, in order to avoid information redundancy and the resultant over-fitting, a feature reduction method is proposed based on a correlation analysis to optimize the number of features required for training and validating each weak learner. Results showed that despite the feature space dimensionality reduction from 27 to 18 features, our methodology preserved a recognition accuracy of about 85.0%. This reduction in complexity will get us one step closer towards embedding this human emotion encoder in wireless and wearable HCI platforms.", "targets": "Decoding Emotional Experience through Physiological Signal Processing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2a4aed0ebce14ea2ba33731dba1f02bf", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Hybrid Probabilistic Programs (HPPs) are logic programs that allow the programmer to explicitly encode his knowledge of the dependencies between events being described in the program. In this paper, we classify HPPs into three classes called HPP1, HPP2 and HPPr, r \u2265 3.
For these classes, we provide three types of results for HPPs. First, we develop algorithms to compute the set of all ground consequences of an HPP. Then we provide algorithms and complexity results for the problems of entailment (\"Given an HPP P and a query Q as input, is Q a logical consequence of P?\") and consistency (\"Given an HPP P as input, is P consistent?\"). Our results provide a fine characterization of when polynomial algorithms exist for the above problems, and when these problems become intractable.", "targets": "Hybrid Probabilistic Programs: Algorithms and Complexity"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8d057725f6ce4af29a8fcb71f6849833", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "How to build a machine learning method that can continuously gain structured visual knowledge by learning structured facts? Our goal in this paper is to address this question by proposing a problem setting, where training data comes as structured facts in images with different types including (1) objects (e.g., ), (2) attributes (e.g., ), (3) actions (e.g., ), (4) interactions (e.g., ). Each structured fact has a semantic language view (e.g., <boy, playing>) and a visual view (an image with this fact). A human is able to efficiently gain visual knowledge by learning facts in a never-ending process, and, as we believe, in a structured way (e.g., understanding \u201cplaying\u201d is the action part of <boy, playing>, and hence can generalize to recognize if just learn additionally). Inspired by human visual perception, we propose a model that is (1) able to learn a representation, which we name a wild-card, that covers different types of structured facts, (2) could flexibly get fed with structured fact language-visual view pairs in a never-ending way to gain more structured knowledge, (3) could generalize to unseen facts, and (4) allows retrieval of both the fact language view given the visual view (i.e., image) and vice versa. We also propose a novel method to generate hundreds of thousands of structured fact pairs from image caption data, which are necessary to train our model and can be useful for other applications.", "targets": "SHERLOCK: MODELING STRUCTURED KNOWLEDGE IN IMAGES"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d341db7f48dc4f66b8906f1ee12f0de9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The paper introduces a new modular action language, ALM, and illustrates the methodology of its use. It is based on the approach of Gelfond and Lifschitz (1993; 1998) in which a high-level action language is used as a front end for a logic programming system description. The resulting logic programming representation is used to perform various computational tasks. The methodology based on existing action languages works well for small and even medium-size systems, but is not meant to deal with larger systems that require structuring of knowledge. ALM is meant to remedy this problem. Structuring of knowledge in ALM is supported by the concepts of module (a formal description of a specific piece of knowledge packaged as a unit), module hierarchy, and library, and by the division of a system description of ALM into two parts: theory and structure.
A theory consists of one or more modules with a common theme, possibly organized into a module hierarchy based on a dependency relation. It contains declarations of sorts, attributes, and properties of the domain together with axioms describing them. Structures are used to describe the domain\u2019s objects. These features, together with the means for defining classes of a domain as special cases of previously defined ones, facilitate the stepwise development, testing, and readability of a knowledge base, as well as the creation of knowledge representation libraries.", "targets": "Modular Action Language ALM"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e2995ae9cda94df498f340e8e237106e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Modern distributed cyber-physical systems encounter a large variety of anomalies and, in many cases, are vulnerable to catastrophic fault propagation scenarios due to strong connectivity among the sub-systems. In this regard, root-cause analysis becomes highly intractable due to complex fault propagation mechanisms in combination with diverse operating modes. This paper presents a new data-driven framework for root-cause analysis for addressing such issues. The framework is based on a spatiotemporal feature extraction scheme for multivariate time series built on the concept of symbolic dynamics for discovering and representing causal interactions among subsystems of a complex system. We propose sequential state switching (S) and artificial anomaly association (A) methods to implement root-cause analysis in an unsupervised and a semi-supervised manner, respectively. Synthetic data from cases with failed pattern(s) and anomalous node are simulated to validate the proposed approaches, and then compared with the performance of vector autoregressive (VAR) model-based root-cause analysis. The results show that: (1) the S and A approaches can obtain high accuracy in root-cause analysis and successfully handle multiple nominal operation modes, and (2) the proposed tool-chain is shown to be scalable while maintaining high accuracy.", "targets": "Root-cause analysis for time-series anomalies via spatiotemporal causal graphical modeling"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-78b6b6504ab94b88ac658b699c39f550", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "To improve user satisfaction, mobile app developers are interested in relevant user opinions such as complaints or suggestions. An important source for such opinions is user reviews on online app markets. However, manual review analysis for useful opinions is often challenging due to the large amount and the noisy nature of user reviews. To address this problem, we propose M.A.R.K, a keyword-based framework for semi-automated review analysis. The key task of M.A.R.K is to analyze reviews for keywords of potential interest which developers can use to search for useful opinions. We have developed several techniques for that task including: 1) keyword extraction with customized regularization algorithms; 2) keyword grouping with distributed representation; and 3) keyword ranking with ratings and frequency analysis.
Our empirical evaluation and case studies show that M.A.R.K can identify keywords of high interest and provide developers with useful user opinions.", "targets": "Mining User Opinions in Mobile App Reviews: A Keyword-based Approach"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b1077466d0f14ff38ff0d025289f7288", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper develops automated testing and debugging techniques for answer set solver development. We describe a flexible grammar-based black-box ASP fuzz testing tool which is able to reveal various defects such as unsound and incomplete behavior, i.e. invalid answer sets and inability to find existing solutions, in state-of-the-art answer set solver implementations. Moreover, we develop delta debugging techniques for shrinking failure-inducing inputs on which solvers exhibit defective behavior. In particular, we develop a delta debugging algorithm in the context of answer set solving, and evaluate two different elimination strategies for the algorithm.", "targets": "Testing and Debugging Techniques for Answer Set Solver Development"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-81d00ac0ae754b91b3b550bb37f44064", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The key issue pertaining to the collection of epidemic disease data for our analysis purposes is that it is a labour-intensive, time-consuming and expensive process, resulting in the availability of only sparse sample data with which to develop prediction models. To address this sparse data issue, we present novel Incremental Transductive methods to circumvent the data collection process by applying previously acquired data to provide consistent, confidence-based labelling alternatives to field survey research. We investigated various reasoning approaches for semi-supervised machine learning including Bayesian models for labelling data. The results show that using the proposed methods, we can label instances of data with a class of vector density at a high level of confidence. By applying the Liberal and Strict Training Approaches, we provide a labelling and classification alternative to standalone algorithms. The methods in this paper are components in the process of reducing the proliferation of the Schistosomiasis disease and its effects.", "targets": "Incremental Transductive Learning Approaches to Schistosomiasis Vector Classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-219a6e8b615c466bbe7fe93714ef2ef5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Assessing uncertainty is an important step towards ensuring the safety and reliability of machine learning systems. Existing uncertainty estimation techniques may fail when their modeling assumptions are not met, e.g. when the data distribution differs from the one seen at training time. Here, we propose techniques that assess a classification algorithm\u2019s uncertainty via calibrated probabilities (i.e. probabilities that match empirical outcome frequencies in the long run) and which are guaranteed to be reliable (i.e.
accurate and calibrated) on out-of-distribution input, including input generated by an adversary. This represents an extension of classical online learning that handles uncertainty in addition to guaranteeing accuracy under adversarial assumptions. We establish formal guarantees for our methods, and we validate them on two real-world problems: question answering and medical diagnosis from genomic data.", "targets": "Estimating Uncertainty Online Against an Adversary"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-14a51577e706408890dbdbcf715f0f8d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The goal of argumentation mining, an evolving research field in computational linguistics, is to design methods capable of analyzing people\u2019s argumentation. In this article, we go beyond the state of the art in several ways. (i) We deal with actual Web data and take up the challenges given by the variety of registers, multiple domains, and unrestricted noisy user-generated Web discourse. (ii) We bridge the gap between normative argumentation theories and argumentation phenomena encountered in actual data by adapting an argumentation model tested in an extensive annotation study. (iii) We create a new gold standard corpus (90k tokens in 340 documents) and experiment with several machine learning methods to identify argument components. We offer the data, source code, and annotation guidelines to the community under free licenses. Our findings show that argumentation mining in user-generated Web discourse is a feasible but challenging task.", "targets": "Argumentation Mining in User-Generated Web Discourse"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-efdae6f7317849db9a1341aef15fbeac", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Research has shown that accent classification can be improved by integrating semantic information into a purely acoustic approach. In this work, we combine phonetic knowledge, such as vowels, with enhanced acoustic features to build an improved accent classification system. The classifier is based on Gaussian Mixture Model-Universal Background Model (GMM-UBM), with normalized Perceptual Linear Predictive (PLP) features. The features are further optimized by Principal Component Analysis (PCA) and Heteroscedastic Linear Discriminant Analysis (HLDA). Using 7 major types of accented speech from the Foreign Accented English (FAE) corpus, the system achieves a classification accuracy of 54% with input test data as short as 20 seconds, which is competitive with the state of the art in this field.", "targets": "Improved Accent Classification Combining Phonetic Vowels with Acoustic Features"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-cec49fb48ebf4acc881015fc818a0f33", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Solving sequential decision-making problems, such as text parsing, robotic control, and game playing, requires a combination of planning policies and generalisation of those plans. In this paper, we present Expert Iteration, a novel algorithm which decomposes the problem into separate planning and generalisation tasks.
Planning new policies is performed by tree search, while a deep neural network generalises those plans. In contrast, standard Deep Reinforcement Learning algorithms rely on a neural network not only to generalise plans, but to discover them too. We show that our method substantially outperforms Policy Gradients in the board game Hex, winning 84.4% of games against it when trained for equal time.", "targets": "Thinking Fast and Slow with Deep Learning and Tree Search"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8f45a16c58104cbe9f846685c94e8d8c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Structured learning has found many applications in computer vision recently. Analogous to structured support vector machines (SSVM), here we propose boosting algorithms for predicting multivariate or structured outputs, which is referred to as StructBoost. As SSVM generalizes SVM, our StructBoost generalizes standard boosting methods such as AdaBoost or LPBoost to structured learning. AdaBoost, LPBoost and many other conventional boosting methods arise as special cases of StructBoost. The resulting optimization problem of StructBoost is more challenging than SSVM in the sense that the problem of StructBoost can involve exponentially many variables and constraints. In contrast, for SSVM one usually has an exponential number of constraints and a cutting-plane method is used. In order to efficiently solve StructBoost, we propose an equivalent 1-slack formulation and solve it using a combination of cutting planes and column generation. We show the versatility and usefulness of StructBoost on a few problems such as hierarchical multi-class classification, robust visual tracking and image segmentation. In particular, we train a tracking-by-detection-based object tracker using the proposed structured boosting. Tracking is implemented as structured output prediction by maximizing the Pascal image area overlap criterion. We show that the structural tracker not only significantly outperforms conventional classification-based trackers that do not directly optimize the Pascal image overlap criterion, but also outperforms many other state-of-the-art trackers on the tested videos.", "targets": "StructBoost: Boosting Methods for Predicting Structured Output Variables"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-685ee749a5d7459482ff8e95a855a266", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In situ hybridisation gene expression information helps biologists identify where a gene is expressed. However, the databases that republish the experimental information are often both incomplete and inconsistent. This paper examines a system, Argudas, designed to help tackle these issues. Argudas is an evolution of an existing system, and so that system is reviewed as a means of both explaining and justifying the behaviour of Argudas.
Throughout the discussion of Argudas, a number of issues will be raised, including the appropriateness of argumentation in biology and the challenges faced when integrating apparently similar online biological databases.", "targets": "Argudas: arguing with gene expression information"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5caf74f763274ee6a8e787123a5b3c82", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Intrusion detection systems (IDSs) fall into two high-level categories: network-based systems (NIDS) that monitor network behaviors, and host-based systems (HIDS) that monitor system calls. In this work, we present a general technique for both systems. We use anomaly detection, which identifies patterns not conforming to a historic norm. In both types of systems, the rates of change vary dramatically over time (due to burstiness) and over components (due to service difference). To efficiently model such systems, we use continuous time Bayesian networks (CTBNs) and avoid specifying a fixed update interval common to discrete-time models. We build generative models from the normal training data, and abnormal behaviors are flagged based on their likelihood under this norm. For NIDS, we construct a hierarchical CTBN model for the network packet traces and use Rao-Blackwellized particle filtering to learn the parameters. We illustrate the power of our method through experiments on detecting real worms and identifying hosts on two publicly available network traces, the MAWI dataset and the LBNL dataset. For HIDS, we develop a novel learning method to deal with the finite resolution of system log file time stamps, without losing the benefits of our continuous time model. We demonstrate the method by detecting intrusions in the DARPA 1998 BSM dataset.", "targets": "Intrusion Detection using Continuous Time Bayesian Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2075d92d561743e5b35810b16a71e579", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Motivated by applications in computational advertising and systems biology, we consider the problem of identifying the best out of several possible soft interventions at a source node V in an acyclic causal directed graph, to maximize the expected value of a target node Y (located downstream of V). Our setting imposes a fixed total budget for sampling under various interventions, along with cost constraints on different types of interventions. We pose this as a best arm identification bandit problem with K arms, where each arm is a soft intervention at V, and leverage the information leakage among the arms to provide the first gap-dependent error and simple regret bounds for this problem. Our results are a significant improvement over the traditional best arm identification results. We empirically show that our algorithms outperform the state of the art on the Flow Cytometry dataset, and also apply our algorithm for model interpretation of the Inception-v3 deep net that classifies images.", "targets": "Identifying Best Interventions through Online Importance Sampling"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-84b3a122e1bf41f39ae66ae261b7861f", "definition": "In this task, you are given a part of an article.
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Semantic parsing has made significant progress, but most current semantic parsers are extremely slow (CKY-based) and rather primitive in representation. We introduce three new techniques to tackle these problems. First, we design the first linear-time incremental shift-reduce-style semantic parsing algorithm which is more efficient than conventional cubic-time bottom-up semantic parsers. Second, our parser, being type-driven instead of syntax-driven, uses type-checking to decide the direction of reduction, which eliminates the need for a syntactic grammar such as CCG. Third, to fully exploit the power of type-driven semantic parsing beyond simple types (such as entities and truth values), we borrow from programming language theory the concepts of subtype polymorphism and parametric polymorphism to enrich the type system in order to better guide the parsing. Our system learns very accurate parses in the GEOQUERY, JOBS and ATIS domains.", "targets": "Type-Driven Incremental Semantic Parsing with Polymorphism"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bcef4e0abb0043e5b2d0e2e391b39eb5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The minimum vertex cover problem is an NP-hard problem whose aim is to find the minimum number of vertices needed to cover a graph. In this paper, a learning automaton based algorithm is proposed to find a minimum vertex cover in a graph. In the proposed algorithm, each vertex of the graph is equipped with a learning automaton that has two actions, marking the corresponding vertex as a candidate or a non-candidate for the vertex cover set. Due to the characteristics of learning automata, this algorithm significantly reduces the number of covering vertices of the graph. The proposed algorithm, based on learning automata, iteratively minimizes the candidate vertex cover by updating the action probabilities of the automata. As the proposed algorithm proceeds, a candidate solution nears the optimal solution of the minimum vertex cover problem. In order to evaluate the proposed algorithm, several experiments were conducted on the DIMACS dataset and compared to conventional methods. Experimental results show the major superiority of the proposed algorithm over the other methods.", "targets": "Solving Minimum Vertex Cover Problem Using Learning Automata"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1e7c0143fb1d4a7f9ec213986fcb12b6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Citation texts are sometimes not very informative or in some cases inaccurate by themselves; they need the appropriate context from the referenced paper to reflect its exact contributions. To address this problem, we propose an unsupervised model that uses distributed representation of words as well as domain knowledge to extract the appropriate context from the reference paper. Evaluation results show the effectiveness of our model by significantly outperforming the state-of-the-art.
We furthermore demonstrate how an effective contextualization method results in improving citation-based summarization of scientific articles.", "targets": "Contextualizing Citations for Scientific Summarization using Word Embeddings and Domain Knowledge"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6492baf7ce264b9f9184a5c02ac446d5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents complexity analysis and variational methods for inference in probabilistic description logics featuring Boolean operators, quantification, qualified number restrictions, nominals, inverse roles and role hierarchies. Inference is shown to be PEXP-complete, and variational methods are designed so as to exploit logical inference whenever possible.", "targets": "Complexity Analysis and Variational Inference for Interpretation-based Probabilistic Description Logics"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f1256fe76da243cab7efac36850f9523", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The majority of big data is unstructured, and of this majority the largest chunk is text. While data mining techniques are well developed and standardized for structured, numerical data, the realm of unstructured data is still largely unexplored. The general focus lies on \u201cinformation extraction\u201d, which attempts to retrieve known information from text. The \u201cHoly Grail\u201d, however, is \u201cknowledge discovery\u201d, where machines are expected to unearth entirely new facts and relations that were not previously known by any human expert. Indeed, understanding the meaning of text is often considered one of the main characteristics of human intelligence. The ultimate goal of semantic artificial intelligence is to devise software that can \u201cunderstand\u201d the meaning of free text, at least in the practical sense of providing new, actionable information condensed out of a body of documents. As a stepping stone on the road to this vision, I will introduce a totally new approach to drug research, namely that of identifying relevant information by employing a self-organizing semantic engine to text mine large repositories of biomedical research papers, a technique pioneered by Merck with the InfoCodex software. I will describe the methodology and a first successful experiment for the discovery of new biomarkers and phenotypes for diabetes and obesity on the basis of PubMed abstracts, public clinical trials and Merck internal documents. The reported approach shows much promise and has the potential to fundamentally impact pharmaceutical research, as a way to shorten the time-to-market of novel drugs and for early recognition of dead ends. Big data: challenges and opportunities Rivers of ink have been poured to describe the data deluge that is increasingly defining our information society. While I do not want to dwell too long on something we all are experiencing daily, the concrete numbers are nonetheless staggering [1]. Here are some examples: - In 2007, more data were accumulated than can fit on all of the world\u2019s available storage. - In 2011, this number reached the limit of twice as much data as can be stored on all of the world\u2019s storage, i.e.
1,200 billion gigabytes. - The CMS detector at the CERN LHC accelerator accumulates data at a rate of 320 terabits/s, which makes it necessary to filter data by hardware \u201con the way\u201d to reduce the flux to \u201conly\u201d 800 Gbps. - Wal-Mart feeds 1 million customer transactions/hour into its databases. - Internet: 1 trillion unique URLs have been indexed by Google. - 12.8 million blogs have recently been recorded, not counting Asia, and this number is growing exponentially. - The number of emails sent per day in 2010 was 294 billion. - In 2008, Google received 85,000 CVs for one single post of software engineer. These numbers pose huge challenges for both hardware and software. However, as is usually the case, challenges and opportunities go hand in hand. In this paper I shall concentrate on the opportunity side of the equation. Data come in two flavours: structured and unstructured. Structured data consist typically of numbers organized in structures, like tables, charts or series. Unstructured data are essentially everything else and make up around 85% [2] of the data deluge. Of these, the vast majority is text, the rest being pictures, video and sound tracks. In this paper I shall concentrate on text data. There is only one thing you can do with numbers: analyze them to discover relationships and dependencies. The basic method to do this is statistical analysis, whose development was initiated in the 17th century with the works of Pascal, Fermat, de Moivre, Laplace and Legendre and got new impetus in the late 19th and early 20th centuries from Sir Francis Galton and Karl Pearson [3]. Today, statistical analysis is often complemented by methods from computer science and information theory to detect unsuspected patterns and anomalies in very large databases, a technique that goes under the name of data mining [4]. While statistical analysis and data mining are complex and require trained specialists, unstructured data pose even bigger challenges. First of all, there are two things you can do with text: teach machines to understand what the text in a given document means, and have them \u201cread\u201d large quantities of text documents to uncover hidden, previously unnoticed correlations pointing to entirely new knowledge. Both are very difficult, but the latter is far more difficult than the former. Information extraction and knowledge discovery in research papers Understanding written language is a key component of human intelligence. Correspondingly, doing something useful with large quantities of text documents that are out of reach for human analysis requires, unavoidably, some form of artificial intelligence [5]. This is why handling unstructured data is harder than analyzing their numerical counterpart, for which well-defined and developed mathematical methods are readily available. Indeed, there is as yet no standard approach to text mining, the unstructured counterpart to data mining. There are several approaches to teach a machine to comprehend text [6-8]. The vast bulk of research and applications focuses on natural language processing (NLP) techniques for information extraction (IE). Information extraction aims to identify mentions of named entities (e.g. \u201cgenes\u201d in life science applications) and relationships between these entities (as in \u201cis a\u201d or \u201cis caused by\u201d).
Entities and their relations are often called \u201ctriples\u201d and databases of identified triples \u201ctriple stores\u201d. Such triple stores are the basis of the Web 3.0 vision, in which machines will be able to automatically recognize the meaning of online documents and, correspondingly, interact intelligently with human end users. IE techniques are also the main tool used to curate domain-specific terminologies and ontologies extracted from large document corpora. Information extraction, however, is not intended for discovery. By its very design, it is limited to identifying semantic relationships that are explicitly lexicalized in a document: by definition these relations are known to the human expert who formulated them. The \u201cHoly Grail\u201d [9] of text mining, instead, is knowledge discovery from large corpora of text. Here one expects machines to generate novel hypotheses by uncovering previously unnoticed correlations from information distributed over very large pools of documents. These hypotheses must then be tested experimentally. Knowledge discovery is about unearthing implicit information versus the explicit relations recovered by information extraction. The present paper is about machine knowledge discovery in the biomedical and pharmacogenomics literature. 21st-century challenges for pharmaceutical research Pharmaceutical research is undergoing a profound change. The deluge of molecular data and the advent of computational approaches to analyze them have revolutionized the traditional process of discovering drugs by happenstance in natural products or synthesizing and screening large libraries of small-molecule compounds. Today, computational methods permeate so many aspects of pharmaceutical research that one can say that drugs are \u201cdesigned\u201d rather than \u201cdiscovered\u201d [10,11]. Molecular data found in genomics and proteomics databases are typically structured data. As in other domains, the bulk of the computational effort in the pharmaceutical industry goes into crunching structured molecular data. There is, however, another, even larger source of valuable information that can potentially be tapped for discoveries: repositories of research documents. One of the best known of these repositories, PubMed, already contains more than 20 million citations, and these are growing at a once inconceivable rate of almost 2 papers/minute [12]. The value of the information in these repositories of research is huge. Each paper by itself constitutes typically a very focused study on one particular biomedical subject that can be easily comprehended by other experts in the same field. It is to be expected, however, that there are also far-reaching correlations between the results of different papers or different groups of papers. Uncovering such hidden correlations by hand borders on the impossible since, first, the quantity of such papers is by now far beyond the reach of human analysis and, secondly, the expertise to understand papers in different areas of research is very hard to find in the same individual in today\u2019s era of ever-increasing specialization. The potential competitive advantage for the first companies to succeed in the task of discovering new scientific knowledge this way is considerable, both in speeding up research and in cutting costs. This is why machine knowledge discovery, if successful, has the potential to revolutionize pharmaceutical research.
Not only could one test hypotheses in silico, but the actual generation of these hypotheses would be in silico, with obvious disruptive advantages. Discovering biomarkers and phenotypes by text mining? To explore if this vision of a new way to generate scientific discovery by machine intelligence is feasible, Merck, in collaboration with Thomson Reuters, devised a pilot experiment in which the InfoCodex semantic engine was used for the specific and concrete task of discovering unknown/novel biomarkers and phenotypes for diabetes and/or obesity (D&O) by text mining diverse and numerous biomedical research texts [13]. Here I will summarize the key points of the methods and the main results. The choice fell on biomarkers and phenotypes since these play a paramount role in modern medicine. Drugs of the future will be targeted to populations and groups of individuals with common biological characteristics predictive of drug efficacy and/or toxicity. This practice is called \u201cindividualized medicine\u201d or \u201cpersonalized medicine\u201d [10]. The revealing features are called \u201cbiomarkers\u201d and \u201cphenotypes\u201d. A biomarker is a characteristic that is objectively measured and evaluated as an indicator of normal biologic processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention. In other words, a biomarker is any biological or biochemical entity or signal that is predictive, prognostic, or indicative of another entity, in this case, diabetes and/or obesity. A phenotype is an anatomical, physiological and behavioral characteristic observed as an identifiable structure or functional attribute of an organism. Phenotypes are important because phenotype-specific proteins are relevant targets in basic pharmaceutical research. Biomarkers and phenotypes constitute one of the \u201chot threads\u201d of diagnostic and drug development in pharmaceutical and biomedical research, with applications in early disease identification, identification of potential drug targets, prediction of the response of patients to medications, help in accelerating clinical trials and personalized medicine. The biomarker market generated $13.6 billion in 2011 and is expected to grow to $25 billion by 2016 [14]. The object of the experiment was for the InfoCodex semantic engine to discover unknown/novel biomarkers and phenotypes for diabetes and/or obesity (D&O) by text mining a diverse and sizable corpus of unstructured, free-text biomedical research documents constituted by: \u2022 PubMed [15] abstracts with titles: 115,273 documents \u2022 Clinical Trials [16] summaries: 8,960 summaries \u2022 Internal Merck research documents, about one page in length: 500 documents. The output D&O-related biomarkers and phenotypes proposed by the machine were then compared with Merck internal and external vocabularies/databases including UMLS [17], GenBank [18], Gene Ontology [19], OMIM [20], and the Thomson Reuters [21] D&O biomarker databases. By design, the experiment was handled strictly as a \u201cblind experiment\u201d: no expert input about D&O biomarkers/phenotypes was provided and no feedback from preliminary results was used to improve the machine-generated results. The InfoCodex semantic engine InfoCodex is semantic machine intelligence software designed specifically to analyze very large document collections as a whole and thereby unearth associative, implicit and lexically unspecified relationships.
It does so by unsupervised semantic clustering and matching of multi-lingual documents. Its technology is based on a combination of an embedded universal knowledge repository (the InfoCodex Linguistic Database, ILD), statistical analysis and information theory [22], and self-organizing neural networks (SOM) [23]. The ILD contains multi-lingual entries (words/phrases) collected into cross-lingual synonym groups (semantic clouds) and systematically linked to a hypernym (taxon) in a universal 7-level taxonomy. With its almost 4 million classified entries, the ILD corresponds to a very large multi-lingual thesaurus (for comparison, the Historical Thesaurus of the English Oxford Dictionary, often considered the largest in the world, has 920,000 entries). Information theory and statistics [22] are used to establish a 100-dimensional content space defined on the ILD that describes the documents in an optimal way. Documents are then modeled as 100-dimensional vectors in this optimal semantic space. Information-theoretic concepts such as entropy and mutual entropy are used together with the ILD to disambiguate the meaning of polysemous words based both on the document-specific context and the collection-wide environment. Finally, the fully automatic, unsupervised categorization on the optimal semantic space is achieved by a proprietary variant of Kohonen\u2019s self-organizing map [23]. In particular, prior to starting the unsupervised learning procedure, a coarse group rebalancing technique is used to construct a reliable initial guess for the SOM. This is a generalization of coarse mesh rebalancing [24] to general iterative procedures, with no reference to spatial equations as in the original application to neutron diffusion and general transport theory in finite element analysis. This procedure considerably accelerates the iteration process and minimizes the risk of getting stuck in a sub-optimal configuration. The SOM creates a thematic landscape according to and optimized for the thematic volume of the entire document collection. Essentially, the combination of the embedded ILD with the self-organized categorization on an automatically determined optimal semantic space corresponds to a dynamic ontology, in which vertical \u201cis-a\u201d relations are encoded and horizontal relations like \u201cis-correlated-with\u201d are determined dynamically depending on content. For the comparison of the content of different documents with each other and with queries, a similarity measure is used which is composed of the scalar product of the document vectors in the 100-dimensional semantic space, the reciprocal Kullback\u2013Leibler distance [25] from the main topics, and the weighted score-sum of common synonyms, common hypernyms and common nodes on higher taxonomy levels. As a final result, a document collection is grouped into a two-dimensional array of neurons called an information map. Each neuron corresponds to a semantic class; i.e., documents assigned to the same class are semantically similar. The classes are arranged in such a way that the thematically similar classes are nearby (Figure 1). Figure 1: InfoCodex information map obtained for the approximately 115,000 documents of the PubMed repository used for the present experiment. The size of the dots in the center of each class indicates the number of documents assigned to it. The described InfoCodex algorithm is able to categorize unstructured information.
In a recent benchmark testing the classification of multi-lingual, \u201cnoisy\u201d Web pages, InfoCodex reached a high clustering accuracy score of F1 = 88% [26]. Moreover, it extracts relevant facts not only from single documents at hand, but considers document collections as a whole and identifies dispersed and seemingly unrelated facts and relationships, like assembling the scattered pieces of a puzzle. Text mining with InfoCodex in search of new biomarkers/phenotypes The text mining procedure involved four steps: Generation of reference models: in this step the software had to determine the meaning of the concept \u201cbiomarker/phenotype for D&O\u201d. Since no input by human experts was allowed in the experiment, the only way to do this was by a generic literature search via the autonomous InfoCodex spider agents: 224 reference biomarkers/phenotypes were found. The documents containing these reference terms were then clustered by InfoCodex, and for each group a representative feature vector in the optimal semantic space was established. These feature vectors constitute mathematical models on the semantic space of what, e.g., \u201cbiomarker for diabetes\u201d means. Determination of the meaning of unknown terms: the ILD contained at the time of the experiment about 20,000 genes and proteins (up to around 100,000 presently). Nonetheless, it was not guaranteed to identify all possibly relevant candidates by a simple database look-up. Fortunately, the architecture of InfoCodex makes it possible to infer the meaning of unknown terms by combining its \u201chard-wired\u201d internal knowledge base with the association power of neural networks. Some examples of the meanings inferred by InfoCodex are presented in Table 1. Table 1: InfoCodex computed meanings (columns: Unknown Term | Constructed Hypernym | Associated Descriptor 1): Nn1250 | clinical study | insulin glargine; Tolterodine | cavity | overactive bladder; Ranibizumab | drug | macular edema; Nn5401 | clinical study | insulin aspart; Duloxetine | antidepressant | personal physician; Endocannabinoid | receptor | enzyme; Becaplermin | pathology | ulcer; Candesartan | cardiovascular disease | high blood pressure; Srt2104 | medicine | placebo; Olmesartan | cardiovascular medicine | amlodipine; Hctz | diuretic drug | hydrochlorothiazide; Eslicarbazepine | anti nervous | Zebinix; Zonisamide | anti nervous | Topiramate Capsules; Mk0431 | antidiabetic | sitagliptin; Ziprasidone | tranquilizer | major tranquilizer; Psicofarmcolagia | motivation | incentive; Medoxomil | cardiovascular medicine | amlodipine. InfoCodex computed meanings of some unknown terms from the experimental PubMed collection. The meaning of unknown terms is estimated fully automatically; i.e., no human intervention was necessary and no context-specific vocabularies had to be provided as in most related approaches [27]. The meaning had to be inferred by the semantic engine only based on machine intelligence and its internal generic knowledge base, and this automatism is one of the main innovations of the presented approach. Some of the estimated hypernyms are completely correct: \u201cHctz\u201d is a diuretic drug and is associated with \u201chydrochlorothiazide\u201d (actually a synonym). Clearly, not all inferred semantic relations are of the same quality. Generation of a list of potential biomarkers and phenotypes: most of the reference biomarkers and phenotypes found in the literature (see Step 1) were linked to one of the following nodes of the ILD: genes, proteins, causal agents, hormones, phenotypes, metabolic disorders, diabetes, obesity, symptoms.
The initial pool of candidates was constructed by considering each term appearing in the experimental document base that points to one of the same taxonomy nodes, whether via explicit hypernym relations in the ILD or via inferred hypernyms. For each of these candidates, a group of experimental documents was formed by choosing those documents that contain a synonym of the candidate together with synonyms of \u201cdiabetes\u201d or \u201cobesity\u201d, and for each of these groups the InfoCodex feature vector in semantic space was constructed. The document group corresponding to one particular initial candidate is compared with the previously derived reference models for D&O biomarkers/phenotypes by computing the semantic distances to the feature vectors of the reference models. A term qualifies as a final candidate for a D&O biomarker or phenotype if the semantic similarity deviation from at least one of the corresponding reference clusters is below a certain threshold. Establishing confidence levels: not all the biomarker/phenotype candidates established this way have the same probability of being relevant. In order to rank the final candidates established in Step 3, an empirical score was devised, representing the confidence level of each term. This confidence measure is based on the average semantic deviation of the feature vector assigned to the candidate from the feature vector of the corresponding reference model and additional information-theoretic measures. Results of the experiment The output of the experiment was a list of potential D&O biomarkers/phenotypes as shown in Table 2. The candidate terms are listed in column A, with their relation to either diabetes or obesity in columns B and C. Columns D and E display the confidence level and the number of documents on which the identification of the candidate was based. Finally, the last columns contain the detailed IDs of these documents so that they can be retrieved and used by human experts for assessment. Note that human expert assessment is actually the only meaningful evaluation of the experiment as far as the novelty aspect of the proposed D&O biomarkers/phenotypes is concerned.
Table 2: typical output of the experiment (columns: Row | Term (A) | Relationship (B) | Object (C) | Conf% (D) | #Docs (E) | PMIDs (F)): 1 | glycemic control | BiomarkerFor | Diabetes | 70.3 | 1122 | 20110333, 20128112, 20149122, ...; 2 | Insulin | PhenoTypeOf | Diabetes | 68.3 | 5000 | 19995096, 20017431, 20043582, ...; 3 | Proinsulin | BiomarkerFor | Diabetes | 67.8 | 105 | 16108846, 9405904, 20139232, ...; 4 | TNF alpha inhibitor | PhenoTypeOf | Diabetes | 67.1 | 245 | 9506740, 20025835, 20059414, ...; 5 | anhydroglucitol | BiomarkerFor | Diabetes | 67.1 | 10 | 20424541, 20709052, 21357907, ...; 6 | linoleic acid | BiomarkerFor | Diabetes | 67.1 | 61 | 20861175, 20846914, 15284064, ...; 7 | palmitic acid | BiomarkerFor | Diabetes | 67.1 | 24 | 20861175, 20846914, 21437903, ...; 8 | pentosidine | BiomarkerFor | Diabetes | 67.1 | 13 | 21447665, 21146883, 17898696, ...; 9 | uric acid | BiomarkerFor | Obesity | 66.8 | 433 | 10726195, 19428063, 10904462, ...; 10 | proatrial natriuretic peptide | BiomarkerFor | Obesity | 66.6 | 4 | 14769680, 18931036, 17351376, ...; 11 | ALT values | BiomarkerFor | Diabetes | 66.3 | 2 | 20880180, 19010326; 12 | adrenomedullin | BiomarkerFor | Diabetes | 64.3 | 7 | 21075100, 21408188, 20124980, ...; 13 | fructosamin | BiomarkerFor | Diabetes | 64.2 | 59 | 20424541, 21054539, 18688079, ...; 14 | TNF alpha inhibitor | BiomarkerFor | Diabetes | 62.1 | 245 | 9506740, 20025835, 20059414, ...; 15 | uric acid | BiomarkerFor | Diabetes | 61.8 | 259 | 21431449, 20002472, 20413437, ...; 16 | monoclonal antibody | BiomarkerFor | Obesity | 61.7 | 41 | 14715842, 21136440, 21042773, ...; 17 | Insulin level QTL | PhenoTypeOf | Obesity | 61.2 | 1167 | 16614055, 19393079, 11093286, ...; 18 | stimulant | BiomarkerFor | Obesity | 61.2 | 646 | 18407040, 18772043, 10082070, ...; 19 | IL-10 | BiomarkerFor | Obesity | 60.9 | 120 | 19798061, 19696761, 20190550, ...; 20 | central obesity | PhenoTypeOf | Diabetes | 59.5 | 530 | 16099342, 17141913, 15942464, ...; 21 | lipid | BiomarkerFor | Obesity | 59.5 | 4279 | 11596664, 12059988, 12379160, ...; 22 | urine albumin screening | BiomarkerFor | Diabetes | 59.0 | 95 | 20886205, 19285607, 20299482, ...; 23 | tyrosine kinase inhibitor | BiomarkerFor | Obesity | 58.8 | 83 | 18814184, 9538268, 15235125, ...; 24 | TNF alpha inhibitor | BiomarkerFor | Obesity | 58.0 | 785 | 20143002, 20173393, 10227565, ...; 25 | fas | BiomarkerFor | Obesity | 57.7 | 179 | 12716789, 17925465, 19301503, ...; 26 | leptin | PhenoTypeOf | Diabetes | 57.6 | 870 | 11987032, 17372717, 18414479, ...; 27 | ALT values | BiomarkerFor | Obesity | 57.4 | 8 | 16408483, 19010326, 17255837, ...; 28 | lipase | BiomarkerFor | Obesity | 56.8 | 356 | 16752181, 17609260, 20512427, ...; 29 | insulin resistance | PhenoTypeOf | Obesity | 55.8 | 5000 | 20452774, 20816595, 21114489, ...; 30 | chronic inflammation | PhenoTypeOf | Diabetes | 55.7 | 154 | 15643475, 18673007, 18801863, ... The details of the evaluation have been published elsewhere [13] and are beyond the scope of the present review. Here I would like to retain the two major conclusions that can be drawn. The negative aspect of the experiment is that too much noise was generated, as exemplified by the obviously implausible or incomplete candidates proposed in Table 3. Table 3: implausible and/or incomplete D&O biomarker/phenotype candidates (columns: Term | Relationship | Object | Target | Conf% | #Docs): wenqing | BiomarkerFor | Obesity | Obesity | 53.5 | 29; proteomic | BiomarkerFor | Obesity | Obesity | 40.8 | 128; gene expression | BiomarkerFor | Obesity | Obesity | 38.9 | 62; Mouse model | BiomarkerFor | Obesity | Obesity | 19.8 | 17; muise | BiomarkerFor | Obesity | Obesity | 17.5 | 20; athero | BiomarkerFor | Obesity | Obesity | 16.5 | 6; shrna | BiomarkerFor | Obesity | Obesity | 9.6 | 4; inflammation | BiomarkerFor | Obesity | Obesity | 8.2 | 4; TBD | BiomarkerFor | Obesity | Obesity | 7.4 | 3; body weight | PhenoTypeOf | Diabetes | MGAT2 | | 1; cell line | BiomarkerFor | Diabetes | MGAT2 | | 1. The very positive result, however, is that several candidates of very high quality were proposed by the software.
These were considered \u201cneedles in the haystack\u201d by the Merck experts. While the plausibility of these candidates has been judged very high by human experts, a Google search of these terms in conjunction with \u201cdiabetes\u201d and/or \u201cobesity\u201d produced extremely low hit rates, near or at zero, compared with hundreds of thousands for known D&O biomarkers/phenotypes. Unfortunately, these terms are considered valuable proprietary information by Merck and cannot be shown openly (Table 4). Table 4: plausible, novel and very valuable D&O biomarker/phenotype candidates (hidden since considered valuable proprietary information by Merck) (columns: Term | Relat. | Object | Target | Conf% | #Docs): xxxxxx | PhenoTypeOf | Obesity | Obesity | 7.7 | 4; xxxxxx | PhenoTypeOf | Obesity | Obesity | 7 | 6; xxxxxx | BiomarkerFor | Obesity | Obesity | 4.9 | 1; xxxxxx | BiomarkerFor | Obesity | Obesity | 4.9 | 1; xxxxxx | BiomarkerFor | Obesity | Obesity | 2.9 | 2; xxxxxx | BiomarkerFor | Obesity | Obesity | 2.2 | 1; xxxxxx | BiomarkerFor | Obesity | Obesity | 2.2 | 1; xxxxxx | BiomarkerFor | Obesity | Obesity | 2.2 | 1; xxxxxx | BiomarkerFor | Diabetes | Diabetes | 14.5 | 1; xxxxxx | BiomarkerFor | Diabetes | Diabetes | 2.8 | 2. Compared with recent studies [28-31] aimed at the extraction of drug\u2013gene relations from the pharmacogenomic literature, this experiment introduced three novelties. First, while most related work is based on high-quality, manually curated knowledge bases such as PharmGKB [30] to train the recognition of connections between specific drugs and genes, this experiment\u2019s reference/training set (Step 1) was assembled in an ad hoc way by na\u00efve (non-expert) PubMed search. Second, aside from the generic taxonomy in the ILD, no context-specific vocabularies (e.g., UMLS) were provided to inform the semantic engine. The meaning of unrecognized words had to be inferred by the InfoCodex engine based only on its universal internal linguistic database and its association power. Third, the text mining algorithms used here do not use rule-based approaches, or analyze co-occurrences sentence by sentence, or section by section, but rather they extract knowledge from entire documents and their relations with semantically related documents. In view of the requirement of no human assistance, the demonstrated capability of automatically identifying high-quality candidates is extremely encouraging and could prove an entirely new way to speed up pharmaceutical research, with high potential to shorten the time-to-market of novel drugs, or for early recognition of dead ends such as prohibitive side-effects through targeted extraction of relevant information.", "targets": "Scientific Discovery by Machine Intelligence: A New Avenue for Drug Research"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dc6936a212c44b12bf7aaea17cc6f4b0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The ability to train large-scale neural networks has resulted in state-of-the-art performance in many areas of computer vision. These results have largely come from computational breakthroughs of two forms: model parallelism, e.g. GPU-accelerated training, which has seen quick adoption in computer vision circles, and data parallelism, e.g. A-SGD, whose large scale has been used mostly in industry. We report early experiments with a system that makes use of both model parallelism and data parallelism, which we call GPU A-SGD.
We show that using GPU A-SGD it is possible to speed up training of large convolutional neural networks useful for computer vision. We believe GPU A-SGD will make it possible to train larger networks on larger training sets in a reasonable amount of time.", "targets": "GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3d4c1a28bec24a7ea9ec50483cff7b5f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Deep learning approaches have been widely used in Automatic Speech Recognition (ASR) and they have achieved a significant accuracy improvement. Especially, Convolutional Neural Networks (CNNs) have been revisited in ASR recently. However, most CNNs used in existing work have less than 10 layers, which may not be deep enough to capture all human speech signal information. In this paper, we propose a novel deep and wide CNN architecture denoted as RCNN-CTC, which has residual connections and a Connectionist Temporal Classification (CTC) loss function. RCNN-CTC is an end-to-end system which can exploit temporal and spectral structures of speech signals simultaneously. Furthermore, we introduce a CTC-based system combination, which is different from the conventional frame-wise senone-based one. The basic subsystems adopted in the combination are of different types and thus mutually complementary to each other. Experimental results show that our proposed single system RCNN-CTC can achieve the lowest word error rate (WER) on the WSJ and Tencent Chat data sets, compared to several widely used neural network systems in ASR. In addition, the proposed system combination can offer a further error reduction on these two data sets, resulting in relative WER reductions of 14.91% and 6.52% on the WSJ dev93 and Tencent Chat data sets respectively.", "targets": "Residual Convolutional CTC Networks for Automatic Speech Recognition"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b71f8bc65e6f4657b3bcbd12264c69ee", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We show how to estimate a model\u2019s test error from unlabeled data, on distributions very different from the training distribution, while assuming only that certain conditional independencies are preserved between train and test. We do not need to assume that the optimal predictor is the same between train and test, or that the true distribution lies in any parametric family. We can also efficiently differentiate the error estimate to perform unsupervised discriminative learning. Our technical tool is the method of moments, which allows us to exploit conditional independencies in the absence of a fully-specified model. Our framework encompasses a large family of losses including the log and exponential loss, and extends to structured output settings such as hidden Markov models.", "targets": "Unsupervised Risk Estimation Using Only Conditional Independence Structure"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-edc0a00aaaec421bb6b9c842e680e4cb", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Instant messaging is one of the major channels of computer-mediated communication. However, humans are known to be very limited in understanding others\u2019 emotions via text-based communication. Aiming to introduce emotion-sensing technologies to instant messaging, we developed EmotionPush, a system that automatically detects the emotions of the messages end-users received on Facebook Messenger and provides colored cues on their smartphones accordingly. We conducted a deployment study with 20 participants during a time span of two weeks. In this paper, we reveal five challenges, along with examples, that we observed in our study based on both users\u2019 feedback and chat logs, including (i) the continuum of emotions, (ii) multi-user conversations, (iii) different dynamics between different users, (iv) misclassification of emotions, and (v) unconventional content. We believe this discussion will benefit the future exploration of affective computing for instant messaging, and also shed light on research on conversational emotion sensing.", "targets": "Challenges in Providing Automatic Affective Feedback in Instant Messaging Applications"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-318177eb622642bc9b5e6f6ce460689c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Multiple approaches for generating adversarial examples have been proposed to attack deep neural networks. These approaches involve either directly computing gradients with respect to the image pixels, or directly solving an optimization on the image pixels. In this work, we present a fundamentally new method for generating adversarial examples that is fast to execute and provides exceptional diversity of output. We efficiently train feed-forward neural networks in a self-supervised manner to generate adversarial examples against a target network or set of networks. We call such a network an Adversarial Transformation Network (ATN). ATNs are trained to generate adversarial examples that minimally modify the classifier\u2019s outputs given the original input, while constraining the new classification to match an adversarial target class. We present methods to train ATNs and analyze their effectiveness targeting a variety of MNIST classifiers as well as the latest state-of-the-art ImageNet classifier Inception ResNet v2.", "targets": "Adversarial Transformation Networks: Learning to Generate Adversarial Examples"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a5468a537a1240ad9b69a97f8b5d0f06", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a probabilistic generative model for inferring a description of coordinated, recursively structured group activities at multiple levels of temporal granularity based on observations of individuals\u2019 trajectories. The model accommodates: (1) hierarchically structured groups, (2) activities that are temporally and compositionally recursive, (3) component roles assigning different subactivity dynamics to subgroups of participants, and (4) a nonparametric Gaussian Process model of trajectories.
We present an MCMC sampling framework for performing joint inference over recursive activity descriptions and assignment of trajectories to groups, integrating out continuous parameters. We demonstrate the model\u2019s expressive power in several simulated and complex real-world scenarios from the VIRAT and UCLA Aerial Event video data sets.", "targets": "Bayesian Inference of Recursive Sequences of Group Activities from Tracks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a55aa7849c974f76bd4392e5bb39e051", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In practice, pattern recognition applications often suffer from imbalanced data distributions between classes, which may vary during operations w.r.t. the design data. Two-class classification systems designed using imbalanced data tend to recognize the majority (negative) class better, while the class of interest (positive class) often has a smaller number of samples. Several data-level techniques have been proposed to alleviate this issue, where classifier ensembles are designed with balanced data subsets by up-sampling positive samples or under-sampling negative samples. However, some informative samples may be neglected by random under-sampling, and adding synthetic positive samples through up-sampling adds to training complexity. In this paper, a new ensemble learning algorithm called Progressive Boosting (PBoost) is proposed that progressively inserts uncorrelated groups of samples into a Boosting procedure to avoid losing information while generating a diverse pool of classifiers. Base classifiers in this ensemble are generated from one iteration to the next, using subsets from a validation set that grows gradually in size and imbalance. Consequently, PBoost is more robust when the operational data may have unknown and variable levels of skew. In addition, the computational complexity of PBoost is lower than that of Boosting ensembles in the literature that use under-sampling for learning from imbalanced data, because not all of the base classifiers are validated on all negative samples. In the PBoost algorithm, a new loss factor is proposed to avoid bias of performance towards the negative class. Using this loss factor, the weight update of samples and classifier contribution in final predictions are set based on the ability to recognize both classes. Using the proposed loss factor instead of standard accuracy can avoid biasing performance in any Boosting ensemble. The proposed approach was validated and compared using synthetic data, videos from the Faces In Action dataset that emulates face re-identification applications, and the KEEL collection of datasets. Results show that PBoost can outperform state-of-the-art techniques in terms of both accuracy and complexity over different levels of imbalance and overlap between classes.", "targets": "Progressive Boosting for Class Imbalance"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-255d121a3dac4e74a493eadd6dc4f454", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "
The number of user reviews of tourist attractions, restaurants, mobile apps, etc. is increasing for all languages; yet, research is lacking on how reviews in multiple languages should be aggregated and displayed. Speakers of different languages may have consistently different experiences, e.g., different information available in different languages at tourist attractions or different user experiences with software due to internationalization/localization choices. This paper assesses the similarity in the ratings given by speakers of different languages to London tourist attractions on TripAdvisor. The correlations between different languages are generally high, but some language pairs are more correlated than others. The results question the common practice of computing average ratings from reviews in many languages.", "targets": "User Reviews and Language: How Language Influences Ratings"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-07fb35dc62394bbfb9139a2c75c43f31", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In order for robots to be integrated effectively into human work-flows, it is not enough to address the question of autonomy; we must also consider how their actions or plans are perceived by their human counterparts. When robots generate task plans without such considerations, they may often demonstrate what we refer to as inexplicable behavior from the point of view of humans who may be observing it. This problem arises due to the human observer\u2019s partial or inaccurate understanding of the robot\u2019s deliberative process and/or the model (i.e., the capabilities of the robot) that informs it. This may have serious implications for the human-robot work-space, from increased cognitive load and reduced trust in the robot from the human, to more serious concerns of safety in human-robot interactions. In this paper, we propose to address this issue by learning a distance function that can accurately model the notion of explicability, and develop an anytime search algorithm that can use this measure in its search process to come up with progressively explicable plans. As a first step, robot plans are evaluated by human subjects based on how explicable they perceive them to be, and a scoring function called explicability distance, based on the different plan distance measures, is learned. We then use this explicability distance as a heuristic to guide our search in order to generate explicable robot plans, by minimizing the plan distances between the robot\u2019s plan and the human\u2019s expected plans. We conduct our experiments in a toy autonomous car domain, and provide empirical evaluations that demonstrate the usefulness of the approach in making the planning process of an autonomous agent conform to human expectations.", "targets": "Explicable Robot Planning as Minimizing Distance from Expected Behavior"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d67f09fffa934b0f902048c0229e4867", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a technique to augment network layers by adding a linear gating mechanism, which provides a way to learn identity mappings by optimizing only one parameter. We also introduce a new metric which serves as the basis for the technique.
It captures the difficulty involved in learning identity mappings for different types of network models, and provides a new theoretical intuition for the increased depths of models such as Highway and Residual Networks. We propose a new model, the Gated Residual Network, which results from augmenting Residual Networks. Experimental results show that augmenting layers yields increased performance, fewer issues with depth, and more layer independence \u2013 fully removing them does not cripple the model. We evaluate our method on MNIST using fully-connected networks and on CIFAR-10 using Wide ResNets, achieving a relative error reduction of more than 8% in the latter when compared to the original model.", "targets": "LEARNING IDENTITY MAPPINGS WITH RESIDUAL GATES"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-71b85e37e87b46c29ee7f6c0d2bbb58e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider learning a sequence classifier without labeled data by using sequential output statistics. The problem is highly valuable since obtaining labels in training data is often costly, while the sequential output statistics (e.g., language models) could be obtained independently of input data and thus with low or no cost. To address the problem, we propose an unsupervised learning cost function and study its properties. We show that, compared to earlier works, it is less inclined to be stuck in trivial solutions and avoids the need for a strong generative model. Although it is harder to optimize in its functional form, a stochastic primal-dual gradient method is developed to effectively solve the problem. Experimental results on real-world datasets demonstrate that the new unsupervised learning method gives drastically lower errors than other baseline methods. Specifically, it reaches test errors about twice those obtained by fully supervised learning.", "targets": "Unsupervised Sequence Classification using Sequential Output Statistics"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5a6dbea51db44a5681563827bd434b45", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Until now, error type performance for Grammatical Error Correction (GEC) systems could only be measured in terms of recall because system output is not annotated. In this paper, we overcome this problem by using a linguistically-enhanced alignment to automatically extract the edits between parallel original and corrected sentences and then classify them using a new dataset-independent rule-based classifier. As human experts rated the predicted error types as \u201cGood\u201d or \u201cAcceptable\u201d in at least 95% of cases, we applied our approach to the system output produced in the CoNLL-2014 shared task to carry out a detailed analysis of system error type performance for the first time.", "targets": "Automatic Annotation and Evaluation of Error Types for Grammatical Error Correction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5b2f87f17acf4de4824c50dfa1cd5a54", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "This paper emphasizes the significance of jointly exploiting the problem structure and the parameter structure, in the context of deep modeling. As a specific and interesting example, we describe the deep double sparsity encoder (DDSE), which is inspired by the double sparsity model for dictionary learning. DDSE simultaneously sparsifies the output features and the learned model parameters, under one unified framework. In addition to its intuitive model interpretation, DDSE also possesses compact model size and low complexity. Extensive simulations compare DDSE with several carefully-designed baselines, and verify the consistently superior performance of DDSE. We further apply DDSE to the novel application domain of brain encoding, with promising preliminary results achieved.", "targets": "Deep Double Sparsity Encoder: Learning to Sparsify Not Only Features But Also Parameters"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4abd6330dff94de880bca0ef7a13ba4d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In recent years, Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in image classification. Their architectures have largely drawn inspiration from models of the primate visual system. However, while recent research results in neuroscience prove the existence of non-linear operations in the response of complex visual cells, little effort has been devoted to extending the convolution technique to non-linear forms. Typical convolutional layers are linear systems, hence their expressiveness is limited. To overcome this, various non-linearities have been used as activation functions inside CNNs, while also many pooling strategies have been applied. We address the issue of developing a convolution method in the context of a computational model of the visual cortex, exploring quadratic forms through the Volterra kernels. Such forms, constituting a richer function space, are used as approximations of the response profile of visual cells. Our proposed second-order convolution is tested on CIFAR-10 and CIFAR-100. We show that a network which combines linear and non-linear filters in its convolutional layers can outperform networks that use standard linear filters with the same architecture, yielding results competitive with the state-of-the-art on these datasets.", "targets": "Non-linear Convolution Filters for CNN-based Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-58977a559317436397921c3b76f351d2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we present the results obtained by our DKP-AOM system within the OAEI 2015 campaign. DKP-AOM is an ontology merging tool designed to merge heterogeneous ontologies. In OAEI, we have participated with its ontology mapping component, which serves as a basic module capable of matching large-scale ontologies before their merging. This is our first successful participation in the Conference, OA4QA and Anatomy tracks of OAEI. DKP-AOM participates with two versions (DKP-AOM and DKP-AOM_lite), of which DKP-AOM performs coherence analysis.
In the OA4QA track, DKP-AOM performed well in the evaluation and generated accurate alignments that allowed all the queries of the evaluation to be answered. We can also see its competitive results for the conference track in the evaluation initiative among other reputed systems. In the anatomy track, it produced alignments within the allocated time and appeared in the list of systems which produce coherent results. Finally, we discuss some future work towards the development of DKP-AOM.", "targets": "Initial results for Ontology Matching workshop 2015 DKP-AOM: results for OAEI 2015"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bf85d4645c494a7493b10866ba93ffbd", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce a parametric nonlinear transformation that is well-suited for Gaussianizing data from natural images. After a linear transformation of the data, each component is normalized by a pooled activity measure, computed by exponentiating a weighted sum of rectified and exponentiated components and an additive constant. We optimize the parameters of this transformation (linear transform, exponents, weights, constant) over a database of natural images, directly minimizing the negentropy of the responses. We find that the optimized transformation successfully Gaussianizes the data, achieving a significantly smaller mutual information between transformed components than previous methods including ICA and radial Gaussianization. The transformation is differentiable and can be efficiently inverted, and thus induces a density model on images. We show that samples of this model are visually similar to samples of natural image patches. We also demonstrate the use of the model as a prior density in removing additive noise. Finally, we show that the transformation can be cascaded, with each layer optimized (unsupervised) using the same Gaussianization objective, to capture additional probabilistic structure.", "targets": "GENERALIZED NORMALIZATION TRANSFORMATION"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4a18c5cf56d84dee835d27f7d0803343", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce the problem of Task Assignment and Sequencing (TAS), which adds the timeline perspective to expert crowdsourcing optimization. Expert crowdsourcing involves macrotasks, like document writing, product design, or web development, which take more time than typical binary microtasks, require expert skills, assume varying degrees of knowledge over a topic, and require crowd workers to build on each other\u2019s contributions. Current works usually assume offline optimization models, which consider worker and task arrivals known and do not take into account the element of time. Realistically, however, time is critical: tasks have deadlines, expert workers are available only at specific time slots, and worker/task arrivals are not known a priori. Our work is the first to address the problem of optimal task sequencing for online, heterogeneous, time-constrained macrotasks. We propose tas-online, an online algorithm that aims to complete as many tasks as possible within budget, required quality and a given timeline, without future input information regarding job release dates or worker availabilities.
Results comparing tas-online to four typical benchmarks show that it achieves more completed jobs, lower flow times and higher job quality. This work has practical implications for improving the Quality of Service of current crowdsourcing platforms, allowing them to offer cost, quality and time improvements for expert tasks.", "targets": "It\u2019s about time: Online Macrotask Sequencing in Expert Crowdsourcing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-54ff6cab8abd46539b5d5474810f168f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Metric learning seeks a transformation of the feature space that enhances prediction quality for the given task at hand. In this work we provide PAC-style sample complexity rates for supervised metric learning. We give matching lower- and upper-bounds showing that the sample complexity scales with the representation dimension when no assumptions are made about the underlying data distribution. However, by leveraging the structure of the data distribution, we show that one can achieve rates that are fine-tuned to a specific notion of intrinsic complexity for a given dataset. Our analysis reveals that augmenting the metric learning optimization criterion with a simple norm-based regularization can help adapt to a dataset\u2019s intrinsic complexity, yielding better generalization. Experiments on benchmark datasets validate our analysis and show that regularizing the metric can help discern the signal even when the data contains high amounts of noise.", "targets": "Sample Complexity of Learning Mahalanobis Distance Metrics"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5ebb0caa3dcb4a9bbca6704794e4a848", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this work, we propose a novel recurrent neural network (RNN) architecture. The proposed RNN, gated-feedback RNN (GF-RNN), extends the existing approach of stacking multiple recurrent layers by allowing and controlling signals flowing from upper recurrent layers to lower layers using a global gating unit for each pair of layers. The recurrent signals exchanged between layers are gated adaptively based on the previous hidden states and the current input. We evaluated the proposed GF-RNN with different types of recurrent units, such as tanh, long short-term memory and gated recurrent units, on the tasks of character-level language modeling and Python program evaluation. Our empirical evaluation of different RNN units revealed that in both tasks, the GF-RNN outperforms the conventional approaches to build deep stacked RNNs. We suggest that the improvement arises because the GF-RNN can adaptively assign different layers to different timescales and layer-to-layer interactions (including the top-down ones which are not usually present in a stacked RNN) by learning to gate these interactions.", "targets": "Gated Feedback Recurrent Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3544051dd22140cc950e5a7195d57a73", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Recent work in learning vector-space embeddings for multi-relational data has focused on combining relational information derived from knowledge bases with distributional information derived from large text corpora. We propose a simple approach that leverages the descriptions of entities or phrases available in lexical resources, in conjunction with distributional semantics, in order to derive a better initialization for training relational models. Applying this initialization to the TransE model results in significant new state-of-the-art performances on the WordNet dataset, decreasing the mean rank from the previous best of 212 to 51. It also results in faster convergence of the entity representations. We find that there is a tradeoff between improving the mean rank and the hits@10 with this approach. This illustrates that much remains to be understood regarding performance improvements in relational models.", "targets": "Leveraging Lexical Resources for Learning Entity Embeddings in Multi-Relational Data"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ef4be55473ac4b068988dec30128a219", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Cross-lingual embedding models allow us to project words from different languages into a shared embedding space. This allows us to apply models trained on languages with a lot of data, e.g. English, to low-resource languages. In the following, we will survey models that seek to learn cross-lingual embeddings. We will discuss them based on the type of approach and the nature of parallel data that they employ. Finally, we will present challenges and summarize how to evaluate cross-lingual embedding models. In recent years, driven by the success of word embeddings, many models that learn accurate representations of words have been proposed [Mikolov et al., 2013a, Pennington et al., 2014]. However, these models are generally restricted to capturing representations of words in the language they were trained on. The availability of resources, training data, and benchmarks in English leads to a disproportionate focus on the English language and a neglect of the plethora of other languages that are spoken around the world. In our globalised society, where national borders increasingly blur, where the Internet gives everyone equal access to information, it is thus imperative that we do not only seek to eliminate bias pertaining to gender or race [Bolukbasi et al., 2016] inherent in our representations, but also aim to address our bias towards language. To remedy this and level the linguistic playing field, we would like to leverage our existing knowledge in English to equip our models with the capability to process other languages. Perfect machine translation (MT) would allow this. However, we do not need to actually translate examples, as long as we are able to project examples into a common subspace such as the one in Figure 1 (Figure 1: A shared embedding space between two languages [Luong et al., 2015]). \u2217This article originally appeared as a blog post at http://sebastianruder.com/cross-lingual-embeddings/index.html on 28 November 2016. Ultimately, our goal is to learn a shared embedding space between words in all languages.
Equipped with such a vector space, we are able to train our models on data in any language. By projecting examples available in one language into this space, our model simultaneously obtains the capability to perform predictions in all other languages (we are glossing over some considerations here; for these, refer to Section 7). This is the promise of cross-lingual embeddings. Over the course of this survey, we will give an overview of models and algorithms that have been used to come closer to the elusive goal of capturing the relations between words in multiple languages in a common embedding space. Note that while neural MT approaches implicitly learn a shared cross-lingual embedding space by optimizing for the MT objective, we will focus on models that explicitly learn cross-lingual word representations throughout this blog post. These methods generally do so at a much lower cost than MT and can be considered to be to MT what word embedding models [Mikolov et al., 2013a, Pennington et al., 2014] are to language modelling.
1 Types of cross-lingual embedding models
In recent years, various models for learning cross-lingual representations have been proposed. In the following, we will order them by the type of approach that they employ. Note that while the nature of the parallel data used is equally discriminatory and has been shown to account for inter-model performance differences [Levy et al., 2017], we consider the type of approach more conducive to understanding the assumptions a model makes and \u2013 consequently \u2013 its advantages and deficiencies. Cross-lingual embedding models generally use four different approaches:
1. Monolingual mapping: These models initially train monolingual word embeddings on large monolingual corpora. They then learn a linear mapping between monolingual representations in different languages to enable them to map unknown words from the source language to the target language.
2. Pseudo-cross-lingual: These approaches create a pseudo-cross-lingual corpus by mixing contexts of different languages. They then train an off-the-shelf word embedding model on the created corpus. The intuition is that the cross-lingual contexts allow the learned representations to capture cross-lingual relations.
3. Cross-lingual training: These models train their embeddings on a parallel corpus and optimize a cross-lingual constraint between embeddings of different languages that encourages embeddings of similar words to be close to each other in a shared vector space.
4. Joint optimization: These approaches train their models on parallel (and optionally monolingual) data. They jointly optimise a combination of monolingual and cross-lingual losses.
In terms of parallel data, methods may use different supervision signals that depend on the type of data used. These are, from most to least expensive:
1. Word-aligned data: A parallel corpus with word alignments that is commonly used for machine translation; this is the most expensive type of parallel data to use.
2. Sentence-aligned data: A parallel corpus without word alignments. If not otherwise specified, the model uses the Europarl corpus (http://www.statmt.org/europarl/), consisting of sentence-aligned text from the proceedings of the European parliament, that is generally used for training Statistical Machine Translation models.
3. Document-aligned data: A corpus containing documents in different languages. The documents can be topic-aligned (e.g. Wikipedia) or label/class-aligned (e.g. sentiment analysis and multi-class classification datasets).
4. Lexicon: A bilingual or cross-lingual dictionary with pairs of translations between words in different languages.
5. No parallel data: No parallel data whatsoever. Learning cross-lingual representations from only monolingual resources would enable zero-shot learning across languages.", "targets": "A survey of cross-lingual embedding models\u2217"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-211785cf25f348048fb9bed0447f38b8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Aiming to unify known results about clustering mixtures of distributions under separation conditions, Kumar and Kannan [KK10] introduced a deterministic condition for clustering datasets. They showed that this single deterministic condition encompasses many previously studied clustering assumptions. More specifically, their proximity condition requires that in the target k-clustering, the projection of a point x onto the line joining its cluster center \u03bc and some other center \u03bc\u2032 is a large additive factor closer to \u03bc than to \u03bc\u2032. This additive factor can be roughly described as k times the spectral norm of the matrix representing the differences between the given (known) dataset and the means of the (unknown) target clustering. Clearly, the proximity condition implies center separation \u2013 the distance between any two centers must be as large as the above-mentioned bound. In this paper we improve upon the work of Kumar and Kannan [KK10] along several axes. First, we weaken the center separation bound by a factor of \u221ak, and secondly we weaken the proximity condition by a factor of k (in other words, the revised separation condition is independent of k). Using these weaker bounds we still achieve the same guarantees when all points satisfy the proximity condition. Under the same weaker bounds, we achieve even better guarantees when only a (1\u2212\u01eb)-fraction of the points satisfy the condition. Specifically, we correctly cluster all but a (\u01eb + O(1/c))-fraction of the points, compared to the O(k\u01eb)-fraction of [KK10], which is meaningful even in the particular setting when \u01eb is a constant and k = \u03c9(1). Most importantly, we greatly simplify the analysis of Kumar and Kannan. In fact, in the bulk of our analysis we ignore the proximity condition and use only center separation, along with the simple triangle and Markov inequalities. Yet these basic tools suffice to produce a clustering which (i) is correct on all but a constant fraction of the points, (ii) has k-means cost comparable to the k-means cost of the target clustering, and (iii) has centers very close to the target centers. Our improved separation condition allows us to match the results of the Planted Partition Model of McSherry [McS01], improve upon the results of Ostrovsky et al. [ORSS06], and improve separation results for mixture of Gaussian models in a particular setting.", "targets": "Improved Spectral-Norm Bounds for Clustering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3590b7e23f904c33a74a6fd7ed27b574", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Random Forest (RF) is a powerful ensemble method for classification and regression tasks.
It consists of a set of decision trees. Although a single tree is readily interpretable for a human, the ensemble of trees is a black-box model. A popular technique to look inside the RF model is to visualize the RF proximity matrix obtained on data samples with the Multidimensional Scaling (MDS) method. Herein, we present a novel method based on Self-Organising Maps (SOM) for revealing intrinsic relationships in data that lie inside the RF used for classification tasks. We propose an algorithm to learn the SOM with the proximity matrix obtained from the RF. The visualization of the RF proximity matrix with MDS and SOM is compared. What is more, the SOM learned with the RF proximity matrix has better classification accuracy in comparison to a SOM learned with Euclidean distance. The presented approach enables better understanding of the RF and additionally improves the accuracy of the SOM.", "targets": "Visualizing Random Forest with Self-Organising Map"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9dadb58b55b042ddacc8e7e3dd730685", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this age of information technology, convenient access to information has gained importance. Since speech is a primary mode of communication among human beings, it is natural for people to expect to be able to carry out spoken dialogue with a computer [1]. A speech recognition system permits ordinary people to speak to a computer to retrieve information. It is desirable to have human-computer dialogue in a local language. Hindi, being the most widely spoken language in India, is the natural primary human language candidate for human-machine interaction. There are five pairs of vowels in Hindi; one member of each pair is longer than the other. This paper gives an overview of speech recognition systems, how speech is produced, and the properties and characteristics of Hindi", "targets": "AN OVERVIEW OF HINDI SPEECH RECOGNITION"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-da75f00d31f84af79bf01822014df3cc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Traditional generative adversarial networks (GAN) and many of their variants are trained by minimizing the KL or JS-divergence loss that measures how close the generated data distribution is to the true data distribution. A recent advance called the WGAN, based on the Wasserstein distance, can improve on the KL and JS-divergence based GANs, and alleviate the gradient vanishing, instability, and mode collapse issues that are common in GAN training. In this work, we aim at improving on the WGAN by first generalizing its discriminator loss to a margin-based one, which leads to a better discriminator, and in turn a better generator, and then carrying out a progressive training paradigm involving multiple GANs to contribute to the maximum margin ranking loss so that the GAN at later stages will improve upon early stages. We call this method Gang of GANs (GoGAN). We have shown theoretically that the proposed GoGAN can reduce the gap between the true data distribution and the generated data distribution by at least half in an optimally trained WGAN. We have also proposed a new way of measuring GAN quality which is based on image completion tasks.
We have evaluated our method on four visual datasets: CelebA, LSUN Bedroom, CIFAR-10, and 50K-SSFF, and have seen both visual and quantitative improvement over the baseline WGAN.", "targets": "Gang of GANs: Generative Adversarial Networks with Maximum Margin Ranking"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-55830b4c88f549d5923f6e4e1817984e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We discuss the feasibility of the following learning problem: given unmatched samples from two domains and nothing else, learn a mapping between the two, which preserves semantics. Due to the lack of paired samples and without any definition of the semantic information, the problem might seem ill-posed. Specifically, in typical cases, it seems possible to build infinitely many alternative mappings from every target mapping. This apparent ambiguity stands in sharp contrast to the recent empirical success in solving this problem. A theoretical framework for measuring the complexity of compositions of functions is developed in order to show that the target mapping is of lower complexity than all other mappings. The measured complexity is directly related to the depth of the neural networks being learned and the semantic mapping could be captured simply by learning using architectures that are not much bigger than the minimal architecture.", "targets": "Unsupervised Learning of Semantic Mappings"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4e825e7f409049f298e0bf4c1529584d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recent work has shown that distributed word representations are good at capturing linguistic regularities in language. This allows vector-oriented reasoning based on simple linear algebra between words. Since many different methods have been proposed for learning document representations, it is natural to ask whether there is also linear structure in these learned representations to allow similar reasoning at the document level. To answer this question, we design a new document analogy task for testing the semantic regularities in document representations, and conduct empirical evaluations over several state-of-the-art document representation models. The results reveal that neural embedding based document representations work better on this analogy task than conventional methods, and we provide some preliminary explanations for these observations.", "targets": "Semantic Regularities in Document Representations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f77bb7d258a0483e86f4b5cf4563be8f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents the Centre for Development of Advanced Computing Mumbai\u2019s (CDACM) submission to the NLP Tools Contest on Statistical Machine Translation in Indian Languages (ILSMT) 2015 (collocated with ICON 2015). The aim of the contest was to collectively explore the effectiveness of Statistical Machine Translation (SMT) while translating within Indian languages and between English and Indian languages.
In this paper, we report our work on all five language pairs, namely Bengali-Hindi (bn-hi), Marathi-Hindi (mr-hi), Tamil-Hindi (ta-hi), Telugu-Hindi (te-hi), and English-Hindi (en-hi) for the Health, Tourism and General domains. We have used suffix separation, compound splitting and preordering prior to SMT training and testing.", "targets": "Statistical Machine Translation for Indian Languages: Mission Hindi 2"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5f36de3940ae44a4be3978e80009d9a4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Rohit and I go back a long way. We started talking about Dynamic Logic back when I was a graduate student, when we would meet at seminars at MIT (my advisor Albert Meyer was at MIT, although I was at Harvard, and Rohit was then at Boston University). Right from the beginning I appreciated Rohit\u2019s breadth, his quick insights, his wit, and his welcoming and gracious style. Rohit has been interested in the interplay between logic, philosophy, and language ever since I\u2019ve known him. Over the years, both of us have gotten interested in game theory. I would like to dedicate this short note, which discusses issues at the intersection of all these areas, to him.", "targets": "Why Bother With Syntax?"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3b92a2280aed4b40b7bd0fd2673786b4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The rapid growth of scientific literature has made it difficult for researchers to quickly learn about the developments in their respective fields. Scientific document summarization addresses this challenge by providing summaries of the important contributions of scientific papers. We present a framework for scientific summarization which takes advantage of the citations and the scientific discourse structure. Citation texts often lack the evidence and context to support the content of the cited paper and are sometimes even inaccurate. We first address the problem of inaccuracy of the citation texts by finding the relevant context from the cited paper. We propose three approaches for contextualizing citations which are based on query reformulation, word embeddings, and supervised learning. We then train a model to identify the discourse facets for each citation. We finally propose a method for summarizing scientific papers by leveraging the faceted citations and their corresponding contexts. We evaluate our proposed method on two scientific summarization datasets in the biomedical and computational linguistics domains. Extensive evaluation results show that our methods can improve over the state of the art by large margins. \u2217 This is a pre-print of an article published in IJDL.
The final publication is available at Springer via http://dx.doi.org/10.1007/s00799-017-0216-8.", "targets": "Scientific document summarization via citation contextualization and scientific discourse"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9a2eddc1af054c098f9b143c790165bf", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The Region Connection Calculus (RCC) [41] is a well-known calculus for representing part-whole and topological relations. It plays an important role in qualitative spatial reasoning, geographical information science, and ontology. The computational complexity of reasoning with RCC5 and RCC8 (two fragments of RCC) as well as other qualitative spatial/temporal calculi has been investigated in depth in the literature. Most of these works focus on the consistency of qualitative constraint networks. In this paper, we consider the important problem of redundant qualitative constraints. For a set \u0393 of qualitative constraints, we say a constraint (xRy) in \u0393 is redundant if it is entailed by the rest of \u0393. A prime subnetwork of \u0393 is a subset of \u0393 which contains no redundant constraints and has the same solution set as \u0393. It is natural to ask how to compute such a prime subnetwork, and when it is unique. In this paper, we show that this problem is in general intractable, but becomes tractable if \u0393 is over a tractable subalgebra S of a qualitative calculus. Furthermore, if S is a subalgebra of RCC5 or RCC8 in which weak composition distributes over nonempty intersections, then \u0393 has a unique prime subnetwork, which can be obtained in cubic time by removing all redundant constraints simultaneously from \u0393. As a byproduct, we show that any path-consistent network over such a distributive subalgebra is weakly globally consistent and minimal. A thorough empirical analysis of the prime subnetwork upon real geographical data sets demonstrates the approach is able to identify significantly more redundant constraints than previously proposed algorithms, especially in constraint networks with larger proportions of partial overlap relations.", "targets": "On Redundant Topological Constraints"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3e814ea1f04d48ef8bd9ff534fe89a63", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Delay discounting, a behavioral measure of impulsivity, is often used to quantify the human tendency to choose a smaller, sooner reward (e.g., $1 today) over a larger, later reward ($2 tomorrow).
Delay discounting and its relation to human decision making is a hot topic in economics and behavioral science, since pitting the demands of long-term goals against short-term desires is among the most difficult tasks in human decision making [Hirsh et al., 2008]. Previously, small-scale studies based on questionnaires were used to analyze an individual\u2019s delay discounting rate (DDR) and his/her real-world behavior (e.g., substance abuse) [Kirby et al., 1999]. In this research, we employ large-scale social media analytics to study DDR and its relation to people\u2019s social media behavior (e.g., Facebook Likes). We also build computational models to automatically infer DDR from Social Media Likes. Our investigation has revealed interesting results.", "targets": "$1 Today or $2 Tomorrow? The Answer is in Your Facebook Likes"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-51aeeccc70344474a0ba4d1be0eb741c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We carefully study how well minimizing convex surrogate loss functions corresponds to minimizing the misclassification error rate for the problem of binary classification with linear predictors. We consider the agnostic setting, and investigate guarantees on the misclassification error of the loss-minimizer in terms of the margin error rate of the best predictor. We show that, aiming for such a guarantee, the hinge loss is essentially optimal among all convex losses.", "targets": "Minimizing The Misclassification Error Rate Using a Surrogate Convex Loss"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-48e72cd2608741448fb0a23f4b18c780", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper studies convolutional networks that require limited computational resources at test time. We develop a new network architecture that performs on par with state-of-the-art convolutional networks, whilst facilitating prediction in two settings: (1) an anytime-prediction setting in which the network\u2019s prediction for one example is progressively updated, facilitating the output of a prediction at any time; and (2) a batch computational budget setting in which a fixed amount of computation is available to classify a set of examples that can be spent unevenly across \u201ceasier\u201d and \u201charder\u201d examples. Our network architecture uses multi-scale convolutions and progressively growing feature representations, which allows for the training of multiple classifiers at intermediate layers of the network. Experiments on three image-classification datasets demonstrate the efficacy of our architecture, in particular, when measured in terms of classification accuracy as a function of the amount of compute available.", "targets": "Multi-Scale Dense Convolutional Networks for Efficient Prediction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5053987bf1704c4a83948be5aaf7778b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We investigate the use of sparse coding and dictionary learning in the context of multitask and transfer learning.
The central assumption of our learning method is that the task parameters are well approximated by sparse linear combinations of the atoms of a dictionary in a high- or infinite-dimensional space. This assumption, together with the large quantity of available data in the multitask and transfer learning settings, allows a principled choice of the dictionary. We provide bounds on the generalization error of this approach, for both settings. Numerical experiments on one synthetic and two real datasets show the advantage of our method over single-task learning, a previous method based on orthogonal and dense representation of the tasks, and a related method learning task grouping.", "targets": "Sparse coding for multitask and transfer learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2b43b96f79ff4e77bfd4e8fee4eec30d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Classical Decision Theory provides a normative framework for representing and reasoning about complex preferences. Straightforward application of this theory to automate decision making is difficult due to high elicitation cost. In response to this problem, researchers have recently developed a number of qualitative, logic-oriented approaches for representing and reasoning about preferences. While effectively addressing some expressiveness issues, these logics have not proven powerful enough for building practical automated decision making systems. In this paper we present a hybrid approach to preference elicitation and decision making that is grounded in classical multi-attribute utility theory, but can make effective use of the expressive power of qualitative approaches. Specifically, assuming a partially specified multilinear utility function, we show how comparative statements about classes of decision alternatives can be used to further constrain the utility function and thus identify sub-optimal alternatives. This work demonstrates that quantitative and qualitative approaches can be synergistically integrated to provide effective and flexible decision support.", "targets": "A Hybrid Approach to Reasoning with Partially Elicited Preference Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9bb1677fec814a4d852004a287b3fa00", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We analyze and evaluate an online gradient descent algorithm with adaptive per-coordinate adjustment of learning rates. Our algorithm can be thought of as an online version of batch gradient descent with a diagonal preconditioner. This approach leads to regret bounds that are stronger than those of standard online gradient descent for general online convex optimization problems. Experimentally, we show that our algorithm is competitive with state-of-the-art algorithms for large scale machine learning problems.", "targets": "Less Regret via Online Conditioning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f85c741113da43f7836d6458ecfff5a2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "We derive bounds on the sample complexity of empirical risk minimization (ERM) in the context of minimizing non-convex risks that admit the strict saddle property. Recent progress in non-convex optimization has yielded efficient algorithms for minimizing such functions. Our results imply that these efficient algorithms are statistically stable and also generalize well. In particular, we derive fast rates which resemble the bounds that are often attained in the strongly convex setting. We specialize our bounds to Principal Component Analysis and Independent Component Analysis. Our results and techniques may pave the way for statistical analyses of additional strict saddle problems.", "targets": "Fast Rates for Empirical Risk Minimization of Strict Saddle Problems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f72dd8a4ab5143b7beda54b851306614", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "User preference integration is of great importance in multi-objective optimization, in particular in many-objective optimization. Preferences have long been considered in traditional multicriteria decision making (MCDM), which is based on mathematical programming. Recently, they have been integrated into multi-objective metaheuristics (MOMH), resulting in a focus on preferred parts of the Pareto front instead of the whole Pareto front. The number of publications on preference-based multiobjective metaheuristics has increased rapidly over the past decades. There already exist various preference handling methods and MOMH methods, which have been combined in diverse ways. This article proposes to use the Web Ontology Language (OWL) to model and systematize the results developed in this field. A review of the existing work is provided, based on which an ontology is built and instantiated with state-of-the-art results. The OWL ontology is made public and open to future extension. Moreover, the usage of the ontology is exemplified for different use cases, including querying for methods that match an engineering application, bibliometric analysis, checking the existence of combinations of preference models and MOMH techniques, and discovering opportunities for new research and open research questions.", "targets": "An Ontology of Preference-Based Multi-objective Metaheuristics"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d3cd3e178dbd4e0ca5b8bcc7f5a08d21", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Machine Learning has been a big success story during the AI resurgence. One particular stand-out success relates to unsupervised learning from a massive amount of data, albeit much of it relates to one modality/type of data at a time. In spite of early assertions of the unreasonable effectiveness of data, there is increasing recognition of the value of utilizing knowledge whenever it is available or can be created purposefully.
In this paper, we focus on discussing the indispensable role of knowledge for deeper understanding of complex text and multimodal data in situations where (i) large amounts of training data (labeled/unlabeled) are not available or labour intensive to create, (ii) the objects (particularly text) to be recognized are complex (i.e., beyond simple entity \u2013 person/location/organization names), such as implicit entities and highly subjective content, and (iii) applications need to use complementary or related data in multiple modalities/media. What brings us to the cusp of rapid progress is our ability to (a) create knowledge, varying from comprehensive or cross domain to domain or application specific, and (b) carefully exploit the knowledge to further empower or extend the applications of ML/NLP techniques. Using the early results in several diverse situations \u2013 both in data types and applications \u2013 we seek to foretell unprecedented progress in our ability for deeper understanding and exploitation of multimodal data.", "targets": "Knowledge will Propel Machine Understanding of Content: Extrapolating from Current Examples"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-32c3c330f49046b0a3518eb416898cae", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In a composite-domain task-completion dialogue system, a conversation agent often switches among multiple sub-domains before it successfully completes the task. Given such a scenario, a standard deep reinforcement learning based dialogue agent may struggle to find a good policy due to issues such as increased state and action spaces, high sample complexity demands, sparse reward and long horizon. In this paper, we propose to use a hierarchical deep reinforcement learning approach which can operate at different temporal scales and is intrinsically motivated to attack these problems. Our hierarchical network consists of two levels: the top-level meta-controller for subgoal selection and the low-level controller for dialogue policy learning. Subgoals selected by the meta-controller and intrinsic rewards can guide the controller to effectively explore the state-action space and mitigate the sparse reward and long horizon problems. Experiments on both simulations and human evaluation show that our model significantly outperforms flat deep reinforcement learning agents in terms of success rate, rewards and user rating.", "targets": "Composite Task-Completion Dialogue System via Hierarchical Deep Reinforcement Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bb32e2fc0bce46b3bbf0afdf41bac42b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose multi-way, multilingual neural machine translation. The proposed approach enables a single neural translation model to translate between multiple languages, with a number of parameters that grows only linearly with the number of languages. This is made possible by having a single attention mechanism that is shared across all language pairs. We train the proposed multi-way, multilingual model on ten language pairs from WMT\u201915 simultaneously and observe clear performance improvements over models trained on only one language pair.
In particular, we observe that the proposed model significantly improves the translation quality of low-resource language pairs.", "targets": "Multi-Way, Multilingual Neural Machine Translation with a Shared Attention Mechanism"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a1a90ccf75ad4e7491616924d288ca1d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present Confidence-Based Autonomy (CBA), an interactive algorithm for policy learning from demonstration. The CBA algorithm consists of two components which take advantage of the complementary abilities of humans and computer agents. The first component, Confident Execution, enables the agent to identify states in which demonstration is required, to request a demonstration from the human teacher and to learn a policy based on the acquired data. The algorithm selects demonstrations based on a measure of action selection confidence, and our results show that using Confident Execution the agent requires fewer demonstrations to learn the policy than when demonstrations are selected by a human teacher. The second algorithmic component, Corrective Demonstration, enables the teacher to correct any mistakes made by the agent through additional demonstrations in order to improve the policy and future task performance. CBA and its individual components are compared and evaluated in a complex simulated driving domain. The complete CBA algorithm results in the best overall learning performance, successfully reproducing the behavior of the teacher while balancing the tradeoff between number of demonstrations and number of incorrect actions during learning.", "targets": "Interactive Policy Learning through Confidence-Based Autonomy"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-070218d9cfca4a098a344d23ade024d8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we present an interface between a symbolic planner and a geometric task planner, which differs from a standard trajectory planner in that the former is able to perform geometric reasoning on abstract entities\u2014tasks. We believe that this approach facilitates a more principled interface to symbolic planning, while also leaving more room for the geometric planner to make independent decisions. We show how the two planners could be interfaced, and how their planning and backtracking could be interleaved. We also provide insights for a methodology for using the combined system, and experimental results to use as a benchmark for future extensions to both the combined system and the geometric task planner.", "targets": "Towards Combining HTN Planning and Geometric Task Planning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-eeadd87c0a584b03a33ad3e7665b06fb", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "To cope with changing environments, recent developments in online learning have introduced the concepts of adaptive regret and dynamic regret independently.
In this paper, we illustrate an intrinsic connection between these two concepts by showing that the dynamic regret can be expressed in terms of the adaptive regret and the functional variation. This observation implies that strongly adaptive algorithms can be directly leveraged to minimize the dynamic regret. As a result, we present a series of strongly adaptive algorithms whose dynamic regrets are minimax optimal for convex functions, exponentially concave functions, and strongly convex functions, respectively. To the best of our knowledge, this is the first time that such a dynamic regret bound has been established for exponentially concave functions. Moreover, none of those adaptive algorithms needs any prior knowledge of the functional variation, which is a significant advantage over previous specialized methods for minimizing dynamic regret.", "targets": "Strongly Adaptive Regret Implies Optimally Dynamic Regret"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-89b844de77d141139f4960e8045529e5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper proposes CF-NADE, a neural autoregressive architecture for collaborative filtering (CF) tasks, which is inspired by the Restricted Boltzmann Machine (RBM) based CF model and the Neural Autoregressive Distribution Estimator (NADE). We first describe the basic CF-NADE model for CF tasks. Then we propose to improve the model by sharing parameters between different ratings. A factored version of CF-NADE is also proposed for better scalability. Furthermore, we take the ordinal nature of the preferences into consideration and propose an ordinal cost to optimize CF-NADE, which shows superior performance. Finally, CF-NADE can be extended to a deep model, with only moderately increased computational complexity. Experimental results show that CF-NADE with a single hidden layer beats all previous state-of-the-art methods on MovieLens 1M, MovieLens 10M, and Netflix datasets, and adding more hidden layers can further improve the performance.", "targets": "A Neural Autoregressive Approach to Collaborative Filtering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-23b27502242c4743999cd67bee630c04", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Assessing network security is a complex and difficult task. Attack graphs have been proposed as a tool to help network administrators understand the potential weaknesses of their networks. However, a problem has not yet been addressed by previous work on this subject; namely, how to actually execute and validate the attack paths resulting from the analysis of the attack graph. In this paper we present a complete PDDL representation of an attack model, and an implementation that integrates a planner into a penetration testing tool. This allows us to automatically generate attack paths for penetration testing scenarios, and to validate these attacks by executing the corresponding actions (including exploits) against the real target network. We present an algorithm for transforming the information present in the penetration testing tool to the planning domain, and we show how the scalability issues of attack graphs can be solved using current planners.
We include an analysis of the performance of our solution, showing how our model scales to medium-sized networks and the number of actions available in current penetration testing tools.", "targets": "Attack Planning in the Real World"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9b1e8251ea4e412a82d4fa7eeab3a5b7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Heuristics used for solving hard real-time search problems have regions with depressions. Such regions are bounded areas of the search space in which the heuristic function is inaccurate compared to the actual cost to reach a solution. Early real-time search algorithms, like LRTA\u2217, easily become trapped in those regions since the heuristic values of their states may need to be updated multiple times, which results in costly solutions. State-of-the-art real-time search algorithms, like LSS-LRTA\u2217 or LRTA\u2217(k), improve LRTA\u2217\u2019s mechanism to update the heuristic, resulting in improved performance. Those algorithms, however, do not guide search towards avoiding depressed regions. This paper presents depression avoidance, a simple real-time search principle to guide search towards avoiding states that have been marked as part of a heuristic depression. We propose two ways in which depression avoidance can be implemented: mark-and-avoid and move-to-border. We implement these strategies on top of LSS-LRTA\u2217 and RTAA\u2217, producing 4 new real-time heuristic search algorithms: aLSS-LRTA\u2217, daLSS-LRTA\u2217, aRTAA\u2217, and daRTAA\u2217. When the objective is to find a single solution by running the real-time search algorithm once, we show that daLSS-LRTA\u2217 and daRTAA\u2217 outperform their predecessors sometimes by one order of magnitude. Of the four new algorithms, daRTAA\u2217 produces the best solutions given a fixed deadline on the average time allowed per planning episode. We prove all our algorithms have good theoretical properties: in finite search spaces, they find a solution if one exists, and converge to an optimal solution after a number of trials.", "targets": "Avoiding and Escaping Depressions in Real-Time Heuristic Search"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-77b77fdabe864e06938bac12199a239a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The margin of victory is easy to compute for many election schemes but difficult for Instant Runoff Voting (IRV). This is important because arguments about the correctness of an election outcome usually rely on the size of the electoral margin. For example, risk-limiting audits require a knowledge of the margin of victory in order to determine how much auditing is necessary. This paper presents a practical branch-and-bound algorithm for exact IRV margin computation that substantially improves on the current best-known approach. Although exponential in the worst case, our algorithm runs efficiently in practice on all the real examples we could find.
We can efficiently discover exact margins on election instances that cannot be solved by the current state-of-the-art.", "targets": "Efficient Computation of Exact IRV Margins"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-020d5a21386e4c038a1ec8bd76f8f8ca", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Most state-of-the-art named entity recognition (NER) systems rely on the use of handcrafted features and on the output of other NLP tasks such as part-of-speech (POS) tagging and text chunking. In this work we propose a language-independent NER system that uses automatically learned features only. Our approach is based on the CharWNN deep neural network, which uses word-level and character-level representations (embeddings) to perform sequential classification. We perform an extensive number of experiments using two annotated corpora in two different languages: the HAREM I corpus, which contains texts in Portuguese; and the SPA CoNLL-2002, which contains texts in Spanish. Our experimental results shed light on the contribution of neural character embeddings for NER. Moreover, we demonstrate that the same neural network which has been successfully applied for POS tagging can also achieve state-of-the-art results for language-independent NER, using the same hyper-parameters, and without any handcrafted features. For the HAREM I corpus, CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score for the total scenario (ten NE classes), and by 7.2 points in the F1 for the selective scenario (five NE classes).", "targets": "Boosting Named Entity Recognition with Neural Character Embeddings"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-58f020aefab744629183b718c8435df4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The World Wide Web no longer consists just of HTML pages. Our work sheds light on a number of trends on the Internet that go beyond simple Web pages. The hidden Web provides a wealth of data in semi-structured form, accessible through Web forms and Web services. These services, as well as numerous other applications on the Web, commonly use XML, the eXtensible Markup Language. XML has become the lingua franca of the Internet that allows customized markups to be defined for specific domains. On top of XML, the Semantic Web grows as a common structured data source. In this work, we first explain each of these developments in detail. Using real-world examples from scientific domains of great interest today, we then demonstrate how these new developments can assist the managing, harvesting, and organization of data on the Web. On the way, we also illustrate the current research avenues in these domains. We believe that this effort would help bridge multiple database tracks, thereby attracting researchers with a view to extending database technology.", "targets": "The Hidden Web, XML and the Semantic Web: Scientific Data Management Perspectives"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f72a1b8304064177be20f1138a3c24d0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Designing an e-commerce recommender system that serves hundreds of millions of active users is a daunting challenge. The ranking strategy, as the key module, needs to be more carefully designed. We find two key factors that affect users\u2019 behaviors: attractive item content and compatibility with users\u2019 interests. To extract these factors, a ranking model needs to understand users from a human vision perspective. This paper proposes Telepath, a vision-based architecture that simulates the human vision system to extract the key visual signals that attract users to a displayed item and generate vision activations, and simulates the cerebral cortex to understand users\u2019 interests based on the captured activations from browsed items. Telepath is a combination of CNN, RNN and DNN. In practice, the Telepath model has been launched to JD\u2019s online recommender system and advertising system. For one of the major item recommendation blocks on the JD app, CTR, GMV and orders have increased 1.59%, 8.16% and 8.71% respectively. For several major advertising publishers of JD DSP, CTR, GMV and ROI have increased 6.58%, 61.72% and 65.57% respectively by the first launch, and further increased 2.95%, 41.75% and 41.37% respectively by the second launch.", "targets": "Telepath: Understanding Users from a Human Vision Perspective in Large-Scale Recommender Systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5b7b369018e8437f8e019c2089059ae0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Learning commonsense knowledge from natural language text is nontrivial due to reporting bias: people rarely state the obvious, e.g., \u201cmy house is bigger than me\u201d. However, while rarely stated explicitly, this trivial everyday knowledge does influence the way people talk about the world, which provides indirect clues to reason about the world. For example, a statement like \u201cJohn entered his house\u201d implies that his house is bigger than John. In this paper, we present an approach to infer relative physical knowledge of actions and objects along six dimensions (e.g., size, weight, and strength) from unstructured natural language text. We frame knowledge acquisition as joint inference over two closely related problems: learning (1) relative physical knowledge of object pairs and (2) physical implications of actions when applied to those object pairs. Empirical results demonstrate that it is possible to extract knowledge of actions and objects from language and that joint inference over different knowledge types improves performance.", "targets": "Verb Physics: Relative Physical Knowledge of Actions and Objects"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1f196173f930416a933c94285b5c5958", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a general information-theoretic approach called SERAPH (SEmi-supervised metRic leArning Paradigm with Hyper-sparsity) for metric learning that does not rely upon the manifold assumption.
Given the probability parameterized by a Mahalanobis distance, we maximize the entropy of that probability on labeled data and minimize it on unlabeled data following entropy regularization, which allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Furthermore, SERAPH is regularized by encouraging a low-rank projection induced from the metric. The optimization of SERAPH is solved efficiently and stably by an EM-like scheme with an analytical E-Step and a convex M-Step. Experiments demonstrate that SERAPH compares favorably with many well-known global and local metric learning methods.", "targets": "Information-theoretic Semi-supervised Metric Learning via Entropy Regularization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b600d32d17174833bb6d5a68b53e8d75", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The use of M-estimators in generalized linear regression models in high dimensional settings requires risk minimization with hard L0 constraints. Of the known methods, the class of projected gradient descent (also known as iterative hard thresholding (IHT)) methods is known to offer the fastest and most scalable solutions. However, the current state-of-the-art is only able to analyze these methods in extremely restrictive settings which do not hold in high dimensional statistical models. In this work we bridge this gap by providing the first analysis for IHT-style methods in the high dimensional statistical setting. Our bounds are tight and match known minimax lower bounds. Our results rely on a general analysis framework that enables us to analyze several popular hard thresholding style algorithms (such as HTP, CoSaMP, SP) in the high dimensional regression setting. We also extend our analysis to a large family of \u201cfully corrective methods\u201d that includes two-stage and partial hard-thresholding algorithms. We show that our results hold for the problem of sparse regression, as well as low-rank matrix recovery.", "targets": "On Iterative Hard Thresholding Methods for High-dimensional M-Estimation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e43ca89b2bbb43bfb3cee38a412fa3d9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A major task in systematic reviews is abstract screening, i.e., excluding, often hundreds or thousands of, irrelevant citations returned from a database search based on titles and abstracts. Thus, a systematic review platform that can automate the abstract screening process is of huge importance. Several methods have been proposed for this task. However, it is very hard to clearly understand the applicability of these methods in a systematic review platform because of the following challenges: (1) the use of non-overlapping metrics for the evaluation of the proposed methods, (2) usage of features that are very hard to collect, (3) using a small set of reviews for the evaluation, and (4) no solid statistical testing or equivalence grouping of the methods. In this paper, we use feature representation that can be extracted per citation. We evaluate SVM based methods (commonly used) on a large set of reviews (61) and metrics (11) to provide equivalence grouping of methods based on a solid statistical test.
Our analysis also includes an assessment of the variability of the metrics using 500x2 cross-validation. While some methods shine for different metrics and for different datasets, there is no single method that dominates the pack. Furthermore, we observe that in some cases relevant (included) citations can be found after screening only 15-20% of them via a certainty-based sampling. A few included citations present outlying characteristics and can only be found after a very large number of screening steps. Finally, we present an ensemble algorithm for producing a 5-star rating of citations based on their relevance. This algorithm combines the best methods from our evaluation and, through its 5-star rating, outputs an easier-to-consume prediction.", "targets": "A large scale study of SVM based methods for abstract screening in systematic reviews"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-98155329628f4c9e8f7832b462484a9d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Neural networks have recently been proposed for multi-label classification because they are able to capture and model label dependencies in the output layer. In this work, we investigate limitations of BP-MLL, a neural network (NN) architecture that aims at minimizing pairwise ranking error. Instead, we propose to use a comparably simple NN approach with recently proposed learning techniques for large-scale multi-label text classification tasks. In particular, we show that BP-MLL\u2019s ranking loss minimization can be efficiently and effectively replaced with the commonly used cross entropy error function, and demonstrate that several advances in neural network training that have been developed in the realm of deep learning can be effectively employed in this setting. Our experimental results show that simple NN models equipped with advanced techniques such as rectified linear units, dropout, and AdaGrad perform as well as or even outperform state-of-the-art approaches on six large-scale textual datasets with diverse characteristics.", "targets": "Large-scale Multi-label Text Classification \u2014 Revisiting Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2255c235d57448d1b529012bf544cba2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The cutting plane method is an augmentative constrained optimization procedure that is often used with continuous-domain optimization techniques such as linear and convex programs. We investigate the viability of a similar idea within message passing \u2013 which produces integral solutions \u2013 in the context of two combinatorial problems: 1) For the Traveling Salesman Problem (TSP), we propose a factor-graph based on the Held-Karp formulation, with an exponential number of constraint factors, each of which has an exponential but sparse tabular form. 2) For graph-partitioning (a.k.a. community mining) using modularity optimization, we introduce a binary variable model with a large number of constraints that enforce formation of cliques. In both cases we are able to derive surprisingly simple message updates that lead to competitive solutions on benchmark instances.
In particular, for TSP we are able to find near-optimal solutions in time that empirically grows with N, demonstrating that augmentation is practical and efficient.", "targets": "Augmentative Message Passing for Traveling Salesman Problem and Graph Partitioning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4340121953f74b94862d7435a2ff813e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a framework grounded in Logic Programming for representing and reasoning about business processes from both the procedural and ontological points of view. In particular, our goal is threefold: (1) define a logical language and a formal semantics for process models enriched with ontology-based annotations; (2) provide an effective inference mechanism that supports the combination of reasoning services dealing with the structural definition of a process model, its behavior, and the domain knowledge related to the participating business entities; (3) implement such a theoretical framework into a process modeling and reasoning platform. To this end we define a process ontology coping with a relevant fragment of the popular BPMN modeling notation. The behavioral semantics of a process is defined as a state transition system by following an approach similar to the Fluent Calculus, and allows us to specify state change in terms of preconditions and effects of the enactment of activities. Then we show how the procedural process knowledge can be seamlessly integrated with the domain knowledge specified by using the OWL 2 RL rule-based ontology language. Our framework provides a wide range of reasoning services, including CTL model checking, which can be performed by using standard Logic Programming inference engines through a goal-oriented, efficient, sound and complete evaluation procedure. We also present a software environment implementing the proposed framework, and we report on an experimental evaluation of the system, whose results are encouraging and show the viability of the approach.", "targets": "Ontology-based Representation and Reasoning on Process Models: A Logic Programming Approach"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-63e9a41631b34837a37fe6b2a6e58ffa", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This short paper concerns discretization schemes for representing and computing approximate Nash equilibria, with emphasis on graphical games, but briefly touching on normal-form and poly-matrix games. The main technical contribution is a representation theorem that informally states that to account for every exact Nash equilibrium using a nearby approximate Nash equilibrium on a grid over mixed strategies, a uniform discretization size linear in the inverse of the approximation quality and natural game-representation parameters suffices. For graphical games, under natural conditions, the discretization is logarithmic in the game-representation size, a substantial improvement over the linear dependency previously required.
The paper has five other objectives: (1) given the venue, to highlight the important, but often ignored, role that work on constraint networks in AI has in simplifying the derivation and analysis of algorithms for computing approximate Nash equilibria; (2) to summarize the state-of-the-art on computing approximate Nash equilibria, with emphasis on relevance to graphical games; (3) to help clarify the distinction between sparse-discretization and sparse-support techniques; (4) to illustrate and advocate for the deliberate mathematical simplicity of the formal proof of the representation theorem; and (5) to list and discuss important open problems, emphasizing graphical-game generalizations, which the AI community is best suited to solve.", "targets": "On Sparse Discretization for Graphical Games"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-63eec7df3a7643b6bbbdf21e048f9ae9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Automatic description generation from natural images is a challenging problem that has recently received a large amount of interest from the computer vision and natural language processing communities. In this survey, we classify the existing approaches based on how they conceptualize this problem, viz., models that cast description either as a generation problem or as a retrieval problem over a visual or multimodal representational space. We provide a detailed review of existing models, highlighting their advantages and disadvantages. Moreover, we give an overview of the benchmark image datasets and the evaluation measures that have been developed to assess the quality of machine-generated image descriptions. Finally, we extrapolate future directions in the area of automatic image description generation.", "targets": "Automatic Description Generation from Images: A Survey of Models, Datasets, and Evaluation Measures"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-91e0cd88a2144728a461a176b70f5bd9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider the stochastic approximation problem where a convex function has to be minimized, given only the knowledge of unbiased estimates of its gradients at certain points, a framework which includes machine learning methods based on the minimization of the empirical risk. We focus on problems without strong convexity, for which all previously known algorithms achieve a convergence rate for function values of O(1/\u221an).", "targets": "Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n)"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dd4457cb09504580bae91f7cce2ff26c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In distributed classification, each learner observes its environment and deduces a classifier. As a learner has only a local view of its environment, classifiers can be exchanged among the learners and integrated, or merged, to improve accuracy. However, the operation of merging is not defined for most classifiers.
Furthermore, the classifiers that have to be merged may be of different types in settings such as ad-hoc networks in which several generations of sensors may be creating classifiers. We introduce decision spaces as a framework for merging possibly different classifiers. We formally study the merging operation as an algebra, and prove that it satisfies a desirable set of properties. The impact of time is discussed for the two main data mining settings. Firstly, decision spaces can naturally be used with non-stationary distributions, such as the data collected by sensor networks, as the impact of a model decays over time. Secondly, we introduce an approach for stationary distributions, such as homogeneous databases partitioned over different learners, which ensures that all models have the same impact. We also present a method that uses storage flexibly to achieve different types of decay for non-stationary distributions. Finally, we show that the algebraic approach developed for merging can also be used to analyze the behaviour of other operators.", "targets": "An Algebra to Merge Heterogeneous Classifiers"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6febae80305648b1ba564e50cf6b00bb", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In the present paper we use principles of fuzzy logic to develop a general model representing several processes in a system\u2019s operation characterized by a degree of vagueness and/or uncertainty. For this, the main stages of the corresponding process are represented as fuzzy subsets of a set of linguistic labels characterizing the system\u2019s performance at each stage. We also introduce three alternative measures of a fuzzy system\u2019s effectiveness connected to our general model. These measures include the system\u2019s total possibilistic uncertainty, Shannon\u2019s entropy properly modified for use in a fuzzy environment, and the \u201ccentroid\u201d method, in which the coordinates of the center of mass of the graph of the membership function involved provide an alternative measure of the system\u2019s performance. The advantages and disadvantages of the above measures are discussed and a combined use of them is suggested for achieving a credible mathematical analysis of the corresponding situation. An application is also developed for the Mathematical Modelling process illustrating the use of our results in practice.", "targets": "A Study on Fuzzy Systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-61759a44f1fe45068767675cf8731626", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The rapid increase in digitized documents gives rise to a high demand for document image retrieval. While conventional document image retrieval approaches depend on complex OCR-based text recognition and text similarity detection, this paper proposes a new content-based approach, in which more attention is paid to feature extraction and fusion. In the proposed approach, multiple features of document images are extracted by different CNN models. After that, the extracted CNN features are reduced and fused into a weighted average feature. Finally, the document images are ranked based on feature similarity to a provided query image.
Experiments are performed on a group of document images converted from academic papers, containing both English and Chinese documents. The results show that the proposed approach has a good ability to retrieve document images with similar text content, and that the fusion of CNN features can effectively improve the retrieval accuracy.", "targets": "Content-based similar document image retrieval using fusion of CNN features"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-72f6970c0cdb4c0d99b9966205c95bb2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we describe a dataset relating to cellular and physical conditions of patients who are operated upon to remove colorectal tumours. This data provides a unique insight into immunological status at the point of tumour removal, tumour classification and post-operative survival. We build on existing research on clustering and machine learning facets of this data to demonstrate a role for an ensemble approach to highlighting patients with clearer prognosis parameters. Results for survival prediction using 3 different approaches are shown for a subset of the data which is most difficult to model. The performance of each model individually is compared with subsets of the data where some agreement is reached for multiple models. Significant improvements in model accuracy on an unseen test set can be achieved for patients where agreement between models is achieved. Keywords\u2014ensemble learning; anti-learning; colorectal cancer.", "targets": "Ensemble Learning of Colorectal Cancer Survival Rates"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-802d5179248c449db43470c53e3c2475", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A college student\u2019s life can be primarily categorized into domains such as education, health, social and other activities which may include daily chores and travelling time. Time management is crucial for every student. A self-realisation of one\u2019s daily time expenditure in various domains is therefore essential to maximize one\u2019s effective output. This paper presents how a mobile application using Fuzzy Logic and Global Positioning System (GPS) analyzes a student\u2019s lifestyle and provides recommendations and suggestions based on the results. Keywords\u2014Fuzzy Logic, GPS, Android Application", "targets": "A Fuzzy Logic System to Analyze a Student\u2019s Lifestyle"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fa2a664b9a204dfe983af43c75866877", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Object recognition and localization are important tasks in computer vision. The focus of this work is the incorporation of contextual information in order to improve object recognition and localization. For instance, it is natural to expect not to see an elephant in the middle of an ocean. We consider a simple approach to encapsulate such common sense knowledge using co-occurrence statistics from web documents. By merely counting the number of times nouns (such as elephants, sharks, oceans, etc.)
co-occur in web documents, we obtain a good estimate of expected co-occurrences in visual data. We then cast the problem of combining textual co-occurrence statistics with the predictions of image-based classifiers as an optimization problem. The resulting optimization problem serves as a surrogate for our inference procedure. Despite the simplicity of the resulting optimization problem, it is effective in improving both recognition and localization accuracy. Concretely, we observe significant improvements in recognition and localization rates for both the ImageNet Detection 2012 and Sun 2012 datasets.", "targets": "Using Web Co-occurrence Statistics for Improving Image Categorization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f4c8462acb604468a5eefad3cbd9cc4b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce Natural Neural Networks, a novel family of algorithms that speed up convergence by adapting their internal representation during training to improve conditioning of the Fisher matrix. In particular, we show a specific example that employs a simple and efficient reparametrization of the neural network weights by implicitly whitening the representation obtained at each layer, while preserving the feed-forward computation of the network. Such networks can be trained efficiently via the proposed Projected Natural Gradient Descent algorithm (PRONG), which amortizes the cost of these reparametrizations over many parameter updates and is closely related to the Mirror Descent online learning algorithm. We highlight the benefits of our method on both unsupervised and supervised learning tasks, and showcase its scalability by training on the large-scale ImageNet Challenge dataset.", "targets": "Natural Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a727d84adcf74d119d660156ce786e77", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider online learning of ensembles of portfolio selection algorithms and aim to regularize risk by encouraging diversification with respect to a predefined risk-driven grouping of stocks. Our procedure uses online convex optimization to control capital allocation to underlying investment algorithms while encouraging non-sparsity over the given grouping. We prove a logarithmic regret for this procedure with respect to the best-in-hindsight ensemble. We applied the procedure with known mean-reversion portfolio selection algorithms using the standard GICS industry sector grouping. Experimental results showed an impressive percentage increase in risk-adjusted return (Sharpe ratio).", "targets": "Online Learning of Portfolio Ensembles with Sector Exposure Regularization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4fb50d9ebfe1425b92d613ff85fd97ad", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Chinese characters have a complex and hierarchical graphical structure carrying both semantic and phonetic information. We use this structure to enhance the text model and obtain better results in standard NLP operations.
First of all, to tackle the problem of graphical variation we define allographic classes of characters. Next, the relation of inclusion of a subcharacter in a character provides us with a directed graph of allographic classes. We provide this graph with two weights: semanticity (semantic relation between subcharacter and character) and phoneticity (phonetic relation), and calculate \u201cmost semantic subcharacter paths\u201d for each character. Finally, by adding the information contained in these paths to unigrams, we claim to increase the efficiency of text mining methods. We evaluate our method on a text classification task on two corpora (Chinese and Japanese) of a total of 18 million characters and get an improvement of 3% on an already high baseline of 89.6% precision, obtained by a linear SVM classifier. Other possible applications and perspectives of the system are discussed.", "targets": "New Perspectives in Sinographic Language Processing Through the Use of Character Structure"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1611980a3ea543db9d91279cc6c32a46", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we present the architecture of a fuzzy expert system used for the therapy of dyslalic children. With a fuzzy approach we can create a better model of speech therapist decisions. A software interface was developed for validation of the system. The main objectives of this task are: personalized therapy (the therapy must be in accordance with the child\u2019s problem level, context and possibilities), speech therapist assistance (the expert system offers suggestions regarding which exercises are better at a specific moment and for a specific child), and (self) teaching (when the system\u2019s conclusion differs from the speech therapist\u2019s conclusion, the latter must be able to change the knowledge base).", "targets": "ARCHITECTURE OF A FUZZY EXPERT SYSTEM USED FOR DYSLALIC CHILDREN THERAPY"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9ecfee6326024c7fa92cd670c2683d0b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In some domestic professional sports leagues, the home stadiums are located in cities connected by a common train line running in one direction. For these instances, we can incorporate this geographical information to determine optimal or nearly-optimal solutions to the n-team Traveling Tournament Problem (TTP), an NP-hard sports scheduling problem whose solution is a double round-robin tournament schedule that minimizes the sum total of distances traveled by all n teams. We introduce the Linear Distance Traveling Tournament Problem (LD-TTP), and solve it for n = 4 and n = 6, generating the complete set of possible solutions through elementary combinatorial techniques. For larger n, we propose a novel \u201cexpander construction\u201d that generates an approximate solution to the LD-TTP. For n \u2261 4 (mod 6), we show that our expander construction produces a feasible double round-robin tournament schedule whose total distance is guaranteed to be no worse than 4/3 times the optimal solution, regardless of where the n teams are located. This 4/3-approximation for the LD-TTP is stronger than the currently best-known ratio of 5/3 + \u03b5 for the general TTP.
We conclude the paper by applying this linear distance relaxation to general (nonlinear) n-team TTP instances, where we develop fast approximate solutions by simply \u201cassuming\u201d the n teams lie on a straight line and solving the modified problem. We show that this technique surprisingly generates the distance-optimal tournament on all benchmark sets on 6 teams, as well as close-to-optimal schedules for larger n, even when the teams are located around a circle or positioned in three-dimensional space.", "targets": "Generating Approximate Solutions to the Traveling Tournament Problem using a Linear Distance Relaxation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7bb59760be8746948bcdfec5cad43b9c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper attempts multi-label classification by extending the idea of independent binary classification models for each output label, and exploring how the inherent correlation between output labels can be used to improve predictions. Logistic Regression, Naive Bayes, Random Forest, and SVM models were constructed, with SVM giving the best results: an improvement of 12.9% over binary models was achieved for hold-out cross-validation by augmenting with pairwise correlation probabilities of the labels.", "targets": "Exploring Correlation between Labels to improve Multi-Label Classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-44d84a6c5c8644569b9f968531bbfd16", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents a multilingual study on, per single post of microblog text, (a) how much can be said, (b) how much is written in terms of characters and bytes, and (c) how much is said in terms of information content in posts by different organizations in different languages. Focusing on three different languages (English, Chinese, and Japanese), this research analyses Weibo and Twitter accounts of major embassies and news agencies. We first establish our criterion for quantifying \u201chow much can be said\u201d in a digital text based on the openly available Universal Declaration of Human Rights and the translated subtitles from TED talks. These parallel corpora allow us to determine the number of characters and bits needed to represent the same content in different languages and character encodings. We then derive the amount of information that is actually contained in microblog posts authored by selected accounts on Weibo and Twitter. Our results confirm that languages with larger character sets such as Chinese and Japanese contain more information per character than English, but the actual information content contained within a microblog text varies depending on both the type of organization and the language of the post. We conclude with a discussion on the design implications of microblog text limits for different languages.", "targets": "How much is said in a microblog? A multilingual inquiry based on Weibo and Twitter"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c3331cecf5854d1c86ca7e8b550b7d4d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Classification and clustering have been studied separately in machine learning and computer vision. Inspired by the recent success of deep learning models in solving various vision problems (e.g., object recognition, semantic segmentation) and the fact that humans serve as the gold standard in assessing clustering algorithms, here we advocate for a unified treatment of the two problems and suggest that hierarchical frameworks that progressively build complex patterns on top of the simpler ones (e.g., convolutional neural networks) offer a promising solution. We do not dwell much on the learning mechanisms in these frameworks, as they are still a matter of debate with respect to biological constraints. Instead, we emphasize the compositionality of real-world structures and objects. In particular, we show that CNNs, trained end to end using back propagation with noisy labels, are able to cluster data points belonging to several overlapping shapes, and do so much better than state-of-the-art algorithms. The main takeaway lesson from our study is that mechanisms of human vision, particularly the hierarchical organization of the visual ventral stream, should be taken into account in clustering algorithms (e.g., for learning representations in an unsupervised manner or with minimum supervision) to reach human-level clustering performance. This by no means suggests that other methods do not hold merit. For example, methods relying on pairwise affinities (e.g., spectral clustering) have been very successful in many cases but still fail in some cases (e.g., overlapping clusters).", "targets": "A new look at clustering through the lens of deep convolutional neural networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fc928ca4fa934e7bab53f5938940c6c5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce Deep Neural Programs (DNP), a novel programming paradigm for writing adaptive controllers for cyber-physical systems (CPS). DNP replace if and while statements, whose discontinuity is responsible for undecidability in CPS analysis, intractability in CPS design, and frailness in CPS implementation, with their smooth, neural nif and nwhile counterparts. This not only makes CPS analysis decidable and CPS design tractable, but also allows one to write robust and adaptive CPS code. In DNP the connection between the sigmoidal guards of the nif and nwhile statements has to be given as a Gaussian Bayesian network, which reflects the partial knowledge the CPS program has about its environment. To the best of our knowledge, DNP are the first approach linking neural networks to programs in a way that makes explicit the meaning of the network. In order to prove and validate the usefulness of DNP, we use them to write and learn an adaptive CPS controller for the parallel parking of the Pioneer rovers available in our CPS lab.", "targets": "Deep Neural Programs for Adaptive Control in Cyber-Physical Systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dedde9ae19114f3c830a0b6c80df543e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Topic Models have been reported to be beneficial for aspect-based sentiment analysis.
This paper reports a simple topic model for sarcasm detection, the first, to the best of our knowledge. Designed on the basis of the intuition that sarcastic tweets are likely to have a mixture of words of both sentiments, as against tweets with literal sentiment (either positive or negative), our hierarchical topic model discovers sarcasm-prevalent topics and topic-level sentiment. Using a dataset of tweets labeled using hashtags, the model estimates topic-level and sentiment-level distributions. Our evaluation shows that topics such as \u2018work\u2019, \u2018gun laws\u2019 and \u2018weather\u2019 are sarcasm-prevalent topics. Our model is also able to discover the mixture of sentiment-bearing words that exist in a text of a given sentiment-related label. Finally, we apply our model to predict sarcasm in tweets. We outperform two prior works based on statistical classifiers with specific features by around 25%.", "targets": "\u2018Who would have thought of that!\u2019: A Hierarchical Topic Model for Extraction of Sarcasm-prevalent Topics and Sarcasm Detection"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-095a7b3aecf042c19d1dd484ece125b8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Deep reinforcement learning (RL) has achieved several high profile successes in difficult control problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages this data to massively accelerate the learning process even from relatively small amounts of demonstration data. DQfD works by combining temporal difference updates with large-margin classification of the demonstrator\u2019s actions. We show that DQfD has better initial performance than Deep Q-Networks (DQN) on 40 of 42 Atari games and it receives more average rewards than DQN on 27 of 42 Atari games. We also demonstrate that DQfD learns faster than DQN even when given poor demonstration data.", "targets": "Learning from Demonstrations for Real World Reinforcement Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5772c558155c4056b307657d7aac9712", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In budget-limited multi-armed bandit (MAB) problems, the learner\u2019s actions are costly and constrained by a fixed budget. Consequently, an optimal exploitation policy may not be to pull the optimal arm repeatedly, as is the case in other variants of MAB, but rather to pull the sequence of different arms that maximises the agent\u2019s total reward within the budget. This difference from existing MABs means that new approaches to maximising the total reward are required. Given this, we develop two pulling policies, namely: (i) KUBE; and (ii) fractional KUBE.
Whereas the former provides up to 40% better performance in our experimental settings, the latter is computationally less expensive. We also prove logarithmic upper bounds for the regret of both policies, and show that these bounds are asymptotically optimal (i.e. they only differ from the best possible regret by a constant factor).", "targets": "Knapsack based Optimal Policies for Budget\u2013Limited Multi\u2013Armed Bandits"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fb0cbe5daecd495fa4ff31b49c851890", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Correct inference of genetic regulations inside a cell is one of the greatest challenges of the post-genomic era for biologists and researchers. Several intelligent techniques and models have already been proposed to identify regulatory relations among genes from biological databases such as time series microarray data. The Recurrent Neural Network (RNN) is one of the most popular and simple approaches for modeling the dynamics as well as inferring correct dependencies among genes. In this paper, the Bat Algorithm (BA) was applied to optimize the parameters of an RNN model of a Gene Regulatory Network (GRN). Initially, the proposed method was tested against a small artificial network without any noise, and the efficiency was observed in terms of the number of iterations, the population size and the BA optimization parameters. The model was also validated in the presence of different levels of random noise for the small artificial network, which proved its ability to make correct inferences in the presence of noise, as in real-world datasets. In the next phase of this research, the BA-based RNN was applied to a real-world benchmark time series microarray dataset of E. Coli. The results show that it is able to identify most of the true positive regulations, but it also includes some false positive regulations. Therefore, BA is very suitable for identifying biologically plausible GRNs with the help of the RNN model.", "targets": "Recurrent Neural Network Based Modeling of Gene Regulatory Network Using Bat Algorithm"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-16aacbbf94514f389b9afff8a19afe84", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "With the acceptance of Western culture and science, Traditional Chinese Medicine (TCM) has become a controversial issue in China. So, it\u2019s important to study the public\u2019s sentiment and opinion on TCM. The rapid development of online social networks, such as Twitter, makes it convenient and efficient to sample hundreds of millions of people for the aforementioned sentiment study. To the best of our knowledge, the present work is the first attempt that applies sentiment analysis to the domain of TCM on Sina Weibo (a twitter-like microblogging service in China). In our work, we first collect tweets about TCM from Sina Weibo, and label the tweets as supporting TCM or opposing TCM automatically based on user tags. Then, a support vector machine classifier is built to predict the sentiment of TCM tweets without labels. Finally, we present a method to adjust the classifier result.
The F-measure attained with our method is 97%.", "targets": "Sentiment Analysis based on User Tag for Traditional Chinese Medicine in Weibo"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-707717acd759407da880270ae8c2f331", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We describe and analyze a simple and effective algorithm for sequence segmentation applied to speech processing tasks. We propose a neural architecture that is composed of two modules trained jointly: a recurrent neural network (RNN) module and a structured prediction model. The RNN outputs are considered as feature functions to the structured model. The overall model is trained with a structured loss function which can be designed for the given segmentation task. We demonstrate the effectiveness of our method by applying it to two simple tasks commonly used in phonetic studies: word segmentation and voice onset time segmentation. Results suggest the proposed model is superior to previous methods, obtaining state-of-the-art results on the tested datasets.", "targets": "SEQUENCE SEGMENTATION USING JOINT RNN AND STRUCTURED PREDICTION MODELS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c3ae64d4250c4189a572699c04969473", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this work we aim to discover high quality speech features and linguistic units directly from unlabeled speech data in a zero resource scenario. The results are evaluated using the metrics and corpora proposed in the Zero Resource Speech Challenge organized at Interspeech 2015. A Multi-layered Acoustic Tokenizer (MAT) was proposed for automatic discovery of multiple sets of acoustic tokens from the given corpus. Each acoustic token set is specified by a set of hyperparameters that describe the model configuration. These sets of acoustic tokens carry different characteristics of the given corpus and the language behind it, and thus can be mutually reinforced. The multiple sets of token labels are then used as the targets of a Multi-target Deep Neural Network (MDNN) trained on low-level acoustic features. Bottleneck features extracted from the MDNN are then used as the feedback input to the MAT and the MDNN itself in the next iteration. We call this iterative deep learning framework the Multi-layered Acoustic Tokenizing Deep Neural Network (MAT-DNN), which generates both high quality speech features for the Track 1 of the Challenge and acoustic tokens for the Track 2 of the Challenge. In addition, we performed extra experiments on the same corpora on the application of query-by-example spoken term detection. The experimental results showed the iterative deep learning framework of MAT-DNN improved the detection performance due to better underlying speech features and acoustic tokens.", "targets": "AN ITERATIVE DEEP LEARNING FRAMEWORK FOR UNSUPERVISED DISCOVERY OF SPEECH FEATURES AND LINGUISTIC UNITS WITH APPLICATIONS ON SPOKEN TERM DETECTION"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4414f1b3a4a542ac9683b1bdf74221e9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "In real-time strategy games like StarCraft, skilled players often block the entrance to their base with buildings to prevent the opponent\u2019s units from getting inside. This technique, called \u201cwalling-in\u201d, is a vital part of player\u2019s skill set, allowing him to survive early aggression. However, current artificial players (bots) do not possess this skill, due to numerous inconveniences surfacing during its implementation in imperative languages like C++ or Java. In this text, written as a guide for bot programmers, we address the problem of finding an appropriate building placement that would block the entrance to player\u2019s base, and present a ready to use declarative solution employing the paradigm of answer set programming (ASP). We also encourage the readers to experiment with different declarative approaches to this problem.", "targets": "Implementing a Wall-In Building Placement in StarCraft with Declarative Programming"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8771acd885394ca8be2336458182a282", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Realizability for knowledge representation formalisms studies the following question: Given a semantics and a set of interpretations, is there a knowledge base whose semantics coincides exactly with the given interpretation set? We introduce a general framework for analyzing realizability in abstract dialectical frameworks (ADFs) and various of its subclasses. In particular, the framework applies to Dung argumentation frameworks, SETAFs by Nielsen and Parsons, and bipolar ADFs. We present a uniform characterization method for the admissible, complete, preferred and model/stable semantics. We employ this method to devise an algorithm that decides realizability for the mentioned formalisms and semantics; moreover the algorithm allows for constructing a desired knowledge base whenever one exists. The algorithm is built in a modular way and thus easily extensible to new formalisms and semantics. We have also implemented our approach in answer set programming, and used the implementation to obtain several novel results on the relative expressiveness of the abovementioned formalisms.", "targets": "Characterizing Realizability in Abstract Argumentation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a08ce95480244762b7fbb0bc370fec6b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We theoretically analyze and compare the following five popular multiclass classification methods: One vs. All, All Pairs, Tree-based classifiers, Error Correcting Output Codes (ECOC) with randomly generated code matrices, and Multiclass SVM. In the first four methods, the classification is based on a reduction to binary classification. We consider the case where the binary classifier comes from a class of VC dimension d, and in particular from the class of halfspaces over R. We analyze both the estimation error and the approximation error of these methods. Our analysis reveals interesting conclusions of practical relevance, regarding the success of the different approaches under various conditions. 
Our proof technique employs tools from VC theory to analyze the approximation error of hypothesis classes. This is in sharp contrast to most, if not all, previous uses of VC theory, which only deal with estimation error.", "targets": "Multiclass Learning Approaches: A Theoretical Comparison with Implications"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-11f868a2b42b482e94649be93020478b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Collaborative data consist of ratings relating two distinct sets of objects: users and items. Much of the work with such data focuses on filtering: predicting unknown ratings for pairs of users and items. In this paper we focus on the problem of visualizing the information. Given all of the ratings, our task is to embed all of the users and items as points in the same Euclidean space. We would like to place users near items that they have rated (or would rate) high, and far away from those they would give low ratings. We pose this problem as a real-valued non-linear Bayesian network and employ Markov chain Monte Carlo and expectation maximization to find an embedding. We present a metric by which to judge the quality of a visualization and compare our results to Eigentaste, locally linear embedding and cooccurrence data embedding on three real-world datasets.", "targets": "Visualization of Collaborative Data"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6ca6d68491904e84bc66dfd2b9bc4eba", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recently, resources and tasks were proposed to go beyond state tracking in dialogue systems. An example is the frame tracking task, which requires recording multiple frames, one for each user goal set during the dialogue. This allows a user, for instance, to compare items corresponding to different goals. This paper proposes a model which takes as input the list of frames created so far during the dialogue, the current user utterance as well as the dialogue acts, slot types, and slot values associated with this utterance. The model then outputs the frame being referenced by each triple of dialogue act, slot type, and slot value. We show that on the recently published Frames dataset, this model significantly outperforms a previously proposed rule-based baseline. In addition, we propose an extensive analysis of the frame tracking task by dividing it into sub-tasks and assessing their difficulty with respect to our model.", "targets": "A Frame Tracking Model for Memory-Enhanced Dialogue Systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1253f547c5e1495a9d9b26a2792c9566", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we propose an extension to the Fuzzy Cognitive Maps (FCMs) that aims at aggregating a number of reasoning tasks into one parallel run. The described approach consists in replacing real-valued activation levels of concepts (and further influence weights) by random variables.
Such an extension, together with the implemented software tool, allows for determining the ranges reached by concept activation levels, sensitivity analysis, as well as statistical analysis of multiple reasoning results. We replace multiplication and addition operators appearing in the FCM state equation by appropriate convolutions applicable for discrete random variables. To make the model computationally feasible, it is further augmented with aggregation operations for discrete random variables. We discuss four implemented aggregators and report the results of preliminary tests.", "targets": "Combining Fuzzy Cognitive Maps and Discrete Random Variables"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-99cf258b04a642c3b4a5e61d1de6b7ef", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Neural machine translation (NMT) models are able to partially learn syntactic information from sequential lexical information. Still, some complex syntactic phenomena such as prepositional phrase attachment are poorly modeled. This work aims to answer two questions: 1) Does explicitly modeling source or target language syntax help NMT? 2) Is tight integration of words and syntax better than multitask training? We introduce syntactic information in the form of CCG supertags either in the source as an extra feature in the embedding, or in the target, by interleaving the target supertags with the word sequence. Our results on WMT data show that explicitly modeling syntax improves machine translation quality for English\u2194German, a high-resource pair, and for English\u2194Romanian, a low-resource pair, and also several syntactic phenomena including prepositional phrase attachment. Furthermore, a tight coupling of words and syntax improves translation quality more than multitask training.", "targets": "Syntax-aware Neural Machine Translation Using CCG"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c06788158fd34588bb910231bf556b59", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We investigate the problem of learning discrete, undirected graphical models in a differentially private way. We show that the approach of releasing noisy sufficient statistics using the Laplace mechanism achieves a good trade-off between privacy, utility, and practicality. A naive learning algorithm that uses the noisy sufficient statistics \u201cas is\u201d outperforms general-purpose differentially private learning algorithms. However, it has three limitations: it ignores knowledge about the data generating process, rests on uncertain theoretical foundations, and exhibits certain pathologies. We develop a more principled approach that applies the formalism of collective graphical models to perform inference over the true sufficient statistics within an expectation-maximization framework. We show that this learns better models than competing approaches on both synthetic data and on real human mobility data used as a case study.", "targets": "Differentially Private Learning of Undirected Graphical Models Using Collective Graphical Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4a5933ddd5b641659fd5d6fcadb8b152", "definition": "In this task, you are given a part of an article.
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recommender systems play an increasingly important role in online applications to help users find what they need or prefer. Collaborative filtering algorithms that generate predictions by analyzing the user-item rating matrix perform poorly when the matrix is sparse. To alleviate this problem, this paper proposes a simple recommendation algorithm that fully exploits the similarity information among users and items and intrinsic structural information of the user-item matrix. The proposed method constructs a new representation which preserves affinity and structure information in the user-item rating matrix and then performs the recommendation task. To capture proximity information about users and items, two graphs are constructed. The manifold learning idea is used to constrain the new representation to be smooth on these graphs, so as to enforce user and item proximities. Our model is formulated as a convex optimization problem, for which we only need to solve the well-known Sylvester equation. We carry out extensive empirical evaluations on six benchmark datasets to show the effectiveness of this approach.", "targets": "Top-N Recommendation on Graphs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-71d38a60fe614eb4820bf206fb0fa458", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Web services are one of the most significant current topics in information sharing technologies and an example of service-oriented processing. To ensure accurate execution of a web service\u2019s operations, it must be adaptable to the policies of the social networks in which it signs up. This adaptation is implemented using controls called \u201ccommitments\u201d. This paper describes the structure of commitments and existing research on commitments and social web services, and then suggests an algorithm for the consistency of commitments in social web services. Since commitments may be executed concurrently, a key challenge in web service execution based on the commitment structure is ensuring consistency at execution time. The purpose of this research is to provide an algorithm for ensuring consistency between web service operations based on the commitment structure.", "targets": "Consistency Ensuring in Social Web Services Based on Commitments Structure"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a4a3ea81f2a24984b1e32753a08b258b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel objects and their configurations. Developmental psychology has shown that such skills are acquired by infants from observations at a very early stage. In this paper, we contrast a more traditional approach of taking a model-based route with explicit 3D representations and physical simulation with an end-to-end approach that directly predicts stability and related quantities from appearance. We ask the question if and to what extent and quality such a skill can directly be acquired in a data-driven way\u2014bypassing the need for an explicit simulation.
We present a learning-based approach based on simulated data that predicts the stability of towers comprised of wooden blocks under different conditions, as well as quantities related to the potential fall of the towers. The evaluation is carried out on synthetic data and compared to human judgments on the same stimuli.", "targets": "To Fall Or Not To Fall: A Visual Approach to Physical Stability Prediction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7757e6c7e47740899623d7ce2f1d92fd", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper describes the best-first search strategy used by U-Plan (Mansell 1993a), a planning system that constructs quantitatively ranked plans given an incomplete description of an uncertain environment. U-Plan uses uncertain and incomplete evidence describing the environment, characterises it using a Dempster-Shafer interval, and generates a set of possible world states. Plan construction takes place in an abstraction hierarchy where strategic decisions are made before tactical decisions. Search through this abstraction hierarchy is guided by a quantitative measure (expected fulfilment) based on decision theory. The search strategy is best-first with the provision to update expected fulfilments and review previous decisions in the light of planning developments. U-Plan generates multiple plans for multiple possible worlds, and attempts to use existing plans for new world situations. A super-plan is then constructed, based on merging the set of plans and appropriately timed knowledge acquisition operators, which are used to decide between plan alternatives during plan execution.", "targets": "Operator Selection While Planning Under Uncertainty"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bbed1f3b64e64d6bbb4c0288dfa86a4f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The manuscript presents an experiment in implementing a Machine Translation system in a MapReduce model. The empirical evaluation was done using fully implemented translation systems embedded into the MapReduce programming model. Two machine translation paradigms were studied: shallow transfer Rule Based Machine Translation and Statistical Machine Translation. The results show that the MapReduce model can be successfully used to increase the throughput of a machine translation system. Furthermore, this method enhances the throughput of a machine translation system without decreasing the quality of the translation output. Thus, the present manuscript also represents a contribution to the seminal work in natural language processing, specifically Machine Translation. It first points toward the importance of the definition of the metric of throughput of a translation system and, second, the applicability of the machine translation task to the MapReduce paradigm.", "targets": "Increasing the throughput of machine translation systems using clouds"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c33871bef392411e930e7aa9479d8d0e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "In many machine learning applications, labeled data is scarce and obtaining more labels is expensive. We introduce a new approach to supervising neural networks by specifying constraints that should hold over the output space, rather than direct examples of input-output pairs. These constraints are derived from prior domain knowledge, e.g., from known laws of physics. We demonstrate the effectiveness of this approach on real world and simulated computer vision tasks. We are able to train a convolutional neural network to detect and track objects without any labeled examples. Our approach can significantly reduce the need for labeled training data, but introduces new challenges for encoding prior knowledge into appropriate loss functions.", "targets": "Label-Free Supervision of Neural Networks with Physics and Domain Knowledge"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0bf4bc51a31643fb8c9a3cb1ad74b670", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Holding commercial negotiations and selecting the best supplier in supply chain management systems are among weaknesses of producers in production process. Therefore, applying intelligent systems may have an effective role in increased speed and improved quality in the selections .This paper introduces a system which tries to trade using multi-agents systems and holding negotiations between any agents. In this system, an intelligent agent is considered for each segment of chains which it tries to send order and receive the response with attendance in negotiation medium and communication with other agents .This paper introduces how to communicate between agents, characteristics of multi-agent and standard registration medium of each agent in the environment. JADE (Java Application Development Environment) was used for implementation and simulation of agents cooperation. Keyword(s): e-Commerce, e-Business, Supply Chain Management System(SCM), eSCM, Intelligent Agents, JADE, Multi Agents", "targets": "An Intelligent Approach for Negotiating between chains in Supply Chain Management Systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-70a466ed17e649e6a6b321f33fd7d38f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper introduces an elemental building block which combines Dictionary Learning and Dimension Reduction (DRDL). We show how this foundational element can be used to iteratively construct a Hierarchical Sparse Representation (HSR) of a sensory stream. We compare our approach to existing models showing the generality of our simple prescription. We then perform preliminary experiments using this framework, illustrating with the example of an object recognition task using standard datasets. This work introduces the very first steps towards an integrated framework for designing and analyzing various computational tasks from learning to attention to action. 
The ultimate goal is building a mathematically rigorous, integrated theory of intelligence.", "targets": "Learning Hierarchical Sparse Representations using Iterative Dictionary Learning and Dimension Reduction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7e12da476f984787baae6fdd3d7a4e4a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We start with an overview of a class of submodular functions called SCMMs (sums of concave composed with non-negative modular functions plus a final arbitrary modular). We then define a new class of submodular functions we call deep submodular functions or DSFs. We show that DSFs are a flexible parametric family of submodular functions that share many of the properties and advantages of deep neural networks (DNNs), including many-layered hierarchical topologies, representation learning, distributed representations, opportunities and strategies for training, and suitability to GPU-based matrix/vector computing. DSFs can be motivated by considering a hierarchy of descriptive concepts over ground elements, where one wishes to allow submodular interaction throughout this hierarchy. In machine learning and data science applications, where there is often either a natural or an automatically learnt hierarchy of concepts over data, DSFs therefore naturally apply. Results in this paper show that DSFs constitute a strictly larger class of submodular functions than SCMMs, thus justifying their mathematical and practical utility. Moreover, we show that, for any integer k > 0, there are k-layer DSFs that cannot be represented by a k\u2032-layer DSF for any k\u2032 < k. This implies that, like DNNs, there is a utility to depth, but unlike DNNs (which can be universally approximated by shallow networks), the family of DSFs strictly increases with depth. Despite this property, however, we show that DSFs, even with arbitrarily large k, do not comprise all submodular functions. We show this using a technique that \u201cbackpropagates\u201d certain requirements if it were the case that DSFs comprised all submodular functions. In offering the above results, we also define the notion of an antitone superdifferential of a concave function and show how this relates to submodular functions (in general), DSFs (in particular), negative second-order partial derivatives, continuous submodularity, and concave extensions. To further motivate our analysis, we provide various special case results from matroid theory, comparing DSFs with forms of matroid rank, in particular the laminar matroid. Lastly, we discuss strategies to learn DSFs, and define the classes of deep supermodular functions, deep difference of submodular functions, and deep multivariate submodular functions, and discuss where these can be useful in applications.", "targets": "Deep Submodular Functions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-76ac039483544e5fbc19b5c1b2ffdd16", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Vector Symbolic Architectures (VSAs) are high-dimensional vector representations of objects (e.g., words, image parts), relations (e.g., sentence structures), and sequences for use with machine learning algorithms.
They consist of a vector addition operator for representing a collection of unordered objects, a Binding operator for associating groups of objects, and a methodology for encoding complex structures. We first develop Constraints that machine learning imposes upon VSAs: for example, similar structures must be represented by similar vectors. The constraints suggest that current VSAs should represent phrases (\u201cThe smart Brazilian girl\u201d) by binding sums of terms, in addition to simply binding the terms directly. We show that matrix multiplication can be used as the binding operator for a VSA, and that matrix elements can be chosen at random. A consequence for living systems is that binding is mathematically possible without the need to specify, in advance, precise neuron-to-neuron connection properties for large numbers of synapses. A VSA that incorporates these ideas, MBAT (Matrix Binding of Additive Terms), is described that satisfies all Constraints. With respect to machine learning, for some types of problems appropriate VSA representations permit us to prove learnability, rather than relying on simulations. We also propose dividing machine (and neural) learning and representation into three Stages, with differing roles for learning in each stage. For neural modeling, we give \u201crepresentational reasons\u201d for nervous systems to have many recurrent connections, as well as for the importance of phrases in language processing. Sizing simulations and analyses suggest that VSAs in general, and MBAT in particular, are ready for real-world applications.", "targets": "Representing Objects, Relations, and Sequences"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-520bacf9d7ff40b5957370ef7cd72bd9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "It was recently shown that the problem of decoding messages transmitted through a noisy channel can be formulated as a belief updating task over a probabilistic network (14). Moreover, it was observed that iterative application of the (linear time) belief propagation algorithm designed for polytrees (15) outperformed state of the art decoding algorithms, even though the corresponding networks may have many cycles. This paper demonstrates empirically that an approximation algorithm approx-mpe for solving the most probable explanation (MPE) problem, developed within the recently proposed mini-bucket elimination framework (4), outperforms iterative belief propagation on classes of coding networks that have bounded induced width. Our experiments suggest that approximate MPE decoders can be good competitors to the approximate belief updating decoders.", "targets": "Empirical Evaluation of Approximation Algorithms for Probabilistic Decoding"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e9188bee8bd14f55ad4819a05faebc4c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Local consistency techniques such as k-consistency are a key component of specialised solvers for constraint satisfaction problems.
In this paper we show that the power of using k-consistency techniques on a constraint satisfaction problem is precisely captured by using a particular inference rule, which we call negative-hyper-resolution, on the standard direct encoding of the problem into Boolean clauses. We also show that current clause-learning SAT-solvers will discover in expected polynomial time any inconsistency that can be deduced from a given set of clauses using negative-hyper-resolvents of a fixed size. We combine these two results to show that, without being explicitly designed to do so, current clause-learning SAT-solvers efficiently simulate k-consistency techniques, for all fixed values of k. We then give some experimental results to show that this feature allows clause-learning SAT-solvers to efficiently solve certain families of constraint problems which are challenging for conventional constraint-programming solvers.", "targets": "Local Consistency and SAT-Solvers"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3ebec859d3e040d9b6a67a22b26c1cae", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than the state of the art at lower computational cost.", "targets": "OUTRAGEOUSLY LARGE NEURAL NETWORKS: THE SPARSELY-GATED MIXTURE-OF-EXPERTS LAYER"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0adae063cdd345089d8a229cbd928905", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Generative state estimators based on probabilistic filters and smoothers are one of the most popular classes of state estimators for robots and autonomous vehicles. However, generative models have limited capacity to handle rich sensory observations, such as camera images, since they must model the entire distribution over sensor readings. Discriminative models do not suffer from this limitation, but are typically more complex to train as latent variable models for state estimation.
We present an alternative approach where the parameters of the latent state distribution are directly optimized as a deterministic computation graph, resulting in a simple and effective gradient descent algorithm for training discriminative state estimators. We show that this procedure can be used to train state estimators that use complex input, such as raw camera images, which must be processed using expressive nonlinear function approximators such as convolutional neural networks. Our model can be viewed as a type of recurrent neural network, and the connection to probabilistic filtering allows us to design a network architecture that is particularly well suited for state estimation. We evaluate our approach on a tracking task with raw image inputs. The results show significant improvement over both standard generative approaches and regular recurrent neural networks.", "targets": "Backprop KF: Learning Discriminative Deterministic State Estimators"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d9543de07c0e49a6a17cf9e5b656dc47", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We describe an approach for unsupervised learning of a generic, distributed sentence encoder. Using the continuity of text from books, we train an encoder-decoder model that tries to reconstruct the surrounding sentences of an encoded passage. Sentences that share semantic and syntactic properties are thus mapped to similar vector representations. We next introduce a simple vocabulary expansion method to encode words that were not seen as part of training, allowing us to expand our vocabulary to a million words. After training our model, we extract and evaluate our vectors with linear models on 8 tasks: semantic relatedness, paraphrase detection, image-sentence ranking, question-type classification and 4 benchmark sentiment and subjectivity datasets. The end result is an off-the-shelf encoder that can produce highly generic sentence representations that are robust and perform well in practice. We will make our encoder publicly available.", "targets": "Skip-Thought Vectors"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f04f99ec0002421a840d07c026eda144", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The computational cost of many signal processing and machine learning techniques is often dominated by the cost of applying certain linear operators to high-dimensional vectors. This paper introduces an algorithm aimed at reducing the complexity of applying linear operators in high dimension by approximately factorizing the corresponding matrix into few sparse factors. The approach relies on recent advances in non-convex optimization. It is first explained and analyzed in detail and then demonstrated experimentally on various problems including dictionary learning for image denoising, and the approximation of large matrices arising in inverse problems.", "targets": "Flexible Multi-layer Sparse Approximations of Matrices and Applications"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-06bce01a014a427595638c734b6669d5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Cyber-Physical Systems in general, and Intelligent Transport Systems (ITS) in particular use heterogeneous data sources combined with problem solving expertise in order to make critical decisions that may lead to some form of actions e.g., driver notifications, change of traffic light signals and braking to prevent an accident. Currently, a major part of the decision process is done by human domain experts, which is time-consuming, tedious and error-prone. Additionally, due to the intrinsic nature of knowledge possession this decision process cannot be easily replicated or reused. Therefore, there is a need for automating the reasoning processes by providing computational systems a formal representation of the domain knowledge and a set of methods to process that knowledge. In this paper, we propose a knowledge model that can be used to express both declarative knowledge about the systems\u2019 components, their relations and their current state, as well as procedural knowledge representing possible system behavior. In addition, we introduce a framework for knowledge management and automated reasoning (KMARF). The idea behind KMARF is to automatically select an appropriate problem solver based on formalized reasoning expertise in the knowledge base, and convert a problem definition to the corresponding format. This approach automates reasoning, thus reducing operational costs, and enables reusability of knowledge and methods across different domains. We illustrate the approach on a transportation", "targets": "A Framework for Knowledge Management and Automated Reasoning Applied on Intelligent Transport Systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f10b41bf93f4430b86fe6aea851c35e0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Multi-Agent Path Finding (MAPF) is an NP-hard problem well studied in artificial intelligence and robotics. It has many real-world applications for which existing MAPF solvers use various heuristics. However, these solvers are deterministic and perform poorly on \u201chard\u201d instances typically characterized by many agents interfering with each other in a small region. In this paper, we enhance MAPF solvers with randomization and observe that they exhibit heavy-tailed distributions of runtimes on hard instances. This leads us to develop simple rapid randomized restart (RRR) strategies with the intuition that, given a hard instance, multiple short runs have a better chance of solving it compared to one long run. We validate this intuition through experiments and show that our RRR strategies indeed boost the performance of state-ofthe-art MAPF solvers such as iECBS and M*.", "targets": "Rapid Randomized Restarts for Multi-Agent Path Finding Solvers"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4050f92da0424078b78be330c5121959", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recurrent Neural Networks (RNN) have obtained excellent result in many natural language processing (NLP) tasks. However, understanding and interpreting the source of this success remains a challenge. 
In this paper, we propose the Recurrent Memory Network (RMN), a novel RNN architecture that not only amplifies the power of RNN but also facilitates our understanding of its internal functioning and allows us to discover underlying patterns in data. We demonstrate the power of RMN on language modeling and sentence completion tasks. On language modeling, RMN outperforms the Long Short-Term Memory (LSTM) network on three large German, Italian, and English datasets. Additionally, we perform an in-depth analysis of various linguistic dimensions that RMN captures. On the Sentence Completion Challenge, for which it is essential to capture sentence coherence, our RMN obtains 69.2% accuracy, surpassing the previous state-of-the-art by a large margin.", "targets": "Recurrent Memory Network for Language Modeling"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-92b71a260cea4439ac0529474cd1ac29", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper studies theoretically and empirically a method of turning machine-learning algorithms into probabilistic predictors that automatically enjoys a property of validity (perfect calibration) and is computationally efficient. The price to pay for perfect calibration is that these probabilistic predictors produce imprecise (in practice, almost precise for large data sets) probabilities. When these imprecise probabilities are merged into precise probabilities, the resulting predictors, while losing the theoretical property of perfect calibration, are consistently more accurate than the existing methods in empirical studies. The conference version of this paper is to appear in Advances in Neural Information Processing Systems 28, 2015.", "targets": "Large-scale probabilistic prediction with and without validity guarantees"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8e8d80205de5477893ae045c5efd913f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Reinforcement learning is a powerful technique to train an agent to perform a task. However, an agent that is trained using reinforcement learning is only capable of achieving the single task that is specified via its reward function. Such an approach does not scale well to settings in which an agent needs to perform a diverse set of tasks, such as navigating to varying positions in a room or moving objects to varying locations. Instead, we propose a method that allows an agent to automatically discover the range of tasks that it is capable of performing in its environment. We use a generator network to propose tasks for the agent to try to achieve, each task being specified as reaching a certain parametrized subset of the state space. The generator network is optimized using adversarial training to produce tasks that are always at the appropriate level of difficulty for the agent. Our method thus automatically produces a curriculum of tasks for the agent to learn. We show that, by using this framework, an agent can efficiently and automatically learn to perform a wide set of tasks without requiring any prior knowledge of its environment.
Our method can also learn to achieve tasks with sparse rewards, which traditionally pose significant challenges.", "targets": "Automatic Goal Generation for Reinforcement Learning Agents"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-396741e0e9d84954b742b0c7de602133", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Nearest neighbor (k-NN) graphs are widely used in machine learning and data mining applications, and our aim is to better understand what they reveal about the cluster structure of the unknown underlying distribution of points. Moreover, is it possible to identify spurious structures that might arise due to sampling variability? Our first contribution is a statistical analysis that reveals how certain subgraphs of a k-NN graph form a consistent estimator of the cluster tree of the underlying distribution of points. Our second and perhaps most important contribution is the following finite sample guarantee. We carefully work out the tradeoff between aggressive and conservative pruning and are able to guarantee the removal of all spurious cluster structures at all levels of the tree while at the same time guaranteeing the recovery of salient clusters. This is the first such finite sample result in the context of clustering.", "targets": "Pruning nearest neighbor cluster trees"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-13a21f3431b64ab6b3c72470e99295eb", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A new algorithm named EXPected Similarity Estimation (EXPoSE) was recently proposed to solve the problem of large-scale anomaly detection. It is a non-parametric and distribution-free kernel method based on the Hilbert space embedding of probability measures. Given a dataset of n samples, EXPoSE needs only O(n) (linear time) to build a model and O(1) (constant time) to make a prediction. In this work we improve the linear computational complexity and show that an \u03b5-accurate model can be estimated in constant time, which has significant implications for large-scale learning problems. To achieve this goal, we cast the original EXPoSE formulation into a stochastic optimization problem. It is crucial that this approach allows us to determine the number of iterations based on a desired accuracy \u03b5, independent of the dataset size n. We will show that the proposed stochastic gradient descent algorithm works in general (possibly infinite-dimensional) Hilbert spaces, is easy to implement and requires no additional step-size parameters.", "targets": "CONSTANT TIME EXPECTED SIMILARITY ESTIMATION USING STOCHASTIC OPTIMIZATION"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-860c8860bff342b182572bd36582a7f8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper investigates how far a very deep neural network is from attaining close to saturating performance on existing 2D and 3D face alignment datasets.
To this end, we make the following 5 contributions: (a) we construct, for the first time, a very strong baseline by combining a state-of-the-art architecture for landmark localization with a state-of-the-art residual block, train it on a very large yet synthetically expanded 2D facial landmark dataset and finally evaluate it on all other 2D facial landmark datasets. (b) We create a network guided by 2D landmarks, which converts 2D landmark annotations to 3D and unifies all existing datasets, leading to the creation of LS3D-W, the largest and most challenging 3D facial landmark dataset to date (~230,000 images). (c) Following that, we train a neural network for 3D face alignment and evaluate it on the newly introduced LS3D-W. (d) We further look into the effect of all \u201ctraditional\u201d factors affecting face alignment performance like large pose, initialization and resolution, and introduce a \u201cnew\u201d one, namely the size of the network. (e) We show that both 2D and 3D face alignment networks achieve performance of remarkable accuracy which is probably close to saturating the datasets used. Training and testing code as well as the dataset can be downloaded from https://www.adrianbulat.com/face-alignment/", "targets": "How far are we from solving the 2D & 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-72f6ef8038144b6a85ce48d9bece8b7b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Exposing latent knowledge in geospatial trajectories has the potential to provide a better understanding of the movements of individuals and groups. Motivated by such a desire, this work presents the context tree, a new hierarchical data structure that summarises the context behind user actions in a single model. We propose a method for context tree construction that augments geospatial trajectories with land usage data to identify such contexts. Through evaluation of the construction method and analysis of the properties of generated context trees, we demonstrate the foundation afforded for understanding and modelling behaviour. Summarising user contexts into a single data structure gives easy access to information that would otherwise remain latent, providing the basis for better understanding and predicting the actions and behaviours of individuals and groups. Finally, we also present a method for pruning context trees, for use in applications where it is desirable to reduce the size of the tree while retaining useful information.", "targets": "Context Trees: Augmenting Geospatial Trajectories with Context"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c5fa4f6f8f85406eb3e76f956113f712", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Problems at the intersection of vision and language are of significant importance both as challenging research questions and for the rich set of applications they enable. However, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in models that ignore visual information, leading to an inflated sense of their capability.
We propose to counter these language priors for the task of Visual Question Answering (VQA) and make vision (the V in VQA) matter! Specifically, we balance the popular VQA dataset [3] by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at http://visualqa.org/ as part of the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-the-art VQA models on our balanced dataset. All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners. Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model, which in addition to providing an answer to the given (image, question) pair, also provides a counterexample-based explanation. Specifically, it identifies an image that is similar to the original image, but it believes has a different answer to the same question. This can help in building trust for machines among their users.", "targets": "Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-24e4a1642a90444ba7e2bc7efdc19457", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Feature reduction is an important concept which is used for reducing dimensions to decrease the computational complexity and time of classification. Many approaches have so far been proposed for solving this problem, but almost all of them just present a fixed output for each input dataset, and some of these outputs are not satisfactory for classification. In this paper, we propose an approach that processes the input dataset to increase the accuracy rate of feature extraction methods. First of all, a new concept called dispelling classes gradually (DCG) is proposed to increase the separability of classes based on their labels. Next, this method is used to process the input dataset of the feature reduction approaches so as to decrease the misclassification error rate of their outputs beyond what is achieved without any processing. In addition, our method copes well with noise by adapting the dataset to the feature reduction approaches. In the results section, the two conditions (with and without processing) are compared on several UCI datasets to support our idea.", "targets": "Dispelling Classes Gradually to Improve Quality of Feature Reduction Approaches"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2da3afa5771f41f38d57dbd354b59d32", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Gaussian state space models have been used for decades as generative models of sequential data. They admit an intuitive probabilistic interpretation, have a simple functional form, and enjoy widespread adoption. We introduce a unified algorithm to efficiently learn a broad class of linear and non-linear state space models, including variants where the emission and transition distributions are modeled by deep neural networks. Our learning algorithm simultaneously learns a compiled inference network and the generative model, leveraging a structured variational approximation parameterized by recurrent neural networks to mimic the posterior distribution. We apply the learning algorithm to both synthetic and real-world datasets, demonstrating its scalability and versatility. We find that using the structured approximation to the posterior results in models with significantly higher held-out likelihood.", "targets": "Structured Inference Networks for Nonlinear State Space Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5379c91116904b21a8d10d6ed6897f56", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We address the problem of learning a ranking by using adaptively chosen pairwise comparisons. Our goal is to recover the ranking accurately but to sample the comparisons sparingly. If all comparison outcomes are consistent with the ranking, the optimal solution is to use an efficient sorting algorithm, such as Quicksort. But how do sorting algorithms behave if some comparison outcomes are inconsistent with the ranking? We give favorable guarantees for Quicksort for the popular Bradley\u2013Terry model, under natural assumptions on the parameters. Furthermore, we empirically demonstrate that sorting algorithms lead to a very simple and effective active learning strategy: repeatedly sort the items. This strategy performs as well as state-of-the-art methods (and much better than random sampling) at a minuscule fraction of the computational cost.", "targets": "Just Sort It! A Simple and Effective Approach to Active Preference Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c656d09d4304475aa0b846425f951b75", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "While recent neural machine translation approaches have delivered state-of-the-art performance for resource-rich language pairs, they suffer from the data scarcity problem for resource-scarce language pairs. Although this problem can be alleviated by exploiting a pivot language to bridge the source and target languages, the source-to-pivot and pivot-to-target translation models are usually independently trained. In this work, we introduce a joint training algorithm for pivot-based neural machine translation. We propose three methods to connect the two models and enable them to interact with each other during training. 
Experiments on Europarl and WMT corpora show that joint training of source-to-pivot and pivot-to-target models leads to significant improvements over independent training across various languages.", "targets": "Joint Training for Pivot-based Neural Machine Translation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4046ccd880be4adca9173f7cad81fa46", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study the task of online boosting \u2014 combining online weak learners into an online strong learner. While batch boosting has a sound theoretical foundation, online boosting deserves more study from the theoretical perspective. In this paper, we carefully compare the differences between online and batch boosting, and propose a novel and reasonable assumption for the online weak learner. Based on the assumption, we design an online boosting algorithm with a strong theoretical guarantee by adapting from the offline SmoothBoost algorithm that matches the assumption closely. We further tackle the task of deciding the number of weak learners using established theoretical results for online convex programming and predicting with expert advice. Experiments on real-world data sets demonstrate that the proposed algorithm compares favorably with existing online boosting algorithms.", "targets": "An Online Boosting Algorithm with Theoretical Justifications"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7a22a412c458420e9782e219936dc271", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Gene and protein networks are very important to model complex large-scale systems in molecular biology. Inferring or reverse-engineering such networks can be defined as the process of identifying gene/protein interactions from experimental data through computational analysis. However, this task is typically complicated by the enormously large scale of the unknowns in a rather small sample size. Furthermore, when the goal is to study causal relationships within the network, tools capable of overcoming the limitations of correlation networks are required. In this work, we make use of Bayesian Graphical Models to attack this problem and, specifically, we perform a comparative study of different state-of-the-art heuristics, analyzing their performance in inferring the structure of the Bayesian Network from breast cancer data.", "targets": "Combining Bayesian Approaches and Evolutionary Techniques for the Inference of Breast Cancer Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-857ebc553d8a4d1890993ce268c0cbef", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we study the problem of learning a monotone DNF with at most s terms of size (number of variables in each term) at most r (s term r-MDNF) from membership queries. This problem is equivalent to the problem of learning a general hypergraph using hyperedge-detecting queries, a problem motivated by applications arising in chemical reactions and genome sequencing.
We first present new lower bounds for this problem and then present deterministic and randomized adaptive algorithms with query complexities that are almost optimal. All the algorithms we present in this paper run in time linear in the query complexity and the number of variables n. In addition, all of the algorithms we present in this paper are asymptotically tight for fixed r and/or s.", "targets": "On Exact Learning Monotone DNF from Membership Queries"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e4a95afd41414b3ea969feeb58e511ee", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Influence diagrams are decision theoretic extensions of Bayesian networks. They are applied to diverse decision problems. In this paper we apply influence diagrams to the optimization of a vehicle speed profile. We present results of computational experiments in which an influence diagram was used to optimize the speed profile of a Formula 1 race car at the Silverstone F1 circuit. The computed lap time and speed profiles correspond well to those achieved by test pilots. An extended version of our model that considers a more complex optimization function and diverse traffic constraints is currently being tested onboard a testing car by a major car manufacturer. This paper opens doors for new applications of influence diagrams.", "targets": "Influence diagrams for the optimization of a vehicle speed profile"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b7ebb66d040648388af46912dfee1cbe", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Reinforcement learning has been applied to many interesting problems such as the famous TD-gammon [1] and the inverted helicopter flight [2]. However, little effort has been put into developing methods to learn policies for complex persistent tasks and tasks that are time-sensitive. In this paper we take a step towards solving this problem by using signal temporal logic (STL) as the task specification, and taking advantage of the temporal abstraction feature that the options framework provides. We show via simulation that a relatively easy-to-implement algorithm that combines STL and options can learn a satisfactory policy with a small number of training cases.", "targets": "A Hierarchical Reinforcement Learning Method for Persistent Time-Sensitive Tasks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0bf7c55a806d427dbebdbb8ab27e1ce1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Constrained sampling and counting are two fundamental problems in artificial intelligence with a diverse range of applications, spanning probabilistic reasoning and planning to constrained-random verification. While the theory of these problems was thoroughly investigated in the 1980s, prior work either did not scale to industrial-size instances or gave up correctness guarantees to achieve scalability. Recently, we proposed a novel approach that combines universal hashing and SAT solving and scales to formulas with hundreds of thousands of variables without giving up correctness guarantees.
This paper provides an overview of the key ingredients of the approach and discusses challenges that need to be overcome to handle larger real-world instances.", "targets": "Constrained Sampling and Counting: Universal Hashing Meets SAT Solving"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-178ade1d179744509292c61197eacf46", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The field of Distributed Constraint Optimization has gained momentum in recent years thanks to its ability to address various applications related to multi-agent cooperation. While techniques to solve Distributed Constraint Optimization Problems (DCOPs) are abundant and have matured substantially since the field's inception, the number of realistic DCOP applications and benchmarks used to assess the performance of DCOP algorithms is lagging behind. Against this background, we (i) introduce the Smart Home Device Scheduling (SHDS) problem, which describes the problem of coordinating smart device schedules across multiple homes as a multi-agent system, (ii) detail the physical models adopted to simulate smart sensors, smart actuators, and home environments, and (iii) introduce a realistic DCOP benchmark for SHDS problems.", "targets": "A Realistic Dataset for the Smart Home Device Scheduling Problem for DCOPs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-acc9febdce7b44ce8b9a18b04d9915f1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Traditional approaches to non-monotonic reasoning fail to satisfy a number of plausible axioms for belief revision and suffer from conceptual difficulties as well. Recent work on ranked preferential models (RPMs) promises to overcome some of these difficulties. Here we show that RPMs are not adequate to handle iterated belief change. Specifically, we show that RPMs do not always allow for the reversibility of belief change. This result indicates the need for numerical strengths of belief.", "targets": "Non-monotonic Reasoning and the Reversibility of Belief Change"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e8e4c951ac214b84bbf280381f7ba205", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The quality of a Neural Machine Translation system depends substantially on the availability of sizable parallel corpora. For low-resource language pairs this is not the case, resulting in poor translation quality. Inspired by work in computer vision, we propose a novel data augmentation approach that targets low-frequency words by generating new sentence pairs containing rare words in new, synthetically created contexts. Experimental results on simulated low-resource settings show that our method improves translation quality by up to 2.9 BLEU points over the baseline and up to 3.2 BLEU over back-translation.", "targets": "Data Augmentation for Low-Resource Neural Machine Translation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-41fb234bff624c4493c381cefd39683a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "One of the most important aims of the fields of robotics, artificial intelligence and artificial life is the design and construction of systems and machines as versatile and as reliable as living organisms at performing high-level human-like tasks. But how are we to evaluate artificial systems if we are not certain how to measure these capacities in living systems, let alone how to define life or intelligence? Here I survey a concrete metric towards measuring abstract properties of natural and artificial systems, such as the ability to react to the environment and to control one\u2019s own behaviour.", "targets": "Quantifying Natural and Artificial Intelligence in Robots and Natural Systems with an Algorithmic Behavioural Test"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-52101dd5049d4524a0721e203a0ea361", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Residual learning has recently surfaced as an effective means of constructing very deep neural networks for object recognition. However, current incarnations of residual networks do not allow for the modeling and integration of complex relations between closely coupled recognition tasks or across domains. Such problems are often encountered in multimedia applications involving large-scale content recognition. We propose a novel extension of residual learning for deep networks that enables intuitive learning across multiple related tasks using cross-connections called cross-residuals. These cross-residual connections can be viewed as a form of in-network regularization and enable greater network generalization. We show how cross-residual learning (CRL) can be integrated in multitask networks to jointly train and detect visual concepts across several tasks. We present a single multitask cross-residual network with >40% fewer parameters that is able to achieve competitive, or even better, detection performance on a visual sentiment concept detection problem normally requiring multiple specialized single-task networks. The resulting multitask cross-residual network also achieves about 10.4% better detection performance than a standard multitask residual network without cross-residuals, even with a small amount of cross-task weighting.", "targets": "Deep Cross Residual Learning for Multitask Visual Recognition"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-52d1a147dfd844419b5c11bfa8f28404", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In the context of contemporary monophonic music, expression can be seen as the difference between a musical performance and its symbolic representation, i.e. a musical score. In this paper, we show how Maximum Entropy (MaxEnt) models can be used to generate musical expression in order to mimic a human performance. As a training corpus, we had a professional pianist play about 150 melodies of jazz, pop, and latin jazz. The results show good predictive power, validating the choice of our model. Additionally, we set up a listening test whose results reveal that on average, people significantly prefer the melodies generated by the MaxEnt model to the ones without any expression, or with fully random expression.
Furthermore, in some cases, MaxEnt melodies are almost as popular as the human-performed ones.", "targets": "Maximum entropy models for generation of expressive music"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8f5eb39997324b678835cd1d6f72e1db", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Many online communities present user-contributed responses such as reviews of products and answers to questions. User-provided helpfulness votes can highlight the most useful responses, but voting is a social process that can gain momentum based on the popularity of responses and the polarity of existing votes. We propose the Chinese Voting Process (CVP) which models the evolution of helpfulness votes as a self-reinforcing process dependent on position and presentation biases. We evaluate this model on Amazon product reviews and more than 80 StackExchange forums, measuring the intrinsic quality of individual responses and behavioral coefficients of different communities.", "targets": "Beyond Exchangeability: The Chinese Voting Process"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-758d437c4b5a4d1ba7a5c8c2c0ffa72b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "It is well known that conditional independence can be used to factorize a joint probability into a product of conditional probabilities. This paper proposes a constructive definition of intercausal independence, which can be used to further factorize a conditional probability. An inference algorithm is developed, which makes use of both conditional independence and intercausal independence to reduce inference complexity in Bayesian networks.", "targets": "Intercausal Independence and Heterogeneous Factorization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6392682802ed4ea7ad0bb432132166f3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A frequent object of study in linguistic typology is the order of elements {demonstrative, adjective, numeral, noun} in the noun phrase. The goal is to predict the relative frequencies of these orders across languages. Here we use Poisson regression to statistically compare some prominent accounts of this variation. We compare feature systems derived from Cinque (2005) to feature systems given in Cysouw (2010) and Dryer (in prep). In this setting, we do not find clear reasons to prefer the model of Cinque (2005) or Dryer (in prep), but we find both of these models have substantially better fit to the typological data than the model from Cysouw (2010).", "targets": "A Statistical Comparison of Some Theories of NP Word Order"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4f94cdfb177347fdae09d628f972b653", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This work focuses on the rapid development of linguistic annotation tools for resource-poor languages.
We experiment with several cross-lingual annotation projection methods using Recurrent Neural Network (RNN) models. The distinctive feature of our approach is that our multilingual word representation requires only a parallel corpus between the source and target language. More precisely, our method has the following characteristics: (a) it does not use word alignment information, (b) it does not assume any knowledge about foreign languages, which makes it applicable to a wide range of resource-poor languages, (c) it provides truly multilingual taggers. We investigate both uni- and bi-directional RNN models and propose a method to include external information (for instance, low-level information from POS) in the RNN to train higher-level taggers (for instance, super sense taggers). We demonstrate the validity and genericity of our model by using parallel corpora (obtained by manual or automatic translation). Our experiments are conducted to induce cross-lingual POS and super sense taggers.", "targets": "Inducing Multilingual Text Analysis Tools Using Bidirectional Recurrent Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5c049555b18740e49025622fea6ddf11", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We derive an equation for temporal difference learning from statistical principles. Specifically, we start with the variational principle and then bootstrap to produce an updating rule for discounted state value estimates. The resulting equation is similar to the standard equation for temporal difference learning with eligibility traces, so-called TD(\u03bb); however, it lacks the parameter \u03b1 that specifies the learning rate. In place of this free parameter there is now an equation for the learning rate that is specific to each state transition. We experimentally test this new learning rule against TD(\u03bb) and find that it offers superior performance in various settings. Finally, we make some preliminary investigations into how to extend our new temporal difference algorithm to reinforcement learning. To do this we combine our update equation with both Watkins\u2019 Q(\u03bb) and Sarsa(\u03bb) and find that it again offers superior performance without a learning rate parameter.", "targets": "Temporal Difference Updating without a Learning Rate"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-cc756518aea04e21934fe92520a2ea36", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Existing works based on latent factor models have focused on representing the rating matrix as a product of user and item latent factor matrices, both being dense. Latent (factor) vectors define the degree to which a trait is possessed by an item or the affinity of a user towards that trait. A dense user matrix is a reasonable assumption as each user will like/dislike a trait to a certain extent. However, any item will possess only a few of the attributes and never all. Hence, the item matrix should ideally have a sparse structure rather than a dense one as formulated in earlier works. Therefore we propose to factor the ratings matrix into a dense user matrix and a sparse item matrix, which leads us to the Blind Compressed Sensing (BCS) framework.
We derive an efficient algorithm for solving the BCS problem based on the Majorization Minimization (MM) technique. Our proposed approach is able to achieve significantly higher accuracy and shorter run times as compared to existing approaches.", "targets": "Blind Compressive Sensing Framework for Collaborative Filtering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d7d1c71851f149928a0d60bf3a01b35f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Many real-world problems involving constraints can be regarded as instances of the Max-SAT problem, which is the optimization variant of the classic satisfiability problem. In this paper, we propose a novel probabilistic approach for Max-SAT called ProMS. Our algorithm relies on a stochastic local search strategy using a novel probability distribution function with two strategies for picking variables, one based on available information and another purely random one. Moreover, while most previous algorithms based on WalkSAT choose unsatisfied clauses randomly, we introduce a novel clause selection strategy to improve our algorithm. Experimental results illustrate that ProMS outperforms many state-of-the-art stochastic local search solvers on hard unweighted random Max-SAT benchmarks.", "targets": "A Probability Distribution Strategy with Efficient Clause Selection for Hard Max-SAT Formulas"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-49d2976e14af48128cba20515004d588", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The purported \u201cblack box\u201d nature of neural networks is a barrier to adoption in applications where interpretability is essential. Here we present DeepLIFT (Learning Important FeaTures), an efficient and effective method for computing importance scores in a neural network. DeepLIFT compares the activation of each neuron to its \u2018reference activation\u2019 and assigns contribution scores according to the difference. We apply DeepLIFT to models trained on natural images and genomic data, and show significant advantages over gradient-based methods.", "targets": "Not Just A Black Box: Interpretable Deep Learning by Propagating Activation Differences"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5f545e4d6ebb455799ae8c9dca3e634c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "User engagement refers to the amount of interaction an instance (e.g., tweet, news, and forum post) achieves. Ranking the items in social media websites based on the amount of user participation in them can be used in different applications, such as recommender systems. In this paper, we consider a tweet containing a rating for a movie as an instance and focus on ranking the instances of each user based on their engagement, i.e., the total number of retweets and favorites it will gain. For this task, we define several features which can be extracted from the meta-data of each tweet. The features are partitioned into three categories: user-based, movie-based, and tweet-based. We show that in order to obtain good results, features from all categories should be considered.
We exploit regression and learning-to-rank methods to rank the tweets and propose aggregating their results to achieve better performance. We have run our experiments on an extended version of the MovieTweeting dataset provided by the ACM RecSys Challenge 2014. The results show that the learning-to-rank approach outperforms most of the regression models, and that the combination can improve the performance significantly.", "targets": "Regression and Learning to Rank Aggregation for User Engagement Evaluation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4b64ed401dd94dbb8e72f25abae51837", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Analogy Based Effort Estimation (ABE) is one of the prominent methods for software effort estimation. The fundamental concept of ABE is closer to the mentality of expert estimation but with an automated procedure in which the final estimate is generated by reusing similar historical projects. The key issue when using ABE is how to adapt the effort of the retrieved nearest neighbors. The adaptation process is an essential part of ABE, generating more accurate estimates by tuning the selected raw solutions using some adaptation strategy. In this study we show that there are three interrelated decision variables that have a great impact on the success of the adaptation method: (1) the number of nearest analogies (k), (2) the optimum feature set needed for adaptation, and (3) the adaptation weights. To find the right decision regarding these variables, one needs to study all possible combinations and evaluate them individually to select the one that can improve all prediction evaluation measures. The existing evaluation measures usually behave differently, sometimes presenting opposite trends in evaluating prediction methods. This means that changing one decision variable could improve one evaluation measure while decreasing the others. Therefore, the main theme of this research is how to come up with the best decision variables that improve the adaptation strategy, and thus the overall evaluation measures, without degrading the others. The impact of these decisions together has not been investigated before; therefore, we propose to view the building of the adaptation procedure as a multi-objective optimization problem. The Particle Swarm Optimization Algorithm (PSO) is utilized to find the optimum solutions for such decision variables based on optimizing multiple evaluation measures. We evaluated the proposed approaches over 15 datasets using 4 evaluation measures. After extensive experimentation we found that: (1) the predictive performance of ABE is noticeably improved; (2) optimizing all decision variables together is more efficient than ignoring any one of them; and (3) optimizing the decision variables for each project individually yields better accuracy than optimizing them for the whole dataset.", "targets": "Pareto Efficient Multi Objective Optimization for Local Tuning of Analogy Based Estimation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7c09f85f578f460fbe1c15115afae877", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Computing the probability of evidence even with known error bounds is NP-hard.
In this paper we address this hard problem by settling on an easier problem. We propose an approximation which provides high-confidence lower bounds on the probability of evidence but does not have any guarantees in terms of relative or absolute error. Our proposed approximation is a randomized importance sampling scheme that uses the Markov inequality. However, a straightforward application of the Markov inequality may lead to poor lower bounds. We therefore propose several heuristic measures to improve its performance in practice. Empirical evaluation of our scheme with state-of-the-art lower bounding schemes reveals the promise of our approach.", "targets": "Studies in Lower Bounding Probability of Evidence using the Markov Inequality"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fde34b1273b14536b9f82186975130c7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices.", "targets": "DEEP MULTI-TASK REPRESENTATION LEARNING: A TENSOR FACTORISATION APPROACH"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-412fc33783714f9c9dfe0bde7abd4477", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This work presents a fast and scalable algorithm for incremental learning of Gaussian mixture models. By performing rank-one updates on its precision matrices and determinants, its asymptotic time complexity is O(NKD) for N data points, K Gaussian components and D dimensions. The resulting algorithm can be applied to high-dimensional tasks, and this is confirmed by applying it to the classification datasets MNIST and CIFAR-10. Additionally, in order to show the algorithm\u2019s applicability to function approximation and control tasks, it is applied to three reinforcement learning tasks and its data-efficiency is evaluated.", "targets": "Scalable and Incremental Learning of Gaussian Mixture Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b3352a7ba98d4e6facc2f487b7a39c10", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Kernel classifiers and regressors designed for structured data, such as sequences, trees and graphs, have significantly advanced in a number of interdisciplinary areas such as computational biology and drug design.
Typically, kernel functions are designed beforehand for a data type which either exploit statistics of the structures or make use of probabilistic generative models, and then a discriminative classifier is learned based on the kernels via convex optimization. However, such an elegant two-stage approach has also kept kernel methods from scaling up to millions of data points and from exploiting discriminative information to learn feature representations. We propose an effective and scalable approach for structured data representation which is based on the idea of embedding latent variable models into feature spaces, and learning such feature spaces using discriminative information. Furthermore, our feature learning algorithm runs a sequence of function mappings in a way similar to graphical model inference procedures, such as mean field and belief propagation. In real-world applications involving sequences and graphs, we show that the proposed approach is much more scalable than alternatives while producing results comparable to the state-of-the-art in terms of classification and regression.", "targets": "Discriminative Embeddings of Latent Variable Models for Structured Data"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4ddb5981b56846f08e024d34b52edc8f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We show that strategies implemented in automatic theorem proving involve an interesting tradeoff between execution speed, proving speedup/computational time and usefulness of information. We advance formal definitions for these concepts by way of a notion of normality related to an expected (optimal) theoretical speedup when adding useful information (other theorems as axioms), as compared with actual strategies that can be effectively and efficiently implemented. We propose the existence of an ineluctable tradeoff between this normality and computational time complexity. The argument quantifies the usefulness of information in terms of (positive) speed-up. The results disclose a kind of no-free-lunch scenario and a tradeoff of a fundamental nature. The main theorem in this paper together with the numerical experiment\u2014undertaken using two different automatic theorem provers (AProS and Prover9) on random theorems of propositional logic\u2014provides strong theoretical and empirical arguments for the fact that finding new useful information for solving a specific problem (theorem) is, in general, as hard as the problem (theorem) itself.", "targets": "Rare Speed-up in Automatic Theorem Proving Reveals Tradeoff Between Computational Time and Information Value"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3d8dfe5709ed4496bc5b50442501da91", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This report presents a general model of the architecture of information systems for children\u2019s speech recognition. It presents a model of the speech data stream and how it works. The results of these studies and the presented architectural model show that research needs to be focused on acoustic-phonetic modeling in order to improve the quality of children's speech recognition and the robustness of the systems to noise and changes in the transmission environment.
Another important aspect is the development of more accurate algorithms for modeling spontaneous child speech.", "targets": "On model architecture for a children\u2019s speech recognition interactive dialog system"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-23d8cfac301a44b9a3f7f9075fa6a7ed", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Pointwise matches between two time series are of great importance in time series analysis, and dynamic time warping (DTW) is known to provide generally reasonable matches. There are situations where time series alignment should be invariant to scaling and offset in amplitude or where local regions of the considered time series should be strongly reflected in pointwise matches. Two different variants of DTW, affine DTW (ADTW) and regional DTW (RDTW), are proposed to handle scaling and offset in amplitude and provide regional emphasis, respectively. Furthermore, ADTW and RDTW can be combined in two different ways to generate alignments that incorporate advantages from both methods, where the affine model can be applied either globally to the entire time series or locally to each region. The proposed alignment methods outperform DTW on specific simulated datasets, and one-nearest-neighbor classifiers using their associated difference measures are competitive with the difference measures associated with state-of-the-art alignment methods on real datasets.", "targets": "Affine and Regional Dynamic Time Warping"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-aa0ec363cb3f439f942f02153e2d4e09", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider the problem of sequentially choosing between a set of unbiased Monte Carlo estimators to minimize the mean-squared-error (MSE) of a final combined estimate. By reducing this task to a stochastic multi-armed bandit problem, we show that well-developed allocation strategies can be used to achieve an MSE that approaches that of the best estimator chosen in retrospect. We then extend these developments to a scenario where alternative estimators have different, possibly stochastic costs. The outcome is a new set of adaptive Monte Carlo strategies that provide stronger guarantees than previous approaches while offering practical advantages.", "targets": "Adaptive Monte Carlo via Bandit Allocation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5a6c483b9ee545ca96e3d3cf55acd468", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The paper presents an application of Conformal Predictors to a chemoinformatics problem of identifying activities of chemical compounds. The paper addresses some specific challenges of this domain: a large number of compounds (training examples), high-dimensionality of feature space, sparseness and a strong class imbalance. A variant of conformal predictors called Inductive Mondrian Conformal Predictor is applied to deal with these challenges. Results are presented for several non-conformity measures (NCM) extracted from underlying algorithms and different kernels.
A number of performance measures are used in order to demonstrate the flexibility of Inductive Mondrian Conformal Predictors in dealing with such a complex set of data.", "targets": "Conformal Predictors for Compound Activity Prediction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ba26ac712d6d4e6f9f3a8b77626421e1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recent advances in deep learning have led various applications to unprecedented achievements, which could potentially bring higher intelligence to a broad spectrum of mobile and ubiquitous applications. Although existing studies have demonstrated the effectiveness and feasibility of running deep neural network inference operations on mobile and embedded devices, they overlooked the reliability of mobile computing models. Reliability measurements such as predictive uncertainty estimations are key factors for improving the decision accuracy and user experience. In this work, we propose RDeepSense, the first deep learning model that provides well-calibrated uncertainty estimations for resource-constrained mobile and embedded devices. RDeepSense enables the predictive uncertainty by adopting a tunable proper scoring rule as the training criterion and dropout as the implicit Bayesian approximation, which theoretically proves its correctness. To reduce the computational complexity, RDeepSense employs efficient dropout and predictive distribution estimation instead of model ensemble or sampling-based method for inference operations. We evaluate RDeepSense with four mobile sensing applications using Intel Edison devices. Results show that RDeepSense can reduce around 90% of the energy consumption while producing superior uncertainty estimations and preserving at least the same model accuracy compared with other state-of-the-art methods.", "targets": "RDeepSense: Reliable Deep Mobile Computing Models with Uncertainty Estimations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c5b57adf6c274c3f80037a511217a141", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this report, we describe a Theano-based AlexNet (Krizhevsky et al., 2012) implementation and its naive data parallelism on multiple GPUs. Our performance on 2 GPUs is comparable with the state-of-the-art Caffe library (Jia et al., 2014) run on 1 GPU. To the best of our knowledge, this is the first open-source Python-based AlexNet implementation to-date.", "targets": "TION WITH MULTIPLE GPUS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c22e833de4604470a2483a530b3eda89", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider online learning algorithms that guarantee worst-case regret rates in adversarial environments (so they can be deployed safely and will perform robustly), yet adapt optimally to favorable stochastic environments (so they will perform well in a variety of settings of practical importance). We quantify the friendliness of stochastic environments by means of the well-known Bernstein (a.k.a. generalized Tsybakov margin) condition.
For two recent algorithms (Squint for the Hedge setting and MetaGrad for online convex optimization) we show that the particular form of their data-dependent individual-sequence regret guarantees implies that they adapt automatically to the Bernstein parameters of the stochastic environment. We prove that these algorithms attain fast rates in their respective settings both in expectation and with high probability.", "targets": "Combining Adversarial Guarantees and Stochastic Fast Rates in Online Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4664ee9c79504d5a8cb3e7bbc8ded7b7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we introduce the Latent Tree Language Model (LTLM), a novel approach to language modeling that encodes syntax and semantics of a given sentence as a tree of word roles. The learning phase iteratively updates the trees by moving nodes according to Gibbs sampling. We introduce two algorithms to infer a tree for a given sentence. The first one is based on Gibbs sampling. It is fast, but does not guarantee to find the most probable tree. The second one is based on dynamic programming. It is slower, but guarantees to find the most probable tree. We provide a comparison of both algorithms. We combine LTLM with a 4-gram Modified Kneser-Ney language model via linear interpolation. Our experiments with English and Czech corpora show significant perplexity reductions (up to 46% for English and 49% for Czech) compared with the standalone 4-gram Modified Kneser-Ney language model.", "targets": "Latent Tree Language Model"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1b72dc450b164216ac0140ff5d558614", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Past research has challenged us with the task of showing relational patterns between text-based data and then clustering for predictive analysis using the Golay Code technique. We focus on a novel approach to extract metaknowledge in multimedia datasets. Our collaboration has been an ongoing task of studying the relational patterns between datapoints based on metafeatures extracted from metaknowledge in multimedia datasets. Those selected are suited to the mining technique we applied, the Golay Code algorithm. In this research paper we summarize findings on optimizing the 23-bit metaknowledge representation of structured and unstructured multimedia data so that it can be processed by the 23-bit Golay Code for cluster recognition.", "targets": "Novel Metaknowledge-based Processing Technique for Multimedia Big Data clustering challenges"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d4c19f4aaa6b4f08ac4a30a2cee18f01", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In natural-language discourse, related events tend to appear near each other to describe a larger scenario.
Such structures can be formalized by the notion of a frame (a.k.a. template), which comprises a set of related events and prototypical participants and event transitions. Identifying frames is a prerequisite for information extraction and natural language generation, and is usually done manually. Methods for inducing frames have been proposed recently, but they typically use ad hoc procedures and are difficult to diagnose or extend. In this paper, we propose the first probabilistic approach to frame induction, which incorporates frames, events, and participants as latent topics and learns those frame and event transitions that best explain the text. The number of frames is inferred by a novel application of a split-merge method from syntactic parsing. In end-to-end evaluations from text to induced frames and extracted facts, our method produced state-of-the-art results while substantially reducing engineering effort.", "targets": "Probabilistic Frame Induction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c9bf0c2ba7144ae2953bf5aa41a51ee2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Syntactic parsing, the process of obtaining the internal structure of sentences in natural languages, is a crucial task for artificial intelligence applications that need to extract meaning from natural language text or speech. Sentiment analysis is one example of an application for which parsing has recently proven useful. In recent years, there have been significant advances in the accuracy of parsing algorithms. In this article, we perform an empirical, task-oriented evaluation to determine how parsing accuracy influences the performance of a state-of-the-art sentiment analysis system that determines the polarity of sentences from their parse trees. In particular, we evaluate the system using four well-known dependency parsers, including both current models with state-of-the-art accuracy and more inaccurate models which, however, require fewer computational resources. The experiments show that all of the parsers produce similarly good results in the sentiment analysis task, without their accuracy having any relevant influence on the results. Since parsing is currently a task with a relatively high computational cost that varies strongly between algorithms, this suggests that sentiment analysis researchers and users should prioritize speed over accuracy when choosing a parser; and parsing researchers should investigate models that improve speed further, even at some cost to accuracy.", "targets": "How Important is Syntactic Parsing Accuracy? An Empirical Evaluation on Sentiment Analysis"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c57fc5158c28435c9daee21ec702ee3a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce a new model of interactive learning in which an expert examines the predictions of a learner and partially fixes them if they are wrong. Although this kind of feedback is not i.i.d., we show statistical generalization bounds on the quality of the learned model.", "targets": "Learning from partial correction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-41eac14527f34cac9777658314789383", "definition": "In this task, you are given a part of an article.
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "As mobile devices have become indispensable in modern life, mobile security is becoming much more important. Traditional password or PIN-like point-of-entry security measures score low on usability and are vulnerable to brute force and other types of attacks. In order to improve mobile security, an adaptive neuro-fuzzy inference system (ANFIS)-based implicit authentication system is proposed in this paper to provide authentication in a continuous and transparent manner. To illustrate the applicability and capability of ANFIS in our implicit authentication system, experiments were conducted on behavioural data collected for up to 12 weeks from different Android users. The ability of the ANFIS-based system to detect an adversary is also tested with scenarios involving an attacker with varying levels of knowledge. The results demonstrate that ANFIS is a feasible and efficient approach for implicit authentication with an average user recognition rate of 95%. Moreover, the use of the ANFIS-based system for implicit authentication significantly reduces manual tuning and configuration tasks due to its self-learning capability.", "targets": "Continuous Implicit Authentication for Mobile Devices based on Adaptive Neuro-Fuzzy Inference System"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e41bebeda74a4e2ab282bcf179b10c06", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "RCC8 is a popular fragment of the region connection calculus, in which qualitative spatial relations between regions, such as adjacency, overlap and parthood, can be expressed. While RCC8 is essentially dimensionless, most current applications are confined to reasoning about two-dimensional or three-dimensional physical space. In this paper, however, we are mainly interested in conceptual spaces, which typically are high-dimensional Euclidean spaces in which the meaning of natural language concepts can be represented using convex regions. The aim of this paper is to analyze how the restriction to convex regions constrains the realizability of networks of RCC8 relations. First, we identify all ways in which the set of RCC8 base relations can be restricted to guarantee that consistent networks can be convexly realized in 1D, 2D, 3D, and 4D, respectively. Most surprisingly, we find that if the relation \u2018partially overlaps\u2019 is disallowed, all consistent atomic RCC8 networks can be convexly realized in 4D. If instead refinements of the relation \u2018part of\u2019 are disallowed, all consistent atomic RCC8 relations can be convexly realized in 3D. We furthermore show, among other results, that any consistent RCC8 network with 2n + 1 variables can be realized using convex regions in the n-dimensional Euclidean space.", "targets": "Realizing RCC8 networks using convex regions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2da5d516f9a149a4ad2737d44ea5f2d3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A standard assumption in machine learning is the exchangeability of data, which is equivalent to assuming that the examples are generated from the same probability distribution independently.
This paper is devoted to testing the assumption of exchangeability on-line: the examples arrive one by one, and after receiving each example we would like to have a valid measure of the degree to which the assumption of exchangeability has been falsified. Such measures are provided by exchangeability martingales. We extend known techniques for constructing exchangeability martingales and show that our new method is competitive with the martingales introduced before. Finally we investigate the performance of our testing method on two benchmark datasets, USPS and Statlog Satellite data; for the former, the known techniques give satisfactory results, but for the latter our new, more flexible method becomes necessary.", "targets": "Plug-in martingales for testing exchangeability on-line"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8c991e150c814d899c8eae1b28586f5d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This manuscript develops the theory of agglomerative clustering with Bregman divergences. Geometric smoothing techniques are developed to deal with degenerate clusters. To allow for cluster models based on exponential families with overcomplete representations, Bregman divergences are developed for nondifferentiable convex functions.", "targets": "Agglomerative Bregman Clustering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f48074c48c2540f59b232db49106b0e0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Sequence prediction and classification are ubiquitous and challenging problems in machine learning that can require identifying complex dependencies between temporally distant inputs. Recurrent Neural Networks (RNNs) have the ability, in theory, to cope with these temporal dependencies by virtue of the short-term memory implemented by their recurrent (feedback) connections. However, in practice they are difficult to train successfully when long-term memory is required. This paper introduces a simple, yet powerful modification to the standard RNN architecture, the Clockwork RNN (CW-RNN), in which the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity, making computations only at its prescribed clock rate. Rather than making the standard RNN models more complex, CW-RNN reduces the number of RNN parameters, improves the performance significantly in the tasks tested, and speeds up the network evaluation. The network is demonstrated in preliminary experiments involving two tasks: audio signal generation and TIMIT spoken word classification, where it outperforms both RNN and LSTM networks.", "targets": "A Clockwork RNN"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ab1200d3ff1146c2b8cb0823b7498ab0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Programming languages themselves have a limited number of reserved keywords and character-based tokens that define the language specification. However, programmers make rich use of natural language within their code through comments, text literals and naming entities.
The programmer-defined names that can be found in source code are a rich source of information for building a high-level understanding of the project. The goal of this paper is to apply topic modeling to names used in over 13.6 million repositories and perceive the inferred topics. One of the problems in such a study is the occurrence of duplicate repositories not officially marked as forks (obscure forks). We show how to address it using the same identifiers which are extracted for topic modeling. We open with a discussion on naming in source code; we then elaborate on our approach to removing exact and fuzzy duplicate repositories using Locality Sensitive Hashing on the bag-of-words model; we then discuss our work on topic modeling; and finally we present the results from our data analysis, together with open access to the source code, tools and datasets.", "targets": "Topic modeling of public repositories at scale using names in source code"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0e3d45b7964d4f86b33e577915aab30f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Transfer learning is a vital technique that generalizes models trained for one setting or task to other settings or tasks. For example in speech recognition, an acoustic model trained for one language can be used to recognize speech in another language, with little or no re-training data. Transfer learning is closely related to multi-task learning (cross-lingual vs. multilingual), and is traditionally studied in the name of \u2018model adaptation\u2019. Recent advances in deep learning show that transfer learning becomes much easier and more effective with high-level abstract features learned by deep models, and the \u2018transfer\u2019 can be conducted not only between data distributions and data types, but also between model structures (e.g., shallow nets and deep nets) or even model types (e.g., Bayesian models and neural models). This review paper summarizes some recent prominent research towards this direction, particularly for speech and language processing. We also report some results from our group and highlight the potential of this very interesting research field.", "targets": "Transfer Learning for Speech and Language Processing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6efd47cad32d46ef81d7b725a430d6d6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The paradigm shift from shallow classifiers with hand-crafted features to end-to-end trainable deep learning models has shown significant improvements on supervised learning tasks. Despite the promising power of deep neural networks (DNNs), how to alleviate overfitting during training has been a research topic of interest. In this paper, we present a Generative-Discriminative Variational Model (GDVM) for visual classification, in which we introduce a latent variable inferred from inputs for exhibiting generative abilities towards prediction. In other words, our GDVM casts the supervised learning task as a generative learning process, with data discrimination to be jointly exploited for improved classification. In our experiments, we consider the tasks of multi-class classification, multi-label classification, and zero-shot learning.
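The fuzzy-duplicate removal step in the repository record above relies on Locality Sensitive Hashing over bags of identifiers. A minimal MinHash sketch of that idea, assuming nothing about the paper's actual pipeline (names and parameters are illustrative; Python's built-in hash is only stable within one process):

```python
import random

def minhash_signature(tokens, num_perm=64, seed=1):
    """MinHash signature of a bag of identifier tokens: for each of num_perm
    random affine hash functions, keep the minimum hash over the token set."""
    rnd = random.Random(seed)
    p = (1 << 61) - 1  # large Mersenne prime for the hash field
    coeffs = [(rnd.randrange(1, p), rnd.randrange(p)) for _ in range(num_perm)]
    hashed = [hash(t) % p for t in set(tokens)]
    return [min((a * h + b) % p for h in hashed) for a, b in coeffs]

def jaccard_estimate(sig1, sig2):
    """Fraction of matching signature slots estimates Jaccard similarity."""
    return sum(x == y for x, y in zip(sig1, sig2)) / len(sig1)

repo_a = ["parse", "tokenize", "ast", "visitor", "node"]
repo_b = ["parse", "tokenize", "ast", "walker", "node"]
print(jaccard_estimate(minhash_signature(repo_a), minhash_signature(repo_b)))
```

Obscure forks would then be flagged whenever the estimated similarity exceeds a chosen threshold.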
We show that our GDVM performs favorably against the baselines and recent generative DNN models.", "targets": "Generative-Discriminative Variational Model for Visual Recognition"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-37450cbe74fc4964a8160f4e1191712e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Despite tremendous progress in computer vision, there has not been an attempt for machine learning on very large-scale medical image databases. We present an interleaved text/image deep learning system to extract and mine the semantic interactions of radiology images and reports from a national research hospital\u2019s Picture Archiving and Communication System. With natural language processing, we mine a collection of representative \u223c216K two-dimensional key images selected by clinicians for diagnostic reference, and match the images with their descriptions in an automated manner. Our system interleaves between unsupervised learning and supervised learning on document- and sentence-level text collections, to generate semantic labels and to predict them given an image. Given an image of a patient scan, semantic topics in radiology levels are predicted, and associated key-words are generated. Also, a number of frequent disease types are detected as present or absent, to provide more specific interpretation of a patient scan. This shows the potential of large-scale learning and prediction in electronic patient records available in most modern clinical institutions.", "targets": "Interleaved Text/Image Deep Mining on a Large-Scale Radiology Database for Automated Image Interpretation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7e8301266aa547d89f8024ae5f48d1c4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We improve the computational complexity of online learning algorithms that must frequently recompute least squares regression estimates of parameters. We propose two stochastic gradient descent schemes with randomisation in order to efficiently track the true solutions of the regression problems, achieving an O(d) improvement in complexity, where d is the dimension of the data. The first algorithm assumes strong convexity in the regression problem, and we provide bounds on the error both in expectation and high probability (the latter is often needed to provide theoretical guarantees for higher level algorithms). The second algorithm deals with cases where strong convexity of the regression problem cannot be guaranteed and uses adaptive regularisation. We again give error bounds in both expectation and high probability. We apply our approaches to the linear bandit algorithms PEGE and ConfidenceBall and demonstrate significant gains in complexity in both cases. Since strong convexity is guaranteed by the PEGE algorithm, we lose only logarithmic factors in the regret performance of the algorithm.
On the other hand, in the ConfidenceBall algorithm we adaptively regularise to ensure strong convexity, and this results in an \u00d5(n^{1/5}) deterioration of the regret.", "targets": "Online gradient descent for least squares regression: Non-asymptotic bounds and application to bandits"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a5d34ce4f3aa478295cd2c5e238701b8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Bi-directional LSTMs have emerged as a standard method for obtaining per-token vector representations serving as input to various token labeling tasks (whether followed by Viterbi prediction or independent classification). This paper proposes an alternative to Bi-LSTMs for this purpose: iterated dilated convolutional neural networks (ID-CNNs), which have better capacity than traditional CNNs for large context and structured prediction. We describe a distinct combination of network structure, parameter sharing and training procedures that is not only more accurate than Bi-LSTM-CRFs, but also 8x faster at test time on long sequences. Moreover, ID-CNNs with independent classification enable a dramatic 14x test-time speedup, while still attaining accuracy comparable to the Bi-LSTM-CRF. We further demonstrate the ability of ID-CNNs to combine evidence over long sequences by demonstrating their improved accuracy on whole-document (rather than per-sentence) inference. Unlike LSTMs, whose sequential processing on sentences of length N requires O(N) time even in the face of parallelism, ID-CNNs permit fixed-depth convolutions to run in parallel across entire documents. Today, when many companies run basic NLP on the entire web and large-volume traffic, faster methods are paramount to saving time and energy costs.", "targets": "Fast and Accurate Sequence Labeling with Iterated Dilated Convolutions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9e54e6214fda4a3c9d7dbc8a0d2b896e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Natural language generation (NLG) is a critical component of spoken dialogue systems, and it has a significant impact both on usability and perceived quality. Most NLG systems in common use employ rules and heuristics and tend to generate rigid and stylised responses without the natural variation of human language. They are also not easily scaled to systems covering multiple domains and languages. This paper presents a statistical language generator based on a semantically controlled Long Short-term Memory (LSTM) structure. The LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross entropy training criterion, and language variation can be easily achieved by sampling from output candidates. With fewer heuristics, an objective evaluation in two differing test domains showed the proposed method improved performance compared to previous methods.
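The ID-CNN record above hinges on iterated dilated convolutions growing the receptive field exponentially with depth. A toy numpy sketch of that mechanism, assuming a single stack with illustrative shapes (this is not the paper's architecture, just the dilation idea):

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """'Same'-padded 1D convolution with dilation followed by ReLU.
    x: (seq_len, d_in); w: (kernel, d_in, d_out)."""
    k = w.shape[0]
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros((x.shape[0], w.shape[2]))
    for t in range(x.shape[0]):
        for j in range(k):
            out[t] += xp[t + j * dilation] @ w[j]
    return np.maximum(out, 0.0)

# iterating dilations 1, 2, 4, 8 grows the receptive field exponentially
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 8))
for dilation in [1, 2, 4, 8]:
    x = dilated_conv1d(x, rng.normal(size=(3, 8, 8)) * 0.1, dilation)
print(x.shape)  # (50, 8); with kernel 3, each position now sees 31 input tokens
```

Because every position is computed independently inside the loop, the whole stack parallelizes across a document, which is the source of the speedups claimed in the record.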
Human judges scored the LSTM system higher on informativeness and naturalness and overall preferred it to the other systems.", "targets": "Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-826e298a460841babdca4f546dd1313c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a method for using synthetic data to help learn classifiers. Synthetic data, even if generated based on real data, normally exhibits a shift from the distribution of real data in feature space. To bridge the gap between the real and synthetic data, and to jointly learn from synthetic and real data, this paper proposes a Multichannel Autoencoder (MCAE). We show that by using MCAE, it is possible to learn a better feature representation for classification. To evaluate the proposed approach, we conduct experiments on two types of datasets. Experimental results on two datasets validate the efficiency of our MCAE model and our methodology of generating synthetic data.", "targets": "Learning Classifiers from Synthetic Data Using a Multichannel Autoencoder"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7deaed5c2ceb433ca1d6323afd77d8c5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Unsupervised dependency parsing, which tries to discover linguistic dependency structures from unannotated data, is a very challenging task. Almost all previous work on this task focuses on learning generative models. In this paper, we develop an unsupervised dependency parsing model based on the CRF autoencoder. The encoder part of our model is discriminative and globally normalized, which allows us to use rich features as well as universal linguistic priors. We propose an exact algorithm for parsing as well as a tractable learning algorithm. We evaluated the performance of our model on eight multilingual treebanks and found that our model achieved comparable performance with state-of-the-art approaches.", "targets": "CRF Autoencoder for Unsupervised Dependency Parsing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3881c6f499874e57a559c53d96d58b77", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a regularized linear learning algorithm to sequence groups of features, where each group incurs test-time cost or computation. Specifically, we develop a simple extension to Orthogonal Matching Pursuit (OMP) that respects the structure of groups of features with variable costs, and we prove that it achieves near-optimal anytime linear prediction at each budget threshold where a new group is selected. Our algorithm and analysis extends to generalized linear models with multi-dimensional responses. We demonstrate the scalability of the resulting approach on large real-world data-sets with many feature groups associated with test-time computational costs.
Our method improves over Group Lasso and Group OMP in the anytime performance of linear predictions, measured in timeliness [7], an anytime prediction performance metric, while providing rigorous performance guarantees.", "targets": "Efficient Feature Group Sequencing for Anytime Linear Prediction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-91e7695ed6e44b679523e589dc59ea44", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "There is a brief description of the probabilistic causal graph model for representing, reasoning with, and learning causal structure using Bayesian networks. It is then argued that this model is closely related to how humans reason with and learn causal structure. It is shown that studies in psychology on discounting (reasoning concerning how the presence of one cause of an effect makes another cause less probable) support the hypothesis that humans reach the same judgments as algorithms for doing inference in Bayesian networks. Next, it is shown how studies by Piaget indicate that humans learn causal structure by observing the same independencies and dependencies as those used by certain algorithms for learning the structure of a Bayesian network. Based on this indication, a subjective definition of causality is forwarded. Finally, methods for further testing the accuracy of these claims are discussed.", "targets": "THE COGNITIVE PROCESSING OF CAUSAL KNOWLEDGE"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-80ff2311b20f4552aac6b83648368c31", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Chain graphs combine directed and undirected graphs, and their underlying mathematics combines properties of the two. This paper gives a simplified definition of chain graphs based on a hierarchical combination of Bayesian (directed) and Markov (undirected) networks. Examples of a chain graph are multivariate feed-forward networks, clustering with conditional interaction between variables, and forms of Bayes classifiers. Chain graphs are then extended using the notation of plates so that samples and data analysis problems can be represented in a graphical model as well. Implications for learning are discussed in the conclusion.", "targets": "Chain graphs for learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7430e9f95f914384a8947e01a1c3276b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We approach the challenging problem of generating highlights from sports broadcasts utilizing audio information only. A language-independent, multi-stage classification approach is employed for detection of key acoustic events which then act as a platform for summarization of highlight scenes. Objective results and human experience indicate that our system is highly efficient.", "targets": "Sports highlights generation based on acoustic events detection: A rugby case study"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d06afd1952d34ad2a93e0493d311df0e", "definition": "In this task, you are given a part of an article.
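The feature-group sequencing record above describes greedily ordering groups under test-time costs. A hedged sketch of one such greedy loop (a cost-normalized refitting variant of my own devising, not the paper's exact OMP extension):

```python
import numpy as np

def greedy_group_sequencing(X, y, groups, costs):
    """Greedily order feature groups: at each step, add the group whose
    least-squares refit most reduces the residual per unit of test-time cost.
    groups: list of column-index arrays; costs: one positive cost per group."""
    selected, order, resid = [], [], y.copy()
    remaining = list(range(len(groups)))
    while remaining:
        best, best_gain = None, -np.inf
        for g in remaining:
            cols = np.concatenate([groups[i] for i in selected + [g]])
            beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            sse_new = np.sum((y - X[:, cols] @ beta) ** 2)
            gain = (np.sum(resid ** 2) - sse_new) / costs[g]
            if gain > best_gain:
                best, best_gain = g, gain
        selected.append(best)
        remaining.remove(best)
        cols = np.concatenate([groups[i] for i in selected])
        beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        resid = y - X[:, cols] @ beta  # refit residual for the next round
        order.append(best)
    return order  # anytime prediction uses prefixes of this order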
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a black-box variational inference method to approximate intractable distributions with an increasingly rich approximating class. Our method, termed variational boosting, iteratively refines an existing variational approximation by solving a sequence of optimization problems, allowing the practitioner to trade computation time for accuracy. We show how to expand the variational approximating class by incorporating additional covariance structure and by introducing new components to form a mixture. We apply variational boosting to synthetic and real statistical models, and show that resulting posterior inferences compare favorably to existing posterior approximation algorithms in both accuracy and efficiency.", "targets": "Variational Boosting: Iteratively Refining Posterior Approximations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d68195d7a2c94da9bb60e7d8c3c08a8a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we study the application of convolutional neural networks for jointly detecting objects depicted in still images and estimating their 3D pose. We identify different feature representations of oriented objects, and energies that lead a network to learn these representations. The choice of the representation is crucial since the pose of an object has a natural, continuous structure while its category is a discrete variable. We evaluate the different approaches on the joint object detection and pose estimation task of the Pascal3D+ benchmark using Average Viewpoint Precision. We show that a classification approach on discretized viewpoints achieves state-of-the-art performance for joint object detection and pose estimation, and significantly outperforms existing baselines on this benchmark. We also show that performing the two tasks jointly can significantly improve detection performance.", "targets": "A COMPARATIVE STUDY"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-56c47bf35e4743edb7f658e94d0e889a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We address the statistical and optimization impacts of using classical sketch versus Hessian sketch to approximately solve the Matrix Ridge Regression (MRR) problem. Prior research has considered the effects of classical sketch on least squares regression (LSR), a strictly simpler problem. We establish that classical sketch has a similar effect upon the optimization properties of MRR as it does on those of LSR\u2014namely, it recovers nearly optimal solutions. In contrast, Hessian sketch does not have this guarantee; instead, the approximation error is governed by a subtle interplay between the \u201cmass\u201d in the responses and the optimal objective value. For both types of approximations, the regularization in the sketched MRR problem gives it significantly different statistical properties from the sketched LSR problem. In particular, there is a bias-variance trade-off in sketched MRR that is not present in sketched LSR.
We provide upper and lower bounds on the biases and variances of sketched MRR; these establish that the variance is significantly increased when classical sketches are used, while the bias is significantly increased when using Hessian sketches. Empirically, sketched MRR solutions can have risks that are an order of magnitude higher than those of the optimal MRR solutions. We establish theoretically and empirically that model averaging greatly decreases this gap. Thus, in the distributed setting, sketching combined with model averaging is a powerful technique that quickly obtains near-optimal solutions to the MRR problem while greatly mitigating the statistical risks incurred by sketching.", "targets": "Sketched Ridge Regression: Optimization Perspective, Statistical Perspective, and Model Averaging"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e7152074f76948fb8ec3050b1d30bac0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents a Bayesian optimization method with exponential convergence without the need for auxiliary optimization and without the \u03b4-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem, which can be time-consuming and hard to implement in practice. Also, the existing Bayesian optimization method with exponential convergence [1] requires access to the \u03b4-cover sampling, which was considered to be impractical [1, 2]. Our approach eliminates both requirements and achieves an exponential convergence rate.", "targets": "Bayesian Optimization with Exponential Convergence"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-81fd13f2e29b4552bb369908c010581c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Movie ratings play an important role both in determining the likelihood that a potential viewer will watch the movie and in reflecting the current viewer satisfaction with the movie. They are available in several sources like the television guide, best-selling reference books, newspaper columns, and television programs. Furthermore, movie ratings are crucial for recommendation engines that track the behavior of all users and utilize the information to suggest items they might like. Movie ratings in most cases, thus, provide information that might be more important than movie feature-based data. It is intuitively appealing that information about the viewing preferences in movie genres is sufficient for predicting a genre of an unlabeled movie. In order to predict movie genres, we treat ratings as a feature vector, apply the Bernoulli event model to estimate the likelihood of a movie given its genre, and evaluate the posterior probability of the genre of a given movie using the Bayes rule. The goal of the proposed technique is to efficiently use the movie ratings for the task of predicting movie genres.
In our approach we attempted to answer the question: \u201cGiven the set of users who watched a movie, is it possible to predict the genre of a movie based on its ratings?\u201d Our simulation results with MovieLens 100k data demonstrated the efficiency and accuracy of our proposed technique, achieving a 59% rate for exact prediction and 69% when including correlated genres.", "targets": "A movie genre prediction based on Multivariate Bernoulli model and genre correlations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-04e11e6af38f432880d35a8794a44f6c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study the effectiveness of neural sequence models for premise selection in automated theorem proving, one of the main bottlenecks in the formalization of mathematics. We propose a two stage approach for this task that yields good results for the premise selection task on the Mizar corpus while avoiding the hand-engineered features of existing state-of-the-art models. To our knowledge, this is the first time deep learning has been applied to theorem proving.", "targets": "DeepMath - Deep Sequence Models for Premise Selection"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2257065e68bf402797b7cd51d1c65a41", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This report presents Giraffe, a chess engine that uses self-play to discover all its domain-specific knowledge, with minimal hand-crafted knowledge given by the programmer. Unlike previous attempts using machine learning only to perform parameter tuning on hand-crafted evaluation functions, Giraffe\u2019s learning system also performs automatic feature extraction and pattern recognition. The trained evaluation function performs comparably to the evaluation functions of state-of-the-art chess engines, all of which contain thousands of lines of carefully hand-crafted pattern recognizers, tuned over many years by both computer chess experts and human chess masters. Giraffe is the most successful attempt thus far at using end-to-end machine learning to play chess. We also investigated the possibility of using probability thresholds instead of depth to shape search trees. Depth-based searches form the backbone of virtually all chess engines in existence today, and this algorithm has become well-established over the past half century. Preliminary comparisons between a basic implementation of probability-based search and a basic implementation of depth-based search showed that our new probability-based approach performs moderately better than the established approach. There is also evidence suggesting that many successful ad-hoc add-ons to depth-based searches are generalized by switching to a probability-based search. We believe the probability-based search to be a more fundamentally correct way to perform minimax. Finally, we designed another machine learning system to shape search trees within the probability-based search framework. Given any position, this system estimates the probability of each of the moves being the best move without looking ahead. The system is highly effective: the actual best move is within the top 3 ranked moves 70% of the time, out of an average of approximately 35 legal moves from each position.
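The movie-genre record above applies the Bernoulli event model with the Bayes rule; scikit-learn's BernoulliNB implements exactly this event model, so a toy version is short. The random matrix below is a hypothetical stand-in for the MovieLens ratings data, not the paper's setup:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# rows: movies; columns: users; entry is 1 if the user rated/watched the movie
rng = np.random.default_rng(0)
watched = rng.integers(0, 2, size=(300, 40))  # hypothetical binary ratings matrix
genre = rng.integers(0, 3, size=300)          # hypothetical genre labels

model = BernoulliNB()                 # Bernoulli event model + Bayes rule
model.fit(watched[:200], genre[:200])
print(model.score(watched[200:], genre[200:]))  # held-out genre accuracy
```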
This also resulted in a significant increase in playing strength. With the move evaluator guiding a probability-based search using the learned evaluator, Giraffe plays at approximately the level of an FIDE International Master (top 2.2% of tournament chess players with an official rating). F\u00e9d\u00e9ration Internationale des \u00c9checs, or the World Chess Federation, is the international organisation that governs all major international chess competitions. Please see Appendix A for a description of the Elo rating system.", "targets": "Giraffe: Using Deep Reinforcement Learning to Play Chess"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-90e6aee6738e4950be25b37611ea9320", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study optimization algorithms based on variance reduction for stochastic gradient descent (SGD). Remarkable recent progress has been made in this direction through the development of algorithms like SAG, SVRG, SAGA. These algorithms have been shown to outperform SGD, both theoretically and empirically. However, asynchronous versions of these algorithms\u2014a crucial requirement for modern large-scale applications\u2014have not been studied. We bridge this gap by presenting a unifying framework for many variance reduction techniques. Subsequently, we propose an asynchronous algorithm grounded in our framework, and prove its fast convergence. An important consequence of our general approach is that it yields asynchronous versions of variance reduction algorithms such as SVRG and SAGA as a byproduct. Our method achieves near linear speedup in sparse settings common to machine learning. We demonstrate the empirical performance of our method through a concrete realization of asynchronous SVRG.", "targets": "On Variance Reduction in Stochastic Gradient Descent and its Asynchronous Variants"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ba3d2a7a077e49b1b0e27c48c5936a3d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Learning to solve complex sequences of tasks\u2014while both leveraging transfer and avoiding catastrophic forgetting\u2014remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.", "targets": "Progressive Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a4ac7838a679432e9ce57d804c2b9aaf", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The essence of distantly supervised relation extraction is that it is an incomplete multi-label classification problem with sparse and noisy features.
To tackle the sparsity and noise challenges, we propose solving the classification problem using matrix completion on a factorized matrix of minimized rank. We formulate relation classification as completing the unknown labels of testing items (entity pairs) in a sparse matrix that concatenates training and testing textual features with training labels. Our algorithmic framework is based on the assumption that the rank of the item-by-feature and item-by-label joint matrix is low. We apply two optimization models to recover the underlying low-rank matrix, leveraging the sparsity of the feature-label matrix. The matrix completion problem is then solved by the fixed point continuation (FPC) algorithm, which can find the global optimum. Experiments on two widely used datasets with different dimensions of textual features demonstrate that our low-rank matrix completion approach significantly outperforms the baseline and the state-of-the-art methods.", "targets": "Errata: Distant Supervision for Relation Extraction with Matrix Completion"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ddd138fd559d490091a5e2928144d82a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The web has played an important role in people\u2019s social lives since the emergence of Web 2.0. It facilitates the interaction between users, gives them the possibility to freely interact, share and collaborate through social networks, online communities forums, blogs, wikis and other online collaborative media. However, another side of the web is used negatively, for example for posting inflammatory messages. Thus, when dealing with online community forums, managers always seek to enhance the performance of such platforms. In fact, to keep the serenity and prohibit the disturbance of the normal atmosphere, managers try to warn novice users against these malicious persons by posting messages such as (DO NOT FEED TROLLS). But this kind of warning is not enough to reduce this phenomenon. In this context we propose a new approach for detecting malicious people, also called \u2019Trolls\u2019, in order to allow community managers to take away their ability to post online. To be more realistic, our proposal is defined within an uncertain framework. Based on the assumption that trolls integrate into successful discussion threads, we try to detect the presence of such malicious users. Indeed, this method is based on a conflict measure from belief function theory applied between the different messages of the thread. In order to show the feasibility and the results of our approach, we test it on different simulated data. Keywords\u2014Q&AC, trolls, belief function theory, conflict measure.", "targets": "Trolls Identification within an Uncertain Framework"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-90be022139354474a69cc8f49d785f87", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A model checker can produce a counterexample trace for an erroneous program, which is often long and difficult to understand. In general, the part about the loops is the largest among the instructions in this trace. This makes locating errors in loops critical for analyzing errors in the overall program.
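The relation-extraction record above solves a low-rank matrix completion problem with a fixed point continuation (FPC) algorithm. A simplified proximal-gradient sketch in the same spirit (singular-value soft-thresholding; this is not the paper's exact FPC implementation, and the demo data are synthetic):

```python
import numpy as np

def svt_complete(M, mask, tau=5.0, step=1.0, iters=200):
    """Recover a low-rank matrix from observed entries (mask == 1) by
    alternating a gradient step on the observed entries with
    soft-thresholding of singular values (proximal step for the nuclear norm)."""
    X = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        G = X - step * mask * (X - M)                      # gradient step
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt            # shrink singular values
    return X

# toy demo: recover a rank-2 matrix from half of its entries
rng = np.random.default_rng(0)
L = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))
mask = (rng.random((30, 30)) < 0.5).astype(float)
X = svt_complete(L * mask, mask)
print(np.linalg.norm((X - L) * (1 - mask)) / np.linalg.norm(L * (1 - mask)))
```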
In this paper, we explore the scalability capabilities of LocFaults, our error localization approach exploiting paths of the CFG (Control Flow Graph) from a counterexample to calculate the MCDs (Minimal Correction Deviations), and MCSs (Minimal Correction Subsets) from each MCD found. We present the running times of our approach on programs with While-loops unfolded b times, and a number of diverted conditions ranging from 0 to n. Our preliminary results show that the times of our approach, constraint-based and flow-driven, are better compared to BugAssist, which is based on SAT and transforms the entire program to a Boolean formula, although the information provided by LocFaults is more expressive for the user.", "targets": "Exploration de la scalabilite\u0301 de LocFaults"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b63722af50e149769aa2f08546767e9d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Predicting credit defaulters is a perilous task for financial industries like banks. Ascertaining nonpayers before giving a loan is a significant and conflict-ridden task for the banker. Classification techniques are a good choice for predictive analysis, such as determining whether a claimant is a genuine customer or a cheat. Identifying the outstanding classifier is a risky assignment for any practitioner, such as a banker. This allows computer science researchers to carry out efficient research by evaluating different classifiers and finding the best classifier for such predictive problems. This research work investigates the performance of the LADTree and REPTree classifiers for credit risk prediction and compares their fitness through various measures. The German credit dataset is used to predict credit risk with the help of an open-source machine learning tool.", "targets": "PROFICIENCY COMPARISON OF LADTREE AND REPTREE CLASSIFIERS FOR CREDIT RISK FORECAST"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-43569bbaebd24cc6803af657694d536f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Automated writing evaluation (AWE) has been shown to be an effective mechanism for quickly providing feedback to students. It has already seen wide adoption in enterprise-scale applications and is starting to be adopted in large-scale contexts. Training an AWE model has historically required a single batch of several hundred writing examples and human scores for each of them. This requirement limits large-scale adoption of AWE since human scoring of essays is costly. Here we evaluate algorithms for ensuring that AWE models are consistently trained using the most informative essays. Our results show how to minimize training set sizes while maximizing predictive performance, thereby reducing cost without unduly sacrificing accuracy. We conclude with a discussion of how to integrate this approach into large-scale AWE systems.", "targets": "Effective sampling for large-scale automated writing evaluation systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3768b81ed07946eb9fc9b8afc0e87106", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Modified policy iteration (MPI) is a dynamic programming (DP) algorithm that contains the two celebrated policy and value iteration methods. Despite its generality, MPI has not been thoroughly studied, especially its approximate form, which is used when the state and/or action spaces are large or infinite. In this paper, we propose three implementations of approximate MPI (AMPI) that are extensions of well-known approximate DP algorithms: fitted-value iteration, fitted-Q iteration, and classification-based policy iteration. We provide error propagation analyses that unify those for approximate policy and value iteration. For the last, classification-based implementation, we develop a finite-sample analysis showing that MPI\u2019s main parameter allows one to control the balance between the estimation error of the classifier and the overall value function approximation error.", "targets": "Approximate Modified Policy Iteration"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-942af0746a104c288058776305105d29", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Feature squeezing is a recently-introduced framework for mitigating and detecting adversarial examples. In previous work, we showed that it is effective against several earlier methods for generating adversarial examples. In this short note, we report on recent results showing that simple feature squeezing techniques also make deep learning models significantly more robust against the Carlini/Wagner attacks, which are the best known adversarial methods discovered to date.", "targets": "Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-609dee87333d4540ae0226dec2585401", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25% error reduction in the last task with respect to the strongest baseline.", "targets": "A Convolutional Neural Network for Modelling Sentences"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c591989ed61148d497ee97db43ff563e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
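The approximate MPI record above interpolates between value and policy iteration via the parameter m. A small exact (tabular) sketch of that interpolation, with illustrative shapes (the exact MDP encoding is my own choice; AMPI replaces these exact updates with fitted approximations):

```python
import numpy as np

def modified_policy_iteration(P, R, gamma=0.95, m=5, iters=100):
    """MPI on a small tabular MDP. P: (A, S, S) transition tensor, R: (S, A) rewards.
    m = 0 recovers value iteration; m -> infinity recovers policy iteration."""
    S, A = R.shape
    V = np.zeros(S)
    pi = np.zeros(S, dtype=int)
    for _ in range(iters):
        Q = R + gamma * np.einsum("ast,t->sa", P, V)  # one-step lookahead
        pi = Q.argmax(axis=1)                         # greedy policy
        for _ in range(m + 1):                        # m partial evaluations of pi
            Q = R + gamma * np.einsum("ast,t->sa", P, V)
            V = Q[np.arange(S), pi]
    return V, pi
```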
Preferred headlines are under fifteen words.", "inputs": "We present a software tool that employs state-of-the-art natural language processing (NLP) and machine learning techniques to help newspaper editors compose effective headlines for online publication. The system identifies the most salient keywords in a news article and ranks them based on both their overall popularity and their direct relevance to the article. The system also uses a supervised regression model to identify headlines that are likely to be widely shared on social media. The user interface is designed to simplify and speed the editor\u2019s decision process on the composition of the headline. As such, the tool provides an efficient way to combine the benefits of automated predictors of engagement and search-engine optimization (SEO) with human judgments of overall headline quality.", "targets": "Helping News Editors Write Better Headlines: A Recommender to Improve the Keyword Contents & Shareability of News Headlines"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-37d8e7d224c943ae9c03e6ae51d8cd20", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The continual growth of high-speed networks is a challenge for real-time network analysis systems. Real-time traffic classification is an issue for corporations and ISPs (Internet Service Providers). This work presents the design and implementation of a real-time flow-based network traffic classification system. The classifier monitor acts as a pipeline consisting of three modules: packet capture and pre-processing, flow reassembly, and classification with Machine Learning (ML). The modules are built as concurrent processes with well-defined data interfaces between them so that any module can be improved and updated independently. In this pipeline, the flow reassembly function becomes the performance bottleneck. In this implementation, an efficient reassembly method is used, which results in an average delivery delay of approximately 0.49 seconds. For the classification module, the performances of the K-Nearest Neighbor (KNN), C4.5 Decision Tree, Naive Bayes (NB), Flexible Naive Bayes (FNB) and AdaBoost Ensemble Learning Algorithm are compared in order to validate our approach.", "targets": "ITCM: A REAL TIME INTERNET TRAFFIC"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6601a47d6a334e96867febb675763d62", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Smoothed analysis is a framework for analyzing the complexity of an algorithm, acting as a bridge between average and worst-case behaviour. For example, Quicksort and the Simplex algorithm are widely used in practical applications, despite their heavy worst-case complexity. Smoothed complexity aims to better characterize such algorithms. Existing theoretical bounds for the smoothed complexity of sorting algorithms are still quite weak. Furthermore, empirically computing the smoothed complexity via its original definition is computationally infeasible, even for modest input sizes. In this paper, we focus on accurately predicting the smoothed complexity of sorting algorithms, using machine learning techniques.
We propose two regression models that take into account various properties of sorting algorithms and some of the known theoretical results in smoothed analysis to improve prediction quality. We show experimental results for predicting the smoothed complexity of Quicksort, Mergesort, and optimized Bubblesort for large input sizes, thereby filling the gap between known theoretical and empirical results.", "targets": "A Machine Learning Approach to Predicting the Smoothed Complexity of Sorting Algorithms"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-05b6605ec0154622a49c12dd66964417", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present initial ideas for a programming paradigm based on simulation that is targeted towards applications of artificial intelligence (AI). The approach aims at integrating techniques from different areas of AI and is based on the idea that simulated entities may freely exchange data and behavioural patterns. We define basic notions of a simulation-based programming paradigm and show how it can be used for implementing AI applications.", "targets": "Towards a Simulation-Based Programming Paradigm for AI applications"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c49006adf99c40ada6dbe97dd2ccd20a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semi-supervised) are employed with decision and feature level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with higher performance than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.", "targets": "Machine Learning Methods for Attack Detection in the Smart Grid"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f9e701874a8e469c822c23c8099399bd", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper studies a new and more general axiomatization than the one presented in [6] for preferences over likelihood gambles. Likelihood gambles describe actions in a situation where a decision maker knows multiple probabilistic models and a random sample generated from one of those models but does not know the prior probability of the models.
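The smoothed-analysis record above predicts smoothed complexity with regression models; the quantity being predicted can be estimated empirically, as in this toy sketch (first-pivot quicksort with the sorted input as the adversarial instance; both of those choices are mine), whose outputs could feed such a regression:

```python
import random

def quicksort_comparisons(a):
    """Count comparisons made by a plain (first-pivot) quicksort."""
    if len(a) <= 1:
        return 0
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

def smoothed_cost(n, sigma, trials=30, seed=0):
    """Empirical smoothed cost: average the comparison count over Gaussian
    perturbations (std sigma * n) of an adversarial input."""
    rnd = random.Random(seed)
    worst = list(range(n))  # sorted input is worst-case for first-pivot quicksort
    total = 0
    for _ in range(trials):
        total += quicksort_comparisons([x + rnd.gauss(0, sigma * n) for x in worst])
    return total / trials

print(smoothed_cost(200, 0.0), smoothed_cost(200, 0.1))  # ~n^2/2 vs. far fewer
```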
This new axiom system is inspired by Jensen\u2019s axiomatization of probabilistic gambles. Our approach provides a new perspective on the role of data in decision making under ambiguity. Likelihood gambles, introduced in [5, 6], describe actions in situations of model ambiguity characterized by: (1) there are multiple probabilistic models; (2) there is data providing likelihoods for the models; and (3) there is no prior probability over the models. Formally, we consider a general problem described by a tuple (X,Y,\u0398,A,x). X,Y are variables describing a phenomenon of interest. X is the experiment variable whose values can be observed through experiments or data gathering (e.g. lab test results, clinical observations). Y is the utility variable whose values determine the utility of actions (e.g. stages of disease, relative size of the tumor). \u0398 is the set of models that encode the knowledge about the phenomenon. To be precise, \u0398 is a set of indices and knowledge is encoded in probability functions Pr\u03b8(X,Y) for \u03b8 \u2208 \u0398. A is the set of alternative actions (e.g. surgery, radiation therapy, chemotherapy) that are functions from the utility variable Y to the unit interval [0, 1] representing utility. Finally, the evidence/data/observation gathered on the experiment variable is X = x. A fundamental question to be answered is which among the alternative actions is the best choice given the information. We introduce the concept of likelihood gambles and derive a pricing formula that will allow their comparison. Note that given a model \u03b8 \u2208 \u0398 and observation x, the distribution on the utility variable Y is Pr\u03b8(y|x). According to classical Bayesian decision theory, actions a \u2208 A are valued by their expected utility", "targets": "A new axiomatization for likelihood gambles"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4dbe9ca1991f434fb44e2efc9f2db33e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Problem solving in Answer Set Programming consists of two steps: a first grounding phase, which systematically replaces all variables by terms, and a second solving phase, which computes the stable models of the obtained ground program. An intricate part of both phases is the treatment of aggregates, which are popular language constructs that allow for expressing properties over sets. In this paper, we elaborate upon the treatment of aggregates during grounding in gringo series 4. Consequently, our approach is applicable to grounding based on semi-naive database evaluation techniques. In particular, we provide a series of algorithms detailing the treatment of recursive aggregates and illustrate this by a running example.", "targets": "Grounding Recursive Aggregates: Preliminary Report"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-caf585b1d6e743bcbca9c4c529a38de1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Catastrophic forgetting is a problem which refers to losing the information of the first task after training on the second task in the continual learning of neural networks.
To resolve this problem, we propose the incremental moment matching (IMM), which uses the Bayesian neural network framework. IMM assumes that the posterior distribution of the parameters of neural networks is approximated with a Gaussian distribution and incrementally matches the moments of the posteriors, which are trained for the first and second task, respectively. To make our Gaussian assumption reasonable, the IMM procedure utilizes various transfer learning techniques including weight transfer, L2-norm of old and new parameters, and a newly proposed variant of dropout using old parameters. We analyze our methods on the MNIST and CIFAR-10 datasets, and then evaluate them on a real-world life-log dataset collected using Google Glass. Experimental results show that IMM produces state-of-the-art performance in a variety of datasets.", "targets": "Overcoming Catastrophic Forgetting by Incremental Moment Matching"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6b8064dc903c41cf95a530d067f20b06", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a method to construct finite-state reactive controllers for systems whose interactions with their adversarial environment are modeled by infinite-duration two-player games over (possibly) infinite graphs. The proposed method targets safety games with infinitely many states or with such a large number of states that it would be impractical\u2014if not impossible\u2014for conventional synthesis techniques that work on the entire state space. We resort to constructing finite-state controllers for such systems through an automata learning approach, utilizing a symbolic representation of the underlying game that is based on finite automata. Throughout the learning process, the learner maintains an approximation of the winning region (represented as a finite automaton) and refines it using different types of counterexamples provided by the teacher until a satisfactory controller can be derived (if one exists). We present a symbolic representation of safety games (inspired by regular model checking), propose implementations of the learner and teacher, and evaluate their performance on examples motivated by robotic motion planning in dynamic environments.", "targets": "An Automaton Learning Approach to Solving Safety Games over Infinite Graphs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f0c1f958537d4981882528f2d29004cf", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this work, we study an important problem: learning programs from input-output examples. We propose a novel method to learn a neural program operating a domain-specific non-differentiable machine, and demonstrate that this method can be applied to learn programs that are significantly more complex than the ones synthesized before: programming language parsers from input-output pairs without knowing the underlying grammar. The main challenge is to train the neural program without supervision on execution traces. To tackle it, we propose: (1) LL machines and neural programs operating them to effectively regularize the space of the learned programs; and (2) a two-phase reinforcement learning-based search technique to train the model.
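The IMM record above merges the two tasks' posteriors by moment matching. Under the diagonal-Gaussian assumption, the two standard variants reduce to a few lines; this sketch uses my own function names and assumes parameters and precisions are given as dicts of numpy arrays:

```python
import numpy as np

def mean_imm(params_a, params_b, alpha=0.5):
    """Mean-IMM: with equal isotropic covariances, moment matching of the two
    Gaussian posteriors reduces to a weighted average of the parameters."""
    return {k: alpha * params_a[k] + (1 - alpha) * params_b[k] for k in params_a}

def mode_imm(params_a, params_b, prec_a, prec_b, alpha=0.5):
    """Mode-IMM: precision-weighted merge, where prec_* approximate diagonal
    posterior precisions (e.g. from a Fisher information estimate)."""
    merged = {}
    for k in params_a:
        pa, pb = alpha * prec_a[k], (1 - alpha) * prec_b[k]
        merged[k] = (pa * params_a[k] + pb * params_b[k]) / (pa + pb + 1e-12)
    return merged
```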
Our evaluation demonstrates that our approach can successfully learn to parse programs in both an imperative language and a functional language, and achieve 100% test accuracy, while existing approaches\u2019 accuracies are almost 0%. This is the first successful demonstration of applying reinforcement learning to train a neural program operating a non-differentiable machine that can fully generalize to test sets on a non-trivial task.", "targets": "Learning Neural Programs To Parse Programs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-31e8da96587a45ca9b79f48318235226", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Attempts to train a comprehensive artificial intelligence capable of solving multiple tasks have been impeded by a chronic problem called catastrophic forgetting. Although simply replaying all previous data alleviates the problem, it requires large memory and, even worse, is often infeasible in real-world applications where access to past data is limited. Inspired by the generative nature of the hippocampus as a short-term memory system in the primate brain, we propose the Deep Generative Replay, a novel framework with a cooperative dual model architecture consisting of a deep generative model (\u201cgenerator\u201d) and a task solving model (\u201csolver\u201d). With only these two models, training data for previous tasks can easily be sampled and interleaved with those for a new task. We test our methods in several sequential learning settings involving image classification tasks.", "targets": "Continual Learning with Deep Generative Replay"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d2d2a41a3dda4b39be47cca8e9bfe8d8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Conventional methods of estimating latent behaviour generally use attitudinal questions, which are subjective, and these survey questions may not always be available. We hypothesize that an alternative approach can be used for latent variable estimation through undirected graphical models, for instance non-parametric artificial neural networks. In this study, we explore the use of generative non-parametric modelling methods to estimate latent variables from prior choice distribution without the conventional use of measurement indicators. A restricted Boltzmann machine is used to represent latent behaviour factors by analyzing the relationship information between the observed choices and explanatory variables. The algorithm is adapted for latent behaviour analysis in a discrete choice scenario and we use a graphical approach to evaluate and understand the semantic meaning from estimated parameter vector values. We illustrate our methodology on a financial instrument choice dataset and perform statistical analysis on parameter sensitivity and stability. Our findings show that through non-parametric statistical tests, we can extract useful latent information on the behaviour of latent constructs through machine learning methods, and that these constructs present a strong and significant influence on the choice process. Furthermore, our modelling framework shows robustness in input variability through sampling and validation.
", "targets": "Discriminative conditional restricted Boltzmann machine for discrete choice and latent variable modelling"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bfe8a72d046f4de7834b708b10182df5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Variational autoencoders (VAE) represent a popular, flexible form of deep generative model that can be stochastically fit to samples from a given random process using an information-theoretic variational bound on the true underlying distribution. Once so-obtained, the model can be putatively used to generate new samples from this distribution, or to provide a low-dimensional latent representation of existing samples. While quite effective in numerous application domains, certain important mechanisms which govern the behavior of the VAE are obfuscated by the intractable integrals and resulting stochastic approximations involved. Moreover, as a highly non-convex model, it remains unclear exactly how minima of the underlying energy relate to original design purposes. We attempt to better quantify these issues by analyzing a series of tractable special cases of increasing complexity. In doing so, we unveil interesting connections with more traditional dimensionality reduction models, as well as an intrinsic yet underappreciated propensity for robustly dismissing outliers when estimating latent manifolds. With respect to the latter, we demonstrate that the VAE can be viewed as the natural evolution of recent robust PCA models, capable of learning nonlinear manifolds obscured by gross corruptions. However, this previously unexplored feature comes with the cost of potential model collapse to a degenerate distribution that may be less suitable as the basis for generating new samples.", "targets": "Veiled Attributes of the Variational Autoencoder"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d5876caac89e47f3b7529e46b370d267", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Relational Markov Random Fields are a general and flexible framework for reasoning about the joint distribution over attributes of a large number of interacting entities. The main computational difficulty in learning such models is inference. Even when dealing with complete data, where one can summarize a large domain by sufficient statistics, learning requires one to compute the expectation of the sufficient statistics given different parameter choices. The typical solution to this problem is to resort to approximate inference procedures, such as loopy belief propagation.
Although these procedures are quite efficient, they still require computation that is on the order of the number of interactions (or features) in the model. When learning a large relational model over a complex domain, even such approximations require unrealistic running time. In this paper we show that for a particular class of relational MRFs, which have inherent symmetry, we can perform the inference needed for learning procedures using a template-level belief propagation. This procedure\u2019s running time is proportional to the size of the relational model rather than the size of the domain. Moreover, we show that this computational procedure is equivalent to synchronous loopy belief propagation. This enables a dramatic speedup in inference and learning time. We use this procedure to learn relational MRFs for capturing the joint distribution of large protein-protein interaction networks.", "targets": "Template Based Inference in Symmetric Relational Markov Random Fields"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-35444280be144c45b17c65ca542333fe", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We show that the average stability notion introduced by [12, 4] is invariant to data preconditioning, for a wide class of generalized linear models that includes most of the known exp-concave losses. In other words, when analyzing the stability rate of a given algorithm, we may assume the optimal preconditioning of the data. This implies that, at least from a statistical perspective, explicit regularization is not required in order to compensate for ill-conditioned data, which stands in contrast to a widely common approach that includes a regularization for analyzing the sample complexity of generalized linear models. Several important implications of our findings include: a) We demonstrate that the excess risk of empirical risk minimization (ERM) is controlled by the preconditioned stability rate. This immediately yields a relatively short and elegant proof for the fast rates attained by ERM in our context. b) We strengthen the recent bounds of [9] on the stability rate of the Stochastic Gradient Descent algorithm.", "targets": "Average Stability is Invariant to Data Preconditioning. Implications to Exp-concave Empirical Risk Minimization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-37abdcf3a2204c2b9fed4e6e9e0624be", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "imbalanced-learn is an open-source Python toolbox aiming at providing a wide range of methods to cope with the problem of imbalanced datasets frequently encountered in machine learning and pattern recognition. The implemented state-of-the-art methods can be categorized into 4 groups: (i) under-sampling, (ii) over-sampling, (iii) combination of over- and under-sampling, and (iv) ensemble learning methods. The proposed toolbox only depends on numpy, scipy, and scikit-learn and is distributed under the MIT license. Furthermore, it is fully compatible with scikit-learn and is part of the scikit-learn-contrib supported project. Documentation, unit tests as well as integration tests are provided to ease usage and contribution.
The toolbox is publicly available on GitHub at https://github.com/scikit-learn-contrib/imbalanced-learn.", "targets": "Imbalanced-learn: A Python Toolbox to Tackle the Curse of Imbalanced Datasets in Machine Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6937aeae46ae4d4f9f144a1099f72bdd", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This study implements a vector space model approach to measure the sentiment orientations of words. Two representative vectors for positive/negative polarity are constructed using high-dimensional vector space in both an unsupervised and a semi-supervised manner. A sentiment orientation value per word is determined by taking the difference between the cosine distances against the two reference vectors. These two conditions (unsupervised and semi-supervised) are compared against an existing unsupervised method (Turney, 2002). As a result of our experiment, we demonstrate that this novel approach significantly outperforms the previous unsupervised approach and is more practical and data efficient as well.", "targets": "A New Approach for Measuring Sentiment Orientation based on Multi-Dimensional Vector Space"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7f5134cd68e64a0b8cb4d8999dc14f01", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study a surprising phenomenon related to the representation of a cloud of data points using polynomials. We start with the previously unnoticed empirical observation that, given a collection (a cloud) of data points, the sublevel sets of a certain distinguished polynomial capture the shape of the cloud very accurately. This distinguished polynomial is a sum-of-squares (SOS) derived in a simple manner from the inverse of the empirical moment matrix. In fact, this SOS polynomial is directly related to orthogonal polynomials and the Christoffel function. This allows us to generalize and interpret extremality properties of orthogonal polynomials and to provide a mathematical rationale for the observed phenomenon. Among diverse potential applications, we illustrate the relevance of our results on a network intrusion detection task for which we obtain performances similar to existing dedicated methods reported in the literature.", "targets": "Sorting out typicality with the inverse moment matrix SOS polynomial"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0bfae2012094417fab086a5602e8a8a4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a localized approach to multiple kernel learning that can be formulated as a convex optimization problem over a given cluster structure, for which we obtain generalization error guarantees, and derive an optimization algorithm based on the Fenchel dual representation.
Experiments on real-world datasets from the application domains of computational biology and computer vision show that convex localized multiple kernel learning can achieve higher prediction accuracies than its global and non-convex local counterparts.", "targets": "Localized Multiple Kernel Learning\u2014A Convex Approach"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-181666bb4e6143c3bee51c3c2942b92e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a new perspective on graph-based methods for collaborative ranking for recommender systems. Unlike user-based or item-based methods that compute a weighted average of ratings given by the nearest neighbors, or low-rank approximation methods using convex optimization and the nuclear norm, we formulate matrix completion as a series of semi-supervised learning problems, and propagate the known ratings to the missing ones on the user-user or item-item graph globally. The semi-supervised learning problems are expressed as Laplace-Beltrami equations on a manifold, namely harmonic extension, and can be discretized by a point integral method. We show that our approach does not impose a low-rank Euclidean subspace on the data points, but instead minimizes the dimension of the underlying manifold. Our method, named LDM (low dimensional manifold), turns out to be particularly effective in generating rankings of items, showing decent computational efficiency and robust ranking quality compared to state-of-the-art methods.", "targets": "A Harmonic Extension Approach for Collaborative Ranking"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dbb4e941afab44e4bae8e9e6a3cd6527", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Previous work has modeled the compositionality of words by creating character-level models of meaning, reducing problems of sparsity for rare words. However, in many writing systems compositionality has an effect even at the character level: the meaning of a character is derived from the sum of its parts. In this paper, we model this effect by creating embeddings for characters based on their visual characteristics, creating an image for the character and running it through a convolutional neural network to produce a visual character embedding. Experiments on a text classification task demonstrate that such a model allows for better processing of instances with rare characters in languages such as Chinese, Japanese, and Korean. Additionally, qualitative analyses demonstrate that our proposed model learns to focus on the parts of characters that carry categorical content, resulting in embeddings that are coherent in visual space.", "targets": "Learning Character-level Compositionality with Visual Features"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a4dc92ef1fc2459aa778ab9b788d8dd1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Temporal abstraction is key to scaling up learning and planning in reinforcement learning. While planning with temporally extended actions is well understood, creating such abstractions autonomously from data has remained challenging.
We tackle this problem in the framework of options [Sutton, Precup & Singh, 1999; Precup, 2000]. We derive policy gradient theorems for options and propose a new option-critic architecture capable of learning both the internal policies and the termination conditions of options, in tandem with the policy over options, and without the need to provide any additional rewards or subgoals. Experimental results in both discrete and continuous environments showcase the flexibility and efficiency of the framework. Introduction Temporal abstraction allows representing knowledge about courses of action that take place at different time scales. In reinforcement learning, options (Sutton, Precup, and Singh 1999; Precup 2000) provide a framework for defining such courses of action and for seamlessly learning and planning with them. Discovering temporal abstractions autonomously has been the subject of extensive research efforts in the last 15 years (McGovern and Barto 2001; Stolle and Precup 2002; Menache, Mannor, and Shimkin 2002; \u015eim\u015fek and Barto 2009; Silver and Ciosek 2012), but approaches that can be used naturally with continuous state and/or action spaces have only recently started to become feasible (Konidaris et al. 2011; Niekum and Barto 2011; Mann, Mannor, and Precup; Mankowitz, Mann, and Mannor 2016; Kulkarni et al. 2016; Vezhnevets et al. 2016; Daniel et al. 2016). The majority of the existing work has focused on finding subgoals (useful states that an agent should reach) and subsequently learning policies to achieve them. This idea has led to interesting methods but ones which are also difficult to scale up given their \u201ccombinatorial\u201d flavor. Additionally, learning policies associated with subgoals can be expensive in terms of data and computation time; in the worst case, it can be as expensive as solving the entire task. We present an alternative view, which blurs the line between the problem of discovering options and that of learning options. Based on the policy gradient theorem (Sutton et al. 2000), we derive new results which enable a gradual learning process of the intra-option policies and termination functions, simultaneously with the policy over them. This approach works naturally with both linear and non-linear function approximators, under discrete or continuous state and action spaces. Existing methods for learning options are considerably slower when learning from a single task: much of the benefit will come from re-using the learned options in similar tasks. In contrast, we show that our approach is capable of successfully learning options within a single task without incurring any slowdown and while still providing re-use speedups. We start by reviewing background related to the two main ingredients of our work: policy gradient methods and options. We then describe the core ideas of our approach: the intra-option policy and termination gradient theorems. Additional technical details are included in the appendix. We present experimental results showing that our approach learns meaningful temporally extended behaviors in an effective manner. As opposed to other methods, we only need to specify the number of desired options; it is not necessary to have subgoals, extra rewards, demonstrations, multiple problems or any other special accommodations (however, the approach can work with pseudo-reward functions if desired). To our knowledge, this is the first end-to-end approach for learning options that scales to very large domains at comparable efficiency.
Preliminaries and Notation A Markov Decision Process consists of a set of states S, a set of actions A, a transition function P : S \u00d7 A \u2192 (S \u2192 [0, 1]) and a reward function r : S \u00d7 A \u2192 R. For convenience, we develop our ideas assuming discrete state and action sets. However, our results extend to continuous spaces using usual measure-theoretic assumptions (some of our empirical results are in continuous tasks). A (Markovian stationary) policy is a probability distribution over actions conditioned on states, \u03c0 : S \u00d7 A \u2192 [0, 1]. In discounted problems, the value function of a policy \u03c0 is defined as the expected return: V\u03c0(s) = E\u03c0[\u2211_{t=0}^\u221e \u03b3^t r_{t+1} | s_0 = s] and its action-value function as Q\u03c0(s, a) = E\u03c0[\u2211_{t=0}^\u221e \u03b3^t r_{t+1} | s_0 = s, a_0 = a], where \u03b3 \u2208 [0, 1) is the discount factor. A policy \u03c0 is greedy with respect to a given action-value function Q if \u03c0(s, a) > 0 iff a = argmax_{a\u2032} Q(s, a\u2032). In a discrete MDP, there is at least one optimal policy which is greedy with respect to its own action-value function. Policy gradient methods (Sutton et al. 2000; Konda and Tsitsiklis 2000) address the problem of finding a good policy by performing stochastic gradient descent to optimize a performance objective over a given family of parametrized stochastic policies, \u03c0\u03b8. The policy gradient theorem (Sutton et al. 2000) provides expressions for the gradient of the average reward and discounted reward objectives with respect to \u03b8. In the discounted setting, the objective is defined with respect to a designated start state (or distribution) s_0: \u03c1(\u03b8, s_0) = E_{\u03c0\u03b8}[\u2211_{t=0}^\u221e \u03b3^t r_{t+1} | s_0]. The policy gradient theorem shows that: \u2202\u03c1(\u03b8, s_0)/\u2202\u03b8 = \u2211_s \u03bc_{\u03c0\u03b8}(s | s_0) \u2211_a (\u2202\u03c0\u03b8(a | s)/\u2202\u03b8) Q_{\u03c0\u03b8}(s, a), where \u03bc_{\u03c0\u03b8}(s | s_0) = \u2211_{t=0}^\u221e \u03b3^t P(s_t = s | s_0) is a discounted weighting of the states along the trajectories starting from s_0. In practice, the policy gradient is estimated from samples along the on-policy stationary distribution. (Thomas 2014) showed that neglecting the discount factor in this stationary distribution makes the usual policy gradient estimator biased. However, correcting for this discrepancy also reduces data efficiency. For simplicity, we build on the framework of (Sutton et al. 2000) and discuss how to extend our results according to (Thomas 2014). The options framework (Sutton, Precup, and Singh 1999; Precup 2000) formalizes the idea of temporally extended actions. A Markovian option \u03c9 \u2208 \u03a9 is a triple (I\u03c9, \u03c0\u03c9, \u03b2\u03c9) in which I\u03c9 \u2286 S is an initiation set, \u03c0\u03c9 is an intra-option policy, and \u03b2\u03c9 : S \u2192 [0, 1] is a termination function. We also assume that \u2200s \u2208 S, \u2200\u03c9 \u2208 \u03a9 : s \u2208 I\u03c9 (i.e., all options are available everywhere), an assumption made in the majority of options discovery algorithms. We will discuss how to dispense with this assumption in the final section. (Sutton, Precup, and Singh 1999; Precup 2000) show that an MDP endowed with a set of options becomes a Semi-Markov Decision Process (Puterman 1994, chapter 11), which has a corresponding optimal value function over options V\u03a9(s) and option-value function Q\u03a9(s, \u03c9).
Learning and planning algorithms for MDPs have their counterparts in this setting. However, the existence of the underlying MDP offers the possibility of learning about many different options in parallel: the idea of intra-option learning, which we leverage in our work. Learning Options We adopt a continual perspective on the problem of learning options. At any time, we would like to distill all of the available experience into every component of our system: value function and policy over options, intra-option policies and termination functions. To achieve this goal, we focus on learning option policies and termination functions, assuming they are represented using differentiable parameterized function approximators. We consider the call-and-return option execution model, in which an agent picks option \u03c9 according to its policy over options \u03c0\u03a9, then follows the intra-option policy \u03c0\u03c9 until termination (as dictated by \u03b2\u03c9), at which point this procedure is repeated. Let \u03c0\u03c9,\u03b8 denote the intra-option policy of option \u03c9 parametrized by \u03b8 and \u03b2\u03c9,\u03d1 the termination function of \u03c9 parameterized by \u03d1. We present two new results for learning options, obtained using the policy gradient theorem (Sutton et al. 2000) as a blueprint. Both results are derived under the assumption that the goal is to learn options that maximize the expected return in the current task. However, if one wanted to add extra information to the objective function, this could readily be done so long as it comes in the form of an additive differentiable function. Suppose we aim to optimize directly the discounted return, expected over all the trajectories starting at a designated state s_0 and option \u03c9_0, then: \u03c1(\u03a9, \u03b8, \u03d1, s_0, \u03c9_0) = E_{\u03a9,\u03b8,\u03d1}[\u2211_{t=0}^\u221e \u03b3^t r_{t+1} | s_0, \u03c9_0]. Note that this return depends on the policy over options, as well as the parameters of the option policies and termination functions. We will take gradients of this objective with respect to \u03b8 and \u03d1. In order to do this, we will manipulate equations similar to those used in intra-option learning (Sutton, Precup, and Singh 1999, section 8). Specifically, the definition of the option-value function can be written as: Q\u03a9(s, \u03c9) = E_{\u03a9,\u03b8,\u03d1}[\u2211_{t=0}^\u221e \u03b3^t r_{t+1} | s_0 = s, \u03c9_0 = \u03c9]", "targets": "The Option-Critic Architecture"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3589da6b7f1844faa36aa7b5c0924d40", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The current information analysis capabilities of legal professionals are still lagging behind the explosive growth in legal document availability through digital means, driving the need for higher efficiency Legal Information Retrieval (IR) and Question Answering (QA) methods. The IR task in particular has a set of unique challenges that invite the use of semantically motivated NLP techniques. In this work, a two-stage method for Legal Information Retrieval is proposed, combining lexical statistics and distributional sentence representations in the context of the Competition on Legal Information Extraction/Entailment (COLIEE). The combination is done by means of disambiguation rules, applied over the lexical rankings when those are deemed unreliable for a given query.
Competition and experimental results indicate small gains in overall retrieval performance using the proposed approach. Additionally, an analysis of error and improvement cases is presented for a better understanding of the contributions.", "targets": "Improving Legal Information Retrieval by Distributional Composition with Term Order Probabilities"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-931fe73cdf3e428dae953d550f473555", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Multivariate time series naturally exist in many fields, like energy, bioinformatics, signal processing, and finance. Most of these applications need to be able to compare these structured data. In this context, dynamic time warping (DTW) is probably the most common comparison measure. However, not much research effort has been put into improving it by learning. In this paper, we propose a novel method for learning similarities based on DTW, in order to improve time series classification. Making use of the uniform stability framework, we provide the first theoretical guarantees in the form of a generalization bound for linear classification. The experimental study shows that the proposed approach is efficient, while yielding sparse classifiers.", "targets": "Similarity Learning for Time Series Classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-899f300880744a96b7669b9f6dfac9f9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we describe a system for generating three-dimensional visual simulations of natural language motion expressions. We use a rich formal model of events and their participants to generate simulations that satisfy the minimal constraints entailed by the associated utterance, relying on semantic knowledge of physical objects and motion events. This paper outlines technical considerations and discusses implementing the aforementioned semantic models into such a system.", "targets": "Multimodal Semantic Simulations of Linguistically Underspecified Motion Events"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a5f16201612e46d7a0d4f34b229b77d6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Within the framework of ADABOOST.MH, we propose to train vector-valued decision trees to optimize the multi-class edge without reducing the multi-class problem to K binary one-against-all classifications. The key element of the method is a vector-valued decision stump, factorized into an input-independent vector of length K and a label-independent scalar classifier. At inner tree nodes, the label-dependent vector is discarded and the binary classifier can be used for partitioning the input space into two regions. The algorithm retains the conceptual elegance, power, and computational efficiency of binary ADABOOST.
In experiments it is on par with support vector machines and with the best existing multi-class boosting algorithm AOSO-LOGITBOOST, and it is significantly better than other known implementations of ADABOOST.MH.", "targets": "The return of ADABOOST.MH: multi-class Hamming trees"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a8ecfbfc89b34853b72f04c34decef43", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Neural network based models are a very powerful tool for creating word embeddings; the objective of these models is to group similar words together. These embeddings have been used as features to improve results in various applications such as document classification, named entity recognition, etc. Neural language models are able to learn word representations which have been used to capture semantic shifts across time and geography. The objective of this paper is to first identify and then visualize how words change meaning in different text corpora. We will train a neural language model on texts from a diverse set of disciplines \u2013 philosophy, religion, fiction, etc. Each text will alter the embeddings of the words to represent the meaning of the word inside that text. We will present a computational technique to detect words that exhibit significant linguistic shift in meaning and usage. We then use enhanced scatterplots and storyline visualization to visualize the linguistic shift.", "targets": "Visualizing Linguistic Shift"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-daf66aa975e544c9b5714b4c559b86bd", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The analysis of the current integration attempts of some modes and use cases of user-machine interaction is presented. The new concept of the user-driven intelligent interface is proposed on the basis of multimodal augmented reality and brain-computer interaction for various applications: in disabilities studies, education, home care, health care, etc. Several use cases of multimodal augmentation are presented. The perspectives of better human comprehension by immediate feedback through neurophysical channels by means of brain-computer interaction are outlined. It is shown that brain\u2013computer interface (BCI) technology provides new strategies to overcome limits of the currently available user interfaces, especially for people with functional disabilities. The results of the previous studies of the low-end consumer and open-source BCI devices allow us to conclude that the combination of machine learning (ML) and multimodal interactions (visual, sound, tactile) with BCI will profit from the immediate feedback from the actual neurophysical reactions classified by ML methods. In general, BCI in combination with other modes of AR interaction can deliver much more information than these types of interaction themselves. Even in the current state the combined AR-BCI interfaces could provide highly adaptable and personal services, especially for people with functional disabilities.
Keywords\u2014augmented reality, interfaces for accessibility, multimodal user interface, brain-computer interface, eHealth, machine learning, machine-to-machine interactions, human-to-human interactions, human-to-machine interactions", "targets": "User-driven Intelligent Interface on the Basis of Multimodal Augmented Reality and Brain-Computer Interaction for People with Functional Disabilities"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4ec136cbc64c46cbba097f4aaadc0c18", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Our experience of the world is multimodal: we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced, and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research.", "targets": "Multimodal Machine Learning: A Survey and Taxonomy"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0a3b185a4e8749aeb25cf81219388848", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The lack of diversity in a genetic algorithm\u2019s population may lead to a bad performance of the genetic operators since there is not an equilibrium between exploration and exploitation. In those cases, genetic algorithms present a fast and unsuitable convergence. In this paper we develop a novel hybrid genetic algorithm which attempts to obtain a balance between exploration and exploitation. It confronts the diversity problem using the so-called greedy diversification operator. Furthermore, the proposed algorithm applies a competition between parents and children so as to exploit the high-quality visited solutions. These operators are complemented by a simple selection mechanism designed to preserve and take advantage of the population diversity. Additionally, we extend our proposal to the field of memetic algorithms, obtaining an improved model with outstanding results in practice.
The experimental study shows the validity of the approach as well as how important it is to take the exploration and exploitation concepts into account when designing an evolution-", "targets": "GENETIC AND MEMETIC ALGORITHM WITH DIVERSITY EQUILIBRIUM BASED ON GREEDY DIVERSIFICATION"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b38e1bce64ad41eab61ffff3060b959e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Deep generative models provide a powerful and flexible means to learn complex distributions over data by incorporating neural networks into latent-variable models. Variational approaches to training such models introduce a probabilistic encoder that casts data, typically unsupervised, into an entangled representation space. While unsupervised learning is often desirable, sometimes even necessary, when we lack prior knowledge about what to represent, being able to incorporate domain knowledge in characterising certain aspects of variation in the data can often help learn better disentangled representations. Here, we introduce a new formulation of semi-supervised learning in variational autoencoders that allows precisely this. It permits flexible specification of probabilistic encoders as directed graphical models via a stochastic computation graph, containing both continuous and discrete latent variables, with conditional distributions parametrised by neural networks. We demonstrate how the provision of dependency structures, along with a few labelled examples indicating plausible values for some components of the latent space, can help quickly learn disentangled representations. We then evaluate its ability to do so, both qualitatively by exploring its generative capacity, and quantitatively by using the disentangled representation to perform classification, on a variety of models and datasets.", "targets": "IN DEEP GENERATIVE MODELS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5262709cf899441d87aee7f56be0bc31", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Optimization by stochastic gradient descent is an important component of many large-scale machine learning algorithms. A wide variety of such optimization algorithms have been devised; however, it is unclear whether these algorithms are robust and widely applicable across many different optimization landscapes. In this paper we develop a collection of unit tests for stochastic optimization. Each unit test rapidly evaluates an optimization algorithm on a small-scale, isolated, and well-understood difficulty, rather than in real-world scenarios where many such issues are entangled. Passing these unit tests is not sufficient, but absolutely necessary for any algorithms with claims to generality or robustness. We give initial quantitative and qualitative results on a dozen established algorithms. The testing framework is open-source, extensible, and easy to apply to new algorithms.", "targets": "Unit Tests for Stochastic Optimization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-06b26215e7a54b378f09cc42fa58780d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "As Wireless Sensor Networks are penetrating into the industrial domain, many research opportunities are emerging. One such essential and challenging application is that of node localization. A feed-forward neural network based methodology is adopted in this paper. The Received Signal Strength Indicator (RSSI) values of the anchor node beacons are used. The number of anchor nodes and their configurations has an impact on the accuracy of the localization system, which is also addressed in this paper. Five different training algorithms are evaluated to find the training algorithm that gives the best result. The multi-layer Perceptron (MLP) neural network model was trained using Matlab. In order to evaluate the performance of the proposed method in real time, the model obtained was then implemented on the Arduino microcontroller. With four anchor nodes, an average 2D localization error of 0.2953 m has been achieved with a 12-12-2 neural network structure. The proposed method can also be implemented on any other embedded microcontroller system.", "targets": "LOCALIZATION FOR WIRELESS SENSOR NETWORKS: A NEURAL NETWORK APPROACH"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-03def38e85ba45bb8435182ea2061418", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In many embedded systems, such as imaging systems, the system has a single designated purpose, and the same threads are executed repeatedly. Profiling thread behavior allows the system to allocate each thread its resources in a way that improves overall system performance. We study an online resource allocation problem, where a resource manager simultaneously allocates resources (exploration), learns the impact on the different consumers (learning) and improves allocation towards optimal performance (exploitation). We build on the rich framework of multi-armed bandits and present online and offline algorithms. Through extensive experiments with both synthetic data and real-world cache allocation to threads we show the merits and properties of our algorithms.", "targets": "Bandits meet Computer Architecture: Designing a Smartly-allocated Cache"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-454bddb7124e4e2a85e060f9bb2e2113", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Motivated by value function estimation in reinforcement learning, we study statistical linear inverse problems, i.e., problems where the coefficients of a linear system to be solved are observed in noise. We consider penalized estimators, where performance is evaluated using a matrix-weighted two-norm of the defect of the estimator measured with respect to the true, unknown coefficients. Two objective functions are considered depending on whether the error of the defect measured with respect to the noisy coefficients is squared or unsquared. We propose simple, yet novel and theoretically well-founded data-dependent choices for the regularization parameters for both cases that avoid data splitting. A distinguishing feature of our analysis is that we derive deterministic error bounds in terms of the error of the coefficients, thus allowing the complete separation of the analysis of the stochastic properties of these errors.
We show that our results lead to new insights and bounds for linear value function estimation in reinforcement learning.", "targets": "Statistical linear estimation with penalized estimators: an application to reinforcement learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1234d0c669174457a87c9bb81cf6f717", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We discuss representing and reasoning with knowledge about the time-dependent utility of an agent's actions. Time-dependent utility plays a crucial role in the interaction between computation and action under bounded resources. We present a semantics for time-dependent utility and describe the use of time-dependent information in decision contexts. We illustrate our discussion with examples of time-pressured reasoning in Protos, a system constructed to explore the ideal control of inference by reasoners with limited abilities.", "targets": "Time-Dependent Utility and Action Under Uncertainty"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-31b2c6f4693a4890ae3850222edbe753", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we propose a universal model for high-dimensional data, called the Hybrid Orthogonal Projection and Estimation (HOPE) model, which combines a linear orthogonal projection and a finite mixture model under a unified generative modelling framework. The HOPE model itself can be learned unsupervisedly from unlabelled data based on the maximum likelihood estimation as well as trained discriminatively from labelled data. More interestingly, we have shown the proposed HOPE models are closely related to neural networks (NNs) in the sense that each hidden layer can be reformulated as a HOPE model. As a result, the HOPE framework can be used as a novel tool to probe why and how NNs work, more importantly, it also provides several new learning algorithms to learn NNs either supervisedly or unsupervisedly. In this work, we have investigated the HOPE framework in learning NNs for several standard tasks, including image recognition on MNIST and speech recognition on TIMIT. Experimental results show that the HOPE framework yields significant performance gains over the current state-of-the-art methods in various types of NN learning problems, including unsupervised feature learning, supervised or semi-supervised learning.", "targets": "Hybrid Orthogonal Projection and Estimation (HOPE): A New Framework to Probe and Learn Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-749b4063a2dd49d2a7541f38bb6f16a2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In previous work [BGHK92, BGHK93], we have studied the random-worlds approach\u2014a particular (and quite powerful) method for generating degrees of belief (i.e., subjective probabilities) from a knowledge base consisting of objective (first-order, statistical, and default) information. But allowing a knowledge base to contain only objective information is sometimes limiting.
We occasionally wish to include information about degrees of belief in the knowledge base as well, because there are contexts in which old beliefs represent important information that should influence new beliefs. In this paper, we describe three quite general techniques for extending a method that generates degrees of belief from objective information to one that can make use of degrees of belief as well. All of our techniques are based on well-known approaches, such as cross-entropy. We discuss general connections between the techniques and in particular show that, although conceptually and technically quite different, all of the techniques give the same answer when applied to the random-worlds method.", "targets": "Generating New Beliefs From Old*"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bf74d80bf37147848e164ba5b50fa927", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Exploration has been a crucial part of reinforcement learning, yet several important questions concerning exploration efficiency are still not answered satisfactorily by existing analytical frameworks. These questions include exploration parameter setting, situation analysis, and hardness of MDPs, all of which are unavoidable for practitioners. To bridge the gap between the theory and practice, we propose a new analytical framework called the success probability of exploration. We show that those important questions of exploration above can all be answered under our framework, and the answers provided by our framework meet the needs of practitioners better than the existing ones. More importantly, we introduce a concrete and practical approach to evaluating the success probabilities in certain MDPs without the need of actually running the learning algorithm. We then provide empirical results to verify our approach, and demonstrate how the success probability of exploration can be used to analyse and predict the behaviours and possible outcomes of exploration, which are the keys to answering the important questions of exploration.", "targets": "Success Probability of Exploration: a Concrete Analysis of Learning Efficiency"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f00fa921ec7d4aaaa5a1dd73cc0b5972", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "It has always been a burden to the users of statistical topic models to predetermine the right number of topics, which is a key parameter of most topic models. Conventionally, automatic selection of this parameter is done through either statistical model selection (e.g., cross-validation, AIC, or BIC) or Bayesian nonparametric models (e.g., hierarchical Dirichlet process). These methods either rely on repeated runs of the inference algorithm to search through a large range of parameter values, which does not suit the mining of big data, or replace this parameter with alternative parameters that are less intuitive and still hard to determine. In this paper, we explore how to \u201celiminate\u201d this parameter from a new perspective. We first present a nonparametric treatment of the PLSA model named nonparametric probabilistic latent semantic analysis (nPLSA).
The inference procedure of nPLSA allows for the exploration and comparison of different numbers of topics within a single execution, yet remains as simple as that of PLSA. This is achieved by substituting the parameter of the number of topics with an alternative parameter that is the minimal goodness of fit of a document. We show that the new parameter can be further eliminated by two parameter-free treatments: either by monitoring the diversity among the discovered topics or by a weak supervision from users in the form of an exemplar topic. The parameter-free topic model finds the appropriate number of topics when the diversity among the discovered topics is maximized, or when the granularity of the discovered topics matches the exemplar topic. Experiments on both synthetic and real data prove that the parameter-free topic model extracts topics of comparable quality compared to classical topic models with \u201cmanual transmission.\u201d The quality of the topics outperforms those extracted through classical Bayesian nonparametric models.", "targets": "\"Look Ma, No Hands!\" A Parameter-Free Topic Model"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-49f19a109f5e46df963770dccb8b97da", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Knowledge compilation is an approach to tackle the computational intractability of general reasoning problems. According to this approach, knowledge bases are converted off-line into a target compilation language which is tractable for on-line querying. Reduced ordered binary decision diagram (ROBDD) is one of the most influential target languages. We generalize ROBDD by associating some implied literals with each node and the new language is called reduced ordered binary decision diagram with implied literals (ROBDD-L). Then we discuss a kind of subsets of ROBDD-L called ROBDD-i with precisely i implied literals (0 \u2264 i \u2264 \u221e). In particular, ROBDD-0 is isomorphic to ROBDD; ROBDD-\u221e requires that each node should be associated with as many implied literals as possible. We show that ROBDD-i has uniqueness over some specific variable order, and ROBDD-\u221e is the most succinct subset in ROBDD-L and can meet most of the querying requirements involved in the knowledge compilation map. Finally, we propose an ROBDD-i compilation algorithm for any i and a ROBDD-\u221e compilation algorithm.
Based on them, we implement a ROBDD-L package called BDDjLu and then draw some conclusions from preliminary experimental results: ROBDD-\u221e is obviously smaller than ROBDD for all benchmarks; ROBDD-\u221e is smaller than the d-DNNF for the benchmarks whose compilation results are relatively small; it seems that it is better to transform ROBDDs-\u221e into FBDDs and ROBDDs rather than compile the benchmarks directly.", "targets": "Reduced Ordered Binary Decision Diagram with Implied Literals: A New knowledge Compilation Approach"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3c7d8333d0cd40b38b10ce693f3930ea", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Diagnosis of liver infection at a preliminary stage is important for better treatment. In today\u2019s scenario devices like sensors are used for detection of infections. Accurate classification techniques are required for automatic identification of disease samples. In this context, this study utilizes data mining approaches for classification of liver patients from healthy individuals. Four algorithms (Na\u00efve Bayes, Bagging, Random forest and SVM) were implemented for classification using the R platform. Further, to improve the accuracy of classification, a hybrid NeuroSVM model was developed using SVM and a feed-forward artificial neural network (ANN). The hybrid model was tested for its performance using statistical parameters like root mean square error (RMSE) and mean absolute percentage error (MAPE). The model resulted in a prediction accuracy of 98.83%. The results suggested that development of the hybrid model improved the accuracy of prediction. To serve the medical community for prediction of liver disease among patients, a graphical user interface (GUI) has been developed using R. The GUI is deployed as a package in the local repository of the R platform for users to perform prediction.", "targets": "NeuroSVM: A Graphical User Interface for Identification of Liver Patients"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-da2488501a174357a159e4a2453d085f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. This has resulted in substantial duplication of effort and incompatible infrastructure across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon.
This TensorFlow-based infrastructure provides a complete modular deep learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning applications with data loading, data augmentation, network architectures, loss functions and evaluation metrics that are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted interventions.", "targets": "NiftyNet: a deep-learning platform for medical imaging"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-52a0da7e094d4e2d84cb6d1e38c3ac17", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Cooperative games model the allocation of profit from joint actions, following considerations such as stability and fairness. We propose the reliability extension of such games, where agents may fail to participate in the game. In the reliability extension, each agent only \u201csurvives\u201d with a certain probability, and a coalition\u2019s value is the probability that its surviving members would be a winning coalition in the base game. We study prominent solution concepts in such games, showing how to approximate the Shapley value and how to compute the core in games with few agent types. We also show that applying the reliability extension may stabilize the game, making the core non-empty even when the base game has an empty core.", "targets": "Solving Cooperative Reliability Games"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b01bade413f244f78cb541cac6416e12", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Previous studies have proposed image-based clutter measures that correlate with human search times and/or eye movements. However, most models do not take into account the fact that the effects of clutter interact with the foveated nature of the human visual system: visual clutter further from the fovea has an increasing detrimental influence on perception. Here, we introduce a new foveated clutter model to predict the detrimental effects in target search utilizing a forced fixation search task. We use Feature Congestion (Rosenholtz et al.) as our non-foveated clutter model, and we stack a peripheral architecture on top of Feature Congestion for our foveated model. We introduce the Peripheral Integration Feature Congestion (PIFC) coefficient as a fundamental ingredient of our model that modulates clutter as a non-linear gain contingent on eccentricity. We finally show that Foveated Feature Congestion (FFC) clutter scores (r(44) = \u22120.82 \u00b1 0.04, p < 0.0001) correlate better with target detection (hit rate) than regular Feature Congestion (r(44) = \u22120.19 \u00b1 0.13, p = 0.0774) in forced fixation search. Thus, our model allows us to enrich clutter perception research by computing fixation-specific clutter maps. A toolbox for creating peripheral architectures, Piranhas: Peripheral Architectures for Natural, Hybrid and Artificial Systems, will be made available.", "targets": "Can Peripheral Representations Improve Clutter Metrics on Complex Scenes?"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-901e562200cb4b16aa81387767281102", "definition": "In this task, you are given a part of an article.
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The advent of Web 2.0 has led to an increase in the amount of sentimental content available on the Web. Such content is often found in social media web sites in the form of movie or product reviews, user comments, testimonials, messages in discussion forums etc. Timely discovery of the sentimental or opinionated web content has a number of advantages, the most important of all being monetization. Understanding of the sentiments of human masses towards different entities and products enables better services for contextual advertisements, recommendation systems and analysis of market trends. The focus of our project is a sentiment-focussed web crawling framework to facilitate the quick discovery of sentimental contents of movie reviews and hotel reviews and analysis of the same. We use statistical methods to capture elements of subjective style and the sentence polarity. The paper elaborately discusses two supervised machine learning algorithms: K-Nearest Neighbour (K-NN) and Na\u00efve Bayes\u2019 and compares their overall accuracy, precision as well as recall values. It was seen that in the case of movie reviews Na\u00efve Bayes\u2019 gave far better results than K-NN but for hotel reviews these algorithms gave lesser, almost same", "targets": "Sentiment Analysis of Review Datasets using Nai\u0308ve Bayes\u2019 and K-NN Classifier"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8fe34a78f2b1421ab61253d8c529d4b3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Nowadays it is very popular to use deep architectures in machine learning. Deep Belief Networks (DBNs) are deep architectures that use a stack of Restricted Boltzmann Machines (RBMs) to create a powerful generative model using training data. In this paper we present an improvement in a common method that is usually used in the training of RBMs. The new method uses free energy as a criterion to obtain elite samples from the generative model. We argue that these samples can more accurately compute the gradient of the log probability of training data. According to the results, an error rate of 0.99% was achieved on the MNIST test set. This result shows that the proposed method outperforms the method presented in the first paper introducing DBN (1.25% error rate) and general classification methods such as SVM (1.4% error rate) and KNN (1.6% error rate). In another test using the ISOLET dataset, letter classification error dropped to 3.59% compared to the 5.59% error rate achieved in those papers using this dataset. The implemented method is available online at \u201chttp://ceit.aut.ac.ir/~keyvanrad/DeeBNet Toolbox.html\u201d.", "targets": "Deep Belief Network Training Improvement Using Elite Samples Minimizing Free Energy"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4a00259c4cf0451ca805e056bc8f097e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In many applications, ideas that are described by a set of words often flow between different groups. To facilitate users in analyzing the flow, we present a method to model the flow behaviors that aims at identifying the lead-lag relationships between word clusters of different user groups.
In particular, an improved Bayesian conditional cointegration based on dynamic time warping is employed to learn links between words in different groups. A tensor-based technique is developed to cluster these linked words into different clusters (ideas) and track the flow of ideas. The main feature of the tensor representation is that we introduce two additional dimensions to represent both time and lead-lag relationships. Experiments on both synthetic and real datasets show that our method is more effective than methods based on traditional clustering techniques and achieves better accuracy. A case study was conducted to demonstrate the usefulness of our method in helping users understand the flow of ideas between different user groups on social media.", "targets": "Tracking Idea Flows between Social Groups"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-87752b8c0c274c91adb2aac9a2a2c2d3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We combine Riemannian geometry with the mean field theory of high dimensional chaos to study the nature of signal propagation in generic, deep neural networks with random weights. Our results reveal an order-to-chaos expressivity phase transition, with networks in the chaotic phase computing nonlinear functions whose global curvature grows exponentially with depth but not width. We prove this generic class of deep random functions cannot be efficiently computed by any shallow network, going beyond prior work restricted to the analysis of single functions. Moreover, we formalize and quantitatively demonstrate the long conjectured idea that deep networks can disentangle highly curved manifolds in input space into flat manifolds in hidden space. Our theoretical analysis of the expressive power of deep networks broadly applies to arbitrary nonlinearities, and provides a quantitative underpinning for previously abstract notions about the geometry of deep functions.", "targets": "Exponential expressivity in deep neural networks through transient chaos"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e02b9df5ce9e40b8aac541afed889ecb", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A model of associative memory is studied, which stores and reliably retrieves many more patterns than the number of neurons in the network. We propose a simple duality between this dense associative memory and neural networks commonly used in deep learning. On the associative memory side of this duality, a family of models that smoothly interpolates between two limiting cases can be constructed. One limit is referred to as the feature-matching mode of pattern recognition, and the other one as the prototype regime. On the deep learning side of the duality, this family corresponds to feedforward neural networks with one hidden layer and various activation functions, which transmit the activities of the visible neurons to the hidden layer. This family of activation functions includes logistics, rectified linear units, and rectified polynomials of higher degrees. 
The proposed duality makes it possible to apply energy-based intuition from associative memory to analyze computational properties of neural networks with unusual activation functions \u2013 the higher rectified polynomials which until now have not been used for training neural networks. The utility of the dense memories is illustrated for two test cases: the logical gate XOR and the recognition of handwritten digits from the MNIST data set.", "targets": "Dense Associative Memory for Pattern Recognition"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d595290d870e4d49a8e777aafaeee7a9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Observational studies are based on accurate assessment of human state. A behavior recognition system that models interlocutors\u2019 state in real-time can significantly aid the mental health domain. However, behavior recognition from speech remains a challenging task since it is difficult to find generalizable and representative features because of noisy and high-dimensional data, especially when data is limited and annotated coarsely and subjectively. Deep Neural Networks (DNN) have shown promise in a wide range of machine learning tasks, but for Behavioral Signal Processing (BSP) tasks their application has been constrained due to the limited quantity of data. We propose a Sparsely-Connected and Disjointly-Trained DNN (SD-DNN) framework to deal with limited data. First, we break the acoustic feature set into subsets and train multiple distinct classifiers. Then, the hidden layers of these classifiers become parts of a deeper network that integrates all feature streams. The overall system allows for full connectivity while limiting the number of parameters trained at any time and makes convergence possible even with limited data. We present results on multiple behavior codes in the couples\u2019 therapy domain and demonstrate the benefits in behavior classification accuracy. We also show the viability of this system towards live behavior annotations.", "targets": "Sparsely Connected and Disjointly Trained Deep Neural Networks for Low Resource Behavioral Annotation: Acoustic Classification in Couples\u2019 Therapy"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dc84f629fee143a385308bc3868d306d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Compared with word-level and sentence-level convolutional neural networks (ConvNets), character-level ConvNets have better applicability to input with misspellings and typos. Due to this, recent research on text classification mainly focuses on character-level ConvNets. However, while the majority of this research employs English corpora for character-level text classification, little has been done using Chinese corpora. This research hopes to bridge this gap, exploring character-level ConvNets for Chinese corpus text classification. We have constructed a large-scale Chinese dataset, and the result shows that character-level ConvNets work better on the Chinese character dataset than on its corresponding pinyin-format dataset, which is the general solution in previous research.
This is the first time that character-level ConvNets have been applied to a Chinese character dataset for the text classification problem.", "targets": "Character-level Convolutional Network for Text Classification Applied to Chinese Corpus"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8ebb9f28ed2d4de7838557d3ae0e0357", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a StochAstic Fault diagnosis AlgoRIthm, called Safari, which trades off guarantees of computing minimal diagnoses for computational efficiency. We empirically demonstrate, using the 74XXX and ISCAS85 suites of benchmark combinatorial circuits, that Safari achieves several orders-of-magnitude speedup over two well-known deterministic algorithms, CDA\u2217 and HA\u2217, for multiple-fault diagnoses; further, Safari can compute a range of multiple-fault diagnoses that CDA\u2217 and HA\u2217 cannot. We also prove that Safari is optimal for a range of propositional fault models, such as the widely-used weak-fault models (models with ignorance of abnormal behavior). We discuss the optimality of Safari in a class of strong-fault circuit models with stuck-at failure modes. By modeling the algorithm itself as a Markov chain, we provide exact bounds on the minimality of the diagnosis computed. Safari also displays strong anytime behavior, and will return a diagnosis after any non-trivial inference time.", "targets": "Approximate Model-Based Diagnosis Using Greedy Stochastic Search"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2cf53a29e942455389799d92663712c7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we deal with a new approach to probabilistic reasoning in a logical framework. Nearly all logics of probability that have been proposed in the literature are based on classical two-valued logic. After making clear the differences between fuzzy logic and probability theory, here we propose a fuzzy logic of probability for which completeness results (in a probabilistic sense) are provided. The main idea behind this approach is that probability values of crisp propositions can be understood as truth values of some suitable fuzzy propositions associated to the crisp ones. Moreover, suggestions and examples of how to extend the formalism to cope with conditional probabilities and with other uncertainty formalisms are also provided.", "targets": "Fuzzy logic and probability"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-199a361140e440b9aad1a39538c49b20", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In Chile, there is no independent entity that publishes quantitative or qualitative surveys to understand the traditional media environment and its adaptation to the Social Web. Nowadays, Chilean newsreaders are increasingly using social web platforms as their primary source of information, among which Twitter plays a central role. Historical media and pure players are developing different strategies to increase their audience and influence on this platform.
In this article, we propose a methodology based on data mining techniques to provide a first level of analysis of the new Chilean media environment. We use a crawling technique to mine news streams of 37 different Chilean media outlets actively present on Twitter and propose several indicators to compare them. We analyze their volumes of production, their potential audience, and, using NLP techniques, we explore the content of their production: their editorial line and their geographic coverage.", "targets": "Diagnosing editorial strategies of Chilean media on Twitter using an automatic news classifier"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d848d98055014dfb8ebca165e6e022c4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Practically all programming languages allow the programmer to split a program into several modules, which brings along several advantages in software development. In this paper, we are interested in the area of answer-set programming where fully declarative and nonmonotonic languages are applied. In this context, obtaining a modular structure for programs is by no means straightforward since the output of an entire program cannot in general be composed from the output of its components. To better understand the effects of disjunctive information on modularity we restrict the scope of analysis to the case of disjunctive logic programs (DLPs) subject to stable-model semantics. We define the notion of a DLP-function, where a well-defined input/output interface is provided, and establish a novel module theorem which indicates the compositionality of stable-model semantics for DLP-functions. The module theorem extends the well-known splitting-set theorem and enables the decomposition of DLP-functions given their strongly connected components based on positive dependencies induced by rules. In this setting, it is also possible to split shared disjunctive rules among components using a generalized shifting technique. The concept of modular equivalence is introduced for the mutual comparison of DLP-functions using a generalization of a translation-based verification method.", "targets": "Modularity Aspects of Disjunctive Stable Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-de34d08edba7416c85f0b7e8678cffc2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this work, we study the guaranteed delivery model which is widely used in online display advertising. In the guaranteed delivery scenario, ad exposures (which are also called impressions in some works) to users are guaranteed by contracts signed in advance between advertisers and publishers. A crucial problem for the advertising platform is how to fully utilize the valuable user traffic to generate as much revenue as possible. Different from previous works which usually minimize the penalty of unsatisfied contracts and some other cost (e.g. representativeness), we propose the novel consumption minimization model, in which the primary objective is to minimize the user traffic consumed to satisfy all contracts. Under this model, we develop a near-optimal method to deliver ads for users.
The main advantage of our method lies in that it consumes nearly the least possible user traffic to satisfy all contracts; therefore, more contracts can be accepted to produce more revenue. It also enables the publishers to estimate how much user traffic is redundant or short so that they can sell or buy this part of traffic in bulk in the exchange market. Furthermore, it is robust with regard to prior knowledge of the user type distribution. Finally, the simulation shows that our method outperforms the traditional state-of-the-art methods.", "targets": "Efficient Delivery Policy to Minimize User Traffic Consumption in Guaranteed Advertising"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e6f93704aed44197bd13ef04b6f4db64", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The CSA-ES is an Evolution Strategy with Cumulative Step-size Adaptation, where the step size is adapted by measuring the length of a so-called cumulative path. The cumulative path is a combination of the previous steps realized by the algorithm, where the importance of each step decreases with time. This article studies the CSA-ES on composites of strictly increasing functions with affine linear functions through the investigation of its underlying Markov chains. Rigorous results on the change and the variation of the step size are derived with and without cumulation. The step size diverges geometrically fast in most cases. Furthermore, the influence of the cumulation parameter is studied.", "targets": "Cumulative Step-size Adaptation on Linear Functions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f7dc3a7ad86d42d4abcc77cbf9380df7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "miRNA and gene expression profiles have been proven useful for classifying cancer samples. Efficient classifiers have been recently sought and developed. A number of attempts to classify cancer samples using miRNA/gene expression profiles are known in the literature. However, semi-supervised learning models have been used recently in bioinformatics to exploit the huge corpora of publicly available sets. Using both labeled and unlabeled sets to train sample classifiers has not been previously considered when gene and miRNA expression sets are used. Moreover, there is a motivation to integrate both miRNA and gene expression for a semi-supervised cancer classification as that provides more information on the characteristics of cancer samples. In this paper, two semi-supervised machine learning approaches, namely self-learning and co-training, are adapted to enhance the quality of cancer sample classification. These approaches exploit the huge public corpora to enrich the training data. In self-learning, miRNA- and gene-based classifiers are enhanced independently, while in co-training, both miRNA and gene expression profiles are used simultaneously to provide different views of cancer samples. To our knowledge, it is the first attempt to apply these learning approaches to cancer classification. The approaches were evaluated using breast cancer, hepatocellular carcinoma (HCC) and lung cancer expression sets. Results show up to 20% improvement in F1-measure over Random Forests and SVM classifiers.
Co-Training also outperforms the Low Density Separation (LDS) approach by around 25% improvement in F1-measure in breast cancer.", "targets": "miRNA and Gene Expression based Cancer Classification using Self-Learning and Co-Training Approaches"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c7cd37919cb04e20a404a9d3e72e61c6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Building neural networks to query a knowledge base (a table) with natural language is an emerging research topic in NLP. The neural enquirer typically necessitates multiple steps of execution because of the compositionality of queries. In previous studies, researchers have developed either distributed enquirers or symbolic ones for table querying. The distributed enquirer is end-to-end learnable, but is weak in terms of execution efficiency and explicit interpretability. The symbolic enquirer, on the contrary, is efficient during execution; but it is very difficult to train, especially at initial stages. In this paper, we propose to couple distributed and symbolic execution for natural language queries. The observation is that a fully distributed executor also exhibits meaningful, albeit imperfect, interpretation. We can thus pretrain the symbolic executor with the distributed one\u2019s intermediate execution results in a step-by-step fashion. Experiments show that our approach significantly outperforms either the distributed or symbolic executor; moreover, we have recovered more than 80% of execution sequences with only ground-truth denotations during training. In summary, the coupled neural enquirer takes advantage of both distributed and symbolic executors, and has high performance, high learning efficiency, high execution efficiency, and high interpretability.", "targets": "Coupling Distributed and Symbolic Execution for Natural Language Queries"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d37df6df1541de8972c282dd3819b7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this work we present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: Image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives state-of-the-art results for both.", "targets": "Soft-to-Hard Vector Quantization for End-to-End Learned Compression of Images and Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-aaea8d983a164fd29496d3f22a0ed929", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Automated answering of natural language questions is an interesting and useful problem to solve. Question answering (QA) systems often perform information retrieval at an initial stage.
Information retrieval (IR) performance, provided by engines such as Lucene, places a bound on overall system performance. For example, no answer-bearing documents are retrieved at low ranks for almost 40% of questions. In this paper, answer texts from previous QA evaluations held as part of the Text REtrieval Conferences (TREC) are paired with queries and analysed in an attempt to identify performance-enhancing words. These words are then used to evaluate the performance of a query expansion method. Data-driven extension words were found to help in over 70% of difficult questions. These words can be used to improve and evaluate query expansion methods. Simple blind relevance feedback (RF) was correctly predicted as unlikely to help overall performance, and a possible explanation is provided for its low value in IR for QA.", "targets": "A Data Driven Approach to Query Expansion in Question Answering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9620d4b3fbb54ed6969781b2f4b8ac07", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we introduce a lightweight dynamic epistemic logical framework for automated planning under initial uncertainty. We reduce plan verification and conformant planning to model checking problems of our logic. We show that the model checking problem of the iteration-free fragment is PSPACE-complete. By using two non-standard (but equivalent) semantics, we give novel model checking algorithms to the full language and the iteration-free language.", "targets": "A Dynamic Epistemic Framework for Conformant Planning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9ccd6eea5f25421b9c9341d2d1cdde16", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recurrent Neural Network (RNN) is one of the most popular architectures used in Natural Language Processing (NLP) tasks because its recurrent structure is very suitable to process variable-length text. RNN can utilize distributed representations of words by first converting the tokens comprising each text into vectors, which form a matrix. This matrix includes two dimensions: the time-step dimension and the feature vector dimension. Then most existing models usually utilize one-dimensional (1D) max pooling operation or attention-based operation only on the time-step dimension to obtain a fixed-length vector. However, the features on the feature vector dimension are not mutually independent, and simply applying 1D pooling operation over the time-step dimension independently may destroy the structure of the feature representation. On the other hand, applying two-dimensional (2D) pooling operation over the two dimensions may sample more meaningful features for sequence modeling tasks. To integrate the features on both dimensions of the matrix, this paper explores applying 2D max pooling operation to obtain a fixed-length representation of the text. This paper also utilizes 2D convolution to sample more meaningful information of the matrix. Experiments are conducted on six text classification tasks, including sentiment analysis, question classification, subjectivity classification and newsgroup classification.
Compared with the state-of-the-art models, the proposed models achieve excellent performance on 4 out of 6 tasks. Specifically, one of the proposed models achieves the highest accuracy on the Stanford Sentiment Treebank binary classification and fine-grained classification tasks.", "targets": "Text Classification Improved by Integrating Bidirectional LSTM with Two-dimensional Max Pooling"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-436184804fc747439f097010ae1f524b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Given an existing trained neural network, it is often desirable to be able to add new capabilities without hindering performance of already learned tasks. Existing approaches either learn sub-optimal solutions, require joint training, or incur a substantial increment in the number of parameters for each added task, typically as many as the original network. We propose a method which fully preserves performance on the original task, with only a small increase (around 20%) in the number of required parameters while performing on par with more costly fine-tuning procedures, which typically double the number of parameters. The learned architecture can be controlled to switch between various learned representations, enabling a single network to solve a task from multiple different domains. We conduct extensive experiments showing the effectiveness of our method and explore different aspects of its behavior.", "targets": "Incremental Learning Through Deep Adaptation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5527d0b69498451c937a0049c9cf6b25", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study the parsing complexity of Combinatory Categorial Grammar (CCG) in the formalism of Vijay-Shanker and Weir (1994). As our main result, we prove that any parsing algorithm for this formalism will necessarily take exponential time when the size of the grammar, and not only the length of the input sentence, is included in the analysis. This result sets the formalism of Vijay-Shanker and Weir (1994) apart from weakly equivalent formalisms such as Tree-Adjoining Grammar (TAG), for which parsing can be performed in time polynomial in the combined size of grammar and input sentence. Our proof highlights important differences between the formalism of Vijay-Shanker and Weir (1994) and contemporary incarnations of CCG.", "targets": "On the Complexity of CCG Parsing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7e026b45ac57400daf9ded7951eb5100", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In any knowledge discovery process the value of extracted knowledge is directly related to the quality of the data used. Big Data problems, generated by massive growth in the scale of data observed in recent years, also follow the same dictate. A common problem affecting data quality is the presence of noise, particularly in classification problems, where label noise refers to the incorrect labeling of training instances, and is known to be a very disruptive feature of data.
However, in this Big Data era, the massive growth in the scale of the data poses a challenge to traditional proposals created to tackle noise, as they have difficulties coping with such a large amount of data. New algorithms need to be proposed to treat the noise in Big Data problems, providing high quality and clean data, also known as Smart Data. In this paper, two Big Data preprocessing approaches to remove noisy examples are proposed: a homogeneous ensemble and a heterogeneous ensemble filter, with special emphasis on their scalability and performance traits. The obtained results show that these proposals enable the practitioner to efficiently obtain a Smart Dataset from any Big Data classification problem.", "targets": "Enabling Smart Data: Noise filtering in Big Data classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2b0663c21d6c4b6cbc637c2709254695", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The human visual system can spot an abnormal image, and reason about what makes it strange. This task has not received enough attention in computer vision. In this paper we study various types of atypicalities in images in a more comprehensive way than has been done before. We propose a new dataset of abnormal images showing a wide range of atypicalities. We design human subject experiments to discover a coarse taxonomy of the reasons for abnormality. Our experiments reveal three major categories of abnormality: object-centric, scene-centric, and contextual. Based on this taxonomy, we propose a comprehensive computational model that can predict all different types of abnormality in images and outperform prior art in abnormality recognition.", "targets": "Toward a Taxonomy and Computational Models of Abnormalities in Images"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e7774178381148b7ad0a131a79a85ea9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents an approach to identify efficient techniques used in Web Search Engine Optimization (SEO). Understanding SEO factors which can influence a page\u2019s ranking in a search engine is significant for webmasters who wish to attract a large number of users to their website. Different from previous relevant research, in this study we developed an intelligent Meta search engine which aggregates results from various search engines and ranks them based on several important SEO parameters. The research tries to establish that using more SEO parameters in ranking algorithms helps in retrieving better search results, thus increasing user satisfaction. Initial results generated from the Meta search engine outperformed existing search engines in terms of better retrieved search results with high precision.", "targets": "An Innovative Approach for online Meta Search Engine Optimization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e5130cad877843f9b3802b1e2aca2c36", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Multitask learning can be effective when features useful in one task are also useful for other tasks, and the group lasso is a standard method for selecting a common subset of features. In this paper, we are interested in a less restrictive form of multitask learning, wherein (1) the available features can be organized into subsets according to a notion of similarity and (2) features useful in one task are similar, but not necessarily identical, to the features best suited for other tasks. The main contribution of this paper is a new procedure called Sparse Overlapping Sets (SOS) lasso, a convex optimization that automatically selects similar features for related learning tasks. Error bounds are derived for SOSlasso and its consistency is established for squared error loss. In particular, SOSlasso is motivated by multisubject fMRI studies in which functional activity is classified using brain voxels as features. Experiments with real and synthetic data demonstrate the advantages of SOSlasso compared to the lasso and group lasso.", "targets": "Sparse Overlapping Sets Lasso for Multitask Learning and its Application to fMRI Analysis"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-45a7f74185eb4fbea545457a4c726d3f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Finding repeated patterns or motifs in a time series is an important unsupervised task that still has a number of open issues, starting with the definition of motif. In this paper, we revise the notion of motif support, characterizing it as the number of patterns or repetitions that define a motif. We then propose GENMOTIF, a genetic algorithm to discover motifs with support which, at the same time, is flexible enough to accommodate other motif specifications and task characteristics. GENMOTIF is an anytime algorithm that easily adapts to many situations: searching in a range of segment lengths, applying uniform scaling, dealing with multiple dimensions, using different similarity and grouping criteria, etc. GENMOTIF is also parameter-friendly: it has only two intuitive parameters which, if set within reasonable bounds, do not substantially affect its performance. We demonstrate the value of our approach in a number of synthetic and real-world settings, considering traffic volume measurements, accelerometer signals, and telephone call records.", "targets": "A Genetic Algorithm to Discover Flexible Motifs with Support"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5d4b479a63a34b79b1baba4673fbef1d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a new fast word embedding technique using hash functions. The method is a derandomization of a new type of random projections: By disregarding the classic constraint used in designing random projections (i.e., preserving pairwise distances in a particular normed space), our solution exploits extremely sparse non-negative random projections. Our experiments show that the proposed method can achieve competitive results, comparable to neural embedding learning techniques, but with only a fraction of the computational complexity of these methods.
While the proposed derandomization enhances the computational and space complexity of our method, the possibility of applying weighting methods such as positive pointwise mutual information (PPMI) to our models after their construction (and at a reduced dimensionality) imparts a high discriminatory power to the resulting embeddings. Obviously, this method comes with other known benefits of random projection-based techniques such as ease of update.", "targets": "Sketching Word Vectors Through Hashing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-16cdd1eceee14979aa6228abb07ae33c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A long-standing dream of Artificial Intelligence (AI) has been to enrich computer programs with commonsense knowledge, enabling machines to reason about our world. This paper offers a new practical insight towards the automation of commonsense reasoning with first-order logic (FOL) ontologies. We propose a new black-box testing methodology of FOL SUMO-based ontologies by exploiting WordNet and its mapping into SUMO. Our proposal includes a method for the (semi-)automatic creation of a very large set of tests and a procedure for its automated evaluation by using automated theorem provers (ATPs). Applying our testing proposal, we are able to successfully evaluate a) the competency of several translations of SUMO into FOL and b) the performance of various ATPs. In addition, we are also able to evaluate the resulting set of tests according to different quality criteria.", "targets": "Black-box Testing of First-Order Logic Ontologies Using WordNet"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ebe584e564764fb193cd3e2c391eabb6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Fuzzy controllers are known to serve as efficient and interpretable system controllers for continuous state and action spaces. To date, these controllers have been constructed by hand, or automatically trained either on expert-generated problem-specific cost functions or by incorporating detailed knowledge about the optimal control strategy. Neither requirement for automatic training processes is given in the majority of real-world reinforcement learning (RL) problems. We introduce a new particle swarm reinforcement learning (PSRL) approach which is capable of constructing fuzzy RL policies solely by training parameters on world models produced from randomly generated samples of the real system. This approach relates self-organizing fuzzy controllers to model-based RL for the first time. PSRL can be used straightforwardly on any RL problem, which is demonstrated on three standard RL benchmarks: mountain car, cart pole balancing and cart pole swing up. Our experiments yielded high-performing and well-interpretable fuzzy policies.", "targets": "Particle Swarm Optimization for Generating Fuzzy Reinforcement Learning Policies"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1d85de2d3e754986b7cdbb53755fd065", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Graph models are relevant in many fields, such as distributed computing, intelligent tutoring systems or social network analysis. In many cases, such models need to take changes in the graph structure into account, i.e. a varying number of nodes or edges. Predicting such changes within graphs can be expected to yield important insight with respect to the underlying dynamics, e.g. with respect to user behaviour. However, predictive techniques in the past have almost exclusively focused on single edges or nodes. In this contribution, we attempt to predict the future state of a graph as a whole. We propose to phrase time series prediction as a regression problem and apply dissimilarity- or kernel-based regression techniques, such as 1-nearest neighbor, kernel regression and Gaussian process regression, which can be applied to graphs via graph kernels. The output of the regression is a point embedded in a pseudo-Euclidean space, which can be analyzed using subsequent dissimilarity- or kernel-based processing methods. We discuss strategies to speed up Gaussian process regression from cubic to linear time and evaluate our approach on two well-established theoretical models of graph evolution as well as two real data sets from the domain of intelligent tutoring systems. We find that simple regression methods, such as kernel regression, are sufficient to capture the dynamics in the theoretical models, but that Gaussian process regression significantly reduces the prediction error for real-world data.", "targets": "Time Series Prediction for Graphs in Kernel and Dissimilarity Spaces"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c795ff55476e44329d45befef535c5f3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We describe a neural network model that jointly learns distributed representations of texts and knowledge base (KB) entities. Given a text in the KB, we train our proposed model to predict entities that are relevant to the text. Our model is designed to be generic with the ability to address various NLP tasks with ease. We train the model using a large corpus of texts and their entity annotations extracted from Wikipedia. We evaluated the model on three important NLP tasks (i.e., sentence textual similarity, entity linking, and factoid question answering) involving both unsupervised and supervised settings. As a result, we achieved state-of-the-art results on all three of these tasks.", "targets": "Learning Distributed Representations of Texts and Entities from Knowledge Base"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6b134c916bdf4e318074c78c144691c2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Modeling the purposeful behavior of imperfect agents from a small number of observations is a challenging task. When restricted to the single-agent decision-theoretic setting, inverse optimal control techniques assume that observed behavior is an approximately optimal solution to an unknown decision problem. These techniques learn a utility function that explains the example behavior and can then be used to accurately predict or imitate future behavior in similar observed or unobserved situations.
In this work, we consider similar tasks in competitive and cooperative multi-agent domains. Here, unlike single-agent settings, a player cannot myopically maximize its reward; it must speculate on how the other agents may act to influence the game\u2019s outcome. Employing the game-theoretic notion of regret and the principle of maximum entropy, we introduce a technique for predicting and generalizing behavior.", "targets": "Computational Rationalization: The Inverse Equilibrium Problem"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dc07ef32d16d487a89946918c7cbd474", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Learning to predict multi-label outputs is challenging, but in many problems there is a natural metric on the outputs that can be used to improve predictions. In this paper we develop a loss function for multi-label learning, based on the Wasserstein distance. The Wasserstein distance provides a natural notion of dissimilarity for probability measures. Although optimizing with respect to the exact Wasserstein distance is costly, recent work has described a regularized approximation that is efficiently computed. We describe efficient learning algorithms based on this regularization, extending the Wasserstein loss from probability measures to unnormalized measures. We also describe a statistical learning bound for the loss and show connections with the total variation norm and the Jaccard index. The Wasserstein loss can encourage smoothness of the predictions with respect to a chosen metric on the output space. We demonstrate this property on a real-data tag prediction problem, using the Yahoo Flickr Creative Commons dataset, achieving superior performance over a baseline that does not use the metric.", "targets": "Learning with a Wasserstein Loss"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f305dc700cc94ce69b72d287221eac91", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Community-based question answering platforms have attracted substantial numbers of users to share knowledge and learn from each other. With the rapid enlargement of CQA platforms, quantities of overlapping questions emerge, which confounds users trying to select a proper reference. It is urgent for us to adopt effective automated algorithms to reuse historical questions with corresponding answers. In this paper we focus on the problem of question retrieval, which aims to match historical questions that are relevant or semantically equivalent to resolve one\u2019s query directly. The challenges in this task are the lexical gaps between questions caused by word ambiguity and word mismatch. Furthermore, limited words in queried sentences cause sparsity of word features. To alleviate these challenges, we propose a novel framework named HNIL which encodes not only the question contents but also the asker\u2019s social interactions to enhance the question embedding performance. More specifically, we apply a random walk based learning method with a recurrent neural network to match the similarities between the asker\u2019s question and historical questions proposed by other users.
Extensive experiments on a large-scale dataset from a real-world CQA site, Quora, show that employing the heterogeneous social network information outperforms the other state-of-the-art solutions in this task.", "targets": "Question Retrieval for Community-based Question Answering via Heterogeneous Network Integration Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1f1b6c6ee01a4da0b36d980ff689834a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we study the impact of selection methods in the context of on-line on-board distributed evolutionary algorithms. We propose a variant of the mEDEA algorithm in which we add a selection operator, and we apply it in a task-driven scenario. We evaluate four selection methods that induce different intensities of selection pressure in a multi-robot navigation with obstacle avoidance task and a collective foraging task. Experiments show that a small intensity of selection pressure is sufficient to rapidly obtain good performances on the tasks at hand. We introduce different measures to compare the selection methods, and show that the higher the selection pressure, the better the performances obtained, especially for the more challenging food foraging task.", "targets": "Comparison of Selection Methods in On-line Distributed Evolutionary Robotics"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a638eb7baaea4513af21d6f86303c3e5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The term \u201caffordance\u201d denotes the behavioral meaning of objects. We propose a cognitive architecture for the detection of affordances in the visual modality. This model is based on the internal simulation of movement sequences. For each movement step, the resulting sensory state is predicted by a forward model, which in turn triggers the generation of a new (simulated) motor command by an inverse model. Thus, a series of mental images in the sensory and in the motor domain is evoked. Starting from a real sensory state, a large number of such sequences is simulated in parallel. Final affordance detection is based on the generated motor commands. We apply this model to a real-world mobile robot which is faced with obstacle arrangements, some of which are passable (corridor) and some of which are not (dead ends). The robot\u2019s task is to detect the right affordance (\u201cpass-through-able\u201d or \u201cnon-pass-through-able\u201d). The required internal models are acquired in a hierarchical training process. Afterwards, the robotic agent is able to distinguish reliably between corridors and dead ends. This real-world result enhances the validity of the proposed mental simulation approach. In addition, we compare several key factors in the simulation process regarding performance and efficiency.", "targets": "Detecting Affordances by Visuomotor Simulation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5245f0e1ac9d4176af7d67b1d3cfae02", "definition": "In this task, you are given a part of an article.
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Multi-step temporal-difference (TD) learning, where the update targets contain information from multiple time steps ahead, is one of the most popular forms of TD learning for linear function approximation. The reason is that multi-step methods often yield substantially better performance than their single-step counterparts, due to a lower bias of the update targets. For non-linear function approximation, however, single-step methods appear to be the norm. Part of the reason could be that on many domains the popular multi-step methods TD(\u03bb) and Sarsa(\u03bb) do not perform well when combined with non-linear function approximation. In particular, they are very susceptible to divergence of value estimates. In this paper, we identify the reason behind this. Furthermore, based on our analysis, we propose a new multi-step TD method for non-linear function approximation that addresses this issue. We confirm the effectiveness of our method using two benchmark tasks with neural networks as function approximation.", "targets": "Effective Multi-step Temporal-Difference Learning for Non-Linear Function Approximation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9e6a278e2f024853bbd9c857fb510462", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "As probabilistic systems gain popularity and are coming into wider use, the need for a mechanism that explains the system's findings and recommendations becomes more critical. The system will also need a mechanism for ordering competing explanations. We examine two representative approaches to explanation in the literature, one due to Gardenfors and one due to Pearl, and show that both suffer from significant problems. We propose an approach to defining a notion of \"better explanation\" that combines some of the features of both together with more recent work by Pearl and others on causality.", "targets": "Defining Explanation in Probabilistic Systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8192c2649fcd45ecaba3b14f835a36ba", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We address the novel problem of automatically generating quiz-style knowledge questions from a knowledge graph such as DBpedia. Questions of this kind have ample applications, for instance, to educate users about or to evaluate their knowledge in a specific domain. To solve the problem, we propose an end-to-end approach. The approach first selects a named entity from the knowledge graph as an answer. It then generates a structured triple-pattern query, which yields the answer as its sole result. If a multiple-choice question is desired, the approach selects alternative answer options. Finally, our approach uses a template-based method to verbalize the structured query and yield a natural language question. A key challenge is estimating how difficult the generated question is to human users. To do this, we make use of historical data from the Jeopardy! quiz show and a semantically annotated Web-scale document collection, engineer suitable features, and train a logistic regression classifier to predict question difficulty.
Experiments demonstrate the viability of our overall approach.", "targets": "Knowledge Questions from Knowledge Graphs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1ab40d038b454b2c895d3245320e6280", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A crucial aspect of a knowledge base population system that extracts new facts from text corpora is the generation of training data for its relation extractors. In this paper, we present a method that maximizes the effectiveness of newly trained relation extractors at a minimal annotation cost. Manual labeling can be significantly reduced by Distant Supervision, which is a method to construct training data automatically by aligning a large text corpus with an existing knowledge base of known facts. For example, all sentences mentioning both \u2018Barack Obama\u2019 and \u2018US\u2019 may serve as positive training instances for the relation born in(subject,object). However, distant supervision typically results in a highly noisy training set: many training sentences do not really express the intended relation. We propose to combine distant supervision with minimal manual supervision in a technique called feature labeling, to eliminate noise from the large and noisy initial training set, resulting in a significant increase in precision. We further improve on this approach by introducing the Semantic Label Propagation method, which uses the similarity between low-dimensional representations of candidate training instances, to extend the training set in order to increase recall while maintaining high precision. Our proposed strategy for generating training data is studied and evaluated on an established test collection designed for knowledge base population tasks. The experimental results show that the Semantic Label Propagation strategy leads to substantial performance gains when compared to existing approaches, while requiring an almost negligible manual annotation effort.", "targets": "Knowledge Base Population using Semantic Label Propagation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c27d98351c3e4c499729bae9a76d6ff7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Convolutional neural networks (CNNs) with convolutional and pooling operations along the frequency axis have been proposed to attain invariance to frequency shifts of features. However, this is inappropriate with regard to the fact that acoustic features vary in frequency. In this paper, we contend that convolution along the time axis is more effective. We also propose the addition of an intermap pooling (IMP) layer to deep CNNs. In this layer, filters in each group extract common but spectrally variant features; the layer then pools the feature maps of each group. As a result, the proposed IMP CNN can achieve insensitivity to spectral variations characteristic of different speakers and utterances. The effectiveness of the IMP CNN architecture is demonstrated on several LVCSR tasks.
Even without speaker adaptation techniques, the architecture achieved a WER of 12.7% on the SWB part of the Hub5\u20192000 evaluation test set, which is competitive with other state-of-the-art methods.", "targets": "Deep CNNs along the Time Axis with Intermap Pooling for Robustness to Spectral Variations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5121e28562ab4989a420c41f6a45f12d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, the performance of two dependency parsers, namely Stanford and Minipar, on biomedical texts has been reported. The performance of the parsers to assign dependencies between two biomedical concepts that are already proved to be connected is not satisfying. Both Stanford and Minipar, being statistical parsers, fail to assign dependency relation between two connected concepts if they are distant by at least one clause. Minipar\u2019s performance, in terms of precision, recall and the F-Score of the attachment score (e.g., correctly identified head in a dependency), to parse biomedical text is also measured taking the Stanford\u2019s as a gold standard. The results suggest that Minipar is not suitable yet to parse biomedical texts. In addition, a qualitative investigation reveals that the difference between working principles of the parsers also play a vital role for Minipar\u2019s degraded performance.", "targets": "Performance of Stanford and Minipar Parser on Biomedical Texts"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2b95055baf1b49809b83d962f3e6203a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Class imbalance is one of the challenging problems for machine learning in many real-world applications, such as coal and gas burst accident monitoring: the burst premonition data is extreme smaller than the normal data, however, which is the highlight we truly focus on. Cost-sensitive adjustment approach is a typical algorithm-level method resisting the data set imbalance. For SVMs classifier, which is modified to incorporate varying penalty parameter(C) for each of considered groups of examples. However, the C value is determined empirically, or is calculated according to the evaluation metric, which need to be computed iteratively and time consuming. This paper presents a novel cost-sensitive SVM method whose penalty parameter C optimized on the basis of cluster probability density function(PDF) and the cluster PDF is estimated only according to similarity matrix and some predefined hyper-parameters. Experimental results on various standard benchmark data sets and real-world data with different ratios of imbalance show that the proposed method is effective in comparison with commonly used cost-sensitive techniques.", "targets": "Optimizing Cost-Sensitive SVM for Imbalanced Data :Connecting Cluster to Classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-32d088c0d40442a19339dfc29599dcfc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Variational autoencoders (VAE) often use Gaussian or category distribution to model the inference process. 
This puts a limit on variational learning because this simplified assumption does not match the true posterior distribution, which is usually much more sophisticated. To break this limitation and apply arbitrary parametric distribution during inference, this paper derives a semi-continuous latent representation, which approximates a continuous density up to a prescribed precision, and is much easier to analyze than its continuous counterpart because it is fundamentally discrete. We showcase the proposition by applying polynomial exponential family distributions as the posterior, which are universal probability density function generators. Our experimental results show consistent improvements over commonly used VAE models.", "targets": "Coarse Grained Exponential Variational Autoencoders"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c9f2c8254be94311ba2501637094ff7e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Leaf vein forms the basis of leaf characterization and classification. Different species have different leaf vein patterns. It is seen that leaf vein segmentation will help in maintaining a record of all the leaves according to their specific pattern of veins thus provide an effective way to retrieve and store information regarding various plant species in database as well as provide an effective means to characterize plants on the basis of leaf vein structure which is unique for every species. The algorithm proposes a new way of segmentation of leaf veins with the use of Odd Gabor filters and the use of morphological operations for producing a better output. The Odd Gabor filter gives an efficient output and is robust and scalable as compared with the existing techniques as it detects the fine fiber like veins present in leaves much more efficiently.", "targets": "Leaf vein segmentation using Odd Gabor filters and morphological operations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1c7e702e917640c6bf28c04eefb38ab6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Or\u2019s of And\u2019s (OA) models are comprised of a small number of disjunctions of conjunctions, also called disjunctive normal form. An example of an OA model is as follows: If (x1 = \u2018blue\u2019 AND x2 = \u2018middle\u2019) OR (x1 = \u2018yellow\u2019), then predict Y = 1, else predict Y = 0. Or\u2019s of And\u2019s models have the advantage of being interpretable to human experts, since they are a set of conditions that concisely capture the characteristics of a specific subset of data. We present two optimization-based machine learning frameworks for constructing OA models, Optimized OA (OOA) and its faster version, Optimized OA with Approximations (OOAx). We prove theoretical bounds on the properties of patterns in an OA model. We build OA models as a diagnostic screening tool for obstructive sleep apnea, that achieves high accuracy with a substantial gain in interpretability over other methods.", "targets": "Learning Optimized Or\u2019s of And\u2019s"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-01817aedc6ee49248f51ddff64733e27", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "This paper presents a new deterministic approximation technique in Bayesian networks. This method, \"Expectation Propagation,\" unifies two previous techniques: assumed-density filtering, an extension of the Kalman filter, and loopy belief propagation, an extension of belief propagation in Bayesian networks. Loopy belief propagation, because it propagates exact belief states, is useful for a limited class of belief networks, such as those which are purely discrete. Expectation Propagation approximates the belief states by only retaining expectations, such as mean and variance, and iterates until these expectations are consistent throughout the network. This makes it applicable to hybrid networks with discrete and continuous nodes. Experiments with Gaussian mixture models show Expectation Propagation to be convincingly better than methods with similar computational cost: Laplace's method, variational Bayes, and Monte Carlo. Expectation Propagation also provides an efficient algorithm for training Bayes point machine classifiers.", "targets": "Expectation Propagation for Approximate Bayesian Inference"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0d9dd84414b347b0a5aa7a450b7963d7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Obtaining models that capture imaging markers relevant for disease progression and treatment monitoring is challenging. Models are typically based on large amounts of data with annotated examples of known markers aiming at automating detection. High annotation effort and the limitation to a vocabulary of known markers limit the power of such approaches. Here, we perform unsupervised learning to identify anomalies in imaging data as candidates for markers. We propose AnoGAN, a deep convolutional generative adversarial network to learn a manifold of normal anatomical variability, accompanying a novel anomaly scoring scheme based on the mapping from image space to a latent space. Applied to new data, the model labels anomalies, and scores image patches indicating their fit into the learned distribution. Results on optical coherence tomography images of the retina demonstrate that the approach correctly identifies anomalous images, such as images containing retinal fluid or hyperreflective foci.", "targets": "Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-34302f3dd4244fcb88fa8189f1718ee0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A major investment made by a telecom operator goes into the infrastructure and its maintenance, while business revenues are proportional to how big and good the customer base is. We present a data-driven analytic strategy based on combinatorial optimization and analysis of historical data. The data cover historical mobility of the users in one region of Sweden during a week.
Applying the proposed method to the case study, we have identified the optimal proportion of geo-demographic segments in the customer base, developed a functionality to assess the potential of a planned marketing campaign, and explored the problem of an optimal number and types of the geo-demographic segments to target through marketing campaigns. With the help of fuzzy logic, the conclusions of data analysis are automatically translated into comprehensible recommendations in a natural language.", "targets": "Recommendations for Marketing Campaigns in Telecommunication Business based on the footprint analysis"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-27fd20d529db45b98b4e9828ddca9c30", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This work presents and analyzes three convolutional neural network (CNN) models for efficient pixelwise classification of images. When using convolutional neural networks to classify single pixels in patches of a whole image, a lot of redundant computations are carried out when using sliding window networks. This set of new architectures solve this issue by either removing redundant computations or using fully convolutional architectures that inherently predict many pixels at once. The implementations of the three models are accessible through a new utility on top of the Caffe library. The utility provides support for a wide range of image input and output formats, pre-processing parameters and methods to equalize the label histogram during training. The Caffe library has been extended by new layers and a new backend for availability on a wider range of hardware such as CPUs and GPUs through OpenCL. On AMD GPUs, speedups of 54\u00d7 (SK-Net), 437\u00d7 (U-Net) and 320\u00d7 (USKNet) have been observed, taking the SK equivalent SW (sliding window) network as the baseline. The label throughput is up to one megapixel per second. The analyzed neural networks have distinctive characteristics that apply during training or processing, and not every data set is suitable to every architecture. The quality of the predictions is assessed on two neural tissue data sets, of which one is the ISBI 2012 challenge data set. Two different loss functions, Malis loss and Softmax loss, were used during training. The whole pipeline, consisting of models, interface and modified Caffe library, is available as Open Source software under the working title Project Greentea.", "targets": "Efficient Convolutional Neural Networks for Pixelwise Classification on Heterogeneous Hardware Systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b22cfce82d404459a340a43df53f8987", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Different notions of equivalence, such as the prominent notions of strong and uniform equivalence, have been studied in Answer-Set Programming, mainly for the purpose of identifying programs that can serve as substitutes without altering the semantics, for instance in program optimization. Such semantic comparisons are usually characterized by various selections of models in the logic of Hereand-There (HT). For uniform equivalence however, correct characterizations in terms of HT-models can only be obtained for finite theories, respectively programs. 
In this article, we show that a selection of countermodels in HT captures uniform equivalence also for infinite theories. This result is turned into coherent characterizations of the different notions of equivalence by countermodels, as well as by a mixture of HT-models and countermodels (so-called equivalence interpretations). Moreover, we generalize the so-called notion of relativized hyperequivalence for programs to propositional theories, and apply the same methodology in order to obtain a semantic characterization which is amenable to infinite settings. This allows for a lifting of the results to first-order theories under a very general semantics given in terms of a quantified version of HT. We thus obtain a general framework for the study of various notions of equivalence for theories under answer-set semantics. Moreover, we prove an expedient property that allows for a simplified treatment of extended signatures, and provide further results for non-ground logic programs. In particular, uniform equivalence coincides under open and ordinary answer-set semantics, and for finite non-ground programs under these semantics, also the usual characterization of uniform equivalence in terms of maximal and total HT-models of the grounding is correct, even for infinite domains, when corresponding ground programs are infinite. To appear in Theory and Practice of Logic Programming (TPLP).", "targets": "A General Framework for Equivalences in Answer-Set Programming by Countermodels in the Logic of Here-and-There \u2217"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1d7e8f10a54d4ca6a907bf71824a1d2c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Language is a social phenomenon and inherent to its social nature is that it is constantly changing. Recently, a surge of interest can be observed within the computational linguistics (CL) community in the social dimension of language. In this article we present a survey of the emerging field of \u2018Computational Sociolinguistics\u2019 that reflects this increased interest. We aim to provide a comprehensive overview of CL research on sociolinguistic themes, featuring topics such as the relation between language and social identity, language use in social interaction and multilingual communication. Moreover, we demonstrate the potential for synergy between the research communities involved, by showing how the large-scale data-driven methods that are widely used in CL can complement existing sociolinguistic studies, and how sociolinguistics can inform and challenge the methods and assumptions employed in CL studies. We hope to convey the possible benefits of a closer collaboration between the two communities and conclude with a discussion of open challenges.", "targets": "Computational Sociolinguistics: A Survey"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6e76462cf49a4c429a3a40ddb443c3a0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Summarization of large texts is still an open problem in language processing. In this work we develop a full fledged pipeline to generate summaries of news articles using the Abstract Meaning Representation(AMR). 
We first generate the AMR graphs of stories then extract summary graphs from the story graphs and finally generate sentences from the summary graph. For extracting summary AMRs from the story AMRs we use a two-step process. First, we find important sentences from the text and then extract the summary AMRs from those selected sentences. We outperform the previous methods using AMR for summarization by more than 3 ROUGE-1 points. On the CNN-Dailymail corpus we achieve results competitive with the strong lead-3 baseline up to the summary graph extraction step.", "targets": "Text Summarization using Abstract Meaning Representation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-47270aee5bd94008ba9a52b6f56c1e5c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We address the problem of determining correspondences between two images in agreement with a geometric model such as an affine or thin-plate-spline transformation, and estimating its parameters. The contributions of this work are three-fold. First, we propose a convolutional neural network architecture for geometric matching. The architecture is based on three main components that mimic the standard steps of feature extraction, matching and simultaneous inlier detection and model parameter estimation, while being trainable end-to-end. Second, we demonstrate that the network parameters can be trained from synthetically generated imagery without the need for manual annotation and that our matching layer significantly increases generalization capabilities to never seen before images. Finally, we show that the same model can perform both instance-level and category-level matching giving state-of-the-art results on the challenging Proposal Flow dataset.", "targets": "Convolutional neural network architecture for geometric matching"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bbdd07964d924be8af8c7522920b531e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Deep learning tools have recently gained much attention in applied machine learning. However such tools for regression and classification do not allow us to capture model uncertainty. Bayesian models offer us the ability to reason about model uncertainty, but usually come with a prohibitive computational cost. We show that dropout in multilayer perceptron models (MLPs) can be interpreted as a Bayesian approximation. Results are obtained for modelling uncertainty for dropout MLP models \u2013 extracting information that has been thrown away so far, from existing models. This mitigates the problem of representing uncertainty in deep learning without sacrificing computational performance or test accuracy. We perform an exploratory study of the dropout uncertainty properties. Various network architectures and non-linearities are assessed on tasks of extrapolation, interpolation, and classification.
We show that model uncertainty is important for classification tasks using MNIST as an example, and use the model\u2019s uncertainty in a Bayesian pipeline, with deep reinforcement learning as a concrete example.", "targets": "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-43b02fb21dab4d65918c9b81894ebbae", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper has two parts. In the first part we discuss word embeddings. We discuss the need for them, some of the methods to create them, and some of their interesting properties. We also compare them to image embeddings and see how word embedding and image embedding can be combined to perform different tasks. In the second part we implement a convolutional neural network trained on top of pre-trained word vectors. The network is used for several sentence-level classification tasks, and achieves state-of-the-art (or comparable) results, demonstrating the great power of pre-trained word embeddings over random ones.", "targets": "Word Embeddings and Their Use In Sentence Classification Tasks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e548cb4952164568b081a99c6af89872", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Asynchronous parallel implementations for stochastic optimization have received huge successes in theory and practice recently. Asynchronous implementations with lock-free are more efficient than the one with writing or reading lock. In this paper, we focus on a composite objective function consisting of a smooth convex function f and a block separable convex function, which widely exists in machine learning and computer vision. We propose an asynchronous stochastic block coordinate descent algorithm with the accelerated technology of variance reduction (AsySBCDVR), which are with lock-free in the implementation and analysis. AsySBCDVR is particularly important because it can scale well with the sample size and dimension simultaneously. We prove that AsySBCDVR achieves a linear convergence rate when the function f is with the optimal strong convexity property, and a sublinear rate when f is with the general convexity. More importantly, a near-linear speedup on a parallel system with shared memory can be obtained.", "targets": "Asynchronous Stochastic Block Coordinate Descent with Variance Reduction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-12c1573e1f1048db92073077f7aa4f12", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider a general framework of online learning with expert advice where the regret is defined with respect to a competitor class defined by a weighted automaton over sequences of experts. Our framework covers several problems previously studied, in particular that of competing against k-shifting experts. We give a series of algorithms for this problem, including an automata-based algorithm extending weighted-majority and more efficient algorithms based on the notion of failure transitions.
We further present efficient algorithms based on a compact approximation of the competitor automaton, in particular efficient n-gram models obtained by minimizing the R\u00e9nyi divergence, and present an extensive study of the approximation properties of such models. We also extend our algorithms and results to the framework of sleeping experts. Finally, we describe the extension of our approximation methods to online convex optimization and a general mirror descent setting.", "targets": "Online Learning against Expert Automata"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-aab1c93ec6ec473fb80ea5f345f356b9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, the framework of kernel machines with two layers is introduced, generalizing classical kernel methods. The new learning methodology provides a formal connection between computational architectures with multiple layers and the theme of kernel learning in standard regularization methods. First, a representer theorem for two-layer networks is presented, showing that finite linear combinations of kernels on each layer are optimal architectures whenever the corresponding functions solve suitable variational problems in reproducing kernel Hilbert spaces (RKHS). The input-output map expressed by these architectures turns out to be equivalent to a suitable single-layer kernel machine in which the kernel function is also learned from the data. Recently, the so-called multiple kernel learning methods have attracted considerable attention in the machine learning literature. In this paper, multiple kernel learning methods are shown to be specific cases of kernel machines with two layers in which the second layer is linear. Finally, a simple and effective multiple kernel learning method called RLS2 (regularized least squares with two layers) is introduced, and its performance on several learning problems is extensively analyzed. An open source MATLAB toolbox to train and validate RLS2 models with a Graphic User Interface is available.", "targets": "Kernel machines with two layers and multiple kernel learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-88c2a6a10e3b47868898c63d2022ba8b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study two mixed robust/average-case submodular partitioning problems that we collectively call Submodular Partitioning. These problems generalize both purely robust instances of the problem (namely max-min submodular fair allocation (SFA) Golovin (2005) and min-max submodular load balancing (SLB) Svitkina and Fleischer (2008)) and also generalize average-case instances (that is the submodular welfare problem (SWP) Vondr\u00e1k (2008) and submodular multiway partition (SMP) Chekuri and Ene (2011a)). While the robust versions have been studied in the theory community Goemans et al. (2009); Golovin (2005); Khot and Ponnuswami (2007); Svitkina and Fleischer (2008); Vondr\u00e1k (2008), existing work has focused on tight approximation guarantees, and the resultant algorithms are not, in general, scalable to very large real-world applications. This is in contrast to the average case, where most of the algorithms are scalable.
In the present paper, we bridge this gap by proposing several new algorithms (including those based on greedy, majorization-minimization, minorization-maximization, and relaxation algorithms) that not only scale to large sizes but that also achieve theoretical approximation guarantees close to the state-of-the-art, and in some cases achieve new tight bounds. We also provide new scalable algorithms that apply to additive combinations of the robust and average-case extreme objectives. We show that these problems have many applications in machine learning (ML). This includes: 1) data partitioning and load balancing for distributed machine learning algorithms on parallel machines; 2) data clustering; and 3) multi-label image segmentation with (only) Boolean submodular functions via pixel partitioning. We empirically demonstrate the efficacy of our algorithms on real-world problems involving data partitioning for distributed optimization of standard machine learning objectives (including both convex and deep neural network objectives), and also on purely unsupervised (i.e., no supervised or semi-supervised learning, and no interactive segmentation) image segmentation.", "targets": "Mixed Robust/Average Submodular Partitioning: Fast Algorithms, Guarantees, and Applications to Parallel Machine Learning and Multi-Label Image Segmentation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8d14e0d06a904da689ef62f1dbef3bd0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We show that any model trained by a stochastic gradient method with few iterations has vanishing generalization error. We prove this by showing the method is algorithmically stable in the sense of Bousquet and Elisseeff. Our analysis only employs elementary tools from convex and continuous optimization. Our results apply to both convex and non-convex optimization under standard Lipschitz and smoothness assumptions. Applying our results to the convex case, we provide new explanations for why multiple epochs of stochastic gradient descent generalize well in practice. In the nonconvex case, we provide a new interpretation of common practices in neural networks, and provide a formal rationale for stability-promoting mechanisms in training large, deep models. Conceptually, our findings underscore the importance of reducing training time beyond its obvious benefit.", "targets": "Stability of stochastic gradient descent"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fb0078bad92d4bfbbf76ba1c70e6d946", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Tasks such as record linkage and multi-target tracking, which involve reconstructing the set of objects that underlie some observed data, are particularly challenging for probabilistic inference. Recent work has achieved efficient and accurate inference on such problems using Markov chain Monte Carlo (MCMC) techniques with customized proposal distributions. Currently, implementing such a system requires coding MCMC state representations and acceptance probability calculations that are specific to a particular application.
An alternative approach, which we pursue in this paper, is to use a general-purpose probabilistic modeling language (such as BLOG) and a generic Metropolis-Hastings MCMC algorithm that supports user-supplied proposal distributions. Our algorithm gains flexibility by using MCMC states that are only partial descriptions of possible worlds; we provide conditions under which MCMC over partial worlds yields correct answers to queries. We also show how to use a context-specific Bayes net to identify the factors in the acceptance probability that need to be computed for a given proposed move. Experimental results on a citation matching task show that our general-purpose MCMC engine compares favorably with an application-specific system.", "targets": "General-Purpose MCMC Inference over Relational Structures"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2aff2ff7839143f99b70c95935f942bf", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recommender systems often use latent features to explain the behaviors of users and capture the properties of items. As users interact with different items over time, user and item features can influence each other, evolve and co-evolve over time. To accurately capture the fine grained nonlinear coevolution of these features, we propose a recurrent coevolutionary feature embedding process model, which combines recurrent neural network (RNN) with a multidimensional point process model. The RNN learns a nonlinear representation of user and item features which take into account mutual influence between user and item features, and the feature evolution over time. We also develop an efficient stochastic gradient algorithm for learning the model parameters, which can readily scale up to millions of events. Experiments on diverse real-world datasets demonstrate significant improvements in user behavior prediction compared to state-of-the-arts.", "targets": "Recurrent Coevolutionary Feature Embedding Processes for Recommendation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0ac78de116394329b250583efd7a9572", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Multiple automakers have in development or in production automated driving systems (ADS) that offer freeway-pilot functions. This type of ADS is typically limited to restricted-access freeways only, that is, the transition from manual to automated modes takes place only after the ramp merging process is completed manually. One major challenge to extend the automation to ramp merging is that the automated vehicle needs to incorporate and optimize long-term objectives (e.g. successful and smooth merge) when near-term actions must be safely executed. Moreover, the merging process involves interactions with other vehicles whose behaviors are sometimes hard to predict but may influence the merging vehicle\u2019s optimal actions. To tackle such a complicated control problem, we propose to apply Deep Reinforcement Learning (DRL) techniques for finding an optimal driving policy by maximizing the long-term reward in an interactive environment. 
Specifically, we apply a Long Short-Term Memory (LSTM) architecture to model the interactive environment, from which an internal state containing historical driving information is conveyed to a Deep Q-Network (DQN). The DQN is used to approximate the Q-function, which takes the internal state as input and generates Q-values as output for action selection. With this DRL architecture, the historical impact of interactive environment on the long-term reward can be captured and taken into account for deciding the optimal control policy. The proposed architecture has the potential to be extended and applied to other autonomous driving scenarios such as driving through a complex intersection or changing lanes under varying traffic flow conditions.", "targets": "Formulation of Deep Reinforcement Learning Architecture Toward Autonomous Driving for On-Ramp Merge"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f55fddc273664cf78e18bdc6c39d8dfe", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Stochastic variational inference (SVI) lets us scale up Bayesian computation to massive data. It uses stochastic optimization to fit a variational distribution, following easy-to-compute noisy natural gradients. As with most traditional stochastic optimization methods, SVI takes precautions to use unbiased stochastic gradients whose expectations are equal to the true gradients. In this paper, we explore the idea of following biased stochastic gradients in SVI. Our method replaces the natural gradient with a similarly constructed vector that uses a fixed-window moving average of some of its previous terms. We will demonstrate the many advantages of this technique. First, its computational cost is the same as for SVI and storage requirements only multiply by a constant factor. Second, it enjoys significant variance reduction over the unbiased estimates, smaller bias than averaged gradients, and leads to smaller mean-squared error against the full gradient. We test our method on latent Dirichlet allocation with three large corpora.", "targets": "Smoothed Gradients for Stochastic Variational Inference"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-29e5d980cebf45d5bb19b91127d833fc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The alternating direction method of multipliers (ADMM) has been recognized as a versatile approach for solving modern large-scale machine learning and signal processing problems efficiently. When the data size and/or the problem dimension is large, a distributed version of ADMM can be used, which is capable of distributing the computation load and the data set to a network of computing nodes. Unfortunately, a direct synchronous implementation of such algorithm does not scale well with the problem size, as the algorithm speed is limited by the slowest computing nodes. To address this issue, in a companion paper, we have proposed an asynchronous distributed ADMM (AD-ADMM) and studied its worst-case convergence conditions. In this paper, we further the study by characterizing the conditions under which the AD-ADMM achieves linear convergence.
Our conditions as well as the resulting linear rates reveal the impact that various algorithm parameters, network delay and network size have on the algorithm performance. To demonstrate the superior time efficiency of the proposed AD-ADMM, we test the AD-ADMM on a high-performance computer cluster by solving a large-scale logistic regression problem.", "targets": "Asynchronous Distributed ADMM for Large-Scale Optimization- Part II: Linear Convergence Analysis and Numerical Performance"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a4492fb4eb824c2498cafec18d704248", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In many sequential decision-making problems we may want to manage risk by minimizing some measure of variability in costs in addition to minimizing a standard criterion. Conditional value-at-risk (CVaR) is a relatively new risk measure that addresses some of the shortcomings of the well-known variance-related risk measures, and because of its computational efficiencies has gained popularity in finance and operations research. In this paper, we consider the mean-CVaR optimization problem in MDPs. We first derive a formula for computing the gradient of this risk-sensitive objective function. We then devise policy gradient and actor-critic algorithms that each uses a specific method to estimate this gradient and updates the policy parameters in the descent direction. We establish the convergence of our algorithms to locally risk-sensitive optimal policies. Finally, we demonstrate the usefulness of our algorithms in an optimal stopping problem.", "targets": "Algorithms for CVaR Optimization in MDPs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ffacb99b228e4e36af6bb36a7bff45a3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Despite significant developments in Proof Theory, surprisingly little attention has been devoted to the concept of proof verifier. In particular, mathematical community may be interested in studying different types of proof verifiers (people, programs, oracles, communities, superintelligences, etc.) as mathematical objects, their properties, their powers and limitations (particularly in human mathematicians), minimum and maximum complexity, as well as self-verification and self-reference issues in verifiers. We propose an initial classification system for verifiers and provide some rudimentary analysis of solved and open problems in this important domain.
Our main contribution is a formal introduction of the notion of unverifiability, for which the paper could serve as a general citation in domains of theorem proving, software and AI verification.", "targets": "Verifier Theory from Axioms to Unverifiability of Mathematical Proofs, Software and AI"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2b524933ceea4dc18416de6403a7a8e2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The paper studies machine learning problems where each example is described using a set of Boolean features and where hypotheses are represented by linear threshold elements. One method of increasing the expressiveness of learned hypotheses in this context is to expand the feature set to include conjunctions of basic features. This can be done explicitly or where possible by using a kernel function. Focusing on the well known Perceptron and Winnow algorithms, the paper demonstrates a tradeoff between the computational efficiency with which the algorithm can be run over the expanded feature space and the generalization ability of the corresponding learning algorithm. We first describe several kernel functions which capture either limited forms of conjunctions or all conjunctions. We show that these kernels can be used to efficiently run the Perceptron algorithm over a feature space of exponentially many conjunctions; however we also show that using such kernels, the Perceptron algorithm can provably make an exponential number of mistakes even when learning simple functions. We then consider the question of whether kernel functions can analogously be used to run the multiplicative-update Winnow algorithm over an expanded feature space of exponentially many conjunctions. Known upper bounds imply that the Winnow algorithm can learn Disjunctive Normal Form (DNF) formulae with a polynomial mistake bound in this setting. However, we prove that it is computationally hard to simulate Winnow\u2019s behavior for learning DNF over such a feature set. This implies that the kernel functions which correspond to running Winnow for this problem are not efficiently computable, and that there is no general construction that can run Winnow with kernels.", "targets": "Efficiency versus Convergence of Boolean Kernels for On-Line Learning Algorithms"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d26eebc31a6c41f3b5340943043b4b7e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we consider the problem of multi-task learning, in which a learner is given a collection of prediction tasks that need to be solved. In contrast to previous work, we give up on the assumption that labeled training data is available for all tasks. Instead, we propose an active task selection framework, where based only on the unlabeled data, the learner can choose a, typically small, subset of tasks for which he gets some labeled examples. For the remaining tasks, which have no available annotation, solutions are found by transferring information from the selected tasks. We analyze two transfer strategies and develop generalization bounds for each of them. 
Based on this theoretical analysis we propose two algorithms for making the choice of labeled tasks in a principled way and show their effectiveness on synthetic and real data.", "targets": "Active Task Selection for Multi-Task Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9022f0ce8f1b407a86b34ad2593be916", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The study of social networks is a burgeoning research area. However, most existing work deals with networks that simply encode whether relationships exist or not. In contrast, relationships in signed networks can be positive (\u201clike\u201d, \u201ctrust\u201d) or negative (\u201cdislike\u201d, \u201cdistrust\u201d). The theory of social balance shows that signed networks tend to conform to some local patterns that, in turn, induce certain global characteristics. In this paper, we exploit both local as well as global aspects of social balance theory for two fundamental problems in the analysis of signed networks: sign prediction and clustering. Motivated by local patterns of social balance, we first propose two families of sign prediction methods: measures of social imbalance (MOIs), and supervised learning using high order cycles (HOCs). These methods predict signs of edges based on triangles and l-cycles for relatively small values of l. Interestingly, by examining measures of social imbalance, we show that the classic Katz measure, which is used widely in unsigned link prediction, actually has a balance theoretic interpretation when applied to signed networks. Furthermore, motivated by the global structure of balanced networks, we propose an effective low rank modeling approach for both sign prediction and clustering. For the low rank modeling approach, we provide theoretical performance guarantees via convex relaxations, scale it up to large problem sizes using a matrix factorization based algorithm, and provide extensive experimental validation including comparisons with local approaches. Our experimental results indicate that, by adopting a more global viewpoint of balance structure, we get significant performance and computational gains in prediction and clustering tasks on signed networks. Our work therefore highlights the usefulness of the global aspect of balance theory for the analysis of signed networks.", "targets": "Prediction and Clustering in Signed Networks: A Local to Global Perspective"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-46cc087bd1df483e8fc3b71ebacb6cf4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Generative adversarial networks have been proposed as a way of efficiently training deep generative neural networks. We propose a generative adversarial model that works on continuous sequential data, and apply it by training it on a collection of classical music. 
We conclude that it generates music that sounds better and better as the model is trained, report statistics on generated music, and let the reader judge the quality by downloading the generated songs.", "targets": "C-RNN-GAN: Continuous recurrent neural networks with adversarial training"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4e9bf7929fe548ee8aa46bb30a3ad742", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We train a generator by maximum likelihood and we also train the same generator architecture by Wasserstein GAN. We then compare the generated samples, exact log-probability densities and approximate Wasserstein distances. We show that an independent critic trained to approximate Wasserstein distance between the validation set and the generator distribution helps detect overfitting. Finally, we use ideas from the one-shot learning literature to develop a novel fast learning critic.", "targets": "Comparison of Maximum Likelihood and GAN-based training of Real NVPs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6d226de836b54a25b22b4fc2f71c1ed8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Neural Turing Machines (NTM) [2] contain memory component that simulates \u201cworking memory\u201d in the brain to store and retrieve information to ease simple algorithms learning. So far, only linearly organized memory is proposed, and during experiments, we observed that the model does not always converge, and overfits easily when handling certain tasks. We think memory component is key to some faulty behaviors of NTM, and better organization of memory component could help fight those problems. In this paper, we propose several different structures of memory for NTM, and we proved in experiments that two of our proposed structured-memory NTMs could lead to better convergence, in terms of speed and prediction accuracy on copy task and associative recall task as in [2].", "targets": "Structured Memory for Neural Turing Machines"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8a9e27b683e54213872ff0708f6047f9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Label distribution learning (LDL) is a general learning framework, which assigns a distribution over a set of labels to an instance rather than a single label or multiple labels. Current LDL methods have either restricted assumptions on the expression form of the label distribution or limitations in representation learning. This paper presents label distribution learning forests (LDLFs), a novel label distribution learning algorithm based on differentiable decision trees, which have several advantages: 1) Decision trees have the potential to model any general form of label distributions by the mixture of leaf node predictions. 2) The learning of differentiable decision trees can be combined with representation learning, e.g., to learn deep features in an end-to-end manner.
We define a distribution-based loss function for forests, enabling all the trees to be learned jointly, and show that an update function for leaf node predictions, which guarantees a strict decrease of the loss function, can be derived by variational bounding. The effectiveness of the proposed LDLFs is verified on two LDL problems, including age estimation and crowd opinion prediction on movies, showing significant improvements to the state-of-the-art LDL methods.", "targets": "Label Distribution Learning Forests"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-817e8e0c6bcd455cb566244d77b13d2f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl\u2019s belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes.", "targets": "Join-Graph Propagation Algorithms"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ca92977a88164afeaf5655946ba8c47f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present in this paper a study on the ability and the benefits of using a keystroke dynamics authentication method for collaborative systems. Authentication is a challenging issue in order to guarantee the security of use of collaborative systems during the access control step. Many solutions exist in the state of the art such as the use of one time passwords or smart-cards. We focus in this paper on biometric based solutions that do not necessitate any additional sensor. Keystroke dynamics is an interesting solution as it uses only the keyboard and is invisible for users. Many methods have been published in this field. We make a comparative study of many of them considering the operational constraints of use for collaborative systems.", "targets": "Keystroke Dynamics Authentication For Collaborative Systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-47ceb37194d3487283a48e2ee8889043", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce a globally-convergent algorithm for optimizing the tree-reweighted (TRW) variational objective over the marginal polytope. The algorithm is based on the conditional gradient method (Frank-Wolfe) and moves pseudomarginals within the marginal polytope through repeated maximum a posteriori (MAP) calls.
This modular structure enables us to leverage black-box MAP solvers (both exact and approximate) for variational inference, and obtains more accurate results than tree-reweighted algorithms that optimize over the local consistency relaxation. Theoretically, we bound the sub-optimality for the proposed algorithm despite the TRW objective having unbounded gradients at the boundary of the marginal polytope. Empirically, we demonstrate the increased quality of results found by tightening the relaxation over the marginal polytope as well as the spanning tree polytope on synthetic and real-world instances.", "targets": "Barrier Frank-Wolfe for Marginal Inference"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0e698b417518461d831267c5a900286d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "As the type and the number of such venues increase, automated analysis of sentiment on textual resources has become an essential data mining task. In this paper, we investigate the problem of mining opinions on the collection of informal short texts. Both positive and negative sentiment strength of texts are detected. We focus on a non-English language that has few resources for text mining. This approach would help enhance the sentiment analysis in languages where a list of opinionated words does not exist. We propose a new method that projects the text into dense and low dimensional feature vectors according to the sentiment strength of the words. We detect the mixture of positive and negative sentiments on a multi-variant scale. Empirical evaluation of the proposed framework on Turkish tweets shows that our approach gets good results for opinion mining.", "targets": "Opinion Mining on Non-English Short Text"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-71032cf9c2a74eea94853b52febe1a72", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In daily communications, Arabs use local dialects which are hard to identify automatically using conventional classification methods. The dialect identification challenging task becomes more complicated when dealing with under-resourced dialects belonging to the same county/region. In this paper, we start by analyzing statistically Algerian dialects in order to capture their specificities related to prosody information which are extracted at utterance level after a coarse-grained consonant/vowel segmentation. According to these analysis findings, we propose a Hierarchical classification approach for spoken Arabic algerian Dialect IDentification (HADID). It takes advantage from the fact that dialects have an inherent property of naturally structured into hierarchy. Within HADID, a top-down hierarchical classification is applied, in which we use Deep Neural Networks (DNNs) method to build a local classifier for every parent node into the hierarchy dialect structure. Our framework is implemented and evaluated on Algerian Arabic dialects corpus. Whereas, the hierarchy dialect structure is deduced from historic and linguistic knowledges. The results reveal that within HADID, the best classifier is DNNs compared to Support Vector Machine. In addition, compared with a baseline Flat classification system, our HADID gives an improvement of 63.5% in terms of precision.
Furthermore, overall results evidence the suitability of our prosody-based HADID for speaker-independent dialect identification while requiring less than 6s of test utterance.", "targets": "Hierarchical Classification for Spoken Arabic Dialect Identification using Prosody: Case of Algerian Dialects"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0077571cd27348fea97ed5ea26743058", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. However, while LSTMs provide exceptional results in practice, the source of their performance and their limitations remain rather poorly understood. Using character-level language models as an interpretable testbed, we aim to bridge this gap by providing a comprehensive analysis of their representations, predictions and error types. In particular, our experiments reveal the existence of interpretable cells that keep track of long-range dependencies such as line lengths, quotes and brackets. Moreover, an extensive analysis with finite horizon n-gram models suggests that these dependencies are actively discovered and utilized by the networks. Finally, we provide detailed error analysis that suggests areas for further study.", "targets": "Visualizing and Understanding Recurrent Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e28b82ab8a82461eb532cf659452ad88", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider the Bayesian active learning and experimental design problem, where the goal is to learn the value of some unknown target variable through a sequence of informative, noisy tests. In contrast to prior work, we focus on the challenging, yet practically relevant setting where test outcomes can be conditionally dependent given the hidden target variable. Under such assumptions, common heuristics, such as greedily performing tests that maximize the reduction in uncertainty of the target, often perform poorly. In this paper, we propose ECED, a novel, computationally efficient active learning algorithm, and prove strong theoretical guarantees that hold with correlated, noisy tests. Rather than directly optimizing the prediction error, at each step, ECED picks the test that maximizes the gain in a surrogate objective, which takes into account the dependencies between tests. Our analysis relies on an information-theoretic auxiliary function to track the progress of ECED, and utilizes adaptive submodularity to attain the near-optimal bound.
We demonstrate strong empirical performance of ECED on two problem instances, including a Bayesian experimental design task intended to distinguish among economic theories of how people make risky decisions, and an active preference learning task via pairwise comparisons.", "targets": "Near-optimal Bayesian Active Learning with Correlated and Noisy Tests"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ff956737d80642918b70c4702e1bdaa8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recurrent neural networks (RNN) are capable of learning to encode and exploit activation history over an arbitrary timescale. However, in practice, state-of-the-art gradient descent based training methods are known to suffer from difficulties in learning long-term dependencies. Here, we describe a novel training method that involves concurrent parallel cloned networks, each sharing the same weights, each trained at a different stimulus phase and each maintaining independent activation histories. Training proceeds by recursively performing batch-updates over the parallel clones as activation history is progressively increased. This allows conflicts to propagate hierarchically from short-term contexts towards longer-term contexts until they are resolved. We illustrate the parallel clones method and hierarchical conflict propagation with a character-level deep RNN tasked with memorizing a paragraph of Moby Dick (by Herman Melville).", "targets": "Hierarchical Conflict Propagation: Sequence Learning in a Recurrent Deep Neural Network"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-86f531c4de1247e0a6b8f2aa5481d28a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Reservoir computing is a new, powerful and flexible machine learning technique that is easily implemented in hardware. Recently, by using a time-multiplexed architecture, hardware reservoir computers have reached performance comparable to digital implementations. Operating speeds allowing for real time information operation have been reached using optoelectronic systems. At present the main performance bottleneck is the readout layer which uses slow, digital postprocessing. We have designed an analog readout suitable for time-multiplexed optoelectronic reservoir computers, capable of working in real time. The readout has been built and tested experimentally on a standard benchmark task. Its performance is better than non-reservoir methods, with ample room for further improvement. The present work thereby overcomes one of the major limitations for the future development of hardware reservoir computers.", "targets": "Analog readout for optical reservoir computers"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-954e3d96f0d34169bf3b71ed779fc867", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Here we describe work on learning the subcategories of verbs in a morphologically rich language using only minimal linguistic resources. Our goal is to learn verb subcategorizations for Quechua, an under-resourced morphologically rich language, from an unannotated corpus.
We compare results from applying this approach to an unannotated Arabic corpus with those achieved by processing the same text in treebank form. The original plan was to use only a morphological analyzer and an unannotated corpus, but experiments suggest that this approach by itself will not be effective for learning the combinatorial potential of Arabic verbs in general. The lower bound on resources for acquiring this information is somewhat higher, apparently requiring a part-of-speech tagger and chunker for most languages, and a morphological disambiguator for Arabic.", "targets": "Considering a resource-light approach to learning verb valencies"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-590402a5451e4006905e5523df69fa3f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Electronic health records (EHRs) contain important clinical information about patients. Efficient and effective use of this information could supplement or even replace manual chart review as a means of studying and improving the quality and safety of healthcare delivery. However, some of these clinical data are in the form of free text and require pre-processing before use in automated systems. A common free text data source is radiology reports, typically dictated by radiologists to explain their interpretations. We sought to demonstrate machine learning classification of computed tomography (CT) imaging reports into binary outcomes, i.e. positive and negative for fracture, using regular text classification and classifiers based on topic modeling. Topic modeling provides interpretable themes (topic distributions) in reports, a representation that is more compact than the commonly used bag-of-words representation and can be processed faster than raw text in subsequent automated processes. We demonstrate new classifiers based on this topic modeling representation of the reports. Aggregate topic classifier (ATC) and confidence-based topic classifier (CTC) use a single topic that is determined from the training dataset based on different measures to classify the reports on the test dataset. Alternatively, similarity-based topic classifier (STC) measures the similarity between the reports\u2019 topic distributions to determine the predicted class. Our proposed topic modeling-based classifier systems are shown to be competitive with existing text classification techniques and provide an efficient and interpretable representation.", "targets": "Topic Modeling for Classification of Clinical Reports"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-40859a933c954811a97c885cc0966ae8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Network data mining has become an important area of study due to the large number of problems it can be applied to. This paper presents NOESIS, an open source framework for network data mining that provides a large collection of network analysis techniques, including the analysis of network structural properties, community detection methods, link scoring, and link prediction, as well as network visualization algorithms. It also features a complete stand-alone graphical user interface that facilitates the use of all these techniques.
The NOESIS framework has been designed using solid object-oriented design principles and structured parallel programming. As a lightweight library with minimal external dependencies and a permissive software license, NOESIS can be incorporated into other software projects. Released under a BSD license, it is available from http://noesis.ikor.org.", "targets": "The NOESIS Network-Oriented Exploration, Simulation, and Induction System"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-07f4642ac9994035a2e17fb66407011c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Word embedding, especially with its recent developments, promises a quantification of the similarity between terms. However, it is not clear to what extent this similarity value can be genuinely meaningful and useful for subsequent tasks. We explore how the similarity score obtained from the models is really indicative of term relatedness. We first observe and quantify the uncertainty factor of the word embedding models with regard to the similarity value. Based on this factor, we introduce a general threshold on various dimensions which effectively filters the highly related terms. Our evaluation on four information retrieval collections supports the effectiveness of our approach, as the results of the introduced threshold are significantly better than the baseline while being equal to or statistically indistinguishable from the optimal results.", "targets": "Uncertainty in Neural Network Word Embedding Exploration of Threshold for Similarity"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ff559614e1544daaa9f4922971ca8393", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Here we study the problem of predicting labels for large text corpora where each text can be assigned multiple labels. The problem might seem trivial when the number of labels is small, and can be easily solved using a series of one-vs-all classifiers. However, as the number of labels increases to several thousand, the parameter space becomes extremely large, and it is no longer possible to use the one-vs-all technique. Here we propose a model based on the factorization of higher order word vector moments, as well as the cross moments between the labels and the words for multi-label prediction. Our model provides guaranteed convergence bounds on the extracted parameters. Further, our model takes only three passes through the training dataset to extract the parameters, resulting in a highly scalable algorithm that can train on GBs of data consisting of millions of documents with hundreds of thousands of labels using a nominal resource of a single processor with 16GB RAM. Our model achieves a 10x-15x speed-up on large-scale datasets while producing competitive performance in comparison with existing benchmark algorithms.", "targets": "Large-Scale Label Prediction for Sparse Data with Probable Guarantees"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a2649f61c6a84315a858028a45f50d8c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "We present a practical and statistically consistent scheme for actively learning binary classifiers under general loss functions. Our algorithm uses importance weighting to correct sampling bias, and by controlling the variance, we are able to give rigorous label complexity bounds for the learning process. Experiments on passively labeled data show that this approach reduces the label complexity required to achieve good predictive performance on many learning problems.", "targets": "Importance Weighted Active Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bb33a5a9bdab437faebb8b154991e54b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper addresses the issue of model selection for hidden Markov models (HMMs). We generalize factorized asymptotic Bayesian inference (FAB), which has been recently developed for model selection on independent hidden variables (i.e., mixture models), to time-dependent hidden variables. As with FAB in mixture models, FAB for HMMs is derived as an iterative lower bound maximization algorithm of a factorized information criterion (FIC). It inherits, from FAB for mixture models, several desirable properties for learning HMMs, such as asymptotic consistency of FIC with marginal log-likelihood, a shrinkage effect for hidden state selection, and monotonic increase of the lower FIC bound through the iterative optimization. Further, it does not have a tunable hyper-parameter, and thus its model selection process can be fully automated. Experimental results show that FAB outperforms state-of-the-art variational Bayesian HMM and non-parametric Bayesian HMM in terms of model selection accuracy and computational efficiency.", "targets": "Factorized Asymptotic Bayesian Hidden Markov Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4423cbc8801944b7b4f67a457da19f76", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The strength with which a statement is made can have a significant impact on the audience. For example, international relations can be strained by how the media in one country describes an event in another; and papers can be rejected because they overstate or understate their findings. It is thus important to understand the effects of statement strength. A first step is to be able to distinguish between strong and weak statements. However, even this problem is understudied, partly due to a lack of data. Since strength is inherently relative, revisions of texts that make claims are a natural source of data on strength differences. In this paper, we introduce a corpus of sentence-level revisions from academic writing. We also describe insights gained from our annotation efforts for this task.", "targets": "A Corpus of Sentence-level Revisions in Academic Writing: A Step towards Understanding Statement Strength in Communication"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0217b9cf17244b5e82814343e4871aee", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Regularization is a well-studied problem in the context of neural networks.
It is usually used to improve the generalization performance when the number of input samples is relatively small or heavily contaminated with noise. The regularization of a parametric model can be achieved in different manners, such as early stopping (Morgan and Bourlard, 1990), weight decay, and output smoothing, which are used to avoid overfitting during the training of the considered model. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters (Krogh and Hertz, 1991). Using Bishop\u2019s approximation (Bishop, 1995) of the objective function when a restricted type of noise is added to the input of a parametric function, we derive the higher order terms of the Taylor expansion and analyze the coefficients of the regularization terms induced by the noisy input. In particular we study the effect of penalizing the Hessian of the mapping function with respect to the input in terms of generalization performance. We also show how we can control this coefficient independently by explicitly penalizing the Jacobian of the mapping function on corrupted inputs.", "targets": "Adding noise to the input of a model trained with a regularized objective"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d0e7da6a49404cf2913eee43b43a2991", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Despite interest in using cross-lingual knowledge to learn word embeddings for various tasks, a systematic comparison of the possible approaches is lacking in the literature. We perform an extensive evaluation of four popular approaches of inducing cross-lingual embeddings, each requiring a different form of supervision, on four typologically different language pairs. Our evaluation setup spans four different tasks, including intrinsic evaluation on mono-lingual and cross-lingual similarity, and extrinsic evaluation on downstream semantic and syntactic applications. We show that models which require expensive cross-lingual knowledge almost always perform better, but cheaply supervised models often prove competitive on certain tasks.", "targets": "Cross-lingual Models of Word Embeddings: An Empirical Comparison"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-83ca1e4afe2545d6bc713472a406a9d1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Deep learning (DL) became the method of choice in recent years for solving problems ranging from object recognition and speech recognition to robotic perception and human disease prediction. In this paper, we present a hybrid architecture of convolutional neural networks (CNN) and stacked autoencoders (SAE) to learn a sequence of actions that nonlinearly transforms an input shape or distribution into a target shape or distribution with the same support.
While such a framework can be useful in a variety of problems such as robotic path planning, sequential decision-making in games and identifying material processing pathways to achieve desired microstructures, this paper focuses on controlling fluid deformations in a microfluidic channel by deliberately placing a sequence of pillars, which has a significant impact on manufacturing for biomedical and textile applications where highly targeted shapes are desired. We propose an architecture which simultaneously predicts the intermediate shape lying in the nonlinear transformation pathway between the undeformed and desired flow shape, then learns the causal action\u2013the single pillar which results in the deformation of the flow\u2013one at a time. The learning of stage-wise transformations provides deep insights into the physical flow deformation. Results show that under the current framework, our model is able to predict a sequence of pillars that reconstructs the flow shape which highly resembles the desired shape.", "targets": "Deep Action Sequence Learning for Causal Shape Transformation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-048bea121ec146a08b67d243827f6553", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The production of color language is essential for grounded language generation. Color descriptions have many challenging properties: they can be vague, compositionally complex, and denotationally rich. We present an effective approach to generating color descriptions using recurrent neural networks and a Fourier-transformed color representation. Our model outperforms previous work on a conditional language modeling task over a large corpus of naturalistic color descriptions. In addition, probing the model\u2019s output reveals that it can accurately produce not only basic color terms but also descriptors with non-convex denotations (\u201cgreenish\u201d), bare modifiers (\u201cbright\u201d, \u201cdull\u201d), and compositional phrases (\u201cfaded teal\u201d) not seen in training.", "targets": "Learning to Generate Compositional Color Descriptions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2ba5c87d4df7409aa954e1753b276a91", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Domain-independent planning is one of the foundational areas in the field of Artificial Intelligence. A description of a planning task consists of an initial world state, a goal, and a set of actions for modifying the world state. The objective is to find a sequence of actions, that is, a plan, that transforms the initial world state into a goal state. In optimal planning, we are interested in finding not just a plan, but one of the cheapest plans. A prominent approach to optimal planning these days is heuristic state-space search, guided by admissible heuristic functions. Numerous admissible heuristics have been developed, each with its own strengths and weaknesses, and it is well known that there is no single \u201cbest\u201d heuristic for optimal planning in general. Thus, which heuristic to choose for a given planning task is a difficult question.
This difficulty can be avoided by combining several heuristics, but that requires computing numerous heuristic estimates at each state, and the time spent doing so may outweigh the time saved by the combined advantages of the different heuristics. We present a novel method that reduces the cost of combining admissible heuristics for optimal planning, while maintaining its benefits. Using an idealized search space model, we formulate a decision rule for choosing the best heuristic to compute at each state. We then present an active online learning approach for learning a classifier with that decision rule as the target concept, and employ the learned classifier to decide which heuristic to compute at each state. We evaluate this technique empirically, and show that it substantially outperforms the standard method for combining several heuristics via their pointwise maximum.", "targets": "Online Speedup Learning for Optimal Planning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a70af20310434c3a952549eed856845c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "There is a small but growing body of research on statistical scripts, models of event sequences that allow probabilistic inference of implicit events from documents. These systems operate on structured verb-argument events produced by an NLP pipeline. We compare these systems with recent Recurrent Neural Net models that directly operate on raw tokens to predict sentences, finding the latter to be roughly comparable to the former in terms of predicting missing events in documents.", "targets": "Using Sentence-Level LSTM Language Models for Script Inference"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-22ec4146db414757a40524b5d0906f73", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce a novel training principle for generative probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework generalizes Denoising Auto-Encoders (DAE) and is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution is a conditional distribution that generally involves a small move, so it has fewer dominant modes and is unimodal in the limit of small moves. This simplifies the learning problem, making it less like density estimation and more akin to supervised function approximation, with gradients that can be obtained by backprop. The theorems presented here provide a probabilistic interpretation for denoising autoencoders and generalize them; seen in the context of this framework, auto-encoders that learn with injected noise are a special case of GSNs and can be interpreted as generative models. The theorems also provide an interesting justification for dependency networks and generalized pseudolikelihood and define an appropriate joint distribution and sampling mechanism, even when the conditionals are not consistent. GSNs can be used with missing inputs and can be used to sample subsets of variables given the rest. Experiments validating these theoretical results are conducted on both synthetic datasets and image datasets.
The experiments employ a particular architecture that mimics the Deep Boltzmann Machine Gibbs sampler but that allows training to proceed with backprop through a recurrent neural network with noise injected inside and without the need for layerwise pretraining.", "targets": "GSNs: Generative Stochastic Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3bcc8177aee843688034dcfaeb66c082", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this thesis we present a new algorithm for the Vehicle Routing Problem called the Enhanced Bees Algorithm. It is adapted from a fairly recent algorithm, the Bees Algorithm, which was developed for continuous optimisation problems. We show that the results obtained by the Enhanced Bees Algorithm are competitive with the best meta-heuristics available for the Vehicle Routing Problem\u2014it is able to achieve results that are within 0.5% of the optimal solution on a commonly used set of test instances. We show that the algorithm has good runtime performance, producing results within 2% of the optimal solution within 60 seconds, making it suitable for use within real world dispatch scenarios. Additionally, we provide a short history of well known results from the literature along with a detailed description of the foundational methods developed to solve the Vehicle Routing Problem.", "targets": "The Bees Algorithm for the Vehicle Routing Problem"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-42a3a536d0f047c3bc608afc14574f82", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We analyze in this paper a random feature map based on a theory of invariance (I-theory) introduced in [1]. More specifically, a group invariant signal signature is obtained through cumulative distributions of group transformed random projections. Our analysis bridges invariant feature learning with kernel methods, as we show that this feature map defines an expected Haar integration kernel that is invariant to the specified group action. We show how this non-linear random feature map approximates this group invariant kernel uniformly on a set of N points. Moreover, we show that it defines a function space that is dense in the equivalent Invariant Reproducing Kernel Hilbert Space. Finally, we quantify error rates of the convergence of the empirical risk minimization, as well as the reduction in the sample complexity of a learning algorithm using such an invariant representation for signal classification, in a classical supervised learning setting.", "targets": "Learning with Group Invariant Features: A Kernel Perspective"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-61c58adfd20f46188bf102d410acc148", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The parameters of temporal models, such as dynamic Bayesian networks, may be modelled in a Bayesian context as static or atemporal variables that influence transition probabilities at every time step. 
Particle filters fail for models that include such variables, while methods that use Gibbs sampling of parameter variables may incur a per-sample cost that grows linearly with the length of the observation sequence. Storvik (2002) devised a method for incremental computation of exact sufficient statistics that, for some cases, reduces the per-sample cost to a constant. In this paper, we demonstrate a connection between Storvik\u2019s filter and a Kalman filter in parameter space and establish more general conditions under which Storvik\u2019s filter works. Drawing on an analogy to the extended Kalman filter, we develop and analyze, both theoretically and experimentally, a Taylor approximation to the parameter posterior that allows Storvik\u2019s method to be applied to a broader class of models. Our experiments on both synthetic examples and real applications show improvement over existing methods.", "targets": "The Extended Parameter Filter"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-573fdcbdc0eb46de90714b3083c43c27", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Many real systems have been modelled in terms of network concepts, and written texts are a particular example of information networks. In recent years, the use of network methods to analyze language has allowed the discovery of several interesting findings, including the proposition of novel models to explain the emergence of fundamental universal patterns. While syntactical networks, one of the most prevalent networked models of written texts, display both scale-free and small-world properties, such a representation fails to capture other textual features, such as the organization into topics or subjects. In this context, we propose a novel network representation whose main purpose is to capture the semantic relationships of words in a simple way. To do so, we link all words co-occurring in the same semantic context, which is defined in a threefold way. We show that the proposed representations favour the emergence of communities of semantically related words, and this feature may be used to identify relevant topics. The proposed methodology to detect topics was applied to segment selected Wikipedia articles. We have found that, in general, our methods outperform traditional bag-of-words representations, which suggests that a high-level textual representation may be useful to study the semantic features of texts.", "targets": "Topic segmentation via community detection in complex networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a18c37fa24424be5acf4c5e9a8f1bced", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In machine learning contests such as the ImageNet Large Scale Visual Recognition Challenge [RDS15] and the KDD Cup, contestants can submit candidate solutions and receive from an oracle (typically the organizers of the competition) the accuracy of their guesses compared to the ground-truth labels. One of the most commonly used accuracy metrics for binary classification tasks is the Area Under the Receiver Operating Characteristics Curve (AUC).
In this paper we provide proofs-of-concept of how knowledge of the AUC of a set of guesses can be used, in two different kinds of attacks, to improve the accuracy of those guesses. On the other hand, we also demonstrate the intractability of one kind of AUC exploit by proving that the number of possible binary labelings of n examples for which a candidate solution obtains an AUC score of c grows exponentially in n, for every c \u2208 (0, 1).", "targets": "Exploiting an Oracle that Reports AUC Scores in Machine Learning Contests"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-234979f81bda430faa7d5d05f315cdb2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We explore techniques to maximize the effectiveness of discourse information in the task of authorship attribution. We present a novel method to embed discourse features in a Convolutional Neural Network text classifier, which achieves a state-of-the-art result by a substantial margin. We empirically investigate several featurization methods to understand the conditions under which discourse features contribute non-trivial performance gains, and analyze discourse embeddings.", "targets": "Leveraging Discourse Information Effectively for Authorship Attribution"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0142949520cd4d778a5c322acbf78da6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Preferences play an important role in our everyday lives. CP-networks, or CP-nets for short, are graphical models for representing conditional qualitative preferences under ceteris paribus (\u201call else being equal\u201d) assumptions. Despite their intuitive nature and rich representation, dominance testing with CP-nets is computationally complex, even when the CP-nets are restricted to binary-valued preferences. Tractable algorithms exist for binary CP-nets, but these algorithms are incomplete for multi-valued CP-nets. In this paper, we identify a class of multi-valued CP-nets, which we call more-or-less CP-nets, that have the same computational complexity as binary CP-nets. More-or-less CP-nets exploit the monotonicity of the attribute values and use intervals to aggregate values that induce similar preferences. We then present a search control rule for dominance testing that effectively prunes the search space while preserving completeness.", "targets": "More-or-Less CP-Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ce1cd846ebea4e8b836fcc18af1b683f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We evaluate Machine Learning techniques for Green energy (wind, solar and biomass) prediction based on weather forecasts. Weather is constituted by multiple attributes: temperature, cloud cover, and wind speed/direction, which are discrete random variables. One of our objectives is to predict the weather based on the previous weather data.
Additionally, we are interested in finding correlations between these variables (dependencies, in order to reduce the dimensionality of the data set), predicting missing data, predicting deviations in weather forecasts (for job scheduling within the green control center), finding clusters within the data (constituted by closely related variables, e.g. via PCA, which can be used to remove redundant variables), classification, finding (non-linear, using SVMs) regression models, and training artificial neural networks on the historical data so that they can be used for prediction in the future.", "targets": "Evaluation of Machine Learning Techniques for Green Energy Prediction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f703ef80d25e4d8fbbcc76c6d2724a0e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce a novel framework for evaluating multimodal deep learning models with respect to their language understanding and generalization abilities. In this approach, artificial data is automatically generated according to the experimenter\u2019s specifications. The content of the data, both during training and evaluation, can be controlled in detail, which enables tasks to be created that require true generalization abilities, in particular the combination of previously introduced concepts in novel ways. We demonstrate the potential of our methodology by evaluating various visual question answering models on four different tasks, and show how our framework gives us detailed insights into their capabilities and limitations. By open-sourcing our framework, we hope to stimulate progress in the field of multimodal language understanding.", "targets": "SHAPEWORLD: A new test methodology for multimodal language understanding"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-db0ee3c237334f0ca561e73507aba62a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Particle Filter (PF) is the most widely used Bayesian sequential estimation method for obtaining hidden states of nonlinear dynamic systems. However, it still suffers from certain problems such as the loss of particle diversity, the need for a large number of particles, and the costly selection of the importance density functions. In this paper, a novel PF called Exponential Natural Particle Filter (xNPF) is introduced to solve the above problems. In this approach, we propose a state transition probability based on natural gradient learning, which balances exploration and exploitation more robustly. PF with the proposed density function does not need a large number of particles and retains the particles\u2019 diversity over the course of a run. The proposed system is evaluated in a time-varying parameter estimation problem on a dynamic model of HIV virus immune response. This model is used to show the performance of the xNPF in comparison with several state-of-the-art particle filter variants such as Annealed PF, Bootstrap PF, iterative PF, equivalent weight PF, and intelligent PF.
The results show that xNPF converges much closer to the true target states than the other methods.", "targets": "Exponential Natural Particle Filter"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1a04dc91a4fd4c0d8db35e356ab50c4c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce the nonparametric metadata dependent relational (NMDR) model, a Bayesian nonparametric stochastic block model for network data. The NMDR allows the entities associated with each node to have mixed membership in an unbounded collection of latent communities. Learned regression models allow these memberships to depend on, and be predicted from, arbitrary node metadata. We develop efficient MCMC algorithms for learning NMDR models from partially observed node relationships. Retrospective MCMC methods allow our sampler to work directly with the infinite stick-breaking representation of the NMDR, avoiding the need for finite truncations. Our results demonstrate recovery of useful latent communities from real-world social and ecological networks, and the usefulness of metadata in link prediction tasks.", "targets": "The Nonparametric Metadata Dependent Relational Model"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-050d0e73a6074828b00059204a73e795", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider the problem of online learning in misspecified linear stochastic multi-armed bandit problems. Regret guarantees for state-of-the-art linear bandit algorithms such as Optimism in the Face of Uncertainty Linear bandit (OFUL) hold under the assumption that the arms\u2019 expected rewards are perfectly linear in their features. It is, however, of interest to investigate the impact of potential misspecification in linear bandit models, where the expected rewards are perturbed away from the linear subspace determined by the arms\u2019 features. Although OFUL has recently been shown to be robust to relatively small deviations from linearity, we show that any linear bandit algorithm that enjoys optimal regret performance in the perfectly linear setting (e.g., OFUL) must suffer linear regret under a sparse additive perturbation of the linear model. In an attempt to overcome this negative result, we define a natural class of bandit models characterized by a non-sparse deviation from linearity. We argue that the OFUL algorithm can fail to achieve sublinear regret even under models that have non-sparse deviation. We finally develop a novel bandit algorithm, comprising a hypothesis test for linearity followed by a decision to use either the OFUL or Upper Confidence Bound (UCB) algorithm. For perfectly linear bandit models, the algorithm provably exhibits OFUL\u2019s favorable regret performance, while for misspecified models satisfying the non-sparse deviation property, the algorithm avoids the linear regret phenomenon and falls back on UCB\u2019s sublinear regret scaling. Numerical experiments on synthetic data, and on recommendation data from the public Yahoo! Learning to Rank Challenge dataset, empirically support our findings.", "targets": "Misspecified Linear Bandits"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-454ab2120f3a49f7bca6b28301768046", "definition": "In this task, you are given a part of an article.
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider the problem of accurately recovering a matrix B of size M \u00d7 M, which represents a probability distribution over M outcomes, given access to an observed matrix of \u201ccounts\u201d generated by taking independent samples from the distribution B. How can structural properties of the underlying matrix B be leveraged to yield computationally efficient and information theoretically optimal reconstruction algorithms? When can accurate reconstruction be accomplished in the sparse data regime? This basic problem lies at the core of a number of questions that are currently being considered by different communities, including community detection in sparse random graphs, learning structured models such as topic models or hidden Markov models, and the efforts from the natural language processing community to compute \u201cword embeddings\u201d. Many aspects of this problem\u2014both in terms of learning and property testing/estimation and on both the algorithmic and information theoretic sides\u2014remain open. Our results apply to the setting where B has a particular rank 2 structure. For this setting, we propose an efficient (and practically viable) algorithm that accurately recovers the underlying M \u00d7 M matrix using \u0398(M) samples. This result easily translates to \u0398(M) sample algorithms for learning topic models with two topics over dictionaries of size M, and learning hidden Markov Models with two hidden states and observation distributions supported on M elements. These linear sample complexities are optimal, up to constant factors, in an extremely strong sense: even testing basic properties of the underlying matrix (such as whether it has rank 1 or 2) requires \u03a9(M) samples. Furthermore, we provide an even stronger lower bound where distinguishing whether a sequence of observations was drawn from the uniform distribution over M observations versus being generated by an HMM with two hidden states requires \u03a9(M) observations. This precludes sublinear-sample hypothesis tests for basic properties, such as identity or uniformity, as well as sublinear sample estimators for quantities such as the entropy rate of HMMs. This impossibility of sublinear-sample property testing in these settings is intriguing and underscores the significant differences between these structured settings and the standard setting of drawing i.i.d. samples from an unstructured distribution of support size M.", "targets": "Recovering Structured Probability Matrices"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-abd53d25b82d482489aee8d5379e7f15", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We aim to shed light on the strengths and weaknesses of the newly introduced neural machine translation paradigm.
To that end, we conduct a multifaceted evaluation in which we compare outputs produced by state-of-the-art neural machine translation and phrase-based machine translation systems for 9 language directions across a number of dimensions. Specifically, we measure the similarity of the outputs, their fluency and amount of reordering, the effect of sentence length and performance across different error categories. We find that translations produced by neural machine translation systems are considerably different, more fluent and more accurate in terms of word order compared to those produced by phrase-based systems. Neural machine translation systems are also more accurate at producing inflected forms, but they perform poorly when translating very long sentences.", "targets": "A Multifaceted Evaluation of Neural versus Phrase-Based Machine Translation for 9 Language Directions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d9c18c785be04435b4f7bb99357c1803", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a character-level recurrent neural network that generates relevant and coherent text given auxiliary information such as a sentiment or topic. Using a simple input replication strategy, we preserve the signal of auxiliary input across wider sequence intervals than can feasibly be trained by back-propagation through time. Our main results center on a large corpus of 1.5 million beer reviews from BeerAdvocate. In generative mode, our network produces reviews on command, tailored to a star rating or item category. The generative model can also run in reverse, performing classification with surprising accuracy. Performance of the reverse model provides a straightforward way to determine what the generative model knows without relying too heavily on subjective analysis. Given a review, the model can accurately determine the corresponding rating and infer the beer\u2019s category (IPA, Stout, etc.). We exploit this capability, tracking perceived sentiment and class membership as each character in a review is processed. Quantitative and qualitative empirical evaluations demonstrate that the model captures meaning and learns nonlinear dynamics in text, such as the effect of negation on sentiment, despite possessing no a priori notion of words. Because the model operates at the character level, it handles misspellings, slang, and large vocabularies without any machinery explicitly dedicated to the purpose.", "targets": "CHARACTER-LEVEL GENERATIVE TEXT MODELS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7856a6f5bbd047fb8f313f51f959dd04", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper summarizes the recent progress we have made on computer vision technologies for physical therapy with accessible and affordable devices. We first introduce the remote health coaching system we built with Microsoft Kinect. Since the motion data captured by Kinect is noisy, we investigate the data accuracy of Kinect with respect to a high-accuracy motion capture system. We also propose an outlier data removal algorithm based on the data distribution.
In order to generate kinematic parameters from the noisy data captured by Kinect, we propose a kinematic filtering algorithm based on the Unscented Kalman Filter and a kinematic model of the human skeleton. The proposed algorithm can obtain smooth kinematic parameters with reduced noise compared to those generated from the raw motion data from Kinect.", "targets": "Remote Health Coaching System and Human Motion Data Analysis for Physical Therapy with Microsoft Kinect"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ce32f8118e2348948e92c736e072426a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Many combinatorial problems arising in machine learning can be reduced to the problem of minimizing a submodular function. Submodular functions are a natural discrete analog of convex functions, and can be minimized in strongly polynomial time. Unfortunately, state-of-the-art algorithms for general submodular minimization are intractable for larger problems. In this paper, we introduce a novel subclass of submodular minimization problems that we call decomposable. Decomposable submodular functions are those that can be represented as sums of concave functions applied to modular functions. We develop an algorithm, SLG, that can efficiently minimize decomposable submodular functions with tens of thousands of variables. Our algorithm exploits recent results in smoothed convex minimization. We apply SLG to synthetic benchmarks and a joint classification-and-segmentation task, and show that it outperforms the state-of-the-art general purpose submodular minimization algorithms by several orders of magnitude.", "targets": "Efficient Minimization of Decomposable Submodular Functions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b04a1c8f83554fcf94253d85f57a3c2a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This work introduces a method to tune a sequence-based generative model for molecular de novo design that, through augmented episodic likelihood, can learn to generate structures with certain specified desirable properties. We demonstrate how this model can execute a range of tasks such as generating analogues to a query structure and generating compounds predicted to be active against a biological target. As a proof of principle, the model is first trained to generate molecules that do not contain sulphur. As a second example, the model is trained to generate analogues to the drug Celecoxib, a technique that could be used for scaffold hopping or library expansion starting from a single molecule. Finally, when tuning the model towards generating compounds predicted to be active against the dopamine receptor D2, the model generates structures of which more than 95% are predicted to be active, including experimentally confirmed actives that have not been included in either the generative model or the activity prediction model.", "targets": "Molecular De-Novo Design through Deep Reinforcement Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c5077e7ff76840b1a784fc40a1eaa541", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Existing studies on semantic parsing mainly focus on the in-domain setting. We formulate cross-domain semantic parsing as a domain adaptation problem: train a semantic parser on some source domains and then adapt it to the target domain. Due to the diversity of logical forms in different domains, this problem presents unique and intriguing challenges. By converting logical forms into canonical utterances in natural language, we reduce semantic parsing to paraphrasing, and develop an attentive sequence-to-sequence paraphrase model that is general and flexible to adapt to different domains. We discover two problems, small micro variance and large macro variance, of pretrained word embeddings that hinder their direct use in neural networks, and propose standardization techniques as a remedy. On the popular OVERNIGHT dataset, which contains eight domains, we show that both cross-domain training and standardized pre-trained word embeddings can bring significant improvement.", "targets": "Cross-domain Semantic Parsing via Paraphrasing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7a2ed44e65f24e0cb791bfabe7dcac32", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we present nmtpy, a flexible Python toolkit based on Theano for training Neural Machine Translation and other neural sequence-to-sequence architectures. nmtpy decouples the specification of a network from the training and inference utilities to simplify the addition of a new architecture and reduce the amount of boilerplate code to be written. nmtpy has been used for LIUM\u2019s top-ranked submissions to the WMT Multimodal Machine Translation and News Translation tasks in 2016 and 2017. nmtpy is a refactored, extended and Python 3 only version of dl4mt-tutorial, a Theano (Theano Development Team, 2016) implementation of attentive Neural Machine Translation (NMT) (Bahdanau et al., 2014). The development of the nmtpy project, which was open-sourced under the MIT license in March 2017, started in March 2016 as an effort to adapt dl4mt-tutorial to multimodal translation models. nmtpy has now become a powerful toolkit where adding a new model is as simple as deriving from an abstract base class to fill in a set of fundamental methods and (optionally) implementing a custom data iterator. The training and inference utilities are as model-agnostic as possible, allowing one to use them for different sequence generation networks such as multimodal NMT and image captioning, to name a few. This flexibility and the rich set of provided architectures (Section 3) are what differentiate nmtpy from Nematus (Sennrich et al., 2017), another NMT software derived from dl4mt-tutorial.", "targets": "NMTPY: A FLEXIBLE TOOLKIT FOR ADVANCED NEURAL MACHINE TRANSLATION SYSTEMS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-154e1c81d49e4e988c79d4f598488107", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we address the problem of estimating the ratio q/p, where p is a density function and q is another density, or, more generally, an arbitrary function.
Knowing or approximating this ratio is needed in various problems of inference and integration, in particular, when one needs to average a function with respect to one probability distribution, given a sample from another. It is often referred to as importance sampling in statistical inference and is also closely related to the problem of covariate shift in transfer learning as well as to various MCMC methods. It may also be useful for separating the underlying geometry of a space, say a manifold, from the density function defined on it. Our approach is based on reformulating the problem of estimating q/p as an inverse problem in terms of an integral operator corresponding to a kernel, and thus reducing it to an integral equation, known as the Fredholm problem of the first kind. This formulation, combined with the techniques of regularization and kernel methods, leads to a principled kernel-based framework for constructing algorithms and for analyzing them theoretically. The resulting family of algorithms (FIRE, for Fredholm Inverse Regularized Estimator) is flexible, simple and easy to implement. We provide detailed theoretical analysis including concentration bounds and convergence rates for the Gaussian kernel in the case of densities defined on R^d, compact domains in R^d and smooth d-dimensional sub-manifolds of the Euclidean space. We also show experimental results including applications to classification and semi-supervised learning within the covariate shift framework and demonstrate some encouraging experimental comparisons. We also show how the parameters of our algorithms can be chosen in a completely unsupervised manner.", "targets": "Inverse Density as an Inverse Problem: the Fredholm Equation Approach"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-79feee4f1c4e458489a1cfc508f571f9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Consider designing an effective crowdsourcing system for an M-ary classification task. Crowd workers complete simple binary microtasks whose results are aggregated to give the final result. We consider the novel scenario where workers have a reject option so they may skip microtasks when they are unable or choose not to respond. For example, in mismatched speech transcription, workers who do not know the language may not be able to respond to microtasks focused on phonological dimensions outside their categorical perception. We present an aggregation approach using a weighted majority voting rule, where each worker\u2019s response is assigned an optimized weight to maximize the crowd\u2019s classification performance. We evaluate system performance in both exact and asymptotic forms. Further, we consider the setting where there may be a set of greedy workers that complete microtasks even when they are unable to perform them reliably. We consider an oblivious and an expurgation strategy to deal with greedy workers, developing an algorithm to adaptively switch between the two based on the estimated fraction of greedy workers in the anonymous crowd. Simulation results show improved performance compared with conventional majority voting.", "targets": "Multi-object Classification via Crowdsourcing with a Reject Option"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2e58157a76d543c3a91068309cc3d9ef", "definition": "In this task, you are given a part of an article. 
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A variety of real-world processes (over networks) produce sequences of data whose complex temporal dynamics need to be studied. More specifically, the event timestamps can carry important information about the underlying network dynamics, which otherwise is not available from the time series evenly sampled from continuous signals. Moreover, in most complex processes, event sequences and evenly-sampled time series data can interact with each other, which renders joint modeling of those two sources of data necessary. To tackle the above problems, in this paper, we utilize the rich framework of (temporal) point processes to model event data and timely update its intensity function by the synergic twin Recurrent Neural Networks (RNNs). In the proposed architecture, the intensity function is synergistically modulated by one RNN with asynchronous events as input and another RNN with time series as input. Furthermore, to enhance the interpretability of the model, the attention mechanism for the neural point process is introduced. The whole model with event type and timestamp prediction output layers can be trained end-to-end and allows a black-box treatment for modeling the intensity. We substantiate the superiority of our model on synthetic data and three real-world benchmark datasets.", "targets": "Joint Modeling of Event Sequence and Time Series with Attentional Twin Recurrent Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-988a6dbb54a740d9aa6f581fee1f98ad", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The capability to store data about business process executions in so-called Event Logs has led to the diffusion of tools for the analysis of process executions and for the assessment of the goodness of a process model. Nonetheless, these tools are often very rigid in dealing with Event Logs that include incomplete information about the process execution. Thus, while the ability to handle incomplete event data is one of the challenges mentioned in the process mining manifesto, the evaluation of compliance of an execution trace still requires an end-to-end complete trace to be performed. This paper exploits the power of abduction to provide a flexible, yet computationally effective, framework to deal with different forms of incompleteness in an Event Log. Moreover, it proposes a refinement of the classical notion of compliance into strong and conditional compliance to take into account incomplete logs. Finally, performance evaluation in an experimental setting shows the feasibility of the presented approach.", "targets": "Abducing Compliance of Incomplete Event Logs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-770d175d99624a789d6dd646b33746c5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Universal induction is a crucial issue in AGI. Its practical applicability can be achieved by the choice of the reference machine or representation of algorithms agreed with the environment. This machine should be updatable for solving subsequent tasks more efficiently. 
We study this problem on the example of combinatory logic as a very simple Turing-complete reference machine, which enables modifying program representations by introducing different sets of primitive combinators. A genetic programming system is used to search for combinator expressions, which are easily decomposed into sub-expressions that are recombined in crossover. Our experiments show that low-complexity induction or prediction tasks can be solved by the developed system (much more efficiently than using brute force); useful combinators can be revealed and included in the representation, simplifying more difficult tasks. However, optimal sets of combinators depend on the specific task, so the reference machine should be adaptively chosen in coordination with the search engine.", "targets": "Universal Induction with Varying Sets of Combinators"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-509b3f6c0b324decaf4146786c742b7e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider the problem of repeatedly solving a variant of the same dynamic programming problem in successive trials. An instance of the type of problems we consider is to find the optimal binary search tree. At the beginning of each trial, the learner probabilistically chooses a tree with the n keys at the internal nodes and the n+1 gaps between keys at the leaves. It is then told the frequencies of the keys and gaps and is charged by the average search cost for the chosen tree. The problem is online because the frequencies can change between trials. The goal is to develop algorithms with the property that their total average search cost (loss) in all trials is close to the total loss of the best tree chosen in hindsight for all trials. The challenge, of course, is that the algorithm has to deal with an exponential number of trees. We develop a methodology for tackling such problems for a wide class of dynamic programming algorithms. Our framework allows us to extend online learning algorithms like Hedge [9] and Component Hedge [15] to a significantly wider class of combinatorial objects than was possible before.", "targets": "Online Dynamic Programming"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-09ead4392ab7458090b92ae91f4ef120", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce the first global recursive neural parsing model with optimality guarantees during decoding. To support global features, we give up dynamic programs and instead search directly in the space of all possible subtrees. Although this space is exponentially large in the sentence length, we show it is possible to learn an efficient A* parser. We augment existing parsing models, which have informative bounds on the outside score, with a global model that has loose bounds but only needs to model non-local phenomena. The global model is trained with a novel objective that encourages the parser to search both efficiently and accurately. The approach is applied to CCG parsing, improving state-of-the-art accuracy by 0.4 F1. 
The parser finds the optimal parse for 99.9% of held-out sentences, exploring on average only 190 subtrees.", "targets": "Global Neural CCG Parsing with Optimality Guarantees"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6c564840a1bb4f788546a92bb2575788", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a model to learn visually grounded word embeddings (vis-w2v) to capture visual notions of semantic relatedness. While word embeddings trained using text have been extremely successful, they cannot uncover notions of semantic relatedness implicit in our visual world. For instance, visual grounding can help us realize that concepts like eating and staring at are related, since when people are eating something, they also tend to stare at the food. Grounding a rich variety of relations like eating and stare at in vision is a challenging task, despite recent progress in vision. We realize the visual grounding for words depends on the semantics of our visual world, and not the literal pixels. We thus use abstract scenes created from clipart to provide the visual grounding. We find that the embeddings we learn capture fine-grained visually grounded notions of semantic relatedness. We show improvements over text only word embeddings (word2vec) on three tasks: common-sense assertion classification, visual paraphrasing and text-based image retrieval. Our code and datasets will be available online.", "targets": "Visual Word2Vec (vis-w2v): Learning Visually Grounded Word Embeddings Using Abstract Scenes"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3993094978cf4ee3835bd98088a1707c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Dialog act identification plays an important role in understanding conversations. It has been widely applied in many fields such as dialogue systems, automatic machine translation and automatic speech recognition, and is especially useful in systems with human-computer natural language dialogue interfaces such as virtual assistants and chatbots. The first step in identifying dialog acts is identifying the boundaries of the dialog acts in utterances. In this paper, we focus on segmenting the utterance according to the dialog act boundaries, i.e. functional segment identification, for Vietnamese utterances. We carefully investigate functional segment identification with two approaches: (1) a machine learning approach using maximum entropy (ME) and conditional random fields (CRFs); (2) a deep learning approach using bidirectional Long Short-Term Memory (LSTM) with a CRF layer (Bi-LSTM-CRF) on two different conversational datasets: (1) Facebook messages (Message data); (2) transcriptions from phone conversations (Phone data). To the best of our knowledge, this is the first work that applies a deep learning based approach to dialog act segmentation. As the results show, the deep learning approach performs appreciably better compared with traditional machine learning approaches. Moreover, it is also the first study that tackles dialog act and functional segment identification for Vietnamese. 
Keywords\u2014Dialog act segmentation, functional segment, Vietnamese conversation.", "targets": "Dialogue Act Segmentation for Vietnamese Human-Human Conversational Texts"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0ff2756a31544d4d9c42bd138bac6bf7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "People exhibit a tendency to generalize a novel noun to the basic-level in a hierarchical taxonomy \u2013 a cognitively salient category such as \u201cdog\u201d \u2013 with the degree of generalization depending on the number and type of exemplars. Recently, a change in the presentation timing of exemplars has also been shown to have an effect, surprisingly reversing the prior observed pattern of basic-level generalization. We explore the precise mechanisms that could lead to such behavior by extending a computational model of word learning and word generalization to integrate cognitive processes of memory and attention. Our results show that the interaction of forgetting and attention to novelty, as well as sensitivity to both type and token frequencies of exemplars, enables the model to replicate the empirical results from different presentation timings. Our results reinforce the need to incorporate general cognitive processes within word learning models to better understand the range of observed behaviors in vocabulary acquisition.", "targets": "The Interaction of Memory and Attention in Novel Word Generalization: A Computational Investigation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-023462ae3cde42ec8c222dbac0afd921", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Current Deep Learning approaches have been very successful using convolutional neural networks (CNN) trained on large graphical processing unit (GPU)-based computers. Three limitations of this approach are: 1) they are based on a simple layered network topology, i.e., highly connected layers, without intra-layer connections; 2) the networks are manually configured to achieve optimal results, and 3) the implementation of the neuron model is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. We use the MNIST dataset for our experiment, due to input size limitations of current quantum computers. Our results show the feasibility of using the three architectures in tandem to address the above deep learning limitations. We show a quantum computer can find high quality values of intra-layer connection weights, in a tractable time as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. 
", "targets": "A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e815c5200f794815868bd3c4b87cd21d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Machine transliteration is the process of automatically transforming the script of a word from a source language to a target language, while preserving pronunciation. Sequence to sequence learning has recently emerged as a new paradigm in supervised learning. In this paper a character-based encoder-decoder model has been proposed that consists of two Recurrent Neural Networks. The encoder is a Bidirectional recurrent neural network that encodes a sequence of symbols into a fixed-length vector representation, and the decoder generates the target sequence using an attention-based recurrent neural network. The encoder, the decoder and the attention mechanism are jointly trained to maximize the conditional probability of a target sequence given a source sequence. Our experiments on different datasets show that the proposed encoder-decoder model is able to achieve significantly higher transliteration quality over traditional statistical models.", "targets": "Neural Machine Transliteration: Preliminary Results"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d406a837568541b4a84f77b8e4d74e26", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Kernel approximation using randomized feature maps has recently gained a lot of interest. In this work, we identify that previous approaches for polynomial kernel approximation create maps that are rank deficient, and therefore do not utilize the capacity of the projected feature space effectively. To address this challenge, we propose compact random feature maps (CRAFTMaps) to approximate polynomial kernels more concisely and accurately. We prove the error bounds of CRAFTMaps demonstrating their superior kernel reconstruction performance compared to the previous approximation schemes. We show how structured random matrices can be used to efficiently generate CRAFTMaps, and present a single-pass algorithm using CRAFTMaps to learn non-linear multi-class classifiers. We present experiments on multiple standard data-sets with performance competitive with state-of-the-art results.", "targets": "Compact Random Feature Maps"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4a5926c463054e7fbb32e069b763675a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "The multi-agent path-finding (MAPF) problem has recently received a lot of attention. However, it does not capture important characteristics of many real-world domains, such as automated warehouses, where agents are constantly engaged with new tasks. In this paper, we therefore study a lifelong version of the MAPF problem, called the multiagent pickup and delivery (MAPD) problem. In the MAPD problem, agents have to attend to a stream of delivery tasks in an online setting. One agent has to be assigned to each delivery task. This agent has to first move to a given pickup location and then to a given delivery location while avoiding collisions with other agents. We present two decoupled MAPD algorithms, Token Passing (TP) and Token Passing with Task Swaps (TPTS). Theoretically, we show that they solve all well-formed MAPD instances, a realistic subclass of MAPD instances. Experimentally, we compare them against a centralized strawman MAPD algorithm without this guarantee in a simulated warehouse system. TP can easily be extended to a fully distributed MAPD algorithm and is the best choice when real-time computation is of primary concern since it remains efficient for MAPD instances with hundreds of agents and tasks. TPTS requires limited communication among agents and balances well between TP and the centralized MAPD algorithm.", "targets": "Lifelong Multi-Agent Path Finding for Online Pickup and Delivery Tasks\u2217"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5bac14cac5694e828b500690e188e6d8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recommendation systems are emerging as an important business application with significant economic impact. Currently popular systems include Amazon\u2019s book recommendations, Netflix\u2019s movie recommendations, and Pandora\u2019s music recommendations. In this paper we address the problem of estimating probabilities associated with recommendation system data using non-parametric kernel smoothing. In our estimation we interpret missing items as randomly censored observations and obtain efficient computation schemes using combinatorial properties of generating functions. We demonstrate our approach with several case studies involving real world movie recommendation data. The results are comparable with state-of-the-art techniques while also providing probabilistic preference estimates outside the scope of traditional recommender systems.", "targets": "Estimating Probabilities in Recommendation Systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-afb4f84c16eb47498d929be76d65c634", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Studies of the overall structure of vocabulary and its dynamics became possible due to the creation of diachronic text corpora, especially Google Books Ngram. This article discusses the question of the core change rate and the degree to which the core words cover the texts. Different periods of the last three centuries and six main European languages presented in Google Books Ngram are compared. 
The main result is the high stability of the core change rate, which is analogous to the stability of the Swadesh list.", "targets": "Dynamics of core of language vocabulary"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4d3af9f93461461bb9cbcabd9077f270", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Stochastic gradient-boosted decision trees are widely employed for multivariate classification and regression tasks. This paper presents a speed-optimized and cache-friendly implementation for multivariate classification called FastBDT. FastBDT is one order of magnitude faster during the fitting phase and application phase, in comparison with popular implementations in software frameworks like TMVA, scikit-learn and XGBoost. The concepts used to optimize the execution time and performance studies are discussed in detail in this paper. The key ideas include: an equal-frequency binning on the input data, which allows replacing expensive floating-point with integer operations, while at the same time increasing the quality of the classification; a cache-friendly linear access pattern to the input data, in contrast to usual implementations, which exhibit a random access pattern. FastBDT provides interfaces to C/C++, Python and TMVA. It is extensively used in the field of high energy physics by the Belle II experiment.", "targets": "A speed-optimized and cache-friendly implementation of stochastic gradient-boosted decision trees for multivariate classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d1156ec2f3a344fba8fdf79b3329c7a3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present LTLS, a technique for multiclass and multilabel prediction that can perform training and inference in logarithmic time and space. LTLS embeds large classification problems into simple structured prediction problems and relies on efficient dynamic programming algorithms for inference. We train LTLS with stochastic gradient descent on a number of multiclass and multilabel datasets and show that despite its small memory footprint it is often competitive with existing approaches.", "targets": "Log-time and Log-space Extreme Classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-30acd15fa5dd4033888d6d25a2aa157a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we investigate the problem of localizing a mobile device based on readings from its embedded sensors utilizing machine learning methodologies. We consider a real-world environment, collect a large dataset of 3110 datapoints, and examine the performance of a substantial number of machine learning algorithms in localizing a mobile device. We have found algorithms that give a mean error as accurate as 0.76 meters, outperforming other indoor localization systems reported in the literature. We also propose a hybrid instance-based approach that results in a speed increase by a factor of ten with no loss of accuracy in a live deployment over standard instance-based methods, allowing for fast and accurate localization. 
Further, we determine how smaller datasets collected with less density affect the accuracy of localization, which is important for use in real-world environments. Finally, we demonstrate that these approaches are appropriate for real-world deployment by evaluating their performance in an online, in-motion experiment.", "targets": "Machine Learning for Indoor Localization Using Mobile Phone-Based Sensors"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a0a2e369b5c54cc2a1ed3d99086be2e2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variational methods. We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained. We use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations. We demonstrate that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provide a clear improvement in performance and applicability of variational inference.", "targets": "Variational Inference with Normalizing Flows"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9231cd6c6b6746cfa07e22bdf05e97b6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Sentences are important semantic units of natural language. A generic, distributional representation of sentences that can capture the latent semantics is beneficial to multiple downstream applications. We observe a simple geometry of sentences \u2013 the word representations of a given sentence (on average 10.23 words in all SemEval datasets with a standard deviation 4.84) roughly lie in a low-rank subspace (roughly, rank 4). Motivated by this observation, we represent a sentence by the low-rank subspace spanned by its word vectors. Such an unsupervised representation is empirically validated via semantic textual similarity tasks on 19 different datasets, where it outperforms the sophisticated neural network models, including skip-thought vectors, by 15% on average.", "targets": "Representing Sentences as Low-Rank Subspaces"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3366975bd6d14e24aa2d4110d7dc65b4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "Private Set Intersection (PSI) is usually implemented as a sequence of encryption rounds between pairs of users, whereas the present work implements PSI in a simpler fashion: each set only needs to be encrypted once, after which each pair of users need only one ordinary set comparison. This is typically orders of magnitude faster than ordinary PSI at the cost of some \u201cfuzziness\u201d in the matching, which may nonetheless be tolerable or even desirable. This is demonstrated in the case where the sets consist of English words processed with WordNet.", "targets": "Fast and Fuzzy Private Set Intersection"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9dae648edef84406b523457e347b3003", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Submodular functions describe a variety of discrete problems in machine learning, signal processing, and computer vision. However, minimizing submodular functions poses a number of algorithmic challenges. Recent work introduced an easy-to-use, parallelizable algorithm for minimizing submodular functions that decompose as the sum of \u201csimple\u201d submodular functions. Empirically, this algorithm performs extremely well, but no theoretical analysis was given. In this paper, we show that the algorithm converges linearly, and we provide upper and lower bounds on the rate of convergence. Our proof relies on the geometry of submodular polyhedra and draws on results from spectral graph theory.", "targets": "On the Convergence Rate of Decomposable Submodular Function Minimization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-14e651433693406c9e71bdf2c475ebdc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The holy Quran is the holy book of the Muslims. It contains information about many domains. Often people search for particular concepts of the holy Quran based on the relations among concepts. An ontological modeling of the holy Quran can be useful in such a scenario. In this paper, we have modeled nature related concepts of the holy Quran using OWL (Web Ontology Language) / RDF (Resource Description Framework). Our methodology involves identifying nature related concepts mentioned in the holy Quran and identifying relations among those concepts. These concepts and relations are represented as classes/instances and properties of an OWL ontology. Later, in the result section it is shown that, using the Ontological model, SPARQL queries can retrieve verses and concepts of interest. Thus, this modeling helps semantic search and query on the holy Quran. In this work, we have used the English translation of the holy Quran by Sahih International, the Protege OWL Editor and for querying we have used SPARQL. Keywords\u2014 Quranic Ontology; Semantic Quran; Quranic Knowledge Representation.", "targets": "Applying Ontological Modeling on Quranic \u201cNature\u201d Domain"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2910af8479ee407d9fd798ef1e38e893", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "The game of Go is more challenging than other board games, due to the difficulty of constructing a position or move evaluation function. In this paper we investigate whether deep convolutional networks can be used to directly represent and learn this knowledge. We train a large 12-layer convolutional neural network by supervised learning from a database of human professional games. The network correctly predicts the expert move in 55% of positions, equalling the accuracy of a 6 dan human player. When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional-search program GnuGo in 97% of games, and matched the performance of a state-of-the-art Monte-Carlo tree search that simulates two million positions per move.", "targets": "MOVE EVALUATION IN GO USING DEEP CONVOLUTIONAL NEURAL NETWORKS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9c0fe4e5b4f941bea54c033ace32cff4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Cluster analysis plays an important role in the decision making process for many knowledge-based systems. There exist a wide variety of different approaches for clustering applications, including heuristic techniques, probabilistic models, and traditional hierarchical algorithms. In this paper, a novel heuristic approach based on the big bang-big crunch algorithm is proposed for clustering problems. The proposed method not only takes advantage of its heuristic nature to alleviate the shortcomings of typical clustering algorithms such as k-means, but it also benefits from a memory-based scheme, in contrast to similar heuristic techniques. Furthermore, the performance of the proposed algorithm is investigated based on several benchmark test functions as well as on well-known datasets. The experimental results show the significant superiority of the proposed method over similar algorithms.", "targets": "Memory Enriched Big Bang Big Crunch Optimization Algorithm for Data Clustering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d395df9f4088459a916725054d5ec683", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Performing sensitivity analysis for influence diagrams using the decision circuit framework is particularly convenient, since the partial derivatives with respect to every parameter are readily available [Bhattacharjya and Shachter, 2007; 2008]. In this paper we present three non-linear sensitivity analysis methods that utilize this partial derivative information and therefore do not require re-evaluating the decision situation multiple times. Specifically, we show how to efficiently compare strategies in decision situations, perform sensitivity to risk aversion and compute the value of perfect hedging [Seyller, 2008].", "targets": "Three new sensitivity analysis methods for influence diagrams"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7af12ec9a971438bb9571ca0c696658b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "We present two related methods for creating MasterPrints, synthetic fingerprints that a fingerprint verification system matches to many different people. Both methods start with training a Generative Adversarial Network (GAN) on a set of real fingerprint images. The generator network is then used to search for images that can be recognized as multiple individuals. The first method uses evolutionary optimization in the space of latent variables, and the second uses gradient-based search. Our method is able to design a MasterPrint that a commercial fingerprint system matches to 22% of all users in a strict security setting, and 75% of all users at a looser security setting.", "targets": "DeepMasterPrint: Generating Fingerprints for Presentation Attacks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e583e1adceb448e1a3eb796404eab86c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper develops the idea of membership function assignment for OWL (Web Ontology Language) ontology elements in order to subsequently generate fuzzy rules from this ontology. The task of membership function assignment for OWL ontology elements had already been partially described, but this concerned the case when several OWL ontologies of the same domain were available, and they were merged into a single ontology. The purpose of this paper is to present the way of membership function assignment for OWL ontology elements in the case when only one ontology is available. Fuzzy rules, generated from the OWL ontology, are necessary to supplement the SWES (Semantic Web Expert System) knowledge base. SWES is an expert system, which will be able to extract knowledge from OWL ontologies found on the Web, and will serve as a universal expert for the user.", "targets": "Membership Function Assignment for Elements of Single OWL Ontology"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f0e374c0198040188889952c2873b007", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes the journal title. However, use of a template does not certify that the paper has been accepted for publication in the named journal. INFORMS journal templates are for the exclusive purpose of submitting to an INFORMS journal and should not be used to distribute the papers in print or online or to submit the papers to another publication.", "targets": "A Dynamic Near-Optimal Algorithm for Online Linear Programming"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7cdbb4ace1af47a5b3da8b10d83e9007", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "For agents and robots to become more useful, they must be able to quickly learn from non-technical users. This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. 
Much previous work on this problem has made the assumption that people provide feedback for decisions that is dependent on the behavior they are teaching and is independent of the learner\u2019s current policy. We present empirical results that show this assumption to be false\u2014whether human trainers give positive or negative feedback for a decision is influenced by the learner\u2019s current policy. We argue that policy-dependent feedback, in addition to being commonplace, enables useful training strategies from which agents should benefit. Based on this insight, we introduce Convergent Actor-Critic by Humans (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot, even with noisy image features.", "targets": "Interactive Learning from Policy-Dependent Human Feedback"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8673b5d2dd164d928fd766856574cd80", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper considers a general data-fitting problem over a networked system, in which many computing nodes are connected by an undirected graph. This kind of problem has many real-world applications and has been studied extensively in the literature. However, existing solutions either need a central controller for information sharing or require slot synchronization among different nodes, which increases the difficulty of practical implementations, especially for a very large and heterogeneous system. In contrast, in this paper, we treat the data-fitting problem over the network as a stochastic programming problem with many constraints. By adapting the results in a recent paper [18], we design a fully distributed and asynchronized stochastic gradient descent (SGD) algorithm. We show that our algorithm can achieve global optimality and consensus asymptotically by only local computations and communications. Additionally, we provide a sharp lower bound for the convergence speed in the regular graph case. This result fits the intuition and provides guidance to design a \u2018good\u2019 network topology to speed up the convergence. Also, the merit of our design is validated by experiments on both synthetic and real-world datasets.", "targets": "Fully Distributed and Asynchronized Stochastic Gradient Descent for Networked Systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-edfa218aaed644538cb9910e13057cd0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Data-to-text systems are powerful in generating reports from data automatically and thus they simplify the presentation of complex data. Rather than presenting data using visualisation techniques, data-to-text systems use natural (human) language, which is the most common way for human-human communication. In addition, data-to-text systems can adapt their output content to users\u2019 preferences, background or interests and therefore they can be pleasant for users to interact with. Content selection is an important part of every data-to-text system, because it is the module that determines which of the available information should be conveyed to the user. 
This survey initially introduces the field of data-to-text generation, describes the general data-to-text system architecture and then reviews the state-of-the-art content selection methods. Finally, it provides recommendations for choosing an approach and discusses opportunities for future research.", "targets": "Content Selection in Data-to-Text Systems: A Survey"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-93b2969648ab4c98b015c6dda482f7a0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Semi-supervised learning based on the low-density separation principle such as the cluster and manifold assumptions has been extensively studied in the last decades. However, such semi-supervised learning methods do not always perform well due to violation of the cluster and manifold assumptions. In this paper, we propose a novel approach to semi-supervised learning that does not require such restrictive assumptions. Our key idea is to combine learning from positive and negative data (standard supervised learning) and learning from positive and unlabeled data (PU learning); the latter is guaranteed to be able to utilize unlabeled data without the cluster and manifold assumptions. We theoretically and experimentally show the usefulness of our approach.", "targets": "Beyond the Low-density Separation Principle: A Novel Approach to Semi-supervised Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7d388dd8a7584cf797be6cfa0e1c8555", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we propose a neural network model with a novel Sequential Attention layer that extends soft attention by assigning weights to words in an input sequence in a way that takes into account not just how well that word matches a query, but how well surrounding words match. We evaluate this approach on the task of reading comprehension (Who did What and CNN) and show that it dramatically improves a strong baseline like the Stanford Reader. The resulting model is competitive with the state of the art.", "targets": "Sequential Attention"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d1b4a47780664ee085254d9446181739", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose two novel techniques \u2014 stacking bottleneck features and minimum generation error training criterion \u2014 to improve the performance of deep neural network (DNN)-based speech synthesis. The techniques address the related issues of frame-by-frame independence and ignorance of the relationship between static and dynamic features, within current typical DNN-based synthesis frameworks. Stacking bottleneck features, which are an acoustically-informed linguistic representation, provides an efficient way to include more detailed linguistic context at the input. The minimum generation error training criterion minimises overall output trajectory error across an utterance, rather than minimising the error per frame independently, and thus takes into account the interaction between static and dynamic features. The two techniques can be easily combined to further improve performance. 
We present both objective and subjective results that demonstrate the effectiveness of the proposed techniques. The subjective results show that combining the two techniques leads to significantly more natural synthetic speech than from conventional DNN or long short-term memory (LSTM) recurrent neural network (RNN) systems.", "targets": "Improving Trajectory Modelling for DNN-based Speech Synthesis by using Stacked Bottleneck Features and Minimum Generation Error Training"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f421cc922e2a41d697ea063e9f52e678", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "When training deep neural networks, it is typically assumed that the training examples are uniformly difficult to learn. Or, to restate, it is assumed that the training error will be uniformly distributed across the training examples. Based on these assumptions, each training example is used an equal number of times. However, this assumption may not be valid in many cases. \u201cOddball SGD\u201d (novelty-driven stochastic gradient descent) was recently introduced to drive training probabilistically according to the error distribution \u2013 training frequency is proportional to training error magnitude. In this article, using a deep neural network to encode a video, we show that oddball SGD can be used to enforce uniform error across the training set.", "targets": "Uniform Learning in a Deep Neural Network via \"Oddball\" Stochastic Gradient Descent"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5a3e1a7bce6247dfb2549839e806ddc0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this project, a rather complete proof-theoretical formalization of Lambek Calculus (non-associative with arbitrary extensions) has been ported from the Coq proof assistant to the HOL4 theorem prover, with some improvements and new theorems. Three deduction systems (Syntactic Calculus, Natural Deduction and Sequent Calculus) of Lambek Calculus are defined with many related theorems proved. The equivalence between these systems is formally proved. Finally, a formalization of Sequent Calculus proofs (where Coq has built-in support) has been designed and implemented in HOL4. Some basic results including the subformula properties of the so-called \u201ccut-free\u201d proofs are formally proved. This work can be considered as preliminary work towards a language parser based on category grammars which is not multimodal but still has the ability to support context-sensitive languages through customized extensions.", "targets": "Formalized Lambek Calculus in Higher Order Logic (HOL4)"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8d04456071de4b35b18c23eb9abd035b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper addresses the problem of predicting the k events that are most likely to occur next, over historical real-time event streams. Existing approaches to causal prediction queries have a number of limitations. 
First, they exhaustively search over an acyclic causal network to find the most likely k effect events; however, data from real event streams frequently reflect cyclic causality. Second, they contain conservative assumptions intended to exclude all possible non-causal links in the causal network; this leads to the omission of many less-frequent but important causal links. We overcome these limitations by proposing a novel event precedence model and a runtime causal inference mechanism. The event precedence model constructs a first order absorbing Markov chain incrementally over event streams, where an edge between two events signifies a temporal precedence relationship between them, which is a necessary condition for causality. Then, the run-time causal inference mechanism learns causal relationships dynamically during query processing. This is done by removing some of the temporal precedence relationships that do not exhibit causality in the presence of other events in the event precedence model. This paper presents two query processing algorithms \u2013 one performs exhaustive search on the model and the other performs a more efficient reduced search with early termination. Experiments using two real datasets (cascading blackouts in power systems and web page views) verify the effectiveness of the probabilistic top-k prediction queries and the efficiency of the algorithms. Specifically, the reduced search algorithm reduced runtime, relative to exhaustive search, by 25\u221280% (depending on the application) with only a small reduction in accuracy.", "targets": "Real-time Top-K Predictive Query Processing over Event Streams"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e3e4a3fa540f416bb31fdaec187fbc69", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We develop a probabilistic latent-variable model to discover semantic frames\u2014types of events and their participants\u2014from corpora. We present a Dirichlet-multinomial model in which frames are latent categories that explain the linking of verb-subject-object triples, given document-level sparsity. We analyze what the model learns, and compare it to FrameNet, noting it learns some novel and interesting frames. This document also contains a discussion of inference issues, including concentration parameter learning; and a small-scale error analysis of syntactic parsing accuracy. Note: this work was originally posted online October 2012 as part of CMU MLD\u2019s Data Analysis Project requirement. This version has no new experiments or results, but has added some discussion of new related work.", "targets": "Learning Frames from Text with an Unsupervised Latent Variable Model"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1326bd7530624e1896ff3247cafb98e0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Drugs are frequently prescribed to patients with the aim of improving each patient\u2019s medical state, but an unfortunate consequence of most prescription drugs is the occurrence of undesirable side effects. 
Side effects that occur in more than one in a thousand patients are likely to be signalled efficiently by current drug surveillance methods; however, these same methods may take decades before generating signals for rarer side effects, risking medical morbidity or mortality in patients prescribed the drug while the rare side effect is undiscovered. In this paper we propose a novel computational meta-analysis framework for signalling rare side effects that integrates existing methods, knowledge from the web, metric learning and semi-supervised clustering. The novel framework was able to signal many known rare and serious side effects for the selection of drugs investigated, such as tendon rupture when prescribed Ciprofloxacin or Levofloxacin, renal failure with Naproxen and depression associated with Rimonabant. Furthermore, for the majority of the drugs investigated it generated signals for rare side effects at a more stringent signalling threshold than existing methods and shows the potential to become a fundamental part of post-marketing surveillance to detect rare side effects.", "targets": "A Novel Semi-Supervised Algorithm for Rare Prescription Side Effect Discovery"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d081da5ec9bc465cbda3d7dfa6678f20", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We exhibit a strong link between frequentist PAC-Bayesian bounds and the Bayesian marginal likelihood. That is, for the negative log-likelihood loss function, we show that the minimization of PAC-Bayesian generalization bounds maximizes the Bayesian marginal likelihood. This provides an alternative explanation for the Bayesian Occam\u2019s razor criteria, under the assumption that the data is generated by an i.i.d. distribution. Moreover, as the negative log-likelihood is an unbounded loss function, we motivate and propose a PAC-Bayesian theorem tailored for the sub-Gamma loss family, and we show that our approach is sound on classical Bayesian linear regression tasks.", "targets": "PAC-Bayesian Theory Meets Bayesian Inference"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9aa928d915d6497e803a9c23204eb647", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In educational technology and learning sciences, there are multiple uses for a predictive model of whether a student will perform a task correctly or not. For example, an intelligent tutoring system may use such a model to estimate whether or not a student has mastered a skill. We analyze the significance of data recency in making such predictions, i.e., asking whether relatively more recent observations of a student\u2019s performance matter more than relatively older observations. We develop a new Recent-Performance Factors Analysis model that takes data recency into account. The new model significantly improves predictive accuracy over both existing logistic-regression performance models and over novel baseline models in evaluations on real-world and synthetic datasets. 
As a secondary contribution, we demonstrate how the widely used cross-validation with 0-1 loss is inferior to AIC and to cross-validation with L1 prediction error loss as a measure of model performance.", "targets": "Predicting Performance During Tutoring with Models of Recent Performance"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b9161eee080a4c909e58179186e485de", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a supervised machine learning approach for boosting existing signal and image recovery methods and demonstrate its efficacy on the example of image reconstruction in computed tomography. Our technique is based on a local nonlinear fusion of several image estimates, all obtained by applying a chosen reconstruction algorithm with different values of its control parameters. Usually such output images have different bias/variance trade-offs. The fusion of the images is performed by a feed-forward neural network trained on a set of known examples. Numerical experiments show an improvement in reconstruction quality relative to existing direct and iterative reconstruction methods.", "targets": "Spatially-Adaptive Reconstruction in Computed Tomography using Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1418a5f1ac3e4b6faaa5d76d49f69908", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce a new interpretation of two related notions, conditional utility and utility independence. Unlike the traditional interpretation, the new interpretation renders the notions the direct analogues of their probabilistic counterparts. To capture these notions formally, we appeal to the notion of utility distribution, introduced in a previous paper. We show that utility distributions, which have a structure that is identical to that of probability distributions, can be viewed as a special case of additive multiattribute utility functions, and show how this special case permits us to capture the novel senses of conditional utility and utility independence. Finally, we present the notion of utility networks, which do for utilities what Bayesian networks do for probabilities. Specifically, utility networks exploit the new interpretation of conditional utility and utility independence to compactly represent a utility distribution.", "targets": "Conditional Utility, Utility Independence, and Utility Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-45a0e7a9142c44ef9f5b3e88a9198654", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present the first differentially private algorithms for reinforcement learning, which apply to the task of evaluating a fixed policy.
We establish two approaches for achieving differential privacy, provide a theoretical analysis of the privacy and utility of the two algorithms, and show promising results on simple empirical examples.", "targets": "Differentially Private Policy Evaluation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-edde7526ab724e66b38f2b3fe36a5f90", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The high probability of hardware failures prevents many advanced robots (e.g. legged robots) from being confidently deployed in real-world situations (e.g. post-disaster rescue). Instead of attempting to diagnose the failure(s), robots could adapt by trial-and-error in order to be able to complete their tasks. However, the best trial-and-error algorithms for robotics are all episodic: between each trial, the robot needs to be put back in the same state; that is, the robot is not learning autonomously. In this paper, we introduce a novel learning algorithm called \u201cReset-free Trial-and-Error\u201d (RTE) that allows robots to recover from damage while completing their tasks. We evaluate it on a hexapod robot that is damaged in several ways (e.g. a missing leg, a shortened leg, etc.) and whose objective is to reach a sequence of targets in an arena. Our experiments show that the robot can recover most of its locomotion abilities in a few minutes, in an environment with obstacles, and without any human intervention. Overall, this new algorithm makes it possible to contemplate sending robots to places that are truly too dangerous for humans and in which robots cannot be rescued.", "targets": "Reset-free Trial-and-Error Learning for Data-Efficient Robot Damage Recovery"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-101a9bca2e6d4634bbf81d7540e59822", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Coalition formation is a key topic in multi-agent systems. Coalitions enable agents to achieve goals that they may not have been able to achieve on their own. Previous work has shown problems in coalition games to be computationally hard. Wooldridge and Dunne (Artificial Intelligence 2006) studied the classical computational complexity of several natural decision problems in Coalitional Resource Games (CRG), games in which each agent is endowed with a set of resources and coalitions can bring about a set of goals if they are collectively endowed with the necessary amount of resources. The input of coalitional resource games bundles together several elements, e.g., the agent set Ag, the goal set G, the resource set R, etc. Shrot, Aumann and Kraus (AAMAS 2009) examine coalition formation problems in the CRG model using the theory of Parameterized Complexity. Their refined analysis shows that not all parts of the input act equally: some instances of the problem are indeed tractable while others still remain intractable. We answer an important question left open by Shrot, Aumann and Kraus by showing that the SC Problem (checking whether a Coalition is Successful) is W[1]-hard when parameterized by the size of the coalition. Then via a single theme of reduction from SC, we are able to show that various problems related to resources, resource bounds and resource conflicts introduced by Wooldridge et al. are: 1.
W[1]-hard or co-W[1]-hard when parameterized by the size of the coalition. 2. para-NP-hard or co-para-NP-hard when parameterized by |R|. 3. FPT when parameterized by either |G| or |Ag| + |R|.", "targets": "Parameterized Complexity of Problems in Coalitional Resource Games"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-72e8147d9d6e403280db82b644b38e6c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A Verbal Autopsy is the record of an interview about the circumstances of an uncertified death. In developing countries, if a death occurs away from health facilities, a field-worker interviews a relative of the deceased about the circumstances of the death; this Verbal Autopsy can be reviewed offsite. We report on a comparative study of the processes involved in Text Classification applied to classifying Cause of Death: feature value representation; machine learning classification algorithms; and feature reduction strategies in order to identify the suitable approaches applicable to the classification of Verbal Autopsy text. We demonstrate that normalised term frequency and the standard TFiDF achieve comparable performance across a number of classifiers. The results also show Support Vector Machine is superior to other classification algorithms employed in this research. Finally, we demonstrate the effectiveness of employing a \u2018locally-semisupervised\u2019 feature reduction strategy in order to increase performance accuracy.", "targets": "A Comparative Study of Machine Learning Methods for Verbal Autopsy Text Classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d3fcffe8d11841d796a833980d9f28ca", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider principal component analysis for contaminated data-sets in the high dimensional regime, where the dimensionality of each observation is comparable to or even greater than the number of observations. We propose a deterministic high-dimensional robust PCA algorithm which inherits all theoretical properties of its randomized counterpart, i.e., it is tractable, robust to contaminated points, easily kernelizable, asymptotically consistent and achieves maximal robustness \u2013 a breakdown point of 50%. More importantly, the proposed method exhibits significantly better computational efficiency, which makes it suitable for large-scale real applications.", "targets": "Robust PCA in High-dimension: A Deterministic Approach"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a9d786fb570543c39c4842eb24e68f2e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Background: Lung cancer is known as a primary cancer, and its survival rate is about 15%.
Early detection of lung cancer is the leading factor in the survival rate. The symptoms (features) of lung cancer do not appear until the cancer spreads to other areas, so accurate early detection of lung cancer is needed to increase the survival rate. Accurate detection requires characterizing efficient features and deleting redundant features. Feature selection is the problem of selecting informative features among all features. Materials and Methods: The lung cancer database consists of 32 patient records with 57 features. This database was collected by Hong and Young and is indexed in the University of California Irvine repository. The experimental contents include features extracted from clinical data, X-ray data, etc. The data describe 3 types of pathological lung cancers, and all features take an integer value 0-3. In our study, a new method based on Hyper-Heuristics is proposed for identifying efficient features of lung cancer. Results: We obtained an accuracy of 80.63% using a reduced set of 11 features. The proposed method is compared to the accuracy of 5 machine learning feature selection methods; the accuracies of these 5 methods are 60.94, 57.81, 68.75, 60.94 and 68.75. Conclusions: The proposed method has better performance, with the highest level of accuracy. Therefore, the proposed model is recommended for identifying efficient symptoms of the disease. These findings are very important in health research, particularly in the allocation of medical resources for patients predicted to be at high risk", "targets": "Hyper-Heuristic Algorithm for Finding Efficient Features in Diagnose of Lung Cancer Disease"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d95058c3b71e4e5cb317c0184419c0af", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider the question of the stability of evolutionary algorithms to gradual changes, or drift, in the target concept. We define an algorithm to be resistant to drift if, for some inverse polynomial drift rate in the target function, it converges to accuracy 1 \u2212 \u01eb with polynomial resources, and then stays within that accuracy indefinitely, except with probability \u01eb at any one time. We show that every evolution algorithm, in the sense of Valiant [19], can be converted, using the Correlational Query technique of Feldman [9], into such a drift-resistant algorithm. For certain evolutionary algorithms, such as for Boolean conjunctions, we give bounds on the rates of drift that they can resist. We develop some new evolution algorithms that are resistant to significant drift. In particular, we give an algorithm for evolving linear separators over the spherically symmetric distribution that is resistant to a drift rate of O(\u01eb/n), and another algorithm over the more general product normal distributions that resists a smaller drift rate. The above translation result can also be interpreted as one on the robustness of the notion of evolvability itself under changes of definition. As a second result in that direction we show that every evolution algorithm can be converted to a quasi-monotonic one that can evolve from any starting point without the performance ever dipping significantly below that of the starting point.
This permits the somewhat unnatural feature of arbitrary performance degradations to be removed from several known robustness translations.", "targets": "Evolution with Drifting Targets"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-cba2b82444e047a19d3b0b1d36f7be94", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The goal of two-sample tests is to assess whether two samples, SP \u223c P and SQ \u223c Q, are drawn from the same distribution. Perhaps intriguingly, one relatively unexplored method to build two-sample tests is the use of binary classifiers. In particular, construct a dataset by pairing the n examples in SP with a positive label, and by pairing the m examples in SQ with a negative label. If the null hypothesis \u201cP = Q\u201d is true, then the classification accuracy of a binary classifier on a held-out subset of this dataset should remain near chance-level. As we will show, such Classifier Two-Sample Tests (C2ST) learn a suitable representation of the data on the fly, return test statistics in interpretable units, have a simple null distribution, and their predictive uncertainty allows us to interpret where P and Q differ. The goal of this paper is to establish the properties, performance, and uses of C2ST. First, we analyze their main theoretical properties. Second, we compare their performance against a variety of state-of-the-art alternatives. Third, we propose their use to evaluate the sample quality of generative models with intractable likelihoods, such as Generative Adversarial Networks (GANs). Fourth, we showcase the novel application of GANs together with C2ST for causal discovery.", "targets": "REVISITING CLASSIFIER TWO-SAMPLE TESTS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-80a1b85f028c4f63bf89dadadb6e52a8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Data can be acquired, shared, and processed by an increasingly large number of entities, in particular people. The distributed nature of this phenomenon has contributed to the development of many crowdsourcing projects. This scenario is prevalent in most forms of expert/non-expert group opinion and rating tasks (including many forms of internet or on-line user behavior), where a key element is the aggregation of observations and opinions from multiple sources.", "targets": "Evaluating Crowdsourcing Participants in the Absence of Ground-Truth"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-758abae4f57d47b7b0be88de579209a9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, a new technique for the optimization of (partially) bound queries over disjunctive Datalog programs with stratified negation is presented. The technique exploits the propagation of query bindings and extends the Magic Set optimization technique (originally defined for non-disjunctive programs). An important feature of disjunctive Datalog programs is nonmonotonicity, which calls for nondeterministic implementations, such as backtracking search. A distinguishing characteristic of the new method is that the optimization can also be exploited during the nondeterministic phase.
In particular, after some assumptions have been made during the computation, parts of the program may become irrelevant to a query under these assumptions. This allows for dynamic pruning of the search space. In contrast, the effect of the previously defined Magic Set methods for disjunctive Datalog is limited to the deterministic portion of the process. In this way, the potential performance gain by using the proposed method can be exponential, as has been observed empirically. The correctness of the method is established and proved in a formal way thanks to a strong relationship between Magic Sets and unfounded sets that has not been studied in the literature before. This knowledge allows for extending the method and the correctness proof also to programs with stratified negation in a natural way. The proposed method has been implemented in the DLV system and various experiments on synthetic as well as on real-world data have been conducted. The experimental results on synthetic data confirm the utility of Magic Sets for disjunctive Datalog, and they highlight the computational gain that may be obtained by the new method with respect to the previously proposed Magic Set method for disjunctive Datalog programs. Further experiments on data taken from a real-life application show the benefits of the Magic Set method within an application scenario that has received considerable attention in recent years, the problem of answering user queries over possibly inconsistent databases originating from integration of autonomous sources of information.", "targets": "Magic Sets for Disjunctive Datalog Programs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9b5569d70cb94134924e90d24dbc5723", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Zero-sum stochastic games are easy to solve as they can be cast as simple Markov decision processes. This is however not the case with general-sum stochastic games. A fairly general optimization problem formulation is available for general-sum stochastic games by Filar and Vrieze [2004]. However, the optimization problem there has a non-linear objective and non-linear constraints with special structure. Since the gradients of both the objective and the constraints of this optimization problem are well defined, gradient based schemes seem to be a natural choice. We discuss a gradient scheme tuned for two-player stochastic games. We show in simulations that this scheme indeed converges to a Nash equilibrium, for a simple terrain exploration problem modelled as a general-sum stochastic game. However, it turns out that only global minima of the optimization problem correspond to Nash equilibria of the underlying general-sum stochastic game, while gradient schemes only guarantee convergence to local minima. We then provide important necessary conditions for gradient schemes to converge to Nash equilibria in general-sum stochastic games.", "targets": "A Study of Gradient Descent Schemes for General-Sum Stochastic Games"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9617f9fd248f46ad8096a858985c0744", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Training generative adversarial networks is unstable in high dimensions when the true data distribution lies on a lower-dimensional manifold. The discriminator is then easily able to separate nearly all generated samples, leaving the generator without meaningful gradients. We propose training a single generator simultaneously against an array of discriminators, each of which looks at a different random low-dimensional projection of the data. We show that individual discriminators then provide stable gradients to the generator, and that the generator learns to produce samples consistent with the full data distribution to satisfy all discriminators. We demonstrate the practical utility of this approach experimentally, and show that it is able to produce image samples with higher quality than traditional training with a single discriminator.", "targets": "Stabilizing GAN Training with Multiple Random Projections"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c194f4f836a54546b6c9e3acbc78ed98", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Transcription of broadcast news is an interesting and challenging application for large-vocabulary continuous speech recognition (LVCSR). We present in detail the structure of a manually segmented and annotated corpus including over 160 hours of German broadcast news, and propose it as an evaluation framework of LVCSR systems. We show our own experimental results on the corpus, achieved with a state-of-the-art LVCSR decoder, measuring the effect of different feature sets and decoding parameters, and thereby demonstrate that real-time decoding of our test set is feasible on a desktop PC at a 9.2% word error rate.", "targets": "A Broadcast News Corpus for Evaluation and Tuning of German LVCSR Systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-67a46a437946421b824a79b236d3d043", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Spectral methods have greatly advanced the estimation of latent variable models, generating a sequence of novel and efficient algorithms with strong theoretical guarantees. However, current spectral algorithms are largely restricted to mixtures of discrete or Gaussian distributions. In this paper, we propose a kernel method for learning multi-view latent variable models, allowing each mixture component to be nonparametric. The key idea of the method is to embed the joint distribution of a multi-view latent variable into a reproducing kernel Hilbert space, and then the latent parameters are recovered using a robust tensor power method. We establish that the sample complexity for the proposed method is quadratic in the number of latent components and is a low-order polynomial in the other relevant parameters. Thus, our non-parametric tensor approach to learning latent variable models enjoys good sample and computational efficiencies.
Moreover, the non-parametric tensor power method compares favorably to the EM algorithm and other existing spectral algorithms in our experiments.", "targets": "Nonparametric Estimation of Multi-View Latent Variable Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-717b0ef4dd7444eea5c5ddee9354539b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Proposed methods for prediction interval estimation so far focus on cases where input variables are numerical. In datasets with solely nominal input variables, we observe records with the exact same input x, but different real-valued outputs due to the inherent noise in the system. Existing prediction interval estimation methods do not use representations that can accurately model such inherent noise in the case of nominal inputs. We propose a new prediction interval estimation method tailored for this type of data, which is prevalent in biology and medicine. We call this method Distribution Adaptive Prediction Interval Estimation given Nominal inputs (DAPIEN); it has four main phases. First, we select a distribution function that can best represent the inherent noise of the system for all unique inputs. Second, we infer the parameters \u03b8i (e.g. \u03b8i = [meani, variancei]) of the selected distribution function for all unique input vectors xi and generate a new corresponding training set using pairs of xi, \u03b8i. Third, we train a model to predict \u03b8 given a new xu. Finally, we calculate the prediction interval for a new sample using the inverse of the cumulative distribution function once the parameters \u03b8 are predicted by the trained model. We compared DAPIEN to the commonly used Bootstrap method on three synthetic datasets. Our results show that DAPIEN provides tighter prediction intervals while preserving the requested coverage when compared to Bootstrap. This work can facilitate broader usage of regression methods in medicine and biology where it is necessary to provide tight prediction intervals while preserving coverage when input variables are nominal.", "targets": "PREDICTION INTERVAL ESTIMATION USING NOMINAL VARIABLES"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-483ac270a5c2438498ecac4e0203a0c9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper explores two separate questions: Can we perform natural language processing tasks without a lexicon?; and, Should we? Existing natural language processing techniques are either based on words as units or use units such as grams only for basic classification tasks. How close can a machine come to reasoning about the meanings of words and phrases in a corpus without using any lexicon, based only on grams? Our own motivation for posing this question is based on our efforts to find popular trends in words and phrases from online Chinese social media. This form of written Chinese uses so many neologisms, creative character placements, and combinations of writing systems that it has been dubbed the \u201cMartian Language.\u201d Readers must often use visual cues, audible cues from reading out loud, and their knowledge and understanding of current events to understand a post.
For analysis of popular trends, the specific problem is that it is difficult to build a lexicon when the invention of new ways to refer to a word or concept is easy and common. For natural language processing in general, we argue in this paper that new uses of language in social media will challenge machines\u2019 abilities to operate with words as the basic unit of understanding, not only in Chinese but potentially in other languages.", "targets": "Language Without Words: A Pointillist Model for Natural Language Processing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5ec1caf974ae4998b1e9ac35a8c6f7d9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Imputation of missing attribute values in medical datasets for extracting hidden knowledge is an interesting and very challenging research topic. One cannot eliminate missing values in medical records: some tests may not have been conducted for cost reasons, values may have been missed when conducting clinical trials, or values may simply not have been recorded, to name some of the reasons. Data mining researchers have been proposing various approaches to find and impute missing values to increase classification accuracies so that disease may be predicted accurately. In this paper, we propose a novel imputation approach for imputation of missing values and performing classification after fixing missing values. The approach is based on the clustering concept and aims at dimensionality reduction of the records. The case study discussed shows that missing values can be fixed and imputed efficiently by achieving dimensionality reduction. The importance of the proposed approach for classification is visible in the case study, which assigns a single class label, in contrast to multi-label assignment if dimensionality reduction is not performed. Keywords\u2014 imputation; missing values; prediction; nearest neighbor; cluster; medical records; dimensionality reduction", "targets": "An Innovative Imputation and Classification Approach for Accurate Disease Prediction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-73444ce2013a40108d7709e3e222e474", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Bilattice-based triangle provides an elegant algebraic structure for reasoning with vague and uncertain information. But the truth and knowledge ordering of intervals in bilattice-based triangle cannot handle repetitive belief revisions, which is an essential characteristic of nonmonotonic reasoning. Moreover the ordering induced over the intervals by the bilattice-based triangle is sometimes not intuitive. In this work, we construct an alternative algebraic structure, namely the preorder-based triangle, and we formulate proper logical connectives for it. It is an enhancement of the bilattice-based triangle to handle belief revision in nonmonotonic reasoning.", "targets": "Preorder-Based Triangle: A Modified Version of Bilattice-Based Triangle for Belief Revision in Nonmonotonic Reasoning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-18109dbe577c411483837155e19d807a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Distributional semantic models learn vector representations of words through the contexts they occur in. Although the choice of context (which often takes the form of a sliding window) has a direct influence on the resulting embeddings, the exact role of this model component is still not fully understood. This paper presents a systematic analysis of context windows based on a set of four distinct hyperparameters. We train continuous SkipGram models on two English-language corpora for various combinations of these hyperparameters, and evaluate them on both lexical similarity and analogy tasks. Notable experimental results are the positive impact of cross-sentential contexts and the surprisingly good performance of right-context windows.", "targets": "Redefining Context Windows for Word Embedding Models: An Experimental Study"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-98a4fdab8f0a4a11b7dbe6bb7709241a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce a method to train Quantized Neural Networks (QNNs) \u2014 neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well, which enables gradients computation using only bit-wise operations. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.", "targets": "Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4595029c820f4d7c9571985c03d0e5d5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Nowadays, geographic information related to Twitter is crucially important for fine-grained applications. However, the amount of geographic information available on Twitter is low, which makes the pursuit of many applications challenging. Under such circumstances, estimating the location of a tweet is an important goal of the study.
Unlike most previous studies, which estimate a pre-defined district as a classification task, this study employs a probability distribution to represent richer information about the tweet, not only its location but also its ambiguity. To realize this modeling, we propose the convolutional mixture density network (CMDN), which uses text data to estimate the mixture model parameters. Experimentally obtained results reveal that CMDN achieved the highest prediction performance among the methods for predicting exact coordinates. It also provides a quantitative representation of the location ambiguity of each tweet that works well for extracting reliable location estimates.", "targets": "Density Estimation for Geolocation via Convolutional Mixture Density Network"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-146de8426df644888175bb65a874a918", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Kernel-based clustering algorithms have the ability to capture the non-linear structure in real world data. Among various kernel-based clustering algorithms, kernel k-means has gained popularity due to its simple iterative nature and ease of implementation. However, its run-time complexity and memory footprint increase quadratically in terms of the size of the data set, and hence, large data sets cannot be clustered efficiently. In this paper, we propose an approximation scheme based on randomization, called the Approximate Kernel k-means. We approximate the cluster centers using the kernel similarity between a few sampled points and all the points in the data set. We show that the proposed method achieves better clustering performance than the traditional low rank kernel approximation based clustering schemes. We also demonstrate that its running time and memory requirements are significantly lower than those of kernel k-means, with only a small reduction in the clustering quality on several public domain large data sets. We then employ ensemble clustering techniques to further enhance the performance of our algorithm.", "targets": "Scalable Kernel Clustering: Approximate Kernel k-means"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-70c7650fbdb845d88416902195f732fa", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "With the popularity of massive open online courses (MOOCs), grading through crowdsourcing has become a prevalent approach towards large scale classes. However, for getting grades for complex tasks, which require specific skills and efforts for grading, crowdsourcing encounters a restriction of insufficient knowledge of the workers from the crowd. Due to the knowledge limitation of the crowd graders, grading based on partial perspectives becomes a big challenge for evaluating complex tasks through crowdsourcing. This is especially true for those tasks which not only need specific knowledge for grading, but also should be graded as a whole instead of being decomposed into smaller and simpler sub-tasks. We propose a framework for grading complex tasks via multiple views, which are different grading perspectives defined by experts for the task, to provide uniformity. An aggregation algorithm based on graders\u2019 variances is used to combine the grades for each view.
We also detect bias patterns of the graders and de-bias them with respect to each view of the task. A bias pattern determines how behavior is biased among graders, and it is detected by a statistical technique. The proposed approach is analyzed on a synthetic data set. We show that our model gives more accurate results compared to grading approaches without different views and the de-biasing algorithm. Keywords\u2014complex task; crowdsourcing; view; bias pattern; debias; Vancouver algorithm", "targets": "Evaluating Complex Task through Crowdsourcing: Multiple Views Approach"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c8491bcfacfb405da344842dff991cc3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The project of the Ontology Web Search Engine is presented in this paper. The main purpose of this paper is to develop such a project that can be easily implemented. The Ontology Web Search Engine is software to look for and index ontologies in the Web. OWL (Web Ontology Language) ontologies are meant, and they are necessary for the functioning of the SWES (Semantic Web Expert System). The SWES is an expert system that will use found ontologies from the Web, generating rules from them, and will supplement its knowledge base with these generated rules. It is expected that the SWES will serve as a universal expert system for the average user.", "targets": "TOWARDS THE ONTOLOGY WEB SEARCH ENGINE"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ddb571fcfed74277b06676d2284565a1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "With the rise of big data sets, the popularity of kernel methods declined and neural networks took over again. The main problem with kernel methods is that the kernel matrix grows quadratically with the number of data points. Most attempts to scale up kernel methods solve this problem by discarding data points or basis functions of some approximation of the kernel map. Here we present a simple yet effective alternative for scaling up kernel methods that takes into account the entire data set via doubly stochastic optimization of the empirical kernel map. The algorithm is straightforward to implement, in particular in parallel execution settings; it leverages the full power and versatility of classical kernel functions without the need to explicitly formulate a kernel map approximation. We provide empirical evidence that the algorithm works on large data sets.", "targets": "Doubly stochastic large scale kernel learning with the empirical kernel map"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ee9db39feb5346f5a38dd5cc47935e3b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Since its inception, the modus operandi of multi-task learning (MTL) has been to minimize the task-wise mean of the empirical risks. We introduce a generalized loss-compositional paradigm for MTL that includes a spectrum of formulations as a subfamily. One endpoint of this spectrum is minimax MTL: a new MTL formulation that minimizes the maximum of the tasks\u2019 empirical risks.
Via a certain relaxation of minimax MTL, we obtain a continuum of MTL formulations spanning minimax MTL and classical MTL. The full paradigm itself is loss-compositional, operating on the vector of empirical risks. It incorporates minimax MTL, its relaxations, and many new MTL formulations as special cases. We show theoretically that minimax MTL tends to avoid worst case outcomes on newly drawn test tasks in the learning to learn (LTL) test setting. The results of several MTL formulations on synthetic and real problems in the MTL and LTL test settings are encouraging.", "targets": "Minimax Multi-Task Learning and a Generalized Loss-Compositional Paradigm for MTL"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-728e142eb33b4a31b29a64403205d64c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Interactive topic models are powerful tools for those seeking to understand large collections of text. However, existing sampling-based interactive topic modeling approaches scale poorly to large data sets. Anchor methods, which use a single word to uniquely identify a topic, offer the speed needed for interactive work but lack both a mechanism to inject prior knowledge and the intuitive semantics needed for user-facing applications. We propose combinations of words as anchors, going beyond existing single word anchor algorithms\u2014an approach we call \u201cTandem Anchors\u201d. We begin with a synthetic investigation of this approach, then apply the approach to interactive topic modeling in a user study and compare it to interactive and non-interactive approaches. Tandem anchors are faster and more intuitive than existing interactive approaches. Topic models distill large collections of text into topics, giving a high-level summary of the thematic structure of the data without manual annotation. In addition to facilitating discovery of topical trends (Gardner et al., 2010), topic modeling is used for a wide variety of problems including document classification (Rubin et al., 2012), information retrieval (Wei and Croft, 2006), author identification (Rosen-Zvi et al., 2004), and sentiment analysis (Titov and McDonald, 2008). However, the most compelling use of topic models is to help users understand large datasets (Chuang et al., 2012). Interactive topic modeling (Hu et al., 2014) allows non-experts to refine automatically generated topics, making topic models less of a \u201ctake it or leave it\u201d proposition. Including human input during training improves the quality of the model and allows users to guide topics in a specific way, custom tailoring the model for a specific downstream task or analysis. The downside is that interactive topic modeling is slow\u2014algorithms typically scale with the size of the corpus\u2014and requires non-intuitive information from the user in the form of must-link and cannot-link constraints (Andrzejewski et al., 2009). We address these shortcomings of interactive topic modeling by using an interactive version of the anchor words algorithm for topic models. The anchor algorithm (Arora et al., 2013) is an alternative topic modeling algorithm which scales with the number of unique word types in the data rather than the number of documents or tokens (Section 1). This makes the anchor algorithm fast enough for interactive use, even in web-scale document collections.
A drawback of the anchor method is that anchor words\u2014words that have high probability of being in a single topic\u2014are not intuitive. We extend the anchor algorithm to use multiple anchor words in tandem (Section 2). Tandem anchors not only improve interactive refinement, but also make the underlying anchor-based method more intuitive. For interactive topic modeling, tandem anchors produce higher quality topics than single word anchors (Section 3). Tandem anchors provide a framework for fast interactive topic modeling: users improve and refine an existing model through multiword anchors (Section 4). Compared to existing methods such as Interactive Topic Models (Hu et al., 2014), our method is much faster.", "targets": "Tandem Anchoring: a Multiword Anchor Approach for Interactive Topic Modeling"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c4eb7660bea9426a90d88e4cecb959b8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We investigate the direct-sum problem in the context of differentially private PAC learning: What is the sample complexity of solving k learning tasks simultaneously under differential privacy, and how does this cost compare to that of solving k learning tasks without privacy? In our setting, an individual example consists of a domain element x labeled by k unknown concepts (c1, . . . , ck). The goal of a multi-learner is to output k hypotheses (h1, . . . , hk) that generalize the input examples. Without concern for privacy, the sample complexity needed to simultaneously learn k concepts is essentially the same as needed for learning a single concept. Under differential privacy, the basic strategy of learning each hypothesis independently yields sample complexity that grows polynomially with k. For some concept classes, we give multi-learners that require fewer samples than the basic strategy. Unfortunately, however, we also give lower bounds showing that even for very simple concept classes, the sample cost of private multi-learning must grow polynomially in k.", "targets": "Simultaneous Private Learning of Multiple Concepts"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e05eaf6f889e45298edb11da7b5e89cf", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Causal inference from observational data is a subject of active research and development in statistics and computer science. Many toolkits that depend on statistical software have been developed for this purpose. However, these toolkits do not scale to large datasets. In this paper we describe a suite of techniques for expressing causal inference tasks from observational data in SQL. This suite supports the state-of-the-art methods for causal inference and runs at scale within a database engine. In addition, we introduce several optimization techniques that significantly speed up causal inference, both in the online and offline settings. We evaluate the quality and performance of our techniques through experiments on real datasets.", "targets": "ZaliQL: A SQL-Based Framework for Drawing Causal Inference from Big Data"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7f985d3ae38f4c5f9c046308d26d4437", "definition": "In this task, you are given a part of an article.
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The inclusion problem deals with how to characterize (in graphical terms) whether all independence statements in the model induced by a DAG K are in the model induced by a second DAG L. Meek (1997) conjectured that this inclusion holds iff there exists a sequence of DAGs from L to K such that only certain 'legal' arrow reversal and 'legal' arrow adding operations are performed to get the next DAG in the sequence. In this paper we give several characterizations of inclusion of DAG models and verify Meek's conjecture in the case that the DAGs K and L differ in at most one adjacency. As a warm-up, a rigorous proof of graphical characterizations of equivalence of DAGs is given.", "targets": "On characterizing Inclusion of Bayesian Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-201f888ab28f42a5a831ae1a3462ea5d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "1 Ranks, processes and flat grammar; 2 The Rank-Interpretation Architecture for Multilinear Grammars; 2.1 Outline of the Rank-Interpretation Architecture framework; 2.2 A preliminary note on linearity and hierarchy at the phrasal rank; 2.3 Characterisation of the Rank-Interpretation Architecture; 2.3.1 Background; 2.3.2 Formal summary; 2.3.3 Contrast with traditional views of language architecture; 2.3.4 Search for sui generis properties of ranks; 2.4 Procedural perspectives on the rank hierarchy; 3 The discourse rank; 3.1 The primacy of discourse patterning; 3.2 Intonation of an adjacency pair; 3.3 Chanted \u2018call\u2019 intonation; 4 The utterance or text rank; 5 The phrase rank; 5.1 Characteristics of phrasal structure; 5.2 Linear sequences, iteration: regular and subregular grammars; 5.3 A note on long-distance and cross-serial dependencies; 5.4 Prosodic-phonetic interpretation at the phrasal rank; 6 The word rank; 6.1 Flat words; 6.2 Flat derivations; 6.3 Flat compounds; 6.4 Prosodic-phonetic interpretation at the word rank; 7 Summary and conclusion; 7.1 From Duality to Multilinear Grammar and Rank Interpretation Architecture; 7.2 Generalisation to stochastic flat linear models; 7.3 Future work; 8 References", "targets": "Multilinear Grammar: Ranks and Interpretations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2b61157550624df485f6f68e47a16599", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Biological neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. The interplay of these elements leads to the emergence of adaptive behavior and intelligence, but the complexity of the whole system of interactions is an obstacle to the understanding of the key factors at play. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in-silico to breed plastic neural networks, artificial systems composed of sensors, outputs, and plastic components that change in response to sensory-output experiences in an environment. These systems may reveal key algorithmic ingredients of adaptation, autonomously discover novel adaptive algorithms, and lead to hypotheses on the emergence of biological adaptation. EPANNs have seen considerable progress over the last two decades. Current scientific and technological advances in artificial neural networks are now setting the conditions for radically new approaches and results. In particular, the limitations of hand-designed structures and algorithms currently used in most deep neural networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. The main computational methods and results are reviewed. Finally, new opportunities and developments are presented.", "targets": "Born to Learn: the Inspiration, Progress, and Future of Evolved Plastic Artificial Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-913b614c8f934afabbab880eeefc8d4d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "We describe a simple scheme that allows an agent to explore its environment in an unsupervised manner. Our scheme pits two versions of the same agent, Alice and Bob, against one another. Alice proposes a task for Bob to complete, and then Bob attempts to complete the task. In this work we will focus on (nearly) reversible environments, or environments that can be reset, and Alice will \u201cpropose\u201d the task by running a set of actions and then Bob must partially undo, or repeat them, respectively. Via an appropriate reward structure, Alice and Bob automatically generate a curriculum of exploration, enabling unsupervised training of the agent. When deployed on an RL task within the environment, this unsupervised training reduces the number of episodes needed to learn.", "targets": "Intrinsic Motivation and Automatic Curricula via Asymmetric Self-Play"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-46b220a6abb343039287e5001e0643e0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present an approach for the detection of coordinate-term relationships between entities from the software domain that refer to Java classes. Usually, relations are found by examining corpus statistics associated with text entities. In some technical domains, however, we have access to additional information about the real-world objects named by the entities, suggesting that coupling information about the \u201cgrounded\u201d entities with corpus statistics might lead to improved methods for relation discovery. To this end, we develop a similarity measure for Java classes using distributional information about how they are used in software, which we combine with corpus statistics on the distribution of contexts in which the classes appear in text. Using our approach, cross-validation accuracy on this dataset can be improved dramatically, from around 60% to 88%. Human labeling results show that our classifier has an F1 score of 86% over the top 1000 predicted pairs.", "targets": "Grounded Discovery of Coordinate Term Relationships between Software Entities"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2b51c1c96a044df58beba28fc537cd0b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The multi-armed bandit problem (MBP) is the problem of finding, as accurately and quickly as possible, the most profitable option from a set of options that give stochastic rewards, by referring to past experiences. Inspired by the fluctuated movements of a rigid body in a tug-of-war game, we formulated a unique search algorithm that we call the \u2018tug-of-war (TOW) dynamics\u2019 for solving the MBP efficiently [1-5]. The cognitive medium access, which refers to multiuser channel allocations in cognitive radio, can be interpreted as the competitive multi-armed bandit problem (CMBP); the problem is to determine the optimal strategy for allocating channels to users which yields maximum total rewards gained by all users [6]. Here we show that it is possible to construct a physical device for solving the CMBP, which we call the \u2018TOW Bombe\u2019, by exploiting the TOW dynamics that exists in coupled incompressible-fluid cylinders.
This analog computing device achieves the \u2018socially-maximum\u2019 resource allocation that maximizes the total rewards in cognitive medium access without paying a huge computational cost that grows exponentially as a function of the problem size.", "targets": "Decision Maker using Coupled Incompressible-Fluid Cylinders"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4cfcbaf9643e4c97b7e617c990a35580", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In the data mining field many clustering methods have been proposed, yet standard versions do not take into account uncertain databases. This paper deals with a new approach to cluster uncertain data by using a hierarchical clustering defined within the belief function framework. The main objective of the belief hierarchical clustering is to allow an object to belong to one or several clusters. To each belonging, a degree of belief is associated, and clusters are combined based on the pignistic properties. Experiments with real uncertain data show that our proposed method can be considered as a propitious tool.", "targets": "Belief Hierarchical Clustering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7486bed4c54742a2b0121a9b73e3502e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We address a problem of area protection in graphbased scenarios with multiple mobile agents where connectivity is maintained among agents to ensure they can communicate. The problem consists of two adversarial teams of agents that move in an undirected graph shared by both teams. Agents are placed in vertices of the graph; at most one agent can occupy a vertex; and they can move into adjacent vertices in a conflict free way. Teams have asymmetric goals: the aim of one team attackers is to invade into given area while the aim of the opponent team defenders is to protect the area from being entered by attackers by occupying selected vertices. The team of defenders need to maintain connectivity of vertices occupied by its own agents in a visibility graph. The visibility graph models possibility of communication between pairs of vertices. We study strategies for allocating vertices to be occupied by the team of defenders to block attacking agents where connectivity is maintained at the same time. To do this we reserve a subset of defending agents that do not try to block the attackers but instead are placed to support connectivity of the team. The performance of strategies is tested in multiple benchmarks. The success of a strategy is heavily dependent on the type of the instance, and so one of the contributions of this work is that we identify suitable strategies for diverse instance types.", "targets": "Maintaining Ad-Hoc Communication Network in Area Protection Scenarios with Adversarial Agents"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dd5a177ec9be45809845a4189cd03ccd", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study the properties of common loss surfaces through their Hessian matrix. 
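The tug-of-war record above ("Decision Maker using Coupled Incompressible-Fluid Cylinders") describes the TOW dynamics only at the physical level. The following is a minimal software sketch of one common reading of the idea: each arm keeps an accumulator that is pulled up on reward and down on failure, and a fluctuation term breaks ties. The update rule, the penalty weight omega, and all constants are assumptions of this sketch, not the authors' exact dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-armed Bernoulli bandit; p[k] is the (unknown) reward probability of arm k.
p = np.array([0.3, 0.7])

# Tug-of-war style accumulators: each arm's "pull" on the shared body.
# omega is an illustrative penalty weight for unrewarded pulls (an assumption).
omega = 1.0
x = np.zeros(2)
pulls = np.zeros(2, dtype=int)

for t in range(2000):
    # A noisy fluctuation term stands in for the oscillating rigid body / fluid.
    k = int(np.argmax(x + rng.normal(0.0, 1.0, size=2)))
    reward = rng.random() < p[k]
    pulls[k] += 1
    x[k] += 1.0 if reward else -omega

print("pull counts per arm:", pulls)  # the better arm should dominate
```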
In particular, in the context of deep learning, we empirically show that the spectrum of the Hessian is composed of two parts: (1) the bulk centered near zero, (2) and outliers away from the bulk. We present numerical evidence and mathematical justifications to the following conjectures laid out by Sagun et al. [2016]: Fixing data, increasing the number of parameters merely scales the bulk of the spectrum; fixing the dimension and changing the data (for instance adding more clusters or making the data less separable) only affects the outliers. We believe that our observations have striking implications for non-convex optimization in high dimensions. First, the flatness of such landscapes (which can be measured by the singularity of the Hessian) implies that classical notions of basins of attraction may be quite misleading. And that the discussion of wide/narrow basins may be in need of a new perspective around over-parametrization and redundancy that are able to create large connected components at the bottom of the landscape. Second, the dependence of small number of large eigenvalues to the data distribution can be linked to the spectrum of the covariance matrix of gradients of model outputs. With this in mind, we may reevaluate the connections within the data-architecturealgorithm framework of a model, hoping that it would shed light into the geometry of high-dimensional and non-convex spaces in modern applications. In particular, we present a case that links the two observations: a gradient based method appears to be first climbing uphill and then falling downhill between two points; whereas, in fact, they lie in the same basin.", "targets": "Empirical Analysis of the Hessian of Over-Parametrized Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-286810855d614f0ab7eac202a91035ba", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The development of methods to deal with the informative contents of the text units in the matching process is a major challenge in automatic summary evaluation systems that use fixed n-gram matching. The limitation causes inaccurate matching between units in a peer and reference summaries. The present study introduces a new Keyphrase based Summary Evaluator (KpEval) for evaluating automatic summaries. The KpEval relies on the keyphrases since they convey the most important concepts of a text. In the evaluation process, the keyphrases are used in their lemma form as the matching text unit. The system was applied to evaluate different summaries of Arabic multi-document data set presented at TAC2011. The results showed that the new evaluation technique correlates well with the known evaluation systems: Rouge-1, Rouge-2, Rouge-SU4, and AutoSummENG\u2013MeMoG. KpEval has the strongest correlation with AutoSummENG\u2013MeMoG, Pearson and spearman correlation coefficient measures are 0.8840, 0.9667 respectively. General Terms Automatic summary evaluation, Automatic summarization, Keyphrase extraction, Natural language processing, computational linguistics, Information retrieval.", "targets": "Keyphrase based Evaluation of Automatic Text Summarization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-85dd973c254c4ea7bdd3387bdd81e8f1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "We extend the theory of boosting for regression problems to the online learning setting. Generalizing from the batch setting for boosting, the notion of a weak learning algorithm is modeled as an online learning algorithm with linear loss functions that competes with a base class of regression functions, while a strong learning algorithm is an online learning algorithm with smooth convex loss functions that competes with a larger class of regression functions. Our main result is an online gradient boosting algorithm that converts a weak online learning algorithm into a strong one where the larger class of functions is the linear span of the base class. We also give a simpler boosting algorithm that converts a weak online learning algorithm into a strong one where the larger class of functions is the convex hull of the base class, and prove its optimality.", "targets": "Online Gradient Boosting"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ac30685f74ed45b580567da05f1266ba", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Gaussian processes are rich distributions over functions, which provide a Bayesian nonparametric approach to smoothing and interpolation. We introduce simple closed form kernels that can be used with Gaussian processes to discover patterns and enable extrapolation. These kernels are derived by modelling a spectral density \u2013 the Fourier transform of a kernel \u2013 with a Gaussian mixture. The proposed kernels support a broad class of stationary covariances, but Gaussian process inference remains simple and analytic. We demonstrate the proposed kernels by discovering patterns and performing long range extrapolation on synthetic examples, as well as atmospheric CO2 trends and airline passenger data. We also show that we can reconstruct standard covariances within our framework.", "targets": "Gaussian Process Covariance Kernels for Pattern Discovery and Extrapolation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b6a09d61b5654ba3aa6559bcbe1d5c05", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Accurate software development effort estimation is critical to the success of software projects. Although many techniques and algorithmic models have been developed and implemented by practitioners, accurate software development effort prediction is still a challenging endeavor in the field of software engineering, especially in handling uncertain and imprecise inputs and collinear characteristics. In this paper, a hybrid intelligent model combining a neural network model integrated with fuzzy model (neuro-fuzzy model) has been used to improve the accuracy of estimating software cost. The performance of the proposed model is assessed by designing and conducting evaluation with published project and industrial data. 
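For the Gaussian-process record above ("Gaussian Process Covariance Kernels for Pattern Discovery and Extrapolation"), the kernel obtained by modelling the spectral density with a Gaussian mixture has, in one dimension, the closed form k(tau) = sum_q w_q exp(-2 pi^2 tau^2 v_q) cos(2 pi tau mu_q). A small sketch follows; the weights, mean frequencies, and variances are illustrative assumptions, not fitted values.

```python
import numpy as np

def spectral_mixture_kernel(tau, weights, means, variances):
    # k(tau) = sum_q w_q * exp(-2 pi^2 tau^2 v_q) * cos(2 pi tau mu_q):
    # each Gaussian (mu_q, v_q) in frequency space yields a damped cosine here.
    tau = np.asarray(tau, dtype=float)[..., None]
    return np.sum(
        weights * np.exp(-2.0 * np.pi**2 * tau**2 * variances)
        * np.cos(2.0 * np.pi * tau * means),
        axis=-1,
    )

# Illustrative (not fitted) hyperparameters: two frequency components.
w = np.array([1.0, 0.5])     # component weights
mu = np.array([0.1, 0.9])    # component mean frequencies
v = np.array([0.01, 0.05])   # component variances

X = np.linspace(0.0, 5.0, 6)
K = spectral_mixture_kernel(X[:, None] - X[None, :], w, mu, v)
print(np.round(K, 3))  # a valid stationary covariance matrix over X
```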
Results have shown that the proposed model demonstrates the ability of improving the estimation accuracy by 18% based on the Mean Magnitude of Relative Error (MMRE) criterion.", "targets": "A HYBRID INTELLIGENT MODEL FOR SOFTWARE COST ESTIMATION"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3006bbb7bcfc41d4a8c032d998d3f487", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We first observe a potential weakness of continuous vector representations of symbols in neural machine translation. That is, the continuous vector representation, or a word embedding vector, of a symbol encodes multiple dimensions of similarity, equivalent to encoding more than one meaning of the word. This has the consequence that the encoder and decoder recurrent networks in neural machine translation need to spend substantial amount of their capacity in disambiguating source and target words based on the context which is defined by a source sentence. Based on this observation, in this paper we propose to contextualize the word embedding vectors using a nonlinear bag-of-words representation of the source sentence. Additionally, we propose to represent special tokens (such as numbers, proper nouns and acronyms) with typed symbols to facilitate translating those words that are not well-suited to be translated via continuous vectors. The experiments on En-Fr and En-De reveal that the proposed approaches of contextualization and symbolization improves the translation quality of neural machine translation systems significantly.", "targets": "Context-Dependent Word Representation for Neural Machine Translation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d31d097cf6984810969d599b0720b500", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.", "targets": "Improved Techniques for Training GANs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-cb40e0150ead4f7fa1641f9fd491e35b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Sarcasm occurring due to the presence of numerical portions in text has been quoted as an error made by automatic sarcasm detection approaches in the past. 
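The MMRE criterion cited in the cost-estimation record above is simple to state concretely; a short sketch with invented effort numbers:

```python
def mmre(actual, predicted):
    # Mean Magnitude of Relative Error: mean of |actual - predicted| / actual.
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

actual = [120.0, 300.0, 45.0]     # hypothetical true efforts (person-months)
predicted = [100.0, 330.0, 50.0]  # hypothetical model estimates
print(f"MMRE = {mmre(actual, predicted):.3f}")  # 0.126 here; lower is better
```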
We present a first study in detecting sarcasm in numbers, as in the case of the sentence \u2018Love waking up at 4 am\u2019. We analyze the challenges of the problem, and present Rulebased, Machine Learning and Deep Learning approaches to detect sarcasm in numerical portions of text. Our Deep Learning approach outperforms four past works for sarcasm detection and Rule-based and Machine learning approaches on a dataset of tweets, obtaining an F1-score of 0.93. This shows that special attention to text containing numbers may be useful to improve state-of-the-art in sarcasm detection.", "targets": "\u201cHaving 2 hours to write a paper is fun!\u201d: Detecting Sarcasm in Numerical Portions of Text"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7a289acba635489b9e4819ad9744fe48", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Toby Walsh in \u201cThe Singularity May Never Be Near\u201d gives six arguments to support his point of view that technological singularity may happen but that it is unlikely. In this paper, we provide analysis of each one of his arguments and arrive at similar conclusions, but with more weight given to the \u201clikely to happen\u201d probability.", "targets": "The Singularity May Be Near"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-58f00837e55242db992d6b1ce0334b1d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Rapid crisis response requires real-time analysis of messages. After a disaster happens, volunteers attempt to classify tweets to determine needs, e.g., supplies, infrastructure damage, etc. Given labeled data, supervised machine learning can help classify these messages. Scarcity of labeled data causes poor performance in machine training. Can we reuse old tweets to train classifiers? How can we choose labeled tweets for training? Specifically, we study the usefulness of labeled data of past events. Do labeled tweets in different language help? We observe the performance of our classifiers trained using different combinations of training sets obtained from past disasters. We perform extensive experimentation on real crisis datasets and show that the past labels are useful when both source and target events are of the same type (e.g. both earthquakes). For similar languages (e.g., Italian and Spanish), cross-language domain adaptation was useful, however, when for different languages (e.g., Italian and English), the performance decreased.", "targets": "Cross-Language Domain Adaptation for Classifying Crisis-Related Short Messages"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8e013171c89340bea6a56df2920baa68", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recurrent Neural Networks (RNN) have recently achieved the best performance in off-line Handwriting Text Recognition. At the same time, learning RNN by gradient descent leads to slow convergence, and training times are particularly long when the training database consists of full lines of text. In this paper, we propose an easy way to accelerate stochastic gradient descent in this set-up, and in the general context of learning to recognize sequences. 
The principle is called Curriculum Learning, or shaping. The idea is to first learn to recognize short sequences before training on all available training sequences. Experiments on three different handwritten text databases (Rimes, IAM, OpenHaRT) show that a simple implementation of this strategy can significantly speed up the training of RNNs for text recognition, and even significantly improve performance in some cases.", "targets": "Curriculum Learning for Handwritten Text Line Recognition"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d34382504c0b49b38b1760955408ce25", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Methods of deep machine learning make it possible to reuse low-level representations efficiently for generating more abstract high-level representations. Originally, deep learning has been applied passively (e.g., for classification purposes). Recently, it has been extended to estimate the value of actions for autonomous agents within the framework of reinforcement learning (RL). Explicit models of the environment can be learned to augment such a value function. Although \u201cflat\u201d connectionist methods have already been used for model-based RL, up to now, only model-free variants of RL have been equipped with methods from deep learning. We propose a variant of deep model-based RL that enables an agent to learn arbitrarily abstract hierarchical representations of its environment. In this paper, we present research on how such hierarchical representations can be grounded in sensorimotor interaction between an agent and its environment.", "targets": "Grounding Hierarchical Reinforcement Learning Models for Knowledge Transfer"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-da91c63dc32a4ca3a9b85b4b52139ab9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose Edward, a Turing-complete probabilistic programming language. Edward defines two compositional representations: random variables and inference. By treating inference as a first-class citizen, on a par with modeling, we show that probabilistic programming can be as flexible and computationally efficient as traditional deep learning. For flexibility, Edward makes it easy to fit the same model using a variety of composable inference methods, ranging from point estimation to variational inference to MCMC. In addition, Edward can reuse the modeling representation as part of inference, facilitating the design of rich variational models and generative adversarial networks. For efficiency, Edward is integrated into TensorFlow, providing significant speedups over existing probabilistic systems. For example, we show on a benchmark logistic regression task that Edward is at least 35x faster than Stan and 6x faster than PyMC3. Further, Edward incurs no runtime overhead: it is as fast as handwritten TensorFlow.", "targets": "DEEP PROBABILISTIC PROGRAMMING"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4acdc3c10c0e4a88aba158f31b314cdf", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Deriving prior polarity lexica for sentiment analysis \u2013 where positive or negative scores are associated with words out of context \u2013 is a challenging task. Usually, a trade-off between precision and coverage is hard to find, and it depends on the methodology used to build the lexicon. Manually annotated lexica provide a high precision but lack in coverage, whereas automatic derivation from pre-existing knowledge guarantees high coverage at the cost of a lower precision. Since the automatic derivation of prior polarities is less time consuming than manual annotation, there has been a great bloom of these approaches, in particular based on the SentiWordNet resource. In this paper, we compare the most frequently used techniques based on SentiWordNet with newer ones and blend them in a learning framework (a so called \u2018ensemble method\u2019). By taking advantage of manually built prior polarity lexica, our ensemble method is better able to predict the prior value of unseen words and to outperform all the other SentiWordNet approaches. Using this technique we have built SentiWords, a prior polarity lexicon of approximately 155,000 words, that has both a high precision and a high coverage. We finally show that in sentiment analysis tasks, using our lexicon allows us to outperform both the single metrics derived from SentiWordNet and popular manually annotated sentiment lexica.", "targets": "SentiWords: Deriving a High Precision and High Coverage Lexicon for Sentiment Analysis"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b076cccaad484d1b902fb7c1a7bc6978", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The logistic loss function is often advocated in machine learning and statistics as a smooth and strictly convex surrogate for the 0-1 loss. In this paper we investigate the question of whether these smoothness and convexity properties make the logistic loss preferable to other widely considered options such as the hinge loss. We show that in contrast to known asymptotic bounds, as long as the number of prediction/optimization iterations is sub exponential, the logistic loss provides no improvement over a generic non-smooth loss function such as the hinge loss. In particular we show that the convergence rate of stochastic logistic optimization is bounded from below by a polynomial in the diameter of the decision set and the number of prediction iterations, and provide a matching tight upper bound. This resolves the COLT open problem of McMahan and Streeter (2012).", "targets": "Logistic Regression: Tight Bounds for Stochastic and Online Optimization\u2217"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1a5ca36713c94a07af0dc549374a51db", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The first ever human vs. computer no-limit Texas hold \u2019em competition took place from April 24\u2013 May 8, 2015 at River\u2019s Casino in Pittsburgh, PA. In this article I present my thoughts on the competition design, agent architecture, and lessons learned.", "targets": "My Reflections on the First Man vs. 
Machine No-Limit Texas Hold \u2019em Competition\u2217"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c6d4986b2e7144b8a933211de53c84a0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study several questions in the reliable agnostic learning framework of Kalai et al. (2009), which captures learning tasks in which one type of error is costlier than other types. A positive reliable classifier is one that makes no false positive errors. The goal in the positive reliable agnostic framework is to output a hypothesis with the following properties: (i) its false positive error rate is at most \u01eb, (ii) its false negative error rate is at most \u01eb more than that of the best positive reliable classifier from the class. A closely related notion is fully reliable agnostic learning, which considers partial classifiers that are allowed to predict \u201cunknown\u201d on some inputs. The best fully reliable partial classifier is one that makes no errors and minimizes the probability of predicting \u201cunknown\u201d, and the goal in fully reliable learning is to output a hypothesis that is almost as good as the best fully reliable partial classifier from a class. For distribution-independent learning, the best known algorithms for PAC learning typically utilize polynomial threshold representations, while the state of the art agnostic learning algorithms use pointwise polynomial approximations. We show that one-sided polynomial approximations, an intermediate notion between polynomial threshold representations and point-wise polynomial approximations, suffice for learning in the reliable agnostic settings. We then show that majorities can be fully reliably learned and disjunctions of majorities can be positive reliably learned, through constructions of appropriate onesided polynomial approximations. Our fully reliable algorithm for majorities provides the first evidence that fully reliable learning may be strictly easier than agnostic learning. Our algorithms also satisfy strong attribute-efficiency properties, and in many cases they provide smooth tradeoffs between sample complexity and running time. University of California, Berkeley. Email: vkanade@eecs.berkeley.edu The Simons Institute for the Theory of Computing at UC Berkeley. Email: jthaler@seas.harvard.edu", "targets": "Distribution-Independent Reliable Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3bb53b6c282849d88748787e8f3a1a29", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Can one parallelize complex exploration\u2013 exploitation tradeoffs? As an example, consider the problem of optimal highthroughput experimental design, where we wish to sequentially design batches of experiments in order to simultaneously learn a surrogate function mapping stimulus to response and identify the maximum of the function. We formalize the task as a multiarmed bandit problem, where the unknown payoff function is sampled from a Gaussian process (GP), and instead of a single arm, in each round we pull a batch of several arms in parallel. We develop GP-BUCB, a principled algorithm for choosing batches, based on the GP-UCB algorithm for sequential GP optimization. 
We prove a surprising result; as compared to the sequential approach, the cumulative regret of the parallel algorithm only increases by a constant factor independent of the batch size B. Our results provide rigorous theoretical support for exploiting parallelism in Bayesian global optimization. We demonstrate the effectiveness of our approach on two real-world applications.", "targets": "Parallelizing Exploration\u2013Exploitation Tradeoffs with Gaussian Process Bandit Optimization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3064d771bbc2482b8b5912c8ccec0824", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Learning useful information across long time lags is a critical and difficult problem for temporal neural models in tasks like language modeling. Existing architectures that address the issue are often complex and costly to train. The Delta Recurrent Neural Network (Delta-RNN) framework is a simple and highperforming design that unifies previously proposed gated neural models. The DeltaRNN models maintain longer-term memory by learning to interpolate between a fast-changing data-driven representation and a slowly changing, implicitly stable state. This requires hardly any more parameters than a classical simple recurrent network. The models outperform popular complex architectures, such as the Long Short Term Memory (LSTM) and the Gated Recurrent Unit (GRU) and achieve state-of-the art performance in language modeling at character and word levels and yield comparable performance at the subword level.", "targets": "Learning Simpler Language Models with the Delta Recurrent Neural Network Framework"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-35aa6f842f714be3a8651b3096bcd60d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Answer Set Programming (ASP) is a well-established formalism for nonmonotonic reasoning. An ASP program can have no answer set due to cyclic default negation. In this case, it is not possible to draw any conclusion, even if this is not intended. Recently, several paracoherent semantics have been proposed that address this issue, and several potential applications for these semantics have been identified. However, paracoherent semantics have essentially been inapplicable in practice, due to the lack of efficient algorithms and implementations. In this paper, this lack is addressed, and several different algorithms to compute semi-stable and semi-equilibrium models are proposed and implemented into an answer set solving framework. An empirical performance comparison among the new algorithms on benchmarks from ASP competitions is given as well.", "targets": "On the Computation of Paracoherent Answer Sets"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-af0e8e5c5e0e4a87baf424a337bc6a67", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Deep neural nets have caused a revolution in many classification tasks. A related ongoing revolution\u2014also theoretically not understood\u2014concerns their ability to serve as generative models for complicated types of data such as images and texts. 
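The batch selection in the GP-BUCB record above relies on the fact that a GP's posterior variance depends only on where observations are made, not on their values, so pending outcomes can be hallucinated. The sketch below uses an assumed squared-exponential kernel, batch size B = 4, and confidence weight beta; it illustrates the idea rather than reproducing the paper's exact algorithm.

```python
import numpy as np

def rbf(A, B, ls=0.5):
    # Squared-exponential kernel matrix between row-vector sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def posterior(X_obs, y_obs, X_cand, noise=1e-2):
    # Standard GP posterior mean and variance at the candidate points.
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf(X_obs, X_cand)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y_obs
    var = 1.0 - np.einsum("ij,jk,ki->i", Ks.T, Kinv, Ks)
    return mu, var

rng = np.random.default_rng(1)
X_cand = rng.uniform(0.0, 1.0, size=(50, 1))   # candidate arms
X_obs = rng.uniform(0.0, 1.0, size=(3, 1))     # points evaluated so far
y_obs = np.sin(6.0 * X_obs[:, 0]) + 0.1 * rng.normal(size=3)

beta = 2.0
mu0, _ = posterior(X_obs, y_obs, X_cand)  # mean stays frozen within the batch
X_hal, y_hal = X_obs.copy(), y_obs.copy()
batch = []
for _ in range(4):  # pick a batch of B = 4 arms (B is an assumption)
    _, var = posterior(X_hal, y_hal, X_cand)  # shrinks as points are hallucinated
    i = int(np.argmax(mu0 + beta * np.sqrt(np.clip(var, 0.0, None))))
    batch.append(i)
    # Hallucinate the outcome at the chosen point: the posterior variance does
    # not depend on y, so feeding back the current mean changes nothing but it.
    X_hal = np.vstack([X_hal, X_cand[i : i + 1]])
    y_hal = np.append(y_hal, mu0[i])

print("batch indices:", batch)
```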
These models are trained using ideas like variational autoencoders and Generative Adversarial Networks. We take a first cut at explaining the expressivity of multilayer nets by giving a sufficient criterion for a function to be approximable by a neural network with n hidden layers. A key ingredient is Barron\u2019s Theorem [Bar93], which gives a Fourier criterion for approximability of a function by a neural network with 1 hidden layer. We show that a composition of n functions which satisfy certain Fourier conditions (\u201cBarron functions\u201d) can be approximated by a n+ 1layer neural network. For probability distributions, this translates into a criterion for a probability distribution to be approximable in Wasserstein distance\u2014a natural metric on probability distributions\u2014by a neural network applied to a fixed base distribution (e.g., multivariate gaussian). Building up recent lower bound work, we also give an example function that shows that composition of Barron functions is more expressive than Barron functions alone.", "targets": "On the ability of neural nets to express distributions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4f59a1c60f2543478f2d8b0dd2eac0bc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents the computational logic foundations of a model of agency called the KGP (Knowledge, Goals and Plan) model. This model allows the specification of heterogeneous agents that can interact with each other, and can exhibit both proactive and reactive behaviour allowing them to function in dynamic environments by adjusting their goals and plans when changes happen in such environments. KGP provides a highly modular agent architecture that integrates a collection of reasoning and physical capabilities, synthesised within transitions that update the agent\u2019s state in response to reasoning, sensing and acting. Transitions are orchestrated by cycle theories that specify the order in which transitions are executed while taking into account the dynamic context and agent preferences, as well as selection operators for providing inputs to transitions.", "targets": "Computational Logic Foundations of KGP Agents"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c1df32c07d1c458db9907a3faf643c63", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The proximal problem for structured penalties obtained via convex relaxations of submodular functions is known to be equivalent to minimizing separable convex functions over the corresponding submodular polyhedra. In this paper, we reveal a comprehensive class of structured penalties for which penalties this problem can be solved via an efficiently solvable class of parametric maxflow optimization. We then show that the parametric maxflow algorithm proposed by Gallo et al. [17] and its variants, which runs, in the worst-case, at the cost of only a constant factor of a single computation of the corresponding maxflow optimization, can be adapted to solve the proximal problems for those penalties. Several existing structured penalties satisfy these conditions; thus, regularized learning with these penalties is solvable quickly using the parametric maxflow algorithm. 
We also investigate the empirical runtime performance of the proposed framework.", "targets": "Parametric Maxflows for Structured Sparse Learning with Convex Relaxations of Submodular Functions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-edbf9e7e6d8d49dfaaccf3a3ae0ab1c8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Cloze-style reading comprehension is a representative problem in mining relationship between document and query. In this paper, we present a simple but novel model called attention-over-attention reader for better solving cloze-style reading comprehension task. Our model aims to place another attention mechanism over the document-level attention and induces \u201cattended attention\u201d for final answer predictions. One advantage of our model is that it is simpler than related works while giving excellent performance. We also propose an N-best re-ranking strategy to double check the validity of the candidates and further improve the performance. Experimental results show that the proposed methods significantly outperform various state-of-the-art systems by a large margin in public datasets, such as CNN and Children\u2019s Book Test.", "targets": "Attention-over-Attention Neural Networks for Reading Comprehension"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dbfc13519543422098b60aeb6c2b89de", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The paper steps outside the comfort-zone of the traditional NLP tasks like automatic speech recognition (ASR) and machine translation (MT) to addresses two novel problems arising in the automated multilingual news monitoring: segmentation of the TV and radio program ASR transcripts into individual stories, and clustering of the individual stories coming from various sources and languages into storylines. Storyline clustering of stories covering the same events is an essential task for inquisitorial media monitoring. We address these two problems jointly by engaging the low-dimensional semantic representation capabilities of the sequence to sequence neural translation models. To enable joint multi-task learning for multilingual neural translation of morphologically rich languages we replace the attention mechanism with the sliding-window mechanism and operate the sequence to sequence neural translation model on the character-level rather than on the word-level. The story segmentation and storyline clustering problem is tackled by examining the low-dimensional vectors produced as a side-product of the neural translation process. The results of this paper describe a novel approach to the automatic story segmentation and storyline clustering problem.", "targets": "Character-Level Neural Translation for Multilingual Media Monitoring in the SUMMA Project"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e9b44654cb624e04b66fe01c1059a784", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper describes the USTC NELSLIP systems submitted to the Trilingual Entity Detection and Linking (EDL) track in 2016 TAC Knowledge Base Population (KBP) contests. 
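The attended attention of the attention-over-attention record above can be sketched at the shape level: a column-wise softmax gives a document attention per query word, a row-wise softmax averaged over the document gives query-level weights, and their product scores document words. The encodings below are random placeholders standing in for the model's recurrent states.

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
D = rng.normal(size=(7, 4))  # document encodings: |D| words, hidden size 4
Q = rng.normal(size=(3, 4))  # query encodings: |Q| words

M = D @ Q.T                              # pairwise matching scores, (|D|, |Q|)
alpha = softmax(M, axis=0)               # document attention, one column per query word
beta = softmax(M, axis=1).mean(axis=0)   # query-level "attention over attention"
s = alpha @ beta                         # attended document attention, (|D|,)

print(np.round(s, 3), float(s.sum()))    # a distribution over document words
```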
We have built two systems for entity discovery and mention detection (MD): one uses the conditional RNNLM and the other one uses the attention-based encoder-decoder framework. The entity linking (EL) system consists of two modules: a rule based candidate generation and a neural networks probability ranking model. Moreover, some simple string matching rules are used for NIL clustering. At the end, our best system has achieved an F1 score of 0.624 in the end-to-end typed mention ceaf plus metric.", "targets": "The USTC NELSLIP Systems for Trilingual Entity Detection and Linking Tasks at TAC KBP 2016"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b07359fed3a540358fb10196a67bcf81", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Words can be represented by composing the representations of subword units such as word segments, characters, and/or character n-grams. While such representations are effective and may capture the morphological regularities of words, they have not been systematically compared, and it is not understood how they interact with different morphological typologies. On a language modeling task, we present experiments that systematically vary (1) the basic unit of representation, (2) the composition of these representations, and (3) the morphological typology of the language modeled. Our results largely confirm previous findings that character representations are effective across many languages, though we find that a previously unstudied combination of character trigram representations composed with bi-LSTMs outperforms most other settings. However, we also find room for improvement: character models do not match the predictive accuracy of a model with access to explicit morphological analyses.", "targets": "From Characters to Words to in Between: Do We Capture Morphology?"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c1a848e9b9424fca9031c18fa6b7e4cb", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A modification of the neo-fuzzy neuron is proposed (an extended neo-fuzzy neuron (ENFN)) that is characterized by improved approximating properties. An adaptive learning algorithm is proposed that has both tracking and smoothing properties and solves prediction, filtering and smoothing tasks of non-stationary \u201cnoisy\u201d stochastic and chaotic signals. An ENFN distinctive feature is its computational simplicity compared to other artificial neural networks and neuro-fuzzy systems.", "targets": "An Extended Neo-Fuzzy Neuron and its Adaptive Learning Algorithm"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f1bf38e0711d41899e2e0db0132c1727", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Classification is widely used technique in the data mining domain, where scalability and efficiency are the immediate problems in classification algorithms for large databases. We suggest improvements to the existing C4.5 decision tree algorithm. 
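The decision-tree record that begins above (and continues below) assigns attribute priorities by evaluating information entropy at different abstraction levels; the entropy computation itself is compact enough to sketch, with invented class labels:

```python
from collections import Counter
from math import log2

def entropy(labels):
    # Shannon entropy H = -sum_i p_i * log2(p_i) of the class distribution.
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

# Hypothetical values of one attribute at two levels of a concept hierarchy.
fine = ["A", "A", "B", "C", "C", "C"]                   # raw values
coarse = ["low", "low", "low", "high", "high", "high"]  # generalized values
print(f"fine H = {entropy(fine):.3f}, coarse H = {entropy(coarse):.3f}")
```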
In this paper, attribute-oriented induction (AOI) and relevance analysis are combined with concept-hierarchy knowledge and the HeightBalancePriority algorithm to construct the decision tree, together with multi-level mining. Priorities are assigned to attributes by evaluating information entropy at different levels of abstraction while building the decision tree with the HeightBalancePriority algorithm. Modified DMQL queries are used to understand and explore the shortcomings of the decision trees generated by the C4.5 classifier on an education dataset, and the results are compared with the proposed approach.", "targets": "EXTRACTING USEFUL RULES THROUGH IMPROVED DECISION TREE INDUCTION USING INFORMATION ENTROPY"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-93564d79710f4025bf2d27254a5731bb", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "With the large volume of new information created every day, determining the validity of information in a knowledge graph and filling in its missing parts are crucial tasks for many researchers and practitioners. To address this challenge, a number of knowledge graph completion methods have been developed using low-dimensional graph embeddings. Although researchers continue to improve these models using an increasingly complex feature space, we show that simple changes in the architecture of the underlying model can outperform state-of-the-art models without the need for complex feature engineering. In this work, we present a shared variable neural network model called ProjE that fills in missing information in a knowledge graph by learning joint embeddings of the knowledge graph\u2019s entities and edges, and through subtle, but important, changes to the standard loss function. In doing so, ProjE has a parameter size that is smaller than 11 out of 15 existing methods while performing 37% better than the current-best method on standard datasets. We also show, via a new fact checking task, that ProjE is capable of accurately determining the veracity of many declarative statements. Knowledge Graphs (KGs) have become a crucial resource for many tasks in machine learning, data mining, and artificial intelligence applications including question answering [34], entity disambiguation [7], named entity linking [14], fact checking [32], and link prediction [28], to name a few. In our view, KGs are an example of a heterogeneous information network containing entity-nodes and relationship-edges corresponding to RDF-style triples \u3008h, r, t\u3009, where h represents a head entity and r is a relationship that connects h to a tail entity t. KGs are widely used for many practical tasks; however, their correctness and completeness are not guaranteed. Therefore, it is necessary to develop knowledge graph completion (KGC) methods to find missing or errant relationships with the goal of improving the general quality of KGs, which, in turn, can be used to improve or create interesting downstream applications. The KGC task can be divided into two non-mutually exclusive sub-tasks: (i) entity prediction and (ii) relationship prediction. The entity prediction task takes a partial triple \u3008h, r, ?\u3009 as input and produces a ranked list of candidate entities as output: Definition 1.
(Entity Ranking Problem) Given a Knowledge Graph G = {E, R} and an input triple \u3008h, r, ?\u3009, the entity ranking problem attempts to find the optimal ordered list such that \u2200e_j \u2200e_i ((e_j \u2208 E^- \u2227 e_i \u2208 E^+) \u2192 e_i \u227a e_j), where E^+ = {e \u2208 {e_1, e_2, ..., e_l} | \u3008h, r, e\u3009 \u2208 G} and E^- = {e \u2208 {e_{l+1}, e_{l+2}, ..., e_{|E|}} | \u3008h, r, e\u3009 \u2209 G}. Distinguishing between head and tail-entities is usually arbitrary, so we can easily substitute \u3008h, r, ?\u3009 for \u3008?, r, t\u3009. The relationship prediction task aims to find a ranked list of relationships that connect a head-entity with a tail-entity, i.e., \u3008h, ?, t\u3009. When discussing the details of the present work, we focus specifically on the entity prediction task; however, it is straightforward to adapt the methodology to the relationship prediction task by changing the input. A number of KGC algorithms have been developed in recent years, and the most successful models all have one thing in common: they use low-dimensional embedding vectors to represent entities and relationships. Many embedding models, e.g., Unstructured [3], TransE [4], TransH [35], and TransR [25], use a margin-based pairwise ranking loss function, which measures the score of each possible result as the L_n-distance between h + r and t. In these models the loss functions are all the same, so models differ in how they transform the entity embeddings h and t with respect to the relationship embeddings r. Instead of simply adding h + r, more expressive combination operators are learned by Knowledge Vault [8] and HolE [29] in order to predict the existence of \u3008h, r, t\u3009 in the KG. Other models, such as the Neural Tensor Network (NTN) [33] and the Compositional Vector Space Model (CVSM) [27], incorporate a multilayer neural network solution into the existing models. Unfortunately, due to their extremely large parameter size, these models either (i) do not scale well or (ii) consider only a single relationship at a time [10], thereby limiting their usefulness on large, real-world KGs. Despite their large model size, the aforementioned methods only use singleton triples, i.e., length-1 paths in the KG. PTransE [24] and RTransE [10] employ extended path information from 2- and 3-hop trails over the knowledge graph. These extended models achieve excellent performance due to the richness of the input data; unfortunately, their model size grows exponentially as the path length increases, which further exacerbates the scalability issues associated with the already high number of parameters of the underlying models. Another curious finding is that some of the existing models are not self-contained models, i.e., they require pre-trained KG embeddings (RTransE, CVSM), pre-selected paths (PTransE, RTransE), or pre-computed content embeddings of each node (DKRL [36]) before their model training can even begin. TransR and TransH are self-contained models, but their experiments only report results using pre-trained TransE embeddings as input. With these considerations in mind, in the present work we rethink some of the basic decisions made by previous models to create a projection embedding model (ProjE) for KGC. ProjE has four parts that distinguish it from the related work: 1.
Instead of measuring the distance between input triple \u3008h, r, ?\u3009 and entity candidates on a unified or a relationship-specific plane, we choose to project the entity candidates onto a target vector representing the input data. 2. Unlike existing models that use transformation matrices, we combine the embedding vectors representing the input data into a target vector using a learnable combination operator. This avoids the addition of a large number of transformation matrices by reusing the entity-embeddings. 3. Rather than optimizing the margin-based pairwise ranking loss, we optimize a ranking loss of the list of candidate-entities (or relationships) collectively. We further use candidate sampling to handle very large data sets. 4. Unlike many of the related models that require pre-trained data from prerequisite models or explore expensive multi-hop paths through the knowledge graph, ProjE is a self-contained model over length-1 edges.", "targets": "ProjE: Embedding Projection for Knowledge Graph Completion"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-76e8fbd2bc42455597c4dc9a6b7fc68b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Generative models defining joint distributions over parse trees and sentences are useful for parsing and language modeling, but impose restrictions on the scope of features and are often outperformed by discriminative models. We propose a framework for parsing and language modeling which marries a generative model with a discriminative recognition model in an encoder-decoder setting. We provide interpretations of the framework based on expectation maximization and variational inference, and show that it enables parsing and language modeling within a single implementation. On the English Penn Treenbank, our framework obtains competitive performance on constituency parsing while matching the state-of-the-art singlemodel language modeling score.", "targets": "A Generative Parser with a Discriminative Recognition Algorithm"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-76584fd236e146f5b5e3493f9fb4b9cf", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as -greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent\u2019s belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. 
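Reading the four design points of the ProjE record above, the scoring path can be sketched as: combine the head and relation embeddings with a learnable weighted sum rather than a transformation matrix, then project every candidate entity onto the resulting target vector and normalize. Dimensions, initialization, and the softmax choice are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 100, 20, 16

E = rng.normal(scale=0.1, size=(n_entities, dim))   # entity embeddings (reused as candidates)
R = rng.normal(scale=0.1, size=(n_relations, dim))  # relationship embeddings
d_e = rng.normal(size=dim)  # learnable combination weights for the entity
d_r = rng.normal(size=dim)  # learnable combination weights for the relation
b_c = np.zeros(dim)         # combination bias

def proje_scores(h_idx, r_idx):
    # Combination is a weighted sum, not a transformation matrix, so the
    # entity embedding table E doubles as the candidate projection matrix.
    target = np.tanh(d_e * E[h_idx] + d_r * R[r_idx] + b_c)
    logits = E @ target                 # project every candidate onto the target
    z = np.exp(logits - logits.max())   # softmax over the whole candidate list
    return z / z.sum()

p = proje_scores(h_idx=3, r_idx=5)
print(int(np.argmax(p)), float(p.max()))  # top-ranked tail entity for <h, r, ?>
```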
We demonstrate that VIME achieves significantly better performance than heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.", "targets": "Curiosity-driven Exploration in Deep Reinforcement Learning via Bayesian Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3288c9c305c64f06816b6d8df76fdac9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Latent force models (LFMs) are flexible models that combine mechanistic modelling principles (i.e., physical models) with nonparametric data-driven components. Several key applications of LFMs need nonlinearities, which results in analytically intractable inference. In this work we show how non-linear LFMs can be represented as nonlinear white-noise-driven state-space models and present an efficient method, based on non-linear Kalman filtering and smoothing, for approximate state and parameter inference. We illustrate the performance of the proposed methodology via two simulated examples, and apply it to a real-world problem of long-term prediction of GPS satellite orbits.", "targets": "State-Space Inference for Non-Linear Latent Force Models with Application to Satellite Orbit Prediction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-59562c39c1504b5aa8b4885cd6f4c908", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Determining the character of attributes in an information system (core, necessary, or unnecessary) is a basic and important issue in attribute reduction. Many methods for judging attribute characters are based on the relationship between objects and attributes. In this paper, a new type of judgement theorem, based entirely on the relationships among attributes, is proposed for judging attribute characters. The method compares two new attribute sets, E(a) and N(a), defined in this paper with respect to a designated attribute a. We conclude that the character of the attribute a is determined, in essence, by the relationship between E(a) and N(a). Secondly, more concise and clear results on judging attribute characters are obtained by analysing the refinement and precise-refinement properties between E(a) and N(a) in topology. In addition, the relationships among attributes are discussed in the last section of the paper, which is useful for constructing a reduct. Finally, we propose a reduct algorithm based on E(a); this algorithm is an extended application of the preceding analysis of attribute characters.", "targets": "A new type of judgement theorems for attribute characters in information system"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9193e6d264704b098b3d2e84cbe88f26", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The goal of open information extraction (OIE) is to extract surface relations and their arguments from natural-language text in an unsupervised, domain-independent manner.
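For the VIME record above, the reward modification can be sketched as adding an information-gain bonus, measured as the KL divergence between the variational posterior over dynamics parameters before and after seeing a transition; with fully factorized Gaussians the KL term is closed-form. The posteriors and eta below are placeholders, not values from the paper.

```python
import numpy as np

def kl_diag_gauss(mu1, var1, mu2, var2):
    # KL(N(mu1, var1) || N(mu2, var2)) for factorized Gaussians, summed over dims.
    return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

eta = 0.1  # curiosity trade-off coefficient (an assumed value)

# Placeholder variational posteriors over the dynamics model's weights,
# before and after a Bayesian update on one (s, a, s') transition.
mu_old, var_old = np.zeros(8), np.full(8, 1.00)
mu_new, var_new = np.full(8, 0.05), np.full(8, 0.90)

external_reward = 1.0
info_gain = kl_diag_gauss(mu_new, var_new, mu_old, var_old)
shaped_reward = external_reward + eta * info_gain
print(f"information gain = {info_gain:.4f}, shaped reward = {shaped_reward:.4f}")
```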
In this paper, we explore how overly-specific extractions can be reduced in OIE systems without producing uninformative or inaccurate results. We propose MinIE, an OIE system that produces minimized, annotated extractions. At its heart, MinIE rewrites OIE extractions by (1) identifying and removing parts that are considered overly specific; (2) representing information about polarity, modality, attribution, and quantities with suitable annotations instead of in the actual extraction. We conducted an experimental study with several real-world datasets and found that MinIE achieves competitive or higher precision and recall than most prior systems, while at the same time producing much shorter extractions.", "targets": "MinIE: Minimizing Facts in Open Information Extraction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-72946525683c4de5b45e6f38e6121060", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Words in some natural languages can have a composite structure. Elements of this structure include the root (that could also be composite), prefixes and suffixes with which various nuances and relations to other words can be expressed. Thus, in order to build a proper word representation one must take into account its internal structure. From a corpus of texts we extract a set of frequent subwords and from the latter set we select patterns, i.e. subwords which encapsulate information on character n-gram regularities. The selection is made using the patternbased Conditional Random Field model [23,19] with l1 regularization. Further, for every word we construct a new sequence over an alphabet of patterns. The new alphabet\u2019s symbols confine a local statistical context stronger than the characters, therefore they allow better representations in R and are better building blocks for word representation. In the task of subword-aware language modeling, pattern-based models outperform character-based analogues by 2-20 perplexity points. Also, a recurrent neural network in which a word is represented as a sum of embeddings of its patterns is on par with a competitive and significantly more sophisticated character-based convolutional architecture.", "targets": "Patterns versus Characters in Subword-aware Neural Language Modeling"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-149e13c4a81e495995e6f09c40f485c0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recommendation and collaborative filtering systems are important in modern information and e-commerce applications. As these systems are becoming increasingly popular in the industry, their outputs could affect business decision making, introducing incentives for an adversarial party to compromise the availability or integrity of such systems. We introduce a data poisoning attack on collaborative filtering systems. We demonstrate how a powerful attacker with full knowledge of the learner can generate malicious data so as to maximize his/her malicious objectives, while at the same time mimicking normal user behavior to avoid being detected. While the complete knowledge assumption seems extreme, it enables a robust assessment of the vulnerability of collaborative filtering schemes to highly motivated attacks. 
We present efficient solutions for two popular factorization-based collaborative filtering algorithms: the alternating minimization formulation and the nuclear norm minimization method. Finally, we test the effectiveness of our proposed algorithms on real-world data and discuss potential defensive strategies.", "targets": "Data Poisoning Attacks on Factorization-Based Collaborative Filtering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-137d93d36c3641b0bb6c79de41ec7cf7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present an efficient method for training slack-rescaled structural SVM. Although finding the most violating label in a margin-rescaled formulation is often easy since the target function decomposes with respect to the structure, this is not the case for a slack-rescaled formulation, and finding the most violated label might be very difficult. Our core contribution is an efficient method for finding the most-violating-label in a slack-rescaled formulation, given an oracle that returns the most-violating-label in a (slightly modified) margin-rescaled formulation. We show that our method enables accurate and scalable training for slack-rescaled SVMs, reducing runtime by an order of magnitude compared to previous approaches to slack-rescaled SVMs.", "targets": "Fast and Scalable Structural SVM with Slack Rescaling"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6b1ac633f8b14e73855efcdd2b7966f3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Text analysis includes lexical analysis of the text and has been widely studied and used in diverse applications. In the last decade, researchers have proposed many efficient solutions to analyze/classify large text datasets; however, analysis/classification of short text is still a challenge because 1) the data is very sparse, 2) it contains noise words, and 3) it is difficult to understand the syntactical structure of the text. Short Messaging Service (SMS) is a text messaging service for mobile/smart phones and this service is frequently used by all mobile users. Because of the popularity of the SMS service, marketing companies nowadays are also using this service for direct marketing, also known as SMS marketing. In this paper, we have proposed an Ontology based SMS Controller which analyzes the text message and classifies it using an ontology as legitimate or spam. The proposed system has been tested on different scenarios and experimental results show that the proposed solution is effective in terms of both efficiency and time. Keywords\u2014Short Text Classification; SMS Spam; Text Analysis; Ontology based SMS Spam; Text Analysis and Ontology", "targets": "Ontology Based SMS Controller for Smart Phones"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-edc988a8d26f47af827570d19dee02ab", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we study a variant of the framework of online learning using expert advice with limited/bandit feedback. We consider each expert as a learning entity, seeking to more accurately reflect certain real-world applications.
In our setting, the feedback at any time t is limited in the sense that it is only available to the expert i that has been selected by the central algorithm (forecaster), i.e., only the expert i receives feedback from the environment and gets to learn at time t. We consider a generic black-box approach whereby the forecaster does not control or know the learning dynamics of the experts apart from knowing the following no-regret learning property: the average regret of any expert j vanishes at a rate of at least O(t_j^(\u03b2\u22121)) with t_j learning steps, where \u03b2 \u2208 [0, 1] is a parameter. In the spirit of competing against the best action in hindsight in the multi-armed bandits problem, our goal here is to be competitive w.r.t. the cumulative losses the algorithm could receive by following the policy of always selecting one expert. We prove the following hardness result: without any coordination between the forecaster and the experts, it is impossible to design a forecaster achieving no-regret guarantees. In order to circumvent this hardness result, we consider a practical assumption allowing the forecaster to \u201cguide\u201d the learning process of the experts by filtering/blocking some of the feedback observed by them from the environment, i.e., not allowing the selected expert i to learn at time t for some time steps. Then, we design a novel no-regret learning algorithm LEARNEXP for this problem setting by carefully guiding the feedback observed by the experts. We prove that LEARNEXP achieves the worst-case expected cumulative regret of O(T^(1/(2\u2212\u03b2))) after T time steps and matches the regret bound of \u0398(T^(1/2)) for the special case of multi-armed bandits.", "targets": "Learning to Use Learners\u2019 Advice"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-038eb25aa1174da288d76d782031f5e3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A commonly used learning rule is to approximately minimize the average loss over the training set. Other learning algorithms, such as AdaBoost and hard-SVM, aim at minimizing the maximal loss over the training set. The average loss is more popular, particularly in deep learning, due to three main reasons. First, it can be conveniently minimized using online algorithms that process a few examples at each iteration. Second, it is often argued that there is no point in minimizing the loss on the training set too much, as it will not be reflected in the generalization loss. Last, the maximal loss is not robust to outliers. In this paper we describe and analyze an algorithm that can convert any online algorithm to a minimizer of the maximal loss. We prove that in some situations better accuracy on the training set is crucial to obtain good performance on unseen examples. Finally, we propose robust versions of the approach that can handle outliers.", "targets": "Minimizing the Maximal Loss: How and Why"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4f0290fc291e41469982ba0edaf133dc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this article, the hybrid COAW algorithm, which combines the Cuckoo Optimization Algorithm with the simple additive weighting method, is presented to solve multi-objective problems.
The Cuckoo algorithm is an efficient and structured method for solving nonlinear continuous problems. The Pareto frontiers created by the proposed COAW algorithm are exact and have good dispersion. This method finds the Pareto frontiers at high speed and properly identifies their beginning and end points. In order to validate the proposed algorithm, several experimental problems were analyzed. The results indicate the effectiveness of the COAW algorithm for solving multi-objective problems.", "targets": "THE NEW HYBRID COAW METHOD FOR SOLVING MULTI-OBJECTIVE PROBLEMS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-777a258d102241c19ab151d07b8ad472", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This article constructs a Turing Machine which can solve for \u03b2\u2032, which is RE-complete. Such a machine is only possible if there is something wrong with the foundations of computer science and mathematics. We therefore check our work by looking very closely at Cantor\u2019s diagonalization and construct a novel formal language as an Abelian group which allows us, through equivalence relations, to provide a non-trivial counterexample to Cantor\u2019s argument. As if that wasn\u2019t enough, we then discover that the impredicative nature of G\u00f6del\u2019s diagonalization lemma leads to logical tautology, invalidating any meaning behind the method, leaving no doubt that diagonalization is flawed. Our discovery in regard to these foundational arguments opens the door to solving the P vs NP problem.", "targets": "A Stronger Foundation for Computer Science and P=NP"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-73a6ba4bad00459a9389e4026b55d0c5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Here, I review the current state of the art in many areas of AI to estimate when it\u2019s reasonable to expect human-level AI development. Predictions of prominent AI researchers vary broadly, from the very pessimistic predictions of Andrew Ng to the much more moderate predictions of Geoffrey Hinton and the optimistic predictions of Shane Legg, DeepMind co-founder. Given the huge rate of progress in recent years and this broad range of predictions from AI experts, AI safety questions are also discussed.", "targets": "Review of state-of-the-arts in artificial intelligence with application to AI safety problem"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-449af93377e543a0a28785e83c11c7a3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel. While these policies are effective for many tasks, interpretation of their induced communication strategies has remained a challenge. Here we propose to interpret agents\u2019 messages by translating them. Unlike in typical machine translation problems, we have no parallel data to learn from.
Instead we develop a translation model based on the insight that agent messages and natural language strings mean the same thing if they induce the same belief about the world in a listener. We present theoretical guarantees and empirical evidence that our approach preserves both the semantics and pragmatics of messages by ensuring that players communicating through a translation layer do not suffer a substantial loss in reward relative to players with a common language.", "targets": "Translating Neuralese"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ded63d723aa2463baf14e5c453b3a120", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Circumscription is a representative example of a nonmonotonic reasoning inference technique. Circumscription has often been studied for first-order theories, but its propositional version has also been the subject of extensive research, having been shown equivalent to the extended closed world assumption (ECWA). Moreover, entailment in propositional circumscription is a well-known example of a decision problem in the second level of the polynomial hierarchy. This paper proposes a new Boolean Satisfiability (SAT)-based algorithm for entailment in propositional circumscription that explores the relationship of propositional circumscription to minimal models. The new algorithm is inspired by ideas commonly used in SAT-based model checking, namely counterexample guided abstraction refinement. In addition, the new algorithm is refined to compute the theory closure for the generalized closed world assumption (GCWA). Experimental results show that the new algorithm can solve problem instances that other solutions are unable to solve.", "targets": "Counterexample Guided Abstraction Refinement Algorithm for Propositional Circumscription"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9f55e4460665415a8565588fb4dd4dac", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Stochastic gradient descent (SGD) on a low-rank factorization [9] is commonly employed to speed up matrix problems including matrix completion, subspace tracking, and SDP relaxation. In this paper, we exhibit a step size scheme for SGD on a low-rank least-squares problem, and we prove that, under broad sampling conditions, our method converges globally from a random starting point within O(\u03f5^(\u22121) n log n) steps with constant probability for constant-rank problems. Our modification of SGD relates it to stochastic power iteration. We also show experiments to illustrate the runtime and convergence of the algorithm.", "targets": "Global Convergence of Stochastic Gradient Descent for Some Non-convex Matrix Problems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3de654e58fc14405a4ff75386b901ce2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper describes our deep learning-based approach to sentiment analysis in Twitter as part of SemEval-2016 Task 4. We use a convolutional neural network to determine sentiment and participate in all subtasks, i.e.
two-point, three-point, and five-point scale sentiment classification and two-point and five-point scale sentiment quantification. We achieve competitive results for two-point scale sentiment classification and quantification, ranking fifth and a close fourth (third and second by alternative metrics), respectively, despite using only pre-trained embeddings that contain no sentiment information. We achieve good performance on three-point scale sentiment classification, ranking eighth out of 35, while performing poorly on five-point scale sentiment classification and quantification. An error analysis reveals that this is due to the model\u2019s low expressiveness in capturing negative sentiment, as well as an inability to take ordinal information into account. We propose improvements in order to address these and other issues.", "targets": "INSIGHT-1 at SemEval-2016 Task 4: Convolutional Neural Networks for Sentiment Classification and Quantification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c48f27d2e429424db414f151c7aa546e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The standard approach to supervised classification involves the minimization of a log-loss as an upper bound to the classification error. While this is a tight bound early on in the optimization, it overemphasizes the influence of incorrectly classified examples far from the decision boundary. Updating the upper bound during the optimization leads to improved classification rates while transforming the learning into a sequence of minimization problems. In addition, in the context where the classifier is part of a larger system, this modification makes it possible to link the performance of the classifier to that of the whole system, allowing the seamless introduction of external constraints.", "targets": "TIGHTER BOUNDS LEAD TO IMPROVED CLASSIFIERS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1e2b527d63e14aa5b7c4e7b9659de44f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Embeddings are generic representations that are useful for many NLP tasks. In this paper, we introduce DENSIFIER, a method that learns an orthogonal transformation of the embedding space that focuses the information relevant for a task in an ultradense subspace of a dimensionality that is smaller by a factor of 100 than the original space. We show that ultradense embeddings generated by DENSIFIER reach state of the art on a lexicon creation task in which words are annotated with three types of lexical information \u2013 sentiment, concreteness and frequency. On the SemEval-2015 10B sentiment analysis task we show that no information is lost when the ultradense subspace is used, but training is an order of magnitude more efficient due to the compactness of the ultradense space.", "targets": "Ultradense Word Embeddings by Orthogonal Transformation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d8d689cd0a4b44f9a3ee0f295e014bbf", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "This paper presents the development of several models of a deep convolutional auto-encoder in the Caffe deep learning framework and their experimental evaluation on the example of the MNIST dataset. We have created five models of a convolutional auto-encoder which differ architecturally by the presence or absence of pooling and unpooling layers in the auto-encoder\u2019s encoder and decoder parts. Our results show that the developed models provide very good results in dimensionality reduction and unsupervised clustering tasks, and small classification errors when we used the learned internal code as an input of a supervised linear classifier and multi-layer perceptron. The best results were provided by a model where the encoder part contains convolutional and pooling layers, followed by an analogous decoder part with deconvolution and unpooling layers without the use of switch variables in the decoder part. The paper also discusses practical details of the creation of a deep convolutional auto-encoder in the very popular Caffe deep learning framework. We believe that our approach and results presented in this paper could help other researchers to build efficient deep neural network architectures in the future.", "targets": "A Deep Convolutional Auto-Encoder with Pooling - Unpooling Layers in Caffe"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-093d26b8f6ac4b06b646fb97b9959d64", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose an online convex optimization algorithm (RESCALEDEXP) that achieves optimal regret in the unconstrained setting without prior knowledge of any bounds on the loss functions. We prove a lower bound showing an exponential separation between the regret of existing algorithms that require a known bound on the loss functions and any algorithm that does not require such knowledge. RESCALEDEXP matches this lower bound asymptotically in the number of iterations. RESCALEDEXP is naturally hyperparameter-free and we demonstrate empirically that it matches prior optimization algorithms that require hyperparameter optimization. 1 Online Convex Optimization Online Convex Optimization (OCO) [1, 2] provides an elegant framework for modeling noisy, antagonistic or changing environments. The problem can be stated formally with the help of the following definitions: Convex Set: A set W is convex if W is contained in some real vector space and tw + (1 \u2212 t)w\u2032 \u2208 W for all w, w\u2032 \u2208 W and t \u2208 [0, 1]. Convex Function: f : W \u2192 R is a convex function if f(tw + (1 \u2212 t)w\u2032) \u2264 tf(w) + (1 \u2212 t)f(w\u2032) for all w, w\u2032 \u2208 W and t \u2208 [0, 1]. An OCO problem is a game of repeated rounds in which on round t a learner first chooses an element wt in some convex space W, then receives a convex loss function \u2113t, and suffers loss \u2113t(wt). The regret of the learner with respect to some other u \u2208 W is defined by", "targets": "Online Convex Optimization with Unconstrained Domains and Losses"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-58965754189f467ea00aebe1963f64cc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "We formalize the Simplified Boardgames language, which describes a subclass of arbitrary board games. The language structure is based on regular expressions, which makes the rules easily machine-processable while keeping the rules concise and fairly human-readable.", "targets": "SIMPLIFIED BOARDGAMES"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b06cec63b22e48b28e8f9191f20bbc9c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a new tensor factorization method, called the Sparse Hierarchical Tucker (Sparse H-Tucker), for sparse and high-order data tensors. Sparse H-Tucker is inspired by its namesake, the classical Hierarchical Tucker method, which aims to compute a tree-structured factorization of an input data set that may be readily interpreted by a domain expert. However, Sparse H-Tucker uses a nested sampling technique to overcome a key scalability problem in Hierarchical Tucker, which is the creation of an unwieldy intermediate dense core tensor; the result of our approach is a faster, more space-efficient, and more accurate method. We extensively test our method on a real healthcare dataset, which is collected from 30K patients and results in an 18th-order sparse data tensor. Unlike competing methods, Sparse H-Tucker can analyze the full data set on a single multi-threaded machine. It can also do so more accurately and in less time than the state-of-the-art: on a 12th-order subset of the input data, Sparse H-Tucker is 18\u00d7 more accurate and 7.5\u00d7 faster than a previous state-of-the-art method. Even for analyzing low-order tensors (e.g., 4th-order), our method requires close to an order of magnitude less time and over two orders of magnitude less memory, as compared to traditional tensor factorization methods such as CP and Tucker. Moreover, we observe that Sparse H-Tucker scales nearly linearly in the number of non-zero tensor elements. The resulting model also provides an interpretable disease hierarchy, which is confirmed by a clinical expert.", "targets": "Sparse Hierarchical Tucker Factorization and its Application to Healthcare"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7b833986c93a4e248bfb29c73fa40c10", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Text alignment and text quality are critical to the accuracy of Machine Translation (MT) systems, some NLP tools, and any other text processing tasks requiring bilingual data. This research proposes a language-independent bi-sentence filtering approach based on Polish (not a position-sensitive language) to English experiments. This cleaning approach was developed on the TED Talks corpus and also initially tested on the Wikipedia comparable corpus, but it can be used for any text domain or language pair. The proposed approach implements various heuristics for sentence comparison. Some of them leverage synonyms and semantic and structural analysis of text as additional information. Minimization of data loss was ensured.
An improvement in MT system score with text processed using the tool is discussed.", "targets": "Noisy-parallel and comparable corpora filtering methodology for the extraction of bi-lingual equivalent data at sentence level"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-922ee293824146d7b1afe038520dce3c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The explosive growth of location-enabled devices, coupled with the increasing use of Internet services, has led to an increasing awareness of the importance and usage of geospatial information in many applications. The navigation apps (often called \u201cMaps\u201d) use a variety of available data sources to calculate and predict the travel time as well as several options for routing in public transportation, car or pedestrian modes. This paper evaluates the pedestrian mode of Maps apps in three major smartphone operating systems (Android, iOS and Windows Phone). In the paper, we will show that the Maps apps on iOS, Android and Windows Phone in pedestrian mode predict travel time without learning from the individual\u2019s movement profile. In addition, we will demonstrate that those apps suffer from a specific data quality issue which relates to the absence of information about the location and type of pedestrian crossings. Finally, we will illustrate learning from the movement profiles of individuals using various predictive analytics models to improve the accuracy of travel time estimation.", "targets": "Predictive Analytics for Enhancing Travel Time Estimation in Navigation Apps of Apple, Google, and Microsoft"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e2c700b875514b64b2823c41745a1ca6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A series of Monte Carlo studies was performed to compare the behavior of some alternative procedures for reasoning under uncertainty. The behavior of several Bayesian, linear model and default reasoning procedures was examined in the context of increasing levels of calibration error. The most interesting result is that Bayesian procedures tended to output more extreme posterior belief values (posterior beliefs near 0.0 or 1.0) than other techniques, but the linear models were relatively less likely to output strong support for an erroneous conclusion. Also, accounting for the probabilistic dependencies between evidence items was important for both Bayesian and linear updating procedures.", "targets": "Reasoning under Uncertainty:"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-855c5337ca37404ba8410884c38252da", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper describes the pre-processing phase of an ontology graph generation system for Punjabi text documents of different domains. This research paper focuses on the pre-processing of Punjabi text documents. Pre-processing produces a structured representation of the input text.
Pre-processing for ontology graph generation includes applying input restrictions to the text, removal of special symbols and punctuation marks, removal of duplicate terms, removal of stop words, and extraction of terms by matching input terms with dictionary and gazetteer list terms. Keywords\u2014Ontology, Pre-processing phase, Ontology Graph, Knowledge Representation, Natural Language Processing.", "targets": "Pre-processing of Domain Ontology Graph Generation System in Punjabi"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6a9c9c9eeea740e1837b1138da16902d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We investigate the usage of convolutional neural networks (CNNs) for the slot filling task in spoken language understanding. We propose a novel CNN architecture for sequence labeling which takes into account the previous context words with preserved order information and pays special attention to the current word with its surrounding context. Moreover, it combines the information from past and future words for classification. Our proposed CNN architecture outperforms even the previous best ensemble recurrent neural network model and achieves state-of-the-art results with an F1-score of 95.61% on the ATIS benchmark dataset without using any additional linguistic knowledge and resources.", "targets": "Sequential Convolutional Neural Networks for Slot Filling in Spoken Language Understanding"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-aa9f42530cec46ffabb86ba92fec5b61", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This work proposes a low-complexity nonlinearity model and develops adaptive algorithms over it. The model is based on the decomposable\u2014or rank-one, in tensor language\u2014Volterra kernels. It may also be described as a product of FIR filters, which explains its low complexity. The rank-one model is also interesting because it comes from a well-posed problem in approximation theory. The paper uses such a model in an estimation theory context to develop an exact gradient-type algorithm, from which adaptive algorithms such as the least mean squares (LMS) filter and its data-reuse version\u2014the TRUE-LMS\u2014are derived. Stability and convergence issues are addressed. The algorithms are then tested in simulations, which show their good performance when compared to other nonlinear processing algorithms in the literature.", "targets": "Nonlinear Adaptive Algorithms on Rank-One Tensor Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5ddac4af416d447d8e39178cb274f27a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper takes an approach to clustering domestic electricity load profiles that has been successfully used with data from Portugal and applies it to UK data. Clustering techniques are applied and it is found that the preferred technique in the Portuguese work (a two-stage process combining Self-Organised Maps and K-means) is not appropriate for the UK data. The work shows that up to nine clusters of households can be identified, with the differences in usage profiles being visually striking.
This demonstrates the appropriateness of breaking electricity usage patterns down in more detail than the two load profiles currently published by the electricity industry. The paper details initial results using data collected in Milton Keynes around 1990. Further work is described and will concentrate on building accurate and meaningful clusters of similar electricity users in order to better direct demand-side management initiatives to the most relevant target customers.", "targets": "Application of a clustering framework to UK domestic electricity data"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fe04cf65869e4c08b9c8f598c6eec265", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In Bayesian networks, a Most Probable Explanation (MPE) is a complete variable instantiation with the highest probability given the current evidence. In this paper, we discuss the problem of finding robustness conditions of the MPE under single parameter changes. Specifically, we ask the question: How much change in a single network parameter can we afford to apply while keeping the MPE unchanged? We will describe a procedure, which is the first of its kind, that computes this answer for all parameters in the Bayesian network in time O(n exp(w)), where n is the number of network variables and w is its treewidth.", "targets": "On the Robustness of Most Probable Explanations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-71775f378a5a44f1b6ede1e0e3f15d23", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We learn rich natural sound representations by capitalizing on large amounts of unlabeled sound data collected in the wild. We leverage the natural synchronization between vision and sound to learn an acoustic representation using two million unlabeled videos. Unlabeled video has the advantage that it can be economically acquired at massive scales, yet contains useful signals about natural sound. We propose a student-teacher training procedure which transfers discriminative visual knowledge from well-established visual recognition models into the sound modality using unlabeled video as a bridge. Our sound representation yields significant performance improvements over the state-of-the-art results on standard benchmarks for acoustic scene/object classification. Visualizations suggest some high-level semantics automatically emerge in the sound network, even though it is trained without ground truth labels.", "targets": "SoundNet: Learning Sound Representations from Unlabeled Video"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-033fb9f94e1a47429a28635f504c58a8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We describe MITRE\u2019s submission to SemEval-2016 Task 6, Detecting Stance in Tweets. This effort achieved the top score in Task A on supervised stance detection, producing an average F1 score of 67.8 when assessing whether a tweet author was in favor of or against a topic. We employed a recurrent neural network initialized with features learned via distant supervision on two large unlabeled datasets.
We trained embeddings of words and phrases with the word2vec skip-gram method, then used those features to learn sentence representations via a hashtag prediction auxiliary task. These sentence vectors were then fine-tuned for stance detection on several hundred labeled examples. The result was a high-performing system that used transfer learning to maximize the value of the available training data.", "targets": "MITRE at SemEval-2016 Task 6: Transfer Learning for Stance Detection"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-504f877684da4527acac2832177e550a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Using current reinforcement learning methods, it has recently become possible to learn to play unknown 3D games from raw pixels. In this work, we study the challenges that arise in such complex environments, and summarize current methods to approach these. We choose a task within the Doom game that has not been approached yet. The goal for the agent is to fight enemies in a 3D world consisting of five rooms. We train the DQN and LSTM-A3C algorithms on this task. Results show that both algorithms learn sensible policies, but fail to achieve high scores given the amount of training. We provide insights into the learned behavior, which can serve as a valuable starting point for further research in the Doom domain.", "targets": "Deep Reinforcement Learning From Raw Pixels in Doom"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-40de837961a14cc9b655749b7a51cf0f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This work focuses on answering single-relation factoid questions over Freebase. Each question can acquire the answer from a single fact of the form (subject, predicate, object) in Freebase. This task, simple question answering (SimpleQA), can be addressed via a two-step pipeline: entity linking and fact selection. In fact selection, we match the subject entity in the fact with the entity mention in the question by a character-level convolutional neural network (char-CNN), and match the predicate in the fact with the question by a word-level CNN (word-CNN). This work makes two main contributions. (i) A simple and effective entity linker over Freebase is proposed. Our entity linker outperforms the state-of-the-art entity linker of the SimpleQA task. (ii) A novel attentive max-pooling is stacked over the word-CNN, so that the predicate representation can be matched with the predicate-focused question representation more effectively. Experiments show that our system sets a new state of the art in this task.", "targets": "Simple Question Answering by Attentive Convolutional Neural Network"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8f6c655c043f493aa990fe4c29032d9b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Artificial intelligence methods have often been applied to perform specific functions or tasks in the cyber\u2013defense realm.
However, as adversary methods become more complex and difficult to divine, piecemeal efforts to understand cyber\u2013attacks, and malware\u2013based attacks in particular, are not providing sufficient means for malware analysts to understand the past, present and future characteristics of malware. In this paper, we present the Malware Analysis and Attribution using Genetic Information (MAAGI) system. The underlying idea behind the MAAGI system is that there are strong similarities between malware behavior and biological organism behavior, and that applying biologically inspired methods to corpora of malware can help analysts better understand the ecosystem of malware attacks. Due to the sophistication of the malware and the analysis, the MAAGI system relies heavily on artificial intelligence techniques to provide this capability. It has already yielded promising results over its development life, and will hopefully inspire more integration between the artificial intelligence and cyber\u2013defense communities.", "targets": "Artificial Intelligence Based Malware Analysis"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f33189c1b47246cdb59b559232867856", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We show how eye-tracking corpora can be used to improve sentence compression models, presenting a novel multi-task learning algorithm based on multi-layer LSTMs. We obtain performance competitive with or better than state-of-the-art approaches.", "targets": "Improving sentence compression by learning to predict gaze"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8116619091ea44b09c316964a1080ec4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present the discriminative recurrent sparse auto-encoder model, comprising a recurrent encoder of rectified linear units, unrolled for a fixed number of iterations, and connected to two linear decoders that reconstruct the input and predict its supervised classification. Training via backpropagation-through-time initially minimizes an unsupervised sparse reconstruction error; the loss function is then augmented with a discriminative term on the supervised classification. The depth implicit in the temporally-unrolled form allows the system to exhibit far more representational power, while keeping the number of trainable parameters fixed. From an initially unstructured network the hidden units differentiate into categorical-units, each of which represents an input prototype with a well-defined class, and part-units representing deformations of these prototypes. The learned organization of the recurrent encoder is hierarchical: part-units are driven directly by the input, whereas the activity of categorical-units builds up over time through interactions with the part-units. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders achieve excellent performance on MNIST.", "targets": "Discriminative Recurrent Sparse Auto-Encoders"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-26bef879b29447ae878401e759b78a2e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "In many natural language processing (NLP) tasks, a document is commonly modeled as a bag of words using the term frequency-inverse document frequency (TF-IDF) vector. One major shortcoming of the frequency-based TF-IDF feature vector is that it ignores word order, which carries syntactic and semantic relationships among the words in a document and can be important in some NLP tasks such as genre classification. This paper proposes a novel distributed vector representation of a document: a simple recurrent-neural-network language model (RNN-LM) or a long short-term memory RNN language model (LSTM-LM) is first created from all documents in a task; some of the LM parameters are then adapted by each document, and the adapted parameters are vectorized to represent the document. The new document vectors are labeled as DV-RNN and DV-LSTM, respectively. We believe that our new document vectors can capture some high-level sequential information in the documents, which other current document representations fail to capture. The new document vectors were evaluated in the genre classification of documents in three corpora: the Brown Corpus, the BNC Baby Corpus and an artificially created Penn Treebank dataset. Their classification performances are compared with the performance of the TF-IDF vector and the state-of-the-art distributed memory model of paragraph vector (PV-DM). The results show that DV-LSTM significantly outperforms TF-IDF and PV-DM in most cases, and combinations of the proposed document vectors with TF-IDF or PV-DM may further improve performance.", "targets": "Recurrent Neural Network Language Model Adaptation Derived Document Vector"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-17c41827abb8490dbc0263f73921db88", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We extend Stochastic Gradient Variational Bayes to perform posterior inference for the weights of Stick-Breaking processes. This development allows us to define a Stick-Breaking Variational Autoencoder (SB-VAE), a Bayesian nonparametric version of the variational autoencoder that has a latent representation with stochastic dimensionality. We experimentally demonstrate that the SB-VAE, and a semi-supervised variant, learn highly discriminative latent representations that often outperform the Gaussian VAE\u2019s.", "targets": "STICK-BREAKING VARIATIONAL AUTOENCODERS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-24f1bf92e88e4f6281d6d5f0587c416d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We formalize synthesis of shared control protocols with correctness guarantees for temporal logic specifications. More specifically, we introduce a modeling formalism in which both a human and an autonomy protocol can issue commands to a robot towards performing a certain task. These commands are blended into a joint input to the robot. The autonomy protocol is synthesized using an abstraction of possible human commands accounting for randomness in decisions caused by factors such as fatigue or incomprehensibility of the problem at hand. The synthesis is designed to ensure that the resulting robot behavior satisfies given safety and performance specifications, e.g., in temporal logic.
Our solution is based on nonlinear programming and we address the inherent scalability issue by presenting alternative methods. We assess the feasibility and the scalability of the approach by an experimental evaluation.", "targets": "Synthesis of Shared Control Protocols with Provable Safety and Performance Guarantees"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6886492287ac4d56b75ed6954d841325", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The work presented here involves the design of a Multi Layer Perceptron (MLP) based pattern classifier for recognition of handwritten Bangla digits using a 76-element feature vector. Bangla is the second most popular script and language in the Indian subcontinent and the fifth most popular language in the world. The feature set developed for representing handwritten Bangla numerals here includes 24 shadow features, 16 centroid features and 36 longest-run features. On experimentation with a database of 6000 samples, the technique yields an average recognition rate of 96.67% evaluated after three-fold cross-validation of results. It is useful for applications related to OCR of handwritten Bangla digits and can also be extended to include OCR of handwritten characters of the Bangla alphabet.", "targets": "An MLP based Approach for Recognition of Handwritten \u2018Bangla\u2019 Numerals"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d4e6dedd6c894a9e8a90c5a708c603f3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "LSTMs have become a basic building block for many deep NLP models. In recent years, many improvements and variations have been proposed for deep sequence models in general, and LSTMs in particular. We propose and analyze a series of augmentations and modifications to LSTM networks resulting in improved performance for text classification datasets. We observe compounding improvements on traditional LSTMs using Monte Carlo test-time model averaging, average pooling, and residual connections, along with four other suggested modifications. Our analysis provides a simple, reliable, and high-quality baseline model.", "targets": "BINING RECENT INSIGHTS FOR LSTMS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f635cfc0757040aba94da6e0b68913df", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Social media and data mining are increasingly being used to analyse political and societal issues. Here we undertake the classification of social media users as supporting or opposing ongoing independence movements in their territories. Independence movements occur in territories whose citizens have conflicting national identities; users with opposing national identities will then support or oppose the sense of being part of an independent nation that differs from the officially recognised country. We describe a methodology that relies on users\u2019 self-reported location to build datasets for three territories \u2013 Catalonia, the Basque Country and Scotland \u2013 and we test language-independent classifiers using four types of features.
We show the effectiveness of the approach for building large annotated datasets, and the ability to achieve accurate, language-independent classification performances ranging from 85% to 97% for the three territories under study. A data analysis shows the existence of echo chambers that isolate opposing national identities from each other.", "targets": "Stance Classification of Social Media Users in Independence Movements"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a17bd58869bb49a3988c24e4d519adbb", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Text Classification is a challenging and red-hot field in the current scenario and has great importance in text categorization applications. A lot of research work has been done in this field, but there is a need to categorize a collection of text documents into mutually exclusive categories by extracting the concepts or features using a supervised learning paradigm and different classification algorithms. In this paper, a new Fuzzy Similarity Based Concept Mining Model (FSCMM) is proposed to classify a set of text documents into pre-defined Category Groups (CG) by training and preparing them on the sentence, document and integrated corpora levels, along with feature reduction and ambiguity removal on each level, to achieve high system performance. A Fuzzy Feature Category Similarity Analyzer (FFCSA) is used to analyze each extracted feature of the Integrated Corpora Feature Vector (ICFV) with the corresponding categories or classes. This model uses a Support Vector Machine Classifier (SVMC) to correctly classify the training data patterns into two groups, i.e., +1 and \u22121, thereby producing accurate and correct results. The proposed model works efficiently and effectively, with strong performance and highly accurate results. Keywords\u2014Text Classification; Natural Language Processing; Feature Extraction; Concept Mining; Fuzzy Similarity Analyzer; Dimensionality Reduction; Sentence Level; Document Level; Integrated Corpora Level Processing.", "targets": "A Fuzzy Similarity Based Concept Mining Model for Text Classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-23f0a4953b1348769a062307a4cd1a64", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Distortion of the underlying speech is a common problem for single-channel speech enhancement algorithms, and hinders such methods from being used more extensively. A dictionary-based speech enhancement method that emphasizes preserving the underlying speech is proposed. Spectral patches of clean speech are sampled and clustered to train a dictionary. Given a noisy speech spectral patch, the best matching dictionary entry is selected and used to estimate the noise power at each time-frequency bin. The noise estimation step is formulated as an outlier detection problem, where the noise at each bin is assumed present only if it is an outlier to the corresponding bin of the best matching dictionary entry. This framework assigns higher priority to removing spectral elements that strongly deviate from a typical spoken unit stored in the trained dictionary.
Even without the aid of a separate noise model, this method can achieve significant noise reduction for various non-stationary noises, while effectively preserving the underlying speech in more challenging noisy environments.", "targets": "SINGLE CHANNEL SPEECH ENHANCEMENT USING OUTLIER DETECTION"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-866aa386f0c9404aa67e10bf1af67def", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Decision theoretical troubleshooting is about minimizing the expected cost of solving a certain problem, like repairing a complicated man-made device. In this paper we consider situations where you have to take apart some of the device to get access to certain clusters and actions. Specifically, we investigate troubleshooting with independent actions in a tree of clusters where actions inside a cluster cannot be performed before the cluster is opened. The problem is non-trivial because there is a cost associated with opening and closing a cluster. Troubleshooting with independent actions and no clusters can be solved in O(n \u00b7 lg n) time (n being the number of actions) by the well-known \u201cP-over-C\u201d algorithm due to Kadane and Simon, but an efficient and optimal algorithm for a tree cluster model has not yet been found. In this paper we describe a \u201cbottom-up P-over-C\u201d O(n \u00b7 lg n) time algorithm and show that it is optimal when the clusters do not need to be closed to test whether the actions solved the problem.", "targets": "The Cost of Troubleshooting Cost Clusters with Inside Information"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ac7b09460a1b44fb9c301f1881863ff8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The stable model (SM) semantics lacks the properties of existence, relevance and cumulativity. If we prospectively consider the class of conservative extensions of the SM semantics (i.e., semantics that for each normal logic program P retrieve a superset of the set of stable models of P), one may wonder how the semantics in this class behave with respect to the aforementioned properties. That is the type of issue dealt with in this paper. We define a large class of conservative extensions of the SM semantics, dubbed affix stable model semantics (ASM), and study the above-referred properties in two non-disjoint subfamilies of the class ASM, here dubbed ASMh and ASMm. From this study a number of results stem which facilitate the assessment of semantics in the class ASMh \u222a ASMm with respect to the properties of existence, relevance and cumulativity, whilst unveiling relations among these properties. As a result of the approach taken in our work, light is shed on the characterization of the SM semantics, as we show that the properties of (lack of) existence and (lack of) cautious monotony are equivalent, which opposes statements on this issue that may be found in the literature. We also characterize the relevance failure of SM semantics in a clearer way than usually stated in the literature.", "targets": "Properties of Stable Model Semantics Extensions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e6bfc459578a4fa192051e279b6acfc4", "definition": "In this task, you are given a part of an article.
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper discusses a method for implementing a probabilistic inference system based on an extended relational data model. This model provides a unified approach for a variety of applications such as dynamic programming, solving sparse linear equations, and constraint propagation. In this framework, the probability model is represented as a generalized relational database. Subsequent probabilistic requests can be processed as standard relational queries. Conventional database management systems can be easily adopted for implementing such an approximate reasoning system.", "targets": "A Method for Implementing a Probabilistic Model as a Relational Database"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2d38c991646c4fd9a98f19c3d5b10d94", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In multilingual question answering, either the question needs to be translated into the document language, or vice versa. In addition to direction, there are multiple methods to perform the translation, four of which we explore in this paper: word-based, 10-best, context-based, and grammar-based. We build a feature for each combination of translation direction and method, and train a model that learns optimal feature weights. On a large forum dataset consisting of posts in English, Arabic, and Chinese, our novel learn-to-translate approach was more effective than a strong baseline (p < 0.05): translating all text into English, then training a classifier based only on English (original or translated) text.", "targets": "Learning to Translate for Multilingual Question Answering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-04c8f1f43c7746ba9a08386db48b2054", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Statistical Relational Learning (SRL) methods have shown that classification accuracy can be improved by integrating relations between samples. Techniques such as iterative classification or relaxation labeling achieve this by propagating information between related samples during the inference process. When only a few samples are labeled and connections between samples are sparse, collective inference methods have shown large improvements over standard feature-based ML methods. However, in contrast to feature-based ML, collective inference methods require complex inference procedures and often depend on the strong assumption of label consistency among related samples. In this paper, we introduce new relational features for standard ML methods by extracting information from direct and indirect relations. We show empirically on three standard benchmark datasets that our relational features yield results comparable to collective inference methods. Finally, we show that our proposal outperforms these methods when additional information is available.", "targets": "Graph Based Relational Features for Collective Classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1a6b5860c48a4c5795b23c28bca80a88", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "This paper describes a new approach allowing the generation of a simplified Biped gait. This approach combines a classical dynamic modeling with an inverse kinematics solver based on particle swarm optimization, PSO. First, an inverted pendulum, IP, is used to obtain a simplified dynamic model of the robot and to compute the target position of a key point in biped locomotion, the Centre Of Mass, COM. The proposed algorithm, called IK-PSO, Inverse Kinematics PSO, returns an inverse kinematics solution corresponding to that COM respecting the joints constraints. In this paper the inertia weight PSO variant is used to generate a possible solution according to the stability based fitness function and a set of joints motions constraints. The method is applied with success to leg motion generation. Since it is based on a precalculated COM that satisfies biped stability, the proposal also allows planning a walk, with an application on a small-size biped robot. General Terms: Robotics, Robotic Modeling, Computational Intelligence", "targets": "IK-PSO, PSO Inverse Kinematics Solver with Application to Biped Gait Generation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f336fe02dd3d4d27abcae8059f161dd5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider manipulation problems when the manipulator only has partial information about the votes of the nonmanipulators. Such partial information is described by an information set, which is the set of profiles of the nonmanipulators that are indistinguishable to the manipulator. Given such an information set, a dominating manipulation is a non-truthful vote that the manipulator can cast which makes the winner at least as preferable (and sometimes more preferable) as the winner when the manipulator votes truthfully. When the manipulator has full information, computing whether or not there exists a dominating manipulation is in P for many common voting rules (by known results). We show that when the manipulator has no information, there is no dominating manipulation for many common voting rules. When the manipulator\u2019s information is represented by partial orders and only a small portion of the preferences are unknown, computing a dominating manipulation is NP-hard for many common voting rules. Our results thus throw light on whether we can prevent strategic behavior by limiting information about the votes of other voters.", "targets": "Dominating Manipulations in Voting with Partial Information"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e2f2f6688cd34051b8e0e84c6a25f0a1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Distributed optimization algorithms for large-scale machine learning suffer from a communication bottleneck. Reducing communication makes the efficient aggregation of partial work from different machines more challenging. In this paper we present a novel generalization of the recent communication efficient primal-dual coordinate ascent framework (COCOA). Our framework, COCOA+, allows for additive combination of local updates to the global parameters at each iteration, whereas previous schemes only allowed conservative averaging. 
We give stronger (primal-dual) convergence rate guarantees for both COCOA as well as our new variants, and generalize the theory for both methods to also cover non-smooth convex loss functions. We provide an extensive experimental comparison on several real-world distributed datasets, showing markedly improved performance, especially when scaling up the number of machines.", "targets": "Adding vs. Averaging in Distributed Primal-Dual Optimization "} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fd8593f18aff41d386c829a9eca6e9bf", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Understanding language goes hand in hand with the ability to integrate complex contextual information obtained via perception. In this work, we present a novel task for grounded language understanding: disambiguating a sentence given a visual scene which depicts one of the possible interpretations of that sentence. To this end, we introduce a new multimodal corpus containing ambiguous sentences, representing a wide range of syntactic, semantic and discourse ambiguities, coupled with videos that visualize the different interpretations for each sentence. We address this task by extending a vision model which determines if a sentence is depicted by a video. We demonstrate how such a model can be adjusted to recognize different interpretations of the same underlying sentence, allowing us to disambiguate sentences in a unified fashion across the different ambiguity types.", "targets": "Do You See What I Mean? Visual Resolution of Linguistic Ambiguities"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-027474bea82e4b48bcfec9f5e74c186f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Multimodal sentiment analysis is an increasingly popular research area, which extends the conventional language-based definition of sentiment analysis to a multimodal setup where other relevant modalities accompany language. In this paper, we pose the problem of multimodal sentiment analysis as modeling intra-modality and inter-modality dynamics. We introduce a novel model, termed Tensor Fusion Network, which learns both such dynamics end-to-end. The proposed approach is tailored for the volatile nature of spoken language in online videos as well as accompanying gestures and voice. In the experiments, our model outperforms state-of-the-art approaches for both multimodal and unimodal sentiment analysis.", "targets": "Tensor Fusion Network for Multimodal Sentiment Analysis"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-602ec94ae18c41a980e8ea587053a582", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a multiagent system that has feedforward networks as its subset while being free from the layer structure of the matrix-vector scheme. Deep networks are often compared to the brain neocortex or the visual perception system. One of the largest differences from the human brain is the use of matrix-vector multiplication based on a layer architecture. It would help us understand the way the human brain works if we managed to develop a good deep network model without the layer architecture while preserving its performance. 
The brain neocortex works as an aggregation of local-level interactions between neurons, which is more similar to a multiagent system consisting of autonomous, partially observing agents than to units aligned in column vectors and manipulated by a global-level algorithm. Therefore we suppose that replacing units with multiple agents is an effective approach for developing a more biologically plausible model while preserving compatibility with deep networks. Our method also has advantages in scalability and memory efficiency. We reimplemented the Stacked Denoising Autoencoder (SDAE) as a concrete instance with our multiagent system and verified its equivalence with the standard SDAE from both theoretical and empirical perspectives. Additionally, we also proposed a variant of our multiagent SDAE named \"Sparse Connect SDAE\", and showed its computational advantage with the MNIST dataset.", "targets": "MULTIAGENT SYSTEM FOR LAYER FREE NETWORK"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bfbe5e52973c42cebf7832675021615d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recursive neural models, which use syntactic parse trees to recursively generate representations bottom-up from parse children, are a popular new architecture, promising to capture structural properties like the scope of negation or long-distance semantic dependencies. But understanding exactly which tasks this parse-based method is appropriate for remains an open question. In this paper we benchmark recursive neural models against sequential recurrent neural models, which are structured solely on word sequences. We investigate 5 tasks: sentiment classification on (1) sentences and (2) syntactic phrases; (3) question answering; (4) discourse parsing; (5) semantic relations (e.g., component-whole between nouns). We find that recurrent models have equal or superior performance to recursive models on all tasks except one: semantic relations between nominals. Our analysis suggests that tasks relying on the scope of negation (like sentiment) are well-handled by sequential models. Recursive models help only with tasks that require representing long-distance relations between words. Our results offer insights on the design of neural architectures for representation learning.", "targets": "When Are Tree Structures Necessary for Deep Learning of Representations?"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-282d6fa4fac4408a9d8612a8487177f9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we propose and investigate a novel memory architecture for neural networks called Hierarchical Attentive Memory (HAM). It is based on a binary tree with leaves corresponding to memory cells. This allows HAM to perform memory access in \u0398(log n) complexity, which is a significant improvement over the standard attention mechanism that requires \u0398(n) operations, where n is the size of the memory. We show that an LSTM network augmented with HAM can learn algorithms for problems like merging, sorting or binary searching from pure input-output examples. In particular, it learns to sort n numbers in time \u0398(n log n) and generalizes well to input sequences much longer than the ones seen during the training. 
We also show that HAM can be trained to act like classic data structures: a stack, a FIFO queue and a priority queue.", "targets": "Learning Efficient Algorithms with Hierarchical Attentive Memory"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f3777e50a8994b09afd95891aae6795e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a new algorithm for minimizing regularized empirical loss: Stochastic Dual Newton Ascent (SDNA). Our method is dual in nature: in each iteration we update a random subset of the dual variables. However, unlike existing methods such as stochastic dual coordinate ascent, SDNA is capable of utilizing all curvature information contained in the examples, which leads to striking improvements in both theory and practice \u2013 sometimes by orders of magnitude. In the special case when an L2-regularizer is used in the primal, the dual problem is a concave quadratic maximization problem plus a separable term. In this regime, SDNA in each step solves a proximal subproblem involving a random principal submatrix of the Hessian of the quadratic function; whence the name of the method. If, in addition, the loss functions are quadratic, our method can be interpreted as a novel variant of the recently introduced Iterative Hessian Sketch.", "targets": "SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1ed67470024f4cb3923c85a75e354da8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Clustering is an effective technique in data mining to generate groups that are the matter of interest. Among various clustering approaches, the family of k-means algorithms and min-cut algorithms gain most popularity due to their simplicity and efficacy. The classical k-means algorithm partitions a number of data points into several subsets by iteratively updating the clustering centers and the associated data points. By contrast, a weighted undirected graph is constructed in min-cut algorithms which partition the vertices of the graph into two sets. However, existing clustering algorithms tend to cluster a minority of data points into a subset, which shall be avoided when the target dataset is balanced. To achieve more accurate clustering for balanced datasets, we propose to leverage the exclusive lasso on k-means and min-cut to regulate the balance degree of the clustering results. By optimizing our objective functions that build atop the exclusive lasso, we can make the clustering results as balanced as possible. Extensive experiments on several large-scale datasets validate the advantage of the proposed algorithms compared to the state-of-the-art clustering algorithms.", "targets": "Balanced k-Means and Min-Cut Clustering"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d15dcce487024eee8a84f9dd8378f577", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this report, we will be interested in Dynamic Bayesian Networks (DBNs) as a model that tries to incorporate the temporal dimension with uncertainty. 
We start with the basics of DBNs, where we especially focus on Inference and Learning concepts and algorithms. Then we will present different levels and methods of creating DBNs as well as approaches to incorporating the temporal dimension in static Bayesian networks. Keywords: DBN, DAG, Inference, Learning, HMM, EM Algorithm, SEM, MLE, coupled HMMs", "targets": "Characterization of Dynamic Bayesian Network: The Dynamic Bayesian Network as temporal network"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9611534e5a38438e9d95b0710e08d7b7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Logic programs with aggregates (LPA) are one of the major linguistic extensions to Logic Programming (LP). In this work, we propose a generalization of the notions of unfounded set and well-founded semantics for programs with monotone and antimonotone aggregates (LPm,a programs). In particular, we present a new notion of unfounded set for LPm,a programs, which is a sound generalization of the original definition for standard (aggregate-free) LP. On this basis, we define a well-founded operator for LPm,a programs, the fixpoint of which is called well-founded model (or well-founded semantics) for LPm,a programs. The most important properties of unfounded sets and the well-founded semantics for standard LP are retained by this generalization, notably existence and uniqueness of the well-founded model, together with a strong relationship to the answer set semantics for LPm,a programs. We show that one of the D\u0303-well-founded semantics, defined by Pelov, Denecker, and Bruynooghe for a broader class of aggregates using approximating operators, coincides with the well-founded model as defined in this work on LPm,a programs. We also discuss some complexity issues, most importantly we give a formal proof of tractable computation of the well-founded model for LPm,a programs. Moreover, we prove that for general LP programs, which may contain aggregates that are neither monotone nor antimonotone, deciding satisfaction of aggregate expressions with respect to partial interpretations is coNP-complete. As a consequence, a well-founded semantics for general LP programs that allows for tractable computation is unlikely to exist, which justifies the restriction on LPm,a programs. Finally, we present a prototype system extending DLV, which supports the well-founded semantics for LPm,a programs, at the time of writing the only implemented system that does so. Experiments with this prototype show significant computational advantages of aggregate constructs over equivalent aggregate-free encodings.", "targets": "Unfounded Sets and Well-Founded Semantics of Answer Set Programs with Aggregates"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dc77b3ecb7d6418dabf0762e69ab7516", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Stochastic gradient MCMC (SG-MCMC) has played an important role in large-scale Bayesian learning, with well-developed theoretical convergence properties. In such applications of SG-MCMC, it is becoming increasingly popular to employ distributed systems, where stochastic gradients are computed based on some outdated parameters, yielding what are termed stale gradients. 
While stale gradients could be directly used in SG-MCMC, their impact on convergence properties has not been well studied. In this paper we develop theory to show that while the bias and MSE of an SG-MCMC algorithm depend on the staleness of stochastic gradients, its estimation variance (relative to the expected estimate, based on a prescribed number of samples) is independent of it. In a simple Bayesian distributed system with SG-MCMC, where stale gradients are computed asynchronously by a set of workers, our theory indicates a linear speedup on the decrease of estimation variance w.r.t. the number of workers. Experiments on synthetic data and deep neural networks validate our theory, demonstrating the effectiveness and scalability of SG-MCMC with stale gradients.", "targets": "Stochastic Gradient MCMC with Stale Gradients"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8d038b51b93f4fadb57ecb8f0a01452c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We extend the standard rough set-based approach to deal with huge amounts of numeric attributes versus a small amount of available objects. Here, a novel approach of clustering along with dimensionality reduction, the Hybrid Fuzzy C Means-Quick Reduct (FCMQR) algorithm, is proposed for single gene selection. Gene selection is a process to select genes which are more informative. It is one of the important steps in knowledge discovery. The problem is that not all genes are important in gene expression data. Some of the genes may be redundant, and others may be irrelevant and noisy. In this study, the entire dataset is divided into proper groupings of similar genes by applying the Fuzzy C Means (FCM) algorithm. Highly class-discriminated genes are selected based on their degree of dependence by applying the Quick Reduct algorithm, based on Rough Set Theory, to all the resultant clusters. The Average Correlation Value (ACV) is calculated for the highly class-discriminated genes. The clusters which have an ACV value of 1 are determined to be significant clusters, whose classification accuracy will be equal to or higher than the accuracy of the entire dataset. The proposed algorithm is evaluated using WEKA classifiers and compared. Finally, experimental results related to the leukemia cancer data confirm that our approach is quite promising, though it surely requires further research.", "targets": "A Novel Approach for Single Gene Selection Using Clustering and Dimensionality Reduction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f8be7b1130ee46b59a4d7811b264cdb8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A word\u2019s sentiment depends on the domain in which it is used. Computational social science research thus requires sentiment lexicons that are specific to the domains being studied. We combine domain-specific word embeddings with a label propagation framework to induce accurate domain-specific sentiment lexicons using small sets of seed words. We show that our approach achieves state-of-the-art performance on inducing sentiment lexicons from domain-specific corpora and that our purely corpus-based approach outperforms methods that rely on hand-curated resources (e.g., WordNet). 
Using our framework, we induce and release historical sentiment lexicons for 150 years of English and community-specific sentiment lexicons for 250 online communities from the social media forum Reddit. The historical lexicons we induce show that more than 5% of sentiment-bearing (nonneutral) English words completely switched polarity during the last 150 years, and the community-specific lexicons highlight how sentiment varies drastically between different communities.", "targets": "Inducing Domain-Specific Sentiment Lexicons from Unlabeled Corpora"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-30431adb59cb422e8ff6ba05b67c4e3f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "There have been multiple attempts to resolve various inflection matching problems in information retrieval. Stemming is a common approach to this end. Among many techniques for stemming, statistical stemming has been shown to be effective in a number of languages, particularly highly inflected languages. In this paper we propose a method for finding affixes in different positions of a word. Common statistical techniques heavily rely on string similarity in terms of prefix and suffix matching. Since infixes are common in irregular/informal inflections in morphologically complex texts, it is required to find infixes for stemming. In this paper we propose a method whose aim is to find statistical inflectional rules based on minimum edit distance table of word pairs and the likelihoods of the rules in a language. These rules are used to statistically stem words and can be used in different text mining tasks. Experimental results on CLEF 2008 and CLEF 2009 English-Persian CLIR tasks indicate that the proposed method significantly outperforms all the baselines in terms of MAP.", "targets": "SS4MCT: A Statistical Stemmer for Morphologically Complex Texts"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0cda0cc6b1cb4a14a5d64332de427bdc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The recent surge in interest in ethics in artificial intelligence may leave many educators wondering how to address moral, ethical, and philosophical issues in their AI courses. As instructors we want to develop curriculum that not only prepares students to be artificial intelligence practitioners, but also to understand the moral, ethical, and philosophical impacts that artificial intelligence will have on society. In this article we provide practical case studies and links to resources for use by AI educators. We also provide concrete suggestions on how to integrate AI ethics into a general artificial intelligence course and how to teach a stand-alone artificial intelligence ethics course.", "targets": "Ethical Considerations in Artificial Intelligence Courses"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-815a766001d94f48be64310c5c33ae31", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The impact of culture in visual emotion perception has recently captured the attention of multimedia research. 
In this study, we provide powerful computational linguistics tools to explore, retrieve and browse a dataset of 16K multilingual affective visual concepts and 7.3M Flickr images. First, we design an effective crowdsourcing experiment to collect human judgements of sentiment connected to the visual concepts. We then use word embeddings to represent these concepts in a low dimensional vector space, allowing us to expand the meaning around concepts, and thus enabling insight about commonalities and differences among different languages. We compare a variety of concept representations through a novel evaluation task based on the notion of visual semantic relatedness. Based on these representations, we design clustering schemes to group multilingual visual concepts, and evaluate them with novel metrics based on the crowdsourced sentiment annotations as well as visual semantic relatedness. The proposed clustering framework enables us to analyze the full multilingual dataset in-depth and also show an application on a facial data subset, exploring cultural insights of portrait-related affective visual concepts.", "targets": "Multilingual Visual Sentiment Concept Matching"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ae2de2adefeb4c5f929f5c74a37c9216", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents machine learning solutions to a practical problem of Natural Language Generation (NLG), particularly the word formation in agglutinative languages like Tamil, in a supervised manner. The morphological generator is an important component of Natural Language Processing in Artificial Intelligence. It generates word forms given a root and affixes. The morphophonemic changes like addition, deletion, alternation etc., occur when two or more morphemes or words are joined together. The Sandhi rules should be explicitly specified in rule-based morphological analyzers and generators. In a machine learning framework, these rules can be learned automatically by the system from the training samples and subsequently be applied for new inputs. In this paper we propose machine learning models which learn the morphophonemic rules for noun declensions from the given training data. These models are trained to learn sandhi rules using various learning algorithms and the performance of those algorithms is presented. From this we conclude that morphological processing such as word form generation can be successfully learned in a supervised manner, without explicit description of rules. The performance of Decision trees and Bayesian machine learning algorithms on noun declensions is discussed.", "targets": "MACHINE LEARNING OF PHONOLOGICALLY CONDITIONED NOUN DECLENSIONS FOR TAMIL MORPHOLOGICAL GENERATORS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-19a55029061740d29e3109ec602c433a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A famous biologically inspired hierarchical model first proposed by Riesenhuber and Poggio has been successfully applied to multiple visual recognition tasks. The model is able to achieve position- and scale-tolerant recognition, which is a central problem in pattern recognition. 
In this paper, based on some other biological experimental results, we introduce Memory and Association Mechanisms into the above biologically inspired model. The main motivations of the work are (a) to mimic the active memory and association mechanism and add a \u2018top-down\u2019 adjustment to the above biologically inspired hierarchical model and (b) to build up an algorithm which can save space and keep good recognition performance. More details of the work are as follows: (1) In the object memorizing process: our proposed model mimics some characteristics of the human memory mechanism as follows: (a) In our model, one object is memorized by semantic attributes and special image patches (corresponding to episodic memory). The semantic attributes describe each part of the object with a clear physical meaning, for example, whether the eyes and mouths of faces are \u2018big\u2019 or \u2018small\u2019 and so on. A special patch is selected if the value of the corresponding semantic feature is far from the average one. The patch should be the most prominent part of the object. (b) In our model, different features (semantic attributes and special patches) of one object are stored in distributed places and the common features of different objects are saved aggregately, which makes it possible to learn to classify the differences between similar features of different objects. The similarity thresholds for each object can be learnt when new objects are learnt. (2) In the object recognition process: in the biological process, associative recognition includes familiarity discrimination and recollective matching. In our proposed model, first mimicking familiarity discrimination (\u2018knowing\u2019 through the episode), we compare the special patches of candidates with those of saved objects using the above-mentioned biologically inspired hierarchical model, where the candidates and saved objects have the same prominent semantic features. Then, mimicking recollective matching, the comparison results of special patches are combined with the semantic feature comparison. The new model is applied to object recognition tasks. The preliminary experimental results show that our method is efficient with a much smaller memory requirement.", "targets": "Introducing Memory and Association Mechanism Into a Biologically Inspired Visual Model"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b94db145c86c4943a691aea7cf3c8a1c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose an efficient technique for multilabel classification based on calibration, a term we use to mean learning a link function that maps independent predictions to joint predictions. Though a naive implementation of our proposal would require training individual classifiers for each label, we show that for natural datasets and linear classifiers we can sidestep this by leveraging techniques from randomized linear algebra. Moreover, our algorithm applies equally well to multiclass classification. The end result is an algorithm that scales to very large multilabel and multiclass problems, and offers state-of-the-art accuracy on many datasets.", "targets": "Multilabel Prediction via Calibration"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d2c1928dc2694d2eb93ecd7eedcbda4d", "definition": "In this task, you are given a part of an article. 
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper discusses SYNTAGMA, a rule-based NLP system addressing the tricky issues of syntactic ambiguity reduction and word sense disambiguation as well as providing innovative and original solutions for constituent generation and constraints management. To provide an insight into how it operates, the system's general architecture and components, as well as its lexical, syntactic and semantic resources, are described. After that, the paper addresses the mechanism that performs selective parsing through an interaction between syntactic and semantic information, leading the parser to a coherent and accurate interpretation of the input text.", "targets": "Syntax-Semantics Interaction Parsing Strategies. Inside SYNTAGMA"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-777e9345097d410e9ea4d85eb7114b07", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The historic background of algorithmic processing with regard to etymology and methodology is translated into terms of mathematical logic and Computer Science. A formal logic structure is introduced by exemplary questions posed to Fiqh-chapters to define a logic query language. As a foundation, a generic algorithm for deciding Fiqh-rulings is designed to enable and further leverage rule of law (vs. rule by law) with full transparency and complete algorithmic coverage of Islamic law, eventually providing legal security, legal equality, and full legal accountability. This is implemented by disentangling and reinstating classic Fiqh-methodology (usul al-Fiqh) with the expressive power of subsets of First Order Logic (FOL), sustainably substituting ad hoc reasoning with falsifiable rational argumentation. The results are discussed in formal terms of completeness, decidability and complexity of formal Fiqh-systems. An Entscheidungsproblem for formal Fiqh-Systems is formulated and validated.", "targets": "The Algorithm of Islamic Jurisprudence (Fiqh) with Validation of an Entscheidungsproblem"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-23eb7c983589412e971bb994c498a976", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Applying deep reinforcement learning (RL) on real systems suffers from slow data sampling. We propose an enhanced generative adversarial network (EGAN) to initialize an RL agent in order to achieve faster learning. The EGAN utilizes the relation between states and actions to enhance the quality of data samples generated by a GAN. Pre-training the agent with the EGAN shows a steeper learning curve, with a 20% improvement in training time at the beginning of learning compared to no pre-training, and an improvement of about 5%, with smaller variations, compared to training with a GAN. For real-time systems with sparse and slow data sampling the EGAN could be used to speed up the early phases of the training process.", "targets": "Enhanced Experience Replay Generation for Efficient Reinforcement Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7d126bcc6b684c369c368a035ca85f43", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "Many logistic activities are concerned with linking material flows among companies and processes. In such applications, we find a combination of quantity decisions, e.g., the amount of goods shipped (Inventory Management), and routing decisions as tackled in the area of Vehicle Routing. Clearly, both areas intersect to a considerable degree, complicating the solution of such problems. Recently, intensive research has been conducted in this context, which is commonly referred to as Inventory Routing Problems (IRP) [2, 3]. Several variants of the IRP can be found, ranging from deterministic demand cases to stochastic models. From the practical point of view of the companies, reality is much more complex than a known demand and much more uncertain than a stochastic law. In fact, companies often have a partial knowledge of the demand over the planning horizon. Our observation of this phenomenon can be transformed into a new type of data, which we propose for further experimental investigations. We here assume that the demand of the current period is known at the beginning of the period. Besides, we have an approximate overview of the demand over the next 5 periods, the next 20 periods and the next 60 periods. This overview is rather good (e.g. it does not differ from reality by more than \u00b110%) but of course, we cannot predict with certainty what will happen in the next periods. The global objective of this work is to provide practical optimization methods to companies involved in inventory routing problems, taking into account this new type of data. Also, companies are sometimes not able to deal with changing plans every period and would like to adopt regular structures for serving customers. As our work is a long-term project, we are gradually going to develop our solution approach. In a first phase, we will focus on the Inventory Routing problem with a single product and deterministic known demand over a finite horizon. Contrary to [1], we assume that the routing costs and the inventory costs are not comparable and therefore should be handled as two different objectives. To our knowledge, this is the first time that a bi-objective approach is considered for this problem.", "targets": "Practical inventory routing: A problem definition and an optimization method"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2f7d036b5795448f902e93007d4202a7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "An efficient and intuitive algorithm is presented for the identification of speakers from a long dataset (like a long YouTube discussion, or cocktail-party recorded audio or video). The goal of automatic speaker identification is to identify the number of different speakers and prepare a model for each speaker through extraction and characterization of the speaker-specific information contained in the speech signal. It has many diverse applications, especially in the fields of surveillance, immigration at airports, cyber security, and transcription with multiple similar sound sources, where it is difficult to assign transcriptions arbitrarily. The most common speech parameterizations used in speaker verification, K-means and cepstral analysis, are detailed. Gaussian mixture modeling, which is the speaker modeling technique, is then explained. 
Gaussian mixture models (GMM), perhaps the most robust machine learning algorithm, have been introduced to carefully examine and judge speaker identification in the text-independent setting. The application of Gaussian mixture models for monitoring and analysing speaker identity is encouraged by the experience that Gaussian spectra depict the characteristics of a speaker's spectral conformational pattern, and by the remarkable ability of GMMs to construct arbitrary densities. After that we illustrate Expectation Maximization, an iterative algorithm which takes some arbitrary value as an initial estimate and carries on the iterative process until convergence is observed. We have tried to obtain 85~95% accuracy using vector quantization and Gaussian mixture model speaker modeling; by performing a number of experiments we are able to obtain a 79~82% identification rate using vector quantization and an 85~92.6% identification rate using GMM modeling with Expectation Maximization parameter estimation, depending on the variation of parameters.", "targets": "SPEAKER IDENTIFICATION FROM YOUTUBE OBTAINED DATA"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-27b49b4469a8480f8cc8d877a5dbcde9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents a simple end-to-end model for speech recognition, combining a convolutional network based acoustic model and a graph decoding. It is trained to output letters, with transcribed speech, without the need for force alignment of phonemes. We introduce an automatic segmentation criterion for training from sequence annotation without alignment that is on par with CTC [6] while being simpler. We show competitive results in word error rate on the Librispeech corpus [18] with MFCC features, and promising results from raw waveform.", "targets": "Wav2Letter: an End-to-End ConvNet-based Speech Recognition System"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c3f55003968e4d79bab65c23647e2d10", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The success of semi-supervised learning crucially relies on the scalability to a huge amount of unlabelled data that are needed to capture the underlying manifold structure for better classification. Since computing the pairwise similarity between the training data is prohibitively expensive in most kinds of input data, currently, there is no general ready-to-use semi-supervised learning method/tool available for learning with tens of millions or more data points. In this paper, we adopted the idea of two low-rank label propagation algorithms, GLNP (Global Linear Neighborhood Propagation) and Kernel Nystr\u00f6m Approximation, and implemented the parallelized version of the two algorithms accelerated with Nesterov\u2019s accelerated projected gradient descent for Big-data Label Propagation (BigLP). The parallel algorithms are tested on five real datasets ranging from 7000 to 10,000,000 in size and a simulation dataset of 100,000,000 samples. 
In the experiments, the implementation can scale up to datasets with 100,000,000 samples and hundreds of features, and the algorithms also significantly improved the prediction accuracy when only a very small percentage of the data is labeled. The results demonstrate that the BigLP implementation is highly scalable to big data and effective in utilizing the unlabeled data for semi-supervised learning.", "targets": "Low-rank Label Propagation for Semi-supervised Learning with 100 Millions Samples"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-edc20043ee7a44febfd29f20631bf9ea", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we propose a multi-kernel classifier learning algorithm to optimize a given nonlinear and nonsmooth multivariate classifier performance measure. Moreover, to solve the problem of kernel function selection and kernel parameter tuning, we propose to construct an optimal kernel by a weighted linear combination of some candidate kernels. The learning of the classifier parameter and the kernel weight is unified in a single objective function which minimizes the upper bound of the given multivariate performance measure. The objective function is optimized alternately with regard to the classifier parameter and the kernel weight in an iterative algorithm using the cutting plane algorithm. The developed algorithm is evaluated on two different pattern classification methods with regard to various multivariate performance measure optimization problems. The experimental results show the proposed algorithm outperforms the competing methods.", "targets": "Multiple kernel multivariate performance learning using cutting plane algorithm"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-19c8871f37a347faa1d936b2a9e6f55e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Several authors have recently developed risk-sensitive policy gradient methods that augment the standard expected cost minimization problem with a measure of variability in cost. These studies have focused on specific risk measures, such as the variance or conditional value at risk (CVaR). In this work, we extend the policy gradient method to the whole class of coherent risk measures, which is widely accepted in finance and operations research, among other fields. We consider both static and time-consistent dynamic risk measures. For static risk measures, our approach is in the spirit of policy gradient algorithms and combines a standard sampling approach with convex programming. For dynamic risk measures, our approach is actor-critic style and involves explicit approximation of the value function. Most importantly, our contribution presents a unified approach to risk-sensitive reinforcement learning that generalizes and extends previous results.", "targets": "Policy Gradient for Coherent Risk Measures"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-be617c91255e43fa9ff90e7b5b01caef", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "Plan recognition aims to discover target plans (i.e., sequences of actions) behind observed actions, with history plan libraries or domain models in hand. Previous approaches either discover plans by maximally \u201cmatching\u201d observed actions to plan libraries, assuming target plans are from plan libraries, or infer plans by executing domain models to best explain the observed actions, assuming complete domain models are available. In real world applications, however, target plans are often not from plan libraries and complete domain models are often not available, since building complete sets of plans and complete domain models is often difficult or expensive. In this paper we view plan libraries as corpora and learn vector representations of actions using the corpora; we then discover target plans based on the vector representations. Our approach is capable of discovering underlying plans that are not from plan libraries, without requiring domain models to be provided. We empirically demonstrate the effectiveness of our approach by comparing its performance to traditional plan recognition approaches in three planning domains.", "targets": "Discovering Underlying Plans Based on Distributed Representations of Actions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5420a01df2fa403eac95635bc7f0b205", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study the expressive power of shallow and deep neural networks with piece-wise linear activation functions. We establish new rigorous upper and lower bounds for the network complexity in the setting of approximations in Sobolev spaces. In particular, we prove that deep ReLU networks more efficiently approximate smooth functions than shallow networks. In the case of approximations of 1D Lipschitz functions we describe adaptive depth-6 network architectures more efficient than the standard shallow architecture.", "targets": "Error bounds for approximations with deep ReLU networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-35016aaaa4124ebfb776efc3c821f657", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In probabilistic logic entailments, even moderate size problems can yield linear constraint systems with so many variables that exact methods are impractical. This difficulty can be remedied in many cases of interest by introducing a three-valued logic (true, false, and \"don't care\"). The three-valued approach allows the construction of \"compressed\" constraint systems which have the same solution sets as their two-valued counterparts, but which may involve dramatically fewer variables. Techniques to calculate point estimates for the posterior probabilities of entailed sentences are discussed. 1. PROLIFERATION OF WORLDS An entailment problem in Nilsson's (1986) probabilistic logic derives an estimate for the prior probability of one sentence (hereafter, the \"target\") from the priors for a set of other (\"source\") sentences. 
The prior beliefs about the source sentences establish constraints of the form P = VW, \u03a3 wi = 1, wi \u2265 0, where the sum is over all \"worlds\".", "targets": "Compressed Constraints in Probabilistic Logic and Their Revision"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fbf56aff8ce546c292b903e413a7a1c7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Concept maps can be used to concisely represent important information and bring structure into large document collections. Therefore, we study a variant of multi-document summarization that produces summaries in the form of concept maps. However, suitable evaluation datasets for this task are currently missing. To close this gap, we present a newly created corpus of concept maps that summarize heterogeneous collections of web documents on educational topics. It was created using a novel crowdsourcing approach that allows us to efficiently determine important elements in large document collections. We release the corpus along with a baseline system and proposed evaluation protocol to enable further research on this variant of summarization.", "targets": "Connecting the dots: Summarizing and Structuring Large Document Collections Using Concept Maps"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f04bf1bc60044036adec0b5e89ef9ef1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We extend the model of Multi-armed Bandit with unit switching cost to incorporate a metric between the actions. We consider the case where the metric over the actions can be modeled by a complete binary tree, and the distance between two leaves is the size of the subtree of their least common ancestor, which abstracts the case that the actions are points on the continuous interval [0, 1] and the switching cost is their distance. In this setting, we give a new algorithm that establishes a regret of O\u0303(\u221a(kT) + T/k), where k is the number of actions and T is the time horizon. When the set of actions corresponds to the whole [0, 1] interval, we can exploit our method for the task of bandit learning with Lipschitz loss functions, where our algorithm achieves an optimal regret rate of \u0398\u0303(T^{2/3}), which is the same rate one obtains when there is no penalty for movements. As our main application, we use our new algorithm to solve an adaptive pricing problem. Specifically, we consider the case of a single seller faced with a stream of patient buyers. Each buyer has a private value and a window of time in which they are interested in buying, and they buy at the lowest price in the window, if it is below their value. We show that with an appropriate discretization of the prices, the seller can achieve a regret of O\u0303(T^{2/3}) compared to the best fixed price in hindsight, which outperforms the previous regret bound of O\u0303(T^{3/4}) for the problem.", "targets": "Bandits with Movement Costs and Adaptive Pricing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ddb0c182f92741cdba68a1d08d0230cb", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "In this paper, we provide two axiomatizations of algebraic expected utility, which is a particular generalized expected utility, in a von Neumann-Morgenstern setting, i.e. uncertainty representation is supposed to be given and here to be described by a plausibility measure valued on a semiring, which could be partially ordered. We show that axioms identical to those for expected utility entail that preferences are represented by an algebraic expected utility. This algebraic approach allows many previous propositions (expected utility, binary possibilistic utility, ...) to be unified in the same general framework and proves that the obtained utility enjoys the same nice features as expected utility: linearity, dynamic consistency, autoduality of the underlying uncertainty representation, autoduality of the decision criterion and possibility of modeling the decision maker\u2019s attitude toward uncertainty.", "targets": "Axiomatic Foundations for a Class of Generalized Expected Utility: Algebraic Expected Utility"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a33945746e114281a4614c4f16163912", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "s prominently include information that is relevant to the population group of interest, and intervention, comparison and disease of interest. The vector space model further measures the similarity between the query and citation based on concepts (as opposed to just terms or words themselves).", "targets": "A Hybrid Citation Retrieval Algorithm for Evidence-based Clinical Knowledge Summarization: Combining Concept Extraction, Vector Similarity and Query Expansion for High Precision"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-10783dbda2de48a29f08686ec38412eb", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a label propagation approach to geolocation prediction based on Modified Adsorption, with two enhancements: (1) the removal of \u201ccelebrity\u201d nodes to increase location homophily and boost tractability; and (2) the incorporation of text-based geolocation priors for test users. Experiments over three Twitter benchmark datasets achieve state-of-the-art results, and demonstrate the effectiveness of the enhancements.", "targets": "Twitter User Geolocation Using a Unified Text and Network Prediction Model"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-87a35380ce494e19a218ac0628e693c7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper the behavior of various belief network learning algorithms is studied. Selecting belief networks with certain minimality properties turns out to be NP-hard, which justifies the use of search heuristics. Search heuristics based on the Bayesian measure of Cooper and Herskovits and a minimum description length (MDL) measure are compared with respect to their properties for both limiting and finite database sizes. It is shown that the MDL measure has more desirable properties than the Bayesian measure. 
Experimental results suggest that for learning probabilities of belief networks smoothing is helpful.", "targets": "Properties of Bayesian Belief Network Learning Algorithms"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c45fb0c9d9e2400bba125c95b497d254", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Gaussian Process bandit optimization has emerged as a powerful tool for optimizing noisy black box functions. One example in machine learning is hyper-parameter optimization where each evaluation of the target function requires training a model which may involve days or even weeks of computation. Most methods for this so-called \u201cBayesian optimization\u201d only allow sequential exploration of the parameter space. However, it is often desirable to propose batches or sets of parameter values to explore simultaneously, especially when there are large parallel processing facilities at our disposal. Batch methods require modeling the interaction between the different evaluations in the batch, which can be expensive in complex scenarios. In this paper, we propose a new approach for parallelizing Bayesian optimization by modeling the diversity of a batch via Determinantal point processes (DPPs) whose kernels are learned automatically. This allows us to generalize a previous result as well as prove better regret bounds based on DPP sampling. Our experiments on a variety of synthetic and real-world robotics and hyper-parameter optimization tasks indicate that our DPP-based methods, especially those based on DPP sampling, outperform state-of-the-art methods.", "targets": "Batched Gaussian Process Bandit Optimization via Determinantal Point Processes"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-955100045f0c4d38a88bd3f5f7555225", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Training Recurrent Neural Networks is more troublesome than training feedforward ones because of the vanishing and exploding gradient problems detailed in Bengio et al. (1994). In this paper we attempt to understand the fundamental issues underlying the exploding gradient problem by exploring it from an analytical, a geometric and a dynamical systems perspective. Our analysis is used to justify the simple yet effective solution of norm clipping the exploded gradient. In the experimental section, the comparison between this heuristic solution and standard SGD provides empirical evidence towards our hypothesis and shows that such a heuristic is required to reach state-of-the-art results on a character prediction task and a polyphonic music prediction one.", "targets": "Understanding the exploding gradient problem"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5da10dc0c4784dae8b109eaa465d2448", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Graph partitioning, a well-studied problem of parallel computing, has many applications in diversified fields such as distributed computing, social network analysis, data mining and many other domains. In this paper, we introduce FGPGA, an efficient genetic approach for producing feasible graph partitions. 
Our method takes into account the heterogeneity and capacity constraints of the partitions to ensure balanced partitioning. Such an approach has various applications in mobile cloud computing that include feasible deployment of software applications on the more resourceful infrastructure in the cloud instead of the mobile handset. Our proposed approach is lightweight and hence suitable for use in cloud architecture. We ensure feasibility of the partitions generated by not allowing over-sized partitions to be generated during the initialization and search. Our proposed method, tested on standard benchmark datasets, significantly outperforms the state-of-the-art methods in terms of quality of partitions and feasibility of the solutions.", "targets": "FGPGA: An Efficient Genetic Approach for Producing Feasible Graph Partitions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7aa658ed0fcf4b5480094bd395a9f941", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The present complexity in designing web applications makes software security a difficult goal to achieve. An attacker can explore a deployed service on the web and attack at his/her own leisure. Moving Target Defense (MTD) in web applications is an effective mechanism to nullify this advantage of their reconnaissance, but the framework demands a good switching strategy when switching between multiple configurations for its web-stack. To address this issue, we propose modeling of a real-world MTD web application as a repeated Bayesian game. We then formulate an optimization problem that generates an effective switching strategy while considering the cost of switching between different web-stack configurations. To incorporate this model into a developed MTD system, we develop an automated system for generating attack sets of Common Vulnerabilities and Exposures (CVEs) for input attacker types with predefined capabilities. Our framework obtains realistic reward values for the players (defenders and attackers) in this game by using security domain expertise on CVEs obtained from the National Vulnerability Database (NVD). We also address the issue of prioritizing vulnerabilities that, when fixed, improve the security of the MTD system. Lastly, we demonstrate the robustness of our proposed model by evaluating its performance when there is uncertainty about input attacker information.", "targets": "Moving Target Defense for Web Applications using Bayesian Stackelberg Games"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f0885e4200514a109e7b28beb7dfdcb2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "As the complexity of deep neural networks (DNNs) trends to grow to absorb the increasing sizes of data, memory and energy consumption has been receiving more and more attention for industrial applications, especially on mobile devices. This paper presents a novel structure based on functional hashing to compress DNNs, namely FunHashNN. For each entry in a deep net, FunHashNN uses multiple low-cost hash functions to fetch values in the compression space, and then employs a small reconstruction network to recover that entry. The reconstruction network is plugged into the whole network and trained jointly. 
FunHashNN includes the recently proposed HashedNets [7] as a degenerated case, and benefits from larger value capacity and less reconstruction loss. We further discuss extensions with dual space hashing and multi-hops. On several benchmark datasets, FunHashNN demonstrates high compression ratios with little loss on prediction accuracy.", "targets": "Functional Hashing for Compressing Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-136371659e5642a98dc3981b32560714", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "It has been proved that large-scale realistic Knowledge Based Machine Translation (KBMT) applications require acquisition of huge knowledge about language and about the world. This knowledge is encoded in computational grammars, lexicons and domain models. Another approach \u2013 which avoids the need for collecting and analyzing massive knowledge \u2013 is the Example Based approach, which is the topic of this paper. We show through the paper that using Example Based in its native form is not suitable for translating into Arabic. Therefore a modification to the basic approach is presented to improve the accuracy of the translation process. The basic idea of the new approach is to improve the technique by which template-based approaches select the appropriate templates. It relies on extracting, from a parallel Bilingual Corpus, all possible templates that could match parts of the source sentence. These templates are selected as suitable candidate chunks for the source sentence. The corresponding Arabic templates are also extracted and represented by a directed graph. Each branch represents one possible string of candidate templates to represent the target sentence. The shortest continuous path or the most probable tree branch is selected to represent the target sentence. Finally the Arabic translation of the selected tree branch is generated.", "targets": "The Best Templates Match Technique For Example Based Machine Translation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ffc8a5e4d03d424d8ac513818847d465", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a new decision rule, maximin safety, that seeks to maintain a large margin from the worst outcome, in much the same way minimax regret seeks to minimize distance from the best. We argue that maximin safety is valuable both descriptively and normatively. Descriptively, maximin safety explains the well-known decoy effect, in which the introduction of a dominated option changes preferences among the other options. Normatively, we provide an axiomatization that characterizes preferences induced by maximin safety, and show that maximin safety shares much of the same behavioral basis with minimax regret.", "targets": "Maximin Safety: When Failing to Lose is Preferable to Trying to Win"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-35ad305fda3e49bfabf724820cee4c36", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In recent years we have seen rapid and significant progress in automatic image description, but what are the open problems in this area? 
Most work has been evaluated using text-based similarity metrics, which only indicate that there have been improvements, without explaining what has improved. In this paper, we present a detailed error analysis of the descriptions generated by a state-of-the-art attention-based model. Our analysis operates on two levels: first we check the descriptions for accuracy, and then we categorize the types of errors we observe in the inaccurate descriptions. We find only 20% of the descriptions are free from errors, and, surprisingly, that 26% are unrelated to the image. Finally, we manually correct the most frequently occurring error types (e.g. gender identification) to estimate the performance reward for addressing these errors, observing gains of 0.2\u20131 BLEU point per type.", "targets": "Room for improvement in automatic image description: an error analysis"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1eebaa1f6771459a8509158c5ccf83ae", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Retrieving spoken content with spoken queries, or query-by-example spoken term detection (STD), is attractive because it makes possible the matching of signals directly on the acoustic level without transcribing them into text. Here, we propose an end-to-end query-by-example STD model based on an attention-based multi-hop network, whose input is a spoken query and an audio segment containing several utterances; the output states whether the audio segment includes the query. The model can be trained in either a supervised scenario using labeled data, or in an unsupervised fashion. In the supervised scenario, we find that the attention mechanism and multiple hops improve performance, and that the attention weights indicate the time span of the detected terms. In the unsupervised setting, the model mimics the behavior of the existing query-by-example STD system, yielding performance comparable to the existing system but with a lower search time complexity.", "targets": "QUERY-BY-EXAMPLE SPOKEN TERM DETECTION USING ATTENTION-BASED MULTI-HOP NETWORKS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-cd2b01720d5c4f6da035a155c62501a4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "For many low-resource or endangered languages, spoken language resources are more likely to be annotated with translations than with transcriptions. Recent work exploits such annotations to produce speech-to-translation alignments, without access to any text transcriptions. We investigate whether providing such information can aid in producing better (mismatched) crowdsourced transcriptions, which in turn could be valuable for training speech recognition systems, and show that they can indeed be beneficial through a small-scale case study as a proof-of-concept. We also present a simple phonetically aware string averaging technique that produces transcriptions of higher quality.", "targets": "A case study on using speech-to-translation alignments for language documentation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2bd7c18b97184b4fb922eb641f7cb55b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "Ensembling is a well-known technique in neural machine translation (NMT) to improve system performance. Instead of a single neural net, multiple neural nets with the same topology are trained separately, and the decoder generates predictions by averaging over the individual models. Ensembling often improves the quality of the generated translations drastically. However, it is not suitable for production systems because it is cumbersome and slow. This work aims to reduce the runtime to be on par with a single system without compromising the translation quality. First, we show that the ensemble can be unfolded into a single large neural network which imitates the output of the ensemble system. We show that unfolding can already improve the runtime in practice since more work can be done on the GPU. We proceed by describing a set of techniques to shrink the unfolded network by reducing the dimensionality of layers. On Japanese-English we report that the resulting network has the size and decoding speed of a single NMT network but performs on the level of a 3-ensemble system.", "targets": "Unfolding and Shrinking Neural Machine Translation Ensembles"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0b90f62d080c4f868b13b296bd61d949", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recent research shows that most Brazilian students have serious problems regarding their reading skills. The full development of this skill is key for the academic and professional future of every citizen. Tools for classifying the complexity of reading materials for children aim to improve the quality of the model of teaching reading and text comprehension. For English, Feng\u2019s work [11] is considered the state of the art in grade level prediction and achieved 74% accuracy in automatically classifying 4 levels of textual complexity for close school grades. There are no classifiers for nonfiction texts for close grades in Portuguese. In this article, we propose a scheme for manual annotation of texts in 5 grade levels, which will be used for customized reading, avoiding both the lack of interest of students who are more advanced in reading and the blocking of those who still need to make further progress. We obtained 52% accuracy in classifying texts into 5 levels and 74% in 3 levels. The results prove to be promising when compared to the state-of-the-art work.", "targets": "Automatic Classification of the Complexity of Nonfiction Texts in Portuguese for Early School Years"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c7f5b4ef462e42de849f0b9abf6d804f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "It is hypothesized that creativity arises from the self-mending capacity of an internal model of the world, or worldview. The uniquely honed worldview of a creative individual results in a distinctive style that is recognizable within and across domains. It is further hypothesized that creativity is domain-general in the sense that there exist multiple avenues by which the distinctiveness of one\u2019s worldview can be expressed. These hypotheses were tested using art students and creative writing students. 
Art students guessed significantly above chance both which painting was done by which of five famous artists, and which artwork was done by which of their peers. Similarly, creative writing students guessed significantly above chance both which passage was written by which of five famous writers, and which passage was written by which of their peers. These findings support the hypothesis that creative style is recognizable. Moreover, creative writing students guessed significantly above chance which of their peers produced particular works of art, supporting the hypothesis that creative style is recognizable not just within but across domains.", "targets": "Recognizability of Individual Creative Style Within and Across Domains: Preliminary Studies"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-01f0886bf96f4c4daf003a743f4b9b32", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The GLEU metric was proposed for evaluating grammatical error corrections using n-gram overlap with a set of reference sentences, as opposed to precision/recall of specific annotated errors (Napoles et al., 2015). This paper describes improvements made to the GLEU metric that address problems that arise when using an increasing number of reference sets. Unlike the originally presented metric, the modified metric does not require tuning. We recommend that this version be used instead of the original version.", "targets": "GLEU Without Tuning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-17b9b356bdfd4a13a767ca119ef7c45d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a first step towards a framework for defining and manipulating normative documents or contracts described as Contract-Oriented (C-O) Diagrams. These diagrams provide a visual representation for such texts, giving the possibility to express a signatory\u2019s obligations, permissions and prohibitions, with or without timing constraints, as well as the penalties resulting from the non-fulfilment of a contract. This work presents a CNL for verbalising C-O Diagrams, a web-based tool allowing editing in this CNL, and another for visualising and manipulating the diagrams interactively. We then show how these proof-of-concept tools can be used by applying them to a small example.", "targets": "A CNL for Contract-Oriented Diagrams"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-31610604c33c4a259d823f5ae6f8cdbd", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The speed of convergence of the Expectation Maximization (EM) algorithm for Gaussian mixture model fitting is known to be dependent on the amount of overlap among the mixture components. In this paper, we study the impact of mixing coefficients on the convergence of EM. We show that when the mixture components exhibit some overlap, the convergence of EM becomes slower as the dynamic range among the mixing coefficients increases. We propose a deterministic anti-annealing algorithm that significantly improves the speed of convergence of EM for such mixtures with unbalanced mixing coefficients. 
The proposed algorithm is compared against other standard optimization techniques like BFGS, Conjugate Gradient, and the traditional EM algorithm. Finally, we propose a similar deterministic anti-annealing based algorithm for the Dirichlet process mixture model and demonstrate its advantages over the conventional variational Bayesian approach.", "targets": "Convergence of the EM Algorithm for Gaussian Mixtures with Unbalanced Mixing Coefficients"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7448e6ba050344b28c4c2f46180e4e2c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce the \u201cexponential linear unit\u201d (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs also avoid a vanishing gradient via the identity for positive values. However ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero. Zero means speed up learning because they bring the gradient closer to the unit natural gradient. We show that the unit natural gradient differs from the normal gradient by a bias shift term, which is proportional to the mean activation of incoming units. Like batch normalization, ELUs push the mean towards zero, but with a significantly smaller computational footprint. While other activation functions like LReLUs and PReLUs also have negative values, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the propagated variation and information. Therefore ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. Consequently, dependencies between ELUs are much easier to model and distinct concepts are less likely to interfere. We found that ELUs lead not only to faster learning, but also to better generalization performance once networks have many layers (\u2265 5). ELU networks were among the top 10 reported CIFAR-10 results and yielded the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.", "targets": "EXPONENTIAL LINEAR UNITS (ELUS)"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b752ee3f279e41c8948b0d7f22228317", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A method is presented for the rhythmic parsing problem: Given a sequence of observed musical note onset times, we simultaneously estimate the corresponding notated rhythm and tempo process. A graphical model is developed that represents the evolution of tempo and rhythm and relates these hidden quantities to an observable performance. The rhythm variables are discrete and the tempo and observation variables are continuous. 
We show how to compute the globally most likely configuration of the tempo and rhythm variables given an observation of note onset times. Preliminary experiments are presented on a small data set. A generalization to computing MAP estimates for arbitrary conditional Gaussian distributions is outlined.", "targets": "A Mixed Graphical Model for Rhythmic Parsing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-77f2b4b542c945d9a444fbeafe0ad0c0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The properties of local optimal solutions in multi-objective combinatorial optimization problems are crucial for the effectiveness of local search algorithms, particularly when these algorithms are based on Pareto dominance. Such local search algorithms typically return a set of mutually nondominated Pareto local optimal (PLO) solutions, that is, a PLO-set. This paper investigates two aspects of PLO-sets by means of experiments with Pareto local search (PLS). First, we examine the impact of several problem characteristics on the properties of PLO-sets for multi-objective NK-landscapes with correlated objectives. In particular, we report that either increasing the number of objectives or decreasing the correlation between objectives leads to an exponential increase in the size of PLO-sets, whereas the variable correlation has only a minor effect. Second, we study the running time and the quality reached when using bounding archiving methods to limit the size of the archive handled by PLS, and thus, the maximum size of the PLO-set found. We argue that there is a clear relationship between the running time of PLS and the difficulty of a problem instance.", "targets": "Local Optimal Sets and Bounded Archiving on Multi-objective NK-Landscapes with Correlated Objectives"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-27c046e1a9134557bf5b80faf5609fce", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora. Most approaches to topic model inference have been based on a maximum likelihood objective. Efficient algorithms exist that approximate this objective, but they have no provable guarantees. Recently, algorithms have been introduced that provide provable bounds, but these algorithms are not practical because they are inefficient and not robust to violations of model assumptions. In this paper we present an algorithm for topic model inference that is both provable and practical. The algorithm produces results comparable to the best MCMC implementations while running orders of magnitude faster.", "targets": "A Practical Algorithm for Topic Modeling with Provable Guarantees"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-45e9fa2ebb1f430c8de37f5549275bdb", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Despite the increasing use of social media platforms for information and news gathering, their unmoderated nature often leads to the emergence and spread of rumours, i.e. pieces of information that are unverified at the time of posting. 
At the same time, the openness of social media platforms provides opportunities to study how users share and discuss rumours, and to explore how natural language processing and data mining techniques may be used to find ways of determining their veracity. In this survey we introduce and discuss two types of rumours that circulate on social media: long-standing rumours that circulate for long periods of time, and newly-emerging rumours spawned during fast-paced events such as breaking news, where reports are released piecemeal and often with an unverified status in their early stages. We provide an overview of research into social media rumours with the ultimate goal of developing a rumour classification system that consists of four components: rumour detection, rumour tracking, rumour stance classification and rumour veracity classification. We delve into the approaches presented in the scientific literature for the development of each of these four components. We summarise the efforts and achievements so far towards the development of rumour classification systems and conclude with suggestions for avenues for future research in social media mining for detection and resolution of rumours.", "targets": "Detection and Resolution of Rumours in Social Media: A Survey"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b3905604550040b692afc43376a98903", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Today data mining techniques are exploited in medical science for diagnosing, overcoming and treating diseases. The neural network is one of the techniques widely used for diagnosis in the medical field. In this article, the efficiency of nine algorithms, which form the basis of neural network learning in diagnosing cardiovascular diseases, will be assessed. The algorithms are assessed in terms of accuracy, sensitivity, transparency, AROC and convergence rate by means of 10-fold cross validation. The results suggest that in the training phase, the Lonberg-M algorithm has the best efficiency in terms of all metrics, the OSS algorithm has the maximum accuracy in the testing phase, the SCG algorithm has the maximum transparency and the CGB algorithm has the maximum sensitivity. Keywords\u2014 cardiovascular disease; neural network; learning algorithms.", "targets": "Comparing learning algorithms in neural network for diagnosing cardiovascular disease"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2e00968cc7a74920b6d99eaa40e7b3a0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Multimedia reasoning, which is suitable for, among others, multimedia content analysis and high-level video scene interpretation, relies on the formal and comprehensive conceptualization of the represented knowledge domain. However, most multimedia ontologies are not exhaustive in terms of role definitions, and do not incorporate complex role inclusions and role interdependencies. In fact, most multimedia ontologies do not have a role box at all, and implement only a basic subset of the available logical constructors. Consequently, their application in multimedia reasoning is limited. To address the above issues, VidOnt, the very first multimedia ontology with SROIQ(D) expressivity and a DL-safe ruleset has been introduced for next-generation multimedia reasoning. 
In contrast to the common practice, the formal grounding has been set in one of the most expressive description logics, and the ontology validated with industry-leading reasoners, namely HermiT and FaCT++. This paper also presents best practices for developing multimedia ontologies, based on my ontology engineering approach.", "targets": "A Novel Approach to Multimedia Ontology Engineering for Automated Reasoning over Audiovisual LOD Datasets"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f9667a20eda04875a9ad3aaa30123ae2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a novel diffusion scheme for online kernel-based learning over networks. So far, a major drawback of any online learning algorithm, operating in a reproducing kernel Hilbert space (RKHS), is the need for updating a growing number of parameters as time iterations evolve. Besides complexity, this leads to an increased need for communication resources, in a distributed setting. In contrast, the proposed method approximates the solution as a fixed-size vector (of larger dimension than the input space) using Random Fourier Features. This paves the way to use standard linear combine-then-adapt techniques. To the best of our knowledge, this is the first time that a complete protocol for distributed online learning in RKHS is presented. Conditions for asymptotic convergence and boundedness of the network-wise regret are also provided. The simulated tests illustrate the performance of the proposed scheme.", "targets": "Online Distributed Learning Over Networks in RKH Spaces Using Random Fourier Features"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-805718f88b1d4676a8acf712cf11048d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a new active learning (AL) method for text classification based on convolutional neural networks (CNNs). In AL, one selects the instances to be manually labeled with the aim of maximizing model performance with minimal effort. Neural models capitalize on word embeddings as features, tuning these to the task at hand. We argue that AL strategies for neural text classification should focus on selecting instances that most affect the embedding space (i.e., induce discriminative word representations). This is in contrast to traditional AL approaches (e.g., uncertainty sampling), which specify higher level objectives. We propose a simple approach that selects instances containing words whose embeddings are likely to be updated with the greatest magnitude, thereby rapidly learning discriminative, task-specific embeddings. Empirical results show that our method outperforms baseline AL approaches.", "targets": "Active Discriminative Word Embedding Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8357ec923c8f4b6aa9bb2c3e237d6bed", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence \u0177 = {y_0, . . . , y_T}, by maximizing p(y|x) = \u220f_t p(y_t | x; {y_0, . . . , y_{t\u22121}}). Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model\u2019s output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.", "targets": "Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-72b397ec33804b3894c088ea22a7935b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Feature selection plays an important role in the data mining process. It is needed to deal with the excessive number of features, which can become a computational burden on the learning algorithms. It is also necessary, even when computational resources are not scarce, since it improves the accuracy of the machine learning tasks, as we will see in the upcoming sections. In this review, we discuss the different feature selection approaches, and the relation between them and the various machine learning algorithms.", "targets": "Survey on Feature Selection"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f5429aea89224b0e84bb45a3181b7bf1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Robust belief revision methods are crucial in streaming data situations for updating existing knowledge (or beliefs) with new incoming evidence. Bayes conditioning is the primary mechanism in use for belief revision in data fusion systems that use probabilistic inference. However, traditional conditioning methods face several challenges due to inherent data/source imperfections in big-data environments that harness soft (i.e., human or human-based) sources in addition to hard (i.e., physics-based) sensors. The objective of this paper is to investigate the most natural extension of Bayes conditioning that is suitable for evidence updating in the presence of such uncertainties. By viewing the evidence updating process as a thought experiment, an elegant strategy is derived for robust evidence updating in the presence of extreme uncertainties that are characteristic of big-data environments. In particular, utilizing the Fagin-Halpern conditional notions, a natural extension to Bayes conditioning is derived for evidence that takes the form of a general belief function. The presented work differs fundamentally from the Conditional Update Equation (CUE) and the authors\u2019 own extensions of it. An overview of this development is provided via illustrative examples. 
Furthermore, insights into parameter selection under various fusion contexts are also provided.", "targets": "Evidence Updating for Stream-Processing in Big-Data: Robust Conditioning in Soft and Hard Data Fusion Environments"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6e31ba4baed34e4d830779a411d8b873", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Image Registration implies mapping images having varying orientation, or multi-modal or multi-temporal images, to one coordinate system. Digital Elevation Models (DEMs) are images having terrain information embedded into them. DEM-to-DEM registration incorporates registration of DEMs having different orientations, that may have been mapped at different times, or may have been processed using different resolutions. Though very important, only a handful of methods for DEM registration exist, most of which are for DEM-to-topographical-map or DEM-to-remote-sensed-image registration. Using cognitive mapping concepts for DEM registration has evolved from the basic idea of using the mapping between the space and objects, and defining their relationships, to form the basic landmarks that need to be marked, stored and manipulated in and about the environment or other candidate environments, namely, in our case, the DEMs. The progressive two-level encapsulation of methods of geo-spatial cognition includes landmark knowledge and layout knowledge and can be useful for DEM registration. The space-based approach, which emphasizes the explicit extent of the environment under consideration, and the object-based approach, which emphasizes the relationships between objects in the local environment, being the two paradigms of cognitive mapping, can be methodically integrated in this architecture for DEM registration. Initially, P-model based segmentation is performed, followed by landmark formation for contextual mapping that uses contextual pyramid formation. Apart from landmarks being used for registration key-point finding, Euclidean distance based deformation calculation has been used for transformation and change detection. Landmarks have been categorized as either flat-plain areas without much variation in the land heights; peaks, found where there is a gradual increase in height compared to the flat areas; valleys, marked by a gradual decrease in the height seen in the DEM; or, finally, ripple areas with very shallow crests and nadirs. For the final storage of co-registered DEMs, fractal based compression has been found to give good results in terms of space and computation requirements. In this paper, an attempt has been made to implement DEM-DEM registration based on the human spatial cognition method of recollection. This method may further be extended to DEM-to-topographic-map and DEM-to-remote-sensed-image registration. 
Experimental results further cement the fact that DEM registration may be effectively done using the proposed method.", "targets": "Cognitive-mapping and contextual pyramid based Digital Elevation Model Registration and its effective storage using fractal based compression"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c6242a833e4c4db18e7ad72b2891317e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This work uses the L-system to construct a tree structure for the text sequence and derives its complexity [1]. It serves as a measure of structural complexity of the text. It is applied to anomaly detection in data transmission. Keyword: text complexity, anomaly detection, structural complexity, rewriting rule, context-free grammar, L-system", "targets": "Syntactic sensitive complexity for symbol-free sequence"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7a80a19342a24d218de48d73fa74c888", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Low dimensional representations of words allow accurate NLP models to be trained on limited annotated data. While most representations ignore words\u2019 local context, a natural way to induce context-dependent representations is to perform inference in a probabilistic latent-variable sequence model. Given the recent success of continuous vector space word representations, we provide such an inference procedure for continuous states, where words\u2019 representations are given by the posterior mean of a linear dynamical system. Here, efficient inference can be performed using Kalman filtering. Our learning algorithm is extremely scalable, operating on simple cooccurrence counts for both parameter initialization using the method of moments and subsequent iterations of EM. In our experiments, we employ our inferred word embeddings as features in standard tagging tasks, obtaining significant accuracy improvements. Finally, the Kalman filter updates can be seen as a linear recurrent neural network. We demonstrate that using the parameters of our model to initialize a non-linear recurrent neural network language model reduces its training time by a day and yields lower perplexity.", "targets": "A Linear Dynamical System Model for Text"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fb0c40e3239d4896926db301f581d6d0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The abstract should summarize the contents of the paper using at least 70 and at most 150 words. It will be set in 9-point font size and be inset 1.0 cm from the right and left margins. There will be two blank lines before and after the Abstract. . . .", "targets": "Discriminative Parameter Estimation for Random Walks Segmentation: Technical Report"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-13769441716a43f381d24755875a4e02", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The Stable Marriage Problem (SMP) is a well-known matching problem first introduced and solved by Gale and Shapley [7]. 
Several variants and extensions to this problem have since been investigated to cover a wider set of applications. Each time a new variant is considered, however, a new algorithm needs to be developed and implemented. As an alternative, in this paper we propose an encoding of the SMP using Answer Set Programming (ASP). Our encoding can easily be extended and adapted to the needs of specific applications. As an illustration we show how stable matchings can be found when individuals may designate unacceptable partners and ties between preferences are allowed. Subsequently, we show how our ASP based encoding naturally allows us to select specific stable matchings which are optimal according to a given criterion. Each time, we can rely on generic and efficient off-the-shelf answer set solvers to find (optimal) stable matchings.", "targets": "Modeling Stable Matching Problems with Answer Set Programming"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c706f3125b894292a996f8d1f8fd0f90", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Idea Density (ID) measures the rate at which ideas or elementary predications are expressed in an utterance or in a text. Lower ID is found to be associated with an increased risk of developing Alzheimer\u2019s disease (AD) (Snowdon et al., 1996; Engelman et al., 2010). ID has been used in two different versions: propositional idea density (PID) counts the expressed ideas and can be applied to any text while semantic idea density (SID) counts pre-defined information content units and is naturally more applicable to normative domains, such as picture description tasks. In this paper, we develop DEPID, a novel dependency-based method for computing PID, and its version DEPID-R that enables the exclusion of repeating ideas\u2014a feature characteristic of AD speech. We conduct the first comparison of automatically extracted PID and SID in the diagnostic classification task on two different AD datasets covering both closed-topic and free-recall domains. While SID performs better on the normative dataset, adding PID leads to a small but significant improvement (+1.7 F-score). On the free-topic dataset, PID performs better than SID as expected (77.6 vs 72.3 in F-score) but adding the features derived from the word embedding clustering underlying the automatic SID increases the results considerably, leading to an F-score of 84.8.", "targets": "Idea density for predicting Alzheimer\u2019s disease from transcribed speech"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b990f430f102463495224a18558bb02f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Philosophers writing about the ravens paradox often note that Nicod\u2019s Condition (NC) holds given some set of background information, and fails to hold against others, but rarely go any further. That is, it is usually not explored which background information makes NC true or false. The present paper aims to fill this gap. For us, \u201c(objective) background knowledge\u201d is restricted to information that can be expressed as probability events. Any other configuration is regarded as being subjective and a property of the a priori probability distribution. We study NC in two specific settings. 
In the first case, a complete description of some individuals is known, e.g. one knows of each of a group of individuals whether they are black and whether they are ravens. In the second case, the number of individuals having a particular property is given, e.g. one knows how many ravens or how many black things there are (in the relevant population). While some of the most famous answers to the paradox are measure-dependent, our discussion is not restricted to any particular probability measure. Our most interesting result is that in the second setting, NC violates a simple kind of inductive inference (namely projectability). Since relative to NC, this latter rule is more closely related to, and more directly justified by, our intuitive notion of inductive reasoning, this tension makes a case against the plausibility of NC. In the end, we suggest that the informal representation of NC may seem to be intuitively plausible because it can easily be mistaken for reasoning by analogy.", "targets": "On Nicod\u2019s Condition, Rules of Induction and the Raven Paradox"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6e6b6627f2674ea384bbf49e71bd4105", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Robot warehouse automation has attracted significant interest in recent years, perhaps most visibly in the Amazon Picking Challenge (APC) [1]. A fully autonomous warehouse pick-and-place system requires robust vision that reliably recognizes and locates objects amid cluttered environments, self-occlusions, sensor noise, and a large variety of objects. In this paper we present an approach that leverages multi-view RGB-D data and self-supervised, data-driven learning to overcome those difficulties. The approach was part of the MIT-Princeton Team system that took 3rd and 4th place in the stowing and picking tasks, respectively, at APC 2016. In the proposed approach, we segment and label multiple views of a scene with a fully convolutional neural network, and then fit pre-scanned 3D object models to the resulting segmentation to get the 6D object pose. Training a deep neural network for segmentation typically requires a large amount of training data. We propose a self-supervised method to generate a large labeled dataset without tedious manual segmentation. We demonstrate that our system can reliably estimate the 6D pose of objects under a variety of scenarios. All code, data, and benchmarks are available at http://www.cs.princeton.edu/\u223candyz/apc2016.", "targets": "Multi-view Self-supervised Deep Learning for 6D Pose Estimation in the Amazon Picking Challenge"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e45b5f3133fb4923b6f36a1056792eb6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Possibilistic logic bases and possibilistic graphs are two different frameworks of interest for representing knowledge. The former stratifies the pieces of knowledge (expressed by logical formulas) according to their level of certainty, while the latter exhibits relationships between variables. The two types of representations are semantically equivalent when they lead to the same possibility distribution (which rank-orders the possible interpretations). 
A possibility distribution can be decomposed using a chain rule which may be based on two different kinds of conditioning which exist in possibility theory (one based on product in a numerical setting, one based on minimum operation in a qualitative setting). These two types of conditioning induce two kinds of possibilistic graphs. In both cases, a translation of these graphs into possibilistic bases is provided. The converse translation from a possibilistic knowledge base into a min-based graph is also described.", "targets": "Possibilistic logic bases and possibilistic graphs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-765232697f5e40d394e546b65eda3523", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This is a companion note to our recent study of the weak convergence properties of constrained emphatic temporal-difference learning (ETD) algorithms from a theoretic perspective. It supplements the latter analysis with simulation results and illustrates the behavior of some of the ETD algorithms using three example problems.", "targets": "Some Simulation Results for Emphatic Temporal-Difference Learning Algorithms\u2217"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a7c9c482e64e45c4a31b348bdb509dda", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Protein subcellular localization prediction is an important and challenging problem. Traditional biology experiments are expensive and time-consuming, so more and more research interest is turning to machine learning approaches for predicting protein subcellular location. There are two main difficult problems among the existing state-of-the-art methods. First, most of the existing techniques are designed to deal with multi-class but not multi-label classification, which ignores the connection between the multiple labels. In reality, multiple-location proteins have vital and unique biological significance worthy of special focus, which cannot be ignored. Second, techniques for handling imbalanced data in multi-label classification are significant but scarce. To solve these two issues, we have developed an ensemble multi-label classifier called HPSLPred, which can be applied to multi-label classification with an imbalanced protein source. For the convenience of users, a user-friendly webserver for HPSLPred was established at http://server.malab.cn/HPSLPred.", "targets": "HPSLPred: An Ensemble Multi-label Classifier for Human Protein Subcellular Location Prediction with Imbalanced Source"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7dd430b06bc242e4b86265cc89cd799e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Composition of low-dimensional distributions, whose foundations were laid in the paper published in the Proceedings of UAI'97 (Jirousek 1997), appeared to be an alternative apparatus to describe multidimensional probabilistic models. In contrast to Graphical Markov Models, which define multidimensional distributions in a declarative way, this approach is rather procedural. 
Ordering of low-dimensional distributions into a proper sequence fully defines the respective computational procedure; therefore, a study of different types of generating sequences is one of the central problems in this field. Thus, it appears that an important role is played by special sequences that are called perfect. Their main characterization theorems are presented in this paper. However, the main result of this paper is a solution to the problem of marginalization for general sequences. The main theorem describes a way to obtain a generating sequence that defines the model corresponding to the marginal of the distribution defined by an arbitrary generating sequence. From this theorem the reader can see to what extent these computations are local; i.e., the sequence consists of marginal distributions whose computation must be made by summing up over the values of the variable eliminated (the paper deals with a finite model).", "targets": "Marginalization in Composed Probabilistic Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-23c3eb6109fe42589a2ebc548d3abcb3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Many academic disciplines including information systems, computer science, and operations management face scheduling problems as important decision making tasks. Since many scheduling problems are NP-hard in the strong sense, there is a need for developing solution heuristics. For scheduling problems with setup times on unrelated parallel machines, there is limited research on solution methods and to the best of our knowledge, parallel computer architectures have not yet been taken advantage of. We address this gap by proposing and implementing a new solution heuristic and by testing different parallelization strategies. In our computational experiments, we show that our heuristic calculates near-optimal solutions even for large instances and that computing time can be reduced substantially by our parallelization approach.", "targets": "High-Performance Computing for Scheduling Decision Support: A Parallel Depth-First Search Heuristic"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ea65dfa46a6d4bd3ac3a48cf70bacaa3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we propose convolutional neural networks for learning an optimal representation of question and answer sentences. Their main aspect is the use of relational information given by the matches between words from the two members of the pair. The matches are encoded as embeddings with additional parameters (dimensions), which are tuned by the network. This allows for better capturing of interactions between questions and answers, resulting in a significant boost in accuracy. We test our models on two widely used answer sentence selection benchmarks. 
The results clearly show the effectiveness of our relational information, which allows our relatively simple network to approach the state of the art.", "targets": "Modeling Relational Information in Question-Answer Pairs with Convolutional Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fe54fef6f3984532b1b39e57d7f2c471", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "For a finite state automaton, a synchronizing sequence is an input sequence that takes all the states to the same state. Checking the existence of a synchronizing sequence and finding a synchronizing sequence, if one exists, can be performed in polynomial time. However, the problem of finding a shortest synchronizing sequence is known to be NP-hard. In this work, the usefulness of Answer Set Programming to solve this optimization problem is investigated, in comparison with brute-force algorithms and SAT-based approaches.", "targets": "Generating Shortest Synchronizing Sequences using Answer Set Programming"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b5011c1339814fa8b16688528d63c956", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper proposes a hierarchical attentional neural translation model which focuses on enhancing source-side hierarchical representations by covering both local and global semantic information using a bidirectional tree-based encoder. To maximize the predictive likelihood of target words, a weighted variant of an attention mechanism is used to balance the attentive information between lexical and phrase vectors. Using a tree-based rare word encoding, the proposed model is extended to sub-word level to alleviate the out-of-vocabulary (OOV) problem. Empirical results reveal that the proposed model significantly outperforms sequence-to-sequence attention-based and tree-based neural translation models in English-Chinese translation tasks.", "targets": "Towards Bidirectional Hierarchical Representations for Attention-Based Neural Machine Translation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fda476d165cb458c86fc3556bcff4568", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we present a hybrid approach for automatic composition of Web services that generates semantic input-output based compositions with optimal end-to-end QoS, minimizing the number of services of the resulting composition. The proposed approach has four main steps: 1) generation of the composition graph for a request; 2) computation of the optimal composition that minimizes a single objective QoS function; 3) multi-step optimizations to reduce the search space by identifying equivalent and dominated services; and 4) hybrid local-global search to extract the optimal QoS with the minimum number of services. 
An extensive validation with the datasets of the Web Service Challenge 2009-2010 and randomly generated datasets shows that: 1) the combination of local and global optimization is a general and powerful technique to extract optimal compositions in diverse scenarios; and 2) the hybrid strategy performs better than the state-of-the-art, obtaining solutions with fewer services and optimal QoS. Keywords\u2014Service Composition; Service Optimization; Hybrid Algorithm; QoS-aware; Semantic Web Services.", "targets": "Hybrid Optimization Algorithm for Large-Scale QoS-Aware Service Composition"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6bcbdaec923643d69e02eb5c9583f3f0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A graphical multiagent model (GMM) represents a joint distribution over the behavior of a set of agents. One source of knowledge about agents' behavior may come from game-theoretic analysis, as captured by several graphical game representations developed in recent years. GMMs generalize this approach to express arbitrary distributions, based on game descriptions or other sources of knowledge bearing on beliefs about agent behavior. To illustrate the flexibility of GMMs, we exhibit game-derived models that allow probabilistic deviation from equilibrium, as well as models based on heuristic action choice. We investigate three different methods of integrating these models into a single model representing the combined knowledge sources. To evaluate the predictive performance of the combined model, we treat as actual outcome the behavior produced by a reinforcement learning process. We find that combining the two knowledge sources, using any of the methods, provides better predictions than either source alone. Among the combination methods, mixing data outperforms the opinion pool and direct update methods investigated in this empirical trial.", "targets": "Knowledge Combination in Graphical Multiagent Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9d9649dfb57149e59417c5222380ec1c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Convolutional neural networks (CNN) have achieved major breakthroughs in recent years. Their performance in computer vision has matched and in some areas even surpassed human capabilities. Deep neural networks can capture complex non-linear features; however, this ability comes at the cost of high computational and memory requirements. State-of-the-art networks require billions of arithmetic operations and millions of parameters. To enable embedded devices such as smart phones, Google glasses and monitoring cameras with the astonishing power of deep learning, dedicated hardware accelerators can be used to decrease both execution time and power consumption. In applications where fast connection to the cloud is not guaranteed or where privacy is important, computation needs to be done locally. Many hardware accelerators for deep neural networks have been proposed recently. A first important step of accelerator design is hardware-oriented approximation of deep networks, which enables energy-efficient inference. We present Ristretto, a fast and automated framework for CNN approximation.
Ristretto simulates the hardware arithmetic of a custom hardware accelerator. The framework reduces the bit-width of network parameters and outputs of resource-intense layers, which reduces the chip area for multiplication units significantly. Alternatively, Ristretto can remove the need for multipliers altogether, resulting in an adder-only arithmetic. The tool fine-tunes trimmed networks to achieve high classification accuracy. Since training of deep neural networks can be time-consuming, Ristretto uses highly optimized routines which run on the GPU. This enables fast compression of any given network. Given a maximum tolerance of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit. The code for Ristretto is available.", "targets": "Ristretto: Hardware-Oriented Approximation of Convolutional Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-80e25b5b49584c0fba10e05404166963", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The rise of robotic applications has led to the generation of a huge volume of unstructured data, whereas the current cloud infrastructure was designed to process limited amounts of structured data. To address this problem, we propose a learn-memorize-recall-reduce paradigm for robotic cloud computing. The learning stage converts incoming unstructured data into structured data; the memorization stage provides effective storage for the massive amount of data; the recall stage provides efficient means to retrieve the raw data; while the reduction stage provides means to make sense of this massive amount of unstructured data with limited computing resources.", "targets": "Learn-Memorize-Recall-Reduce: A Robotic Cloud Computing Paradigm"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2f049a0091594e0086b7d5485ee49a7b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A Dialogue System is a system which interacts with humans in natural language. At present, many universities are developing dialogue systems in their regional languages. This paper discusses dialogue systems, their components, challenges, and evaluation. It helps researchers get information regarding dialogue systems.", "targets": "Dialogue System: A Brief Review"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1ea759e119224c3d9d2cd69a9a511476", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a Bayesian scheme for the approximate diagonalisation of several square matrices which are not necessarily symmetric. A Gibbs sampler is derived to simulate samples of the common eigenvectors and the eigenvalues for these matrices. Several synthetic examples are used to illustrate the performance of the proposed Gibbs sampler and we then provide comparisons to several other joint diagonalization algorithms, which shows that the Gibbs sampler achieves the state-of-the-art performance on the examples considered.
As a byproduct, the output of the Gibbs sampler could be used to estimate the log marginal likelihood; however, we employ the approximation based on the Bayesian information criterion (BIC), which, in the synthetic examples considered, correctly located the number of common eigenvectors. We then successfully applied the sampler to the source separation problem as well as the common principal component analysis and the common spatial pattern analysis problems.", "targets": "A Bayesian Approach to Approximate Joint Diagonalization of Square Matrices"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-decd6a8c01144d5694e988c55d55b18b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The chain-structured long short-term memory (LSTM) has been shown to be effective in a wide range of problems such as speech recognition and machine translation. In this paper, we propose to extend it to tree structures, in which a memory cell can reflect the history memories of multiple child cells or multiple descendant cells in a recursive process. We call the model S-LSTM, which provides a principled way of considering long-distance interaction over hierarchies, e.g., language or image parse structures. We leverage the models for semantic composition to understand the meaning of text, a fundamental problem in natural language understanding, and show that it outperforms a state-of-the-art recursive model by replacing its composition layers with the S-LSTM memory blocks. We also show that utilizing the given structures is helpful in achieving a performance better than that without considering the structures.", "targets": "Long Short-Term Memory Over Tree Structures"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-17e703fb7ed445629a62f1779243d48e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper provides a global vision of the scientific publications related to Systemic Lupus Erythematosus (SLE), taking article abstracts as the starting point. Over time, abstracts have been evolving towards more complex terminology, which makes it necessary to use sophisticated statistical methods and to answer questions including: how is vocabulary evolving over time? Which are the most influential articles? And which articles introduced new terms and vocabulary? To answer these, we analyze a dataset composed of 506 abstracts downloaded from 115 different journals, covering an 18-year period.", "targets": "How scientific literature has been evolving over the time? A novel statistical approach using tracking verbal-based methods"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-71765e186dc4483288ae02246eb7c5ee", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Service level agreement (SLA) is an essential part of cloud systems to ensure maximum availability of services for customers. With a violation of SLA, the provider has to pay penalties. Thus, being able to predict SLA violations favors both the customers and the providers.
In this paper, we explore two machine learning models, Naive Bayes and Random Forest classifiers, to predict SLA violations. Since SLA violations are a rare event in the real world (\u223c 0.2%), the classification task becomes more challenging. In order to overcome these challenges, we use several re-sampling methods such as Random Over and Under Sampling, SMOTE, NearMiss (1,2,3), One-sided Selection, Neighborhood Cleaning Rule, etc. to re-balance the dataset. We use the Google Cloud Cluster trace as the dataset to examine these different methods. We find that random forests with SMOTE-ENN re-sampling have the best performance among these methods, with an accuracy of 0.9988 and an F1 score of 0.9980.", "targets": "SLA Violation Prediction In Cloud Computing: A Machine Learning Perspective"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4302672f2a714c9781859c93bdbfb412", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Many combinatorial problems deal with preferences and violations, the goal of which is to find solutions with the minimum cost. Weighted constraint satisfaction is a framework for modeling such problems, which consists of a set of cost functions to measure the degree of violation or preferences of different combinations of variable assignments. Typical solution methods for weighted constraint satisfaction problems (WCSPs) are based on branch-and-bound search, which are made practical through the use of powerful consistency techniques such as AC*, FDAC*, EDAC* to deduce hidden cost information and value pruning during search. These techniques, however, are designed to be efficient only on binary and ternary cost functions which are represented in table form. In tackling many real-life problems, high arity (or global) cost functions are required. We investigate efficient representation schemes and algorithms to bring the benefits of the consistency techniques also to high arity cost functions, which are often derived from hard global constraints from classical constraint satisfaction. The literature suggests some global cost functions can be represented as flow networks, and the minimum cost flow algorithm can be used to compute the minimum costs of such networks in polynomial time. We show that naive adoption of this flow-based algorithmic method for global cost functions can result in a stronger form of \u2205-inverse consistency. We further show how the method can be modified to handle cost projections and extensions to maintain generalized versions of AC* and FDAC* for cost functions with more than two variables. Similar generalization for the stronger EDAC* is less straightforward. We reveal the oscillation problem when enforcing EDAC* on cost functions sharing more than one variable. To avoid oscillation, we propose a weak version of EDAC* and generalize it to weak EDGAC* for non-binary cost functions.
Using various benchmarks involving the soft variants of hard global constraints ALLDIFFERENT, GCC, SAME, and REGULAR, empirical results demonstrate that our proposal gives improvements of up to an order of magnitude when compared with the traditional constraint optimization approach, both in terms of time and pruning.", "targets": "Consistency Techniques for Flow-Based Projection-Safe Global Cost Functions in Weighted Constraint Satisfaction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e8c867cbad364bc8969d2817be31dd37", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "To bridge the gap between humans and machines in image understanding and describing, we need further insight into how people describe a perceived scene. In this paper, we study the agreement between bottom-up saliency-based visual attention and object referrals in scene description constructs. We investigate the properties of human-written descriptions and machine-generated ones. We then propose a saliency-boosted image captioning model in order to investigate benefits from low-level cues in language models. We learn that (1) humans mention more salient objects earlier than less salient ones in their descriptions, (2) the better a captioning model performs, the better attention agreement it has with human descriptions, (3) the proposed saliency-boosted model, compared to its baseline form, does not improve significantly on the MS COCO database, indicating explicit bottom-up boosting does not help when the task is well learnt and tuned on the data, (4) a better generalization ability is, however, observed for the saliency-boosted model on unseen data.", "targets": "Can Saliency Information Benefit Image Captioning Models?"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-71d744f7dbbc4ef1a9bfb2e845027036", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Robust Principal Component Analysis (RPCA) via rank minimization is a powerful tool for recovering underlying low-rank structure of clean data corrupted with sparse noise/outliers. In many low-level vision problems, not only is it known that the underlying structure of clean data is low-rank, but the exact rank of the clean data is also known. Yet, when applying conventional rank minimization for those problems, the objective function is formulated in a way that does not fully utilize a priori target rank information about the problems. This observation motivates us to investigate whether there is a better alternative solution when using rank minimization. In this paper, instead of minimizing the nuclear norm, we propose to minimize the partial sum of singular values, which implicitly encourages the target rank constraint. Our experimental analyses show that, when the number of samples is deficient, our approach leads to a higher success rate than conventional rank minimization, while the solutions obtained by the two approaches are almost identical when the number of samples is more than sufficient. We apply our approach to various low-level vision problems, e.g.
high dynamic range imaging, motion edge detection, photometric stereo, image alignment and recovery, and show that our results outperform those obtained by the conventional nuclear norm rank minimization method.", "targets": "Partial Sum Minimization of Singular Values in Robust PCA: Algorithm and Applications"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ac66bf4623be4784be6cf4c3e4ce6b68", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Conditional probabilities are a core concept in machine learning. For example, optimal prediction of a label Y given an input X corresponds to maximizing the conditional probability of Y given X. A common approach to inference tasks is learning a model of conditional probabilities. However, these models are often based on strong assumptions (e.g., log-linear models), and hence their estimate of conditional probabilities is not robust and is highly dependent on the validity of their assumptions. Here we propose a framework for reasoning about conditional probabilities without assuming anything about the underlying distributions, except knowledge of their second order marginals, which can be estimated from data. We show how this setting leads to guaranteed bounds on conditional probabilities, which can be calculated efficiently in a variety of settings, including structured prediction. Finally, we apply them to semi-supervised deep learning, obtaining results competitive with variational autoencoders.", "targets": "Robust Conditional Probabilities"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8f6d95c9f8854b09b91f1613fd2d58c0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Arabic Document Clustering is an important task for obtaining good results with traditional Information Retrieval (IR) systems, especially with the rapid growth of the number of online documents present in the Arabic language. Document clustering aims to automatically group similar documents in one cluster using different similarity/distance measures. This task is often affected by the document length; useful information in the documents is often accompanied by a large amount of noise, and therefore it is necessary to eliminate this noise while keeping useful information to boost the performance of document clustering. In this paper, we propose to evaluate the impact of text summarization using the Latent Semantic Analysis Model on Arabic Document Clustering in order to solve the problems cited above, using five similarity/distance measures: Euclidean Distance, Cosine Similarity, Jaccard Coefficient, Pearson Correlation Coefficient and Averaged Kullback-Leibler Divergence, for two times: without and with stemming. Our experimental results indicate that our proposed approach effectively solves the problems of noisy information and document length, and thus significantly improves the clustering performance.", "targets": "SEMANTIC ANALYSIS TO ENHANCE ARABIC DOCUMENTS CLUSTERING"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b2e766ffac0a49929eebb0de7e18766d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Significant efforts have been made to understand and document knowledge related to scientific measurements. Many of those efforts resulted in one or more high-quality ontologies that describe some aspects of scientific measurements, but not in a comprehensive and coherently integrated manner. For instance, we note that many of these high-quality ontologies are not properly aligned, and more challenging, that they have different and often conflicting concepts and approaches for encoding knowledge about empirical measurements. As a result of this lack of an integrated view, it is often challenging for scientists to determine whether any two scientific measurements were taken in semantically compatible manners, thus making it difficult to decide whether measurements should be analyzed in combination or not. In this paper, we present the Human-Aware Sensor Network Ontology (HASNetO), which is a comprehensive alignment and integration of a sensing infrastructure ontology and a provenance ontology. HASNetO has been under development for more than one year, and has been reviewed, shared and used by multiple scientific communities. The ontology has been in use to support the data management of a number of large-scale ecological monitoring activities (observations) and empirical experiments.", "targets": "Human-Aware Sensor Network Ontology: Semantic Support for Empirical Data Collection"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4ded0abc1a0b40bb93144875b8161788", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper investigates stochastic and adversarial combinatorial multi-armed bandit problems. In the stochastic setting, we first derive problem-specific regret lower bounds, and analyze how these bounds scale with the dimension of the decision space. We then propose COMBUCB, algorithms that efficiently exploit the combinatorial structure of the problem, and derive finite-time upper bounds on their regrets. These bounds improve over regret upper bounds of existing algorithms, and we show numerically that COMBUCB significantly outperforms any other algorithm. In the adversarial setting, we propose two simple algorithms, namely COMBEXP-1 and COMBEXP-2 for semi-bandit and bandit feedback, respectively. Their regrets have similar scaling to state-of-the-art algorithms, in spite of the simplicity of their implementation.", "targets": "Stochastic and Adversarial Combinatorial Bandits"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c217c2adf1d143c7ac8c41dd2925ad78", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Sarcasm is considered one of the most difficult problems in sentiment analysis. In our observation of Indonesian social media, for certain topics, people tend to criticize something using sarcasm. Here, we proposed two additional features to detect sarcasm after a common sentiment analysis is conducted. The features are the negativity information and the number of interjection words. We also employed translated SentiWordNet in the sentiment classification. All the classifications were conducted with machine learning algorithms. The experimental results showed that the additional features are quite effective in the sarcasm detection.
Keywords\u2014 Sentiment analysis, sarcasm, classification,", "targets": "Indonesian Social Media Sentiment Analysis with Sarcasm Detection"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-25a9a073f7004ceeaf26c9dc2ebfc3b9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Extending the success of deep neural networks to natural language understanding and symbolic reasoning requires complex operations and external memory. Recent neural program induction approaches have attempted to address this problem, but are typically limited to differentiable memory, and consequently cannot scale beyond small synthetic tasks. In this work, we propose the Manager-Programmer-Computer framework, which integrates neural networks with non-differentiable memory to support abstract, scalable and precise operations through a friendly neural computer interface. Specifically, we introduce a Neural Symbolic Machine, which contains a sequence-to-sequence neural \"programmer\", and a non-differentiable \"computer\" that is a Lisp interpreter with code assist. To successfully apply REINFORCE for training, we augment it with approximate gold programs found by an iterative maximum likelihood training process. NSM is able to learn a semantic parser from weak supervision over a large knowledge base. It achieves new state-of-the-art performance on WEBQUESTIONSSP, a challenging semantic parsing dataset. Compared to previous approaches, NSM is end-to-end, therefore does not rely on feature engineering or domain specific knowledge.", "targets": "Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision (Short Version)"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2dd65e9d6dca4d65901ae14b8c35a17c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A fundamental problem in control is to learn a model of a system from observations that is useful for controller synthesis. To provide good performance guarantees, existing methods must assume that the real system is in the class of models considered during learning. We present an iterative method with strong guarantees even in the agnostic case where the system is not in the class. In particular, we show that any no-regret online learning algorithm can be used to obtain a near-optimal policy, provided some model achieves low training error and access to a good exploration distribution. Our approach applies to both discrete and continuous domains. We demonstrate its efficacy and scalability on a challenging helicopter domain from the literature.", "targets": "Agnostic System Identification for Model-Based Reinforcement Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3d7c2944fc2046db8316ed5b7b48e53a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In the last two decades, a number of methods have been proposed for forecasting based on fuzzy time series. Most of the fuzzy time series methods are presented for forecasting of car road accidents. However, the forecasting accuracy rates of the existing methods are not good enough.
In this paper, we compared our proposed new method of fuzzy time series forecasting with existing methods. Our method is based on means-based partitioning of the historical data of car road accidents. The proposed method belongs to the kth order and time-variant methods. The proposed method achieves a better forecasting accuracy rate for car road accidents than the existing methods. Keywords\u2014Fuzzy sets, fuzzy logical groups, fuzzified data, fuzzy time series.", "targets": "Inaccuracy Minimization by Partitioning Fuzzy Data Sets \u2013 Validation of an Analytical Methodology"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-cec01a2896fd4216a03b6222376416e7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Living organisms intertwine soft (e.g., muscle) and hard (e.g., bones) materials, giving them an intrinsic flexibility and resiliency often lacking in conventional rigid robots. The emerging field of soft robotics seeks to harness these same properties in order to create resilient machines. The nature of soft materials, however, presents considerable challenges to aspects of design, construction, and control \u2013 and up until now, the vast majority of gaits for soft robots have been hand-designed through empirical trial-and-error. This manuscript describes an easy-to-assemble tensegrity-based soft robot capable of highly dynamic locomotive gaits and demonstrating structural and behavioral resilience in the face of physical damage. Enabling this is the use of a machine learning algorithm able to discover novel gaits with a minimal number of physical trials. These results lend further credence to soft-robotic approaches that seek to harness the interaction of complex material dynamics in order to generate a wealth of dynamical behaviors.", "targets": "Soft tensegrity robots"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c0e4ab00e068459087a401787e7c3c3c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Translating information between text and image is a fundamental problem in artificial intelligence that connects natural language processing and computer vision. In the past few years, performance in image caption generation has seen significant improvement through the adoption of recurrent neural networks (RNN). Meanwhile, text-to-image generation begun to generate plausible images using datasets of specific categories like birds and flowers. We\u2019ve even seen image generation from multi-category datasets such as the Microsoft Common Objects in Context (MSCOCO) through the use of generative adversarial networks (GANs). Synthesizing objects with a complex shape, however, is still challenging. For example, animals and humans have many degrees of freedom, which means that they can take on many complex shapes. We propose a new training method called Image-Text-Image (I2T2I) which integrates text-to-image and image-to-text (image captioning) synthesis to improve the performance of text-to-image synthesis. We demonstrate that I2T2I can generate better multi-category images using MSCOCO than the state-of-the-art.
We also demonstrate that I2T2I can achieve transfer learning by using a pre-trained image captioning module to generate human images on the MPII Human Pose dataset (MHP) without using sentence annotation.", "targets": "I2T2I: LEARNING TEXT TO IMAGE SYNTHESIS WITH TEXTUAL DATA AUGMENTATION"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2a606b01f05b468e9ccbec8ce0aed68e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In Machine Learning, the parent set identification problem is to find a set of random variables that best explains a selected variable given the data and some predefined scoring function. This problem is a critical component of structure learning of Bayesian networks and Markov blanket discovery, and thus has many practical applications ranging from fraud detection to clinical decision support. In this paper, we introduce a new distributed memory approach to the exact parent sets assignment problem. To achieve scalability, we derive theoretical bounds to constrain the search space when the MDL scoring function is used, and we reorganize the underlying dynamic programming such that the computational density is increased and fine-grain synchronization is eliminated. We then design an efficient realization of our approach in the Apache Spark platform. Through experimental results, we demonstrate that the method maintains strong scalability on a 500-core standalone Spark cluster, and it can be used to efficiently process data sets with 70 variables, far beyond the reach of the currently available solutions.", "targets": "Scalable Exact Parent Sets Identification in Bayesian Networks Learning with Apache Spark"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5f412f1324ab47e6a64370f6434a959c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Efficient and robust algorithms for decentralized estimation in networks are essential to many distributed systems. Whereas distributed estimation of sample mean statistics has been the subject of a good deal of attention, computation of U-statistics, relying on more expensive averaging over pairs of observations, is a less investigated area. Yet, such data functionals are essential to describe global properties of a statistical population, with important examples including Area Under the Curve, empirical variance, Gini mean difference and within-cluster point scatter. This paper proposes new synchronous and asynchronous randomized gossip algorithms which simultaneously propagate data across the network and maintain local estimates of the U-statistic of interest. We establish convergence rate bounds of O(1/t) and O(log t/t) for the synchronous and asynchronous cases respectively, where t is the number of iterations, with explicit data and network dependent terms. Beyond favorable comparisons in terms of rate analysis, numerical experiments provide empirical evidence that the proposed algorithms surpass the previously introduced approach.", "targets": "Extending Gossip Algorithms to Distributed Estimation of U-Statistics"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-65999922c46a4dafa76569ec97c5273f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "This paper presents a dataset collected from natural dialogs which enables testing the ability of dialog systems to learn new facts from user utterances throughout the dialog. This interactive learning will help with one of the most prevailing problems of open-domain dialog systems, which is the sparsity of facts a dialog system can reason about. The proposed dataset, consisting of 1900 collected dialogs, allows simulation of interactive gaining of denotations and question explanations from users, which can be used for the interactive learning.", "targets": "Data Collection for Interactive Learning through the Dialog"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f3a06218c6cd40bea3ce32a7f833c6b9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Policy evaluation is concerned with estimating the value function that predicts long-term values of states under a given policy. It is a crucial step in many reinforcement-learning algorithms. In this paper, we focus on policy evaluation with linear function approximation over a fixed dataset. We first transform the empirical policy evaluation problem into a (quadratic) convex-concave saddle-point problem, and then present a primal-dual batch gradient method, as well as two stochastic variance reduction methods for solving the problem. These algorithms scale linearly in both sample size and feature dimension. Moreover, they achieve linear convergence even when the saddle-point problem has only strong concavity in the dual variables but no strong convexity in the primal variables. Numerical experiments on benchmark problems demonstrate the effectiveness of our methods.", "targets": "Stochastic Variance Reduction Methods for Policy Evaluation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8384b28238ac4fb1a81d8745240c7438", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Structured sparse optimization is an important and challenging problem for analyzing high-dimensional data in a variety of applications such as bioinformatics, medical imaging, social networks, and astronomy. Although a number of structured sparsity models have been explored, such as trees, groups, clusters, and paths, connected subgraphs have been rarely explored in the current literature. One of the main technical challenges is that there is no structured sparsity-inducing norm that can directly model the space of connected subgraphs, and there is no exact implementation of a projection oracle for connected subgraphs due to its NP-hardness. In this paper, we explore efficient approximate projection oracles for connected subgraphs, and propose two new efficient algorithms, namely, GRAPH-IHT and GRAPH-GHTP, to optimize a generic nonlinear objective function subject to a connectivity constraint on the support of the variables. Our proposed algorithms enjoy strong guarantees analogous to several current methods for sparsity-constrained optimization, such as Projected Gradient Descent (PGD), Approximate Model Iterative Hard Thresholding (AM-IHT), and Gradient Hard Thresholding Pursuit (GHTP) with respect to convergence rate and approximation accuracy.
We apply our proposed algorithms to optimize several well-known graph scan statistics in several applications of connected subgraph detection as a case study, and the experimental results demonstrate that our proposed algorithms outperform state-of-the-art methods.", "targets": "Technical Report: Graph-Structured Sparse Optimization for Connected Subgraph Detection"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9809ba5a32794a9e844afc4b6b204c23", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Due to imprecision and uncertainties in predicting real-world problems, artificial neural network (ANN) techniques have become increasingly useful for modeling and optimization. This paper presents an artificial neural network approach for forecasting electric energy consumption. For effective planning and operation of power systems, optimal forecasting tools are needed for energy operators to maximize profit and also to provide maximum satisfaction to energy consumers. Monthly data for electric energy consumed in the Gaza strip was collected from 1994 to 2013. The model was trained on this data and validated using 2-Fold and K-Fold cross validation techniques. The model has been tested with actual energy consumption data and yields satisfactory performance.", "targets": "Using Artificial Neural Network Techniques for Prediction of Electric Energy Consumption"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-eb47bc43994944e5abf51d3b93308542", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a transfer deep learning (TDL) framework that can transfer the knowledge obtained from a single-modal neural network to a network with a different modality. Specifically, we show that we can leverage speech data to fine-tune the network trained for video recognition, given an initial audio-video parallel dataset within the same semantics. Our approach first learns the analogy-preserving embeddings between the abstract representations learned from intermediate layers of each network, allowing for semantics-level transfer between the source and target modalities. We then apply our neural network operation that fine-tunes the target network with the additional knowledge transferred from the source network, while keeping the topology of the target network unchanged. While we present an audio-visual recognition task as an application of our approach, our framework is flexible and thus can work with any multimodal dataset, or with any already-existing deep networks that share the common underlying semantics. In this work-in-progress report, we aim to provide comprehensive results of different configurations of the proposed approach on two widely used audiovisual datasets, and we discuss potential applications of the proposed approach.", "targets": "Multimodal Transfer Deep Learning with Applications in Audio-Visual Recognition"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0610589040524db68fa60562554b04ca", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "In this paper we present an Action Language-Answer Set Programming based approach to solving planning and scheduling problems in hybrid domains, i.e., domains that exhibit both discrete and continuous behavior. We use action language H to represent the domain and then translate the resulting theory into an A-Prolog program. In this way, we reduce the problem of finding solutions to planning and scheduling problems to computing answer sets of A-Prolog programs. We cite a planning and scheduling example from the literature and show how to model it in H. We show how to translate the resulting H theory into an equivalent A-Prolog program. We compute the answer sets of the resulting program using a hybrid solver called EZCSP which loosely integrates a constraint solver with an answer set solver. The solver allows us to reason about constraints over the reals and compute solutions to complex planning and scheduling problems. Results have shown that our approach can be applied to any planning and scheduling problem in hybrid domains.", "targets": "Planning and Scheduling in Hybrid Domains Using Answer Set Programming"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5fcbb821b26748f1b1193e586df244d3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We first consider the problem of learning k-parities in the on-line mistake-bound model: given a hidden vector x \u2208 {0, 1}^n with |x| = k and a sequence of \u201cquestions\u201d a1, a2, \u00b7\u00b7\u00b7 \u2208 {0, 1}^n, where the algorithm must reply to each question with \u3008ai, x\u3009 (mod 2), what is the best tradeoff between the number of mistakes made by the algorithm and its time complexity? We improve the previous best result of Buhrman et al. [BGM10] by an exp(k) factor in the time complexity. Second, we consider the problem of learning k-parities in the presence of classification noise of rate \u03b7 \u2208 (0, 1/2). A polynomial time algorithm for this problem (when \u03b7 > 0 and k = \u03c9(1)) is a longstanding challenge in learning theory. Grigorescu et al. [GRV11] showed an algorithm running in time (n choose k/2)^{1+4\u03b7+o(1)}. Note that this algorithm inherently requires time (n choose k/2) even when the noise rate \u03b7 is polynomially small. We observe that for sufficiently small noise rate, it is possible to break the (n choose k/2) barrier. In particular, if for some function f(n) = \u03c9(1) and \u03b1 \u2208 [1/2, 1), k = n/f(n) and \u03b7 = o(f(n)/ log n), then there is an algorithm for the problem with running time poly(n) \u00b7 (n choose k)^{1\u2212\u03b1} \u00b7 e.", "targets": "On learning k-parities with and without noise"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ee904a0d2e024391bb7d2208fb1d8ade", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "English to Indian language machine translation poses the challenge of structural and morphological divergence. This paper describes English to Indian language statistical machine translation using pre-ordering and suffix separation. The pre-ordering uses rules to transfer the structure of the source sentences prior to training and translation.
This syntactic restructuring helps statistical machine translation to tackle the structural divergence and hence achieve better translation quality. The suffix separation is used to tackle the morphological divergence between English and highly agglutinative Indian languages. We demonstrate that the use of pre-ordering and suffix separation helps in improving the quality of English to Indian language machine translation.", "targets": "MTIL17: English to Indian Language Statistical Machine Translation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0aeb2cf4ed674f648bbfb475fa3013cc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We explore the problem of binary classification in machine learning, with a twist: the classifier is allowed to abstain on any datum, professing ignorance about the true class label without committing to any prediction. This is directly motivated by applications like medical diagnosis and fraud risk assessment, in which incorrect predictions have potentially calamitous consequences. We focus on a recent spate of theoretically driven work in this area that characterizes how allowing abstentions can lead to fewer errors in very general settings. Two areas are highlighted: the surprising possibility of zero-error learning, and the fundamental tradeoff between predicting sufficiently often and avoiding incorrect predictions. We review efficient algorithms with provable guarantees for each of these areas. We also discuss connections to other scenarios, notably active learning, as they suggest promising directions of further inquiry in this emerging field.", "targets": "The Utility of Abstaining in Binary Classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c2a81819aa22475d9c7ea8cce8668595", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Evolution has resulted in highly developed abilities in many natural intelligences to quickly and accurately predict mechanical phenomena. Humans have successfully developed laws of physics to abstract and model such mechanical phenomena. In the context of artificial intelligence, a recent line of work has focused on estimating physical parameters based on sensory data and using them in physical simulators to make long-term predictions. In contrast, we investigate the effectiveness of a single neural network for end-to-end long-term prediction of mechanical phenomena. Based on extensive evaluation, we demonstrate that such networks can outperform alternate approaches even having access to ground-truth physical simulators, especially when some physical parameters are unobserved or not known a priori. Further, our network outputs a distribution of outcomes to capture the inherent uncertainty in the data. Our approach demonstrates for the first time the possibility of making actionable long-term predictions from sensor data without requiring explicit modeling of the underlying physical laws.", "targets": "Learning A Physical Long-term Predictor"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1157a41132744c2699d40f27e1426501", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Semantic segmentation requires a detailed labeling of image pixels by object category. Information derived from local image patches is necessary to describe the detailed shape of individual objects. However, this information is ambiguous and can result in noisy labels. Global inference of image content can instead capture the general semantic concepts present. We advocate that holistic inference of image concepts provides valuable information for detailed pixel labeling. We propose a generic framework to leverage holistic information in the form of a LabelBank for pixel-level segmentation. We show the ability of our framework to improve semantic segmentation performance in a variety of settings. We learn models for extracting a holistic LabelBank from visual cues, attributes, and/or textual descriptions. We demonstrate improvements in semantic segmentation accuracy on standard datasets across a range of state-of-the-art segmentation architectures and holistic inference approaches.", "targets": "LabelBank: Revisiting Global Perspectives for Semantic Segmentation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-88c2634002d149dbbb319486f1b8ea3c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. Over the last two decades, a popular generic empirical approach to the many variants of this problem has been one of alternating minimization; i.e. alternating between estimating the missing phase information, and the candidate solution. In this paper, we show that a simple alternating minimization algorithm geometrically converges to the solution of one such problem \u2013 finding a vector x from y, A, where y = |Ax| and |z| denotes a vector of element-wise magnitudes of z \u2013 under the assumption that A is Gaussian. Empirically, our algorithm performs similar to recently proposed convex techniques for this variant (which are based on \u201clifting\u201d to a convex matrix problem) in sample complexity and robustness to noise. However, our algorithm is much more efficient and can scale to large problems. Analytically, we show geometric convergence to the solution, and sample complexity that is off by log factors from obvious lower bounds. We also establish close to optimal scaling for the case when the unknown vector is sparse. Our work represents the only known theoretical guarantee for alternating minimization for any variant of phase retrieval problems in the non-convex setting.", "targets": "Phase Retrieval using Alternating Minimization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-452f2cc354e44835ac253f3978e8429b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Estimation of causal effects of interventions in dynamical systems of interacting agents is under-developed. In this paper, we explore the intricacies of this problem through standard approaches, and demonstrate the need for more appropriate methods. Working under the Neyman-Rubin causal model, we proceed to develop a causal inference method and we explicate the stability assumptions that are necessary for valid causal inference.
Our method consists of a temporal component that models the evolution of behaviors that agents adopt over time, and a behavioral component that models the distribution of agent actions conditional on adopted behaviors. This allows the imputation of long-term estimates of quantities of interest, and thus the estimation of long-term causal effects of interventions. We demonstrate our method on a dataset from behavioral game theory, and discuss open problems to stimulate future research.", "targets": "Statistical inference of long-term causal effects in multiagent systems under the Neyman-Rubin model"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fc92d8e150bc4815a7c027f28e1a7cda", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we present a conversational model that incorporates both context and participant role for two-party conversations. Different architectures are explored for integrating participant role and context information into a Long Short-term Memory (LSTM) language model. The conversational model can function as a language model or a language generation model. Experiments on the Ubuntu Dialog Corpus show that our model can capture multi-turn interaction between participants. The proposed method outperforms a traditional LSTM model as measured by language model perplexity and response ranking. Generated responses show characteristic differences between the two participant roles.", "targets": "LSTM based Conversation Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5b427da1f9814e02b9f08dcd81a8c962", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Spelling errors are introduced in text either during typing, or when the user does not know the correct phoneme or grapheme. If a language contains complex words like sandhi where two or more morphemes join based on some rules, spell checking becomes very tedious. In such situations, having a spell checker with sandhi splitter which alerts the user by flagging the errors and providing suggestions is very useful. A novel algorithm of sandhi splitting is proposed in this paper. The sandhi splitter can split about 7000 of the most common sandhi words in the Kannada language, used as test samples. The sandhi splitter was integrated with a Kannada spell checker and a mechanism for generating suggestions was added. A comprehensive, platform-independent, standalone spell checker with sandhi splitter application software was thus developed and tested extensively for its efficiency and correctness. A comparative analysis of this spell checker with sandhi splitter was made and the results concluded that the Kannada spell checker with sandhi splitter has an improved performance. It is twice as fast, 200 times more space efficient, and it is 90% accurate in the case of complex nouns and 50% accurate for complex verbs. Such a spell checker with sandhi splitter will be of foremost significance in machine translation systems, voice processing, etc. This is the first sandhi splitter in Kannada and the advantage of the novel algorithm is that it can be extended to all Indian languages.
Keywords\u2014 Natural language processing; Morphology; Computational linguistics; Sandhi splitter; Spell checker.", "targets": "Kannada Spell Checker with Sandhi Splitter"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-76ceb5fe2c8d4749a9b5c876ae437d99", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The aim of the paper is to provide an exact approach for generating a Poisson process sampled from a hierarchical CRM, without having to instantiate the infinitely many atoms of the random measures. We use completely random measures (CRM) and hierarchical CRM to define a prior for Poisson processes. We derive the marginal distribution of the resultant point process, when the underlying CRM is marginalized out. Using well-known properties unique to Poisson processes, we were able to derive an exact approach for instantiating a Poisson process with a hierarchical CRM prior. Furthermore, we derive Gibbs sampling strategies for hierarchical CRM models based on the Chinese restaurant franchise sampling scheme. As an example, we present the sum of generalized gamma process (SGGP), and show its application in topic modelling. We show that one can determine the power-law behaviour of the topics and words in a Bayesian fashion, by defining a prior on the parameters of SGGP.", "targets": "On collapsed representation of hierarchical Completely Random Measures"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ad75f3e4f08f4e1a89774d3739c752ac", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "claims trigrams trigrams dependencies AMT-trained fixed dependencies 3.699% 4.697% 5.974% 18.18% 3.99% 2.797% 3.696% 2.597% 3.297% FEATURES (TFIDFs of) SVM Classifier's Error Rate DATASET", "targets": "Improving Automated Patent Claim Parsing: Dataset, System, and Experiments"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-421f04b2531446188dbe142a345c79b2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we explore a symmetry-based search space reduction technique which can speed up optimal pathfinding on undirected uniform-cost grid maps by up to 38 times. Our technique decomposes grid maps into a set of empty rectangles, removing from each rectangle all interior nodes and possibly some from along the perimeter. We then add a series of macro-edges between selected pairs of remaining perimeter nodes to facilitate provably optimal traversal through each rectangle. We also develop a novel online pruning technique to further speed up search. Our algorithm is fast, memory efficient and retains the same optimality and completeness guarantees as searching on an unmodified grid map.", "targets": "Symmetry-Based Search Space Reduction For Grid Maps"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-49481e3b72004a898cb547e8070521f9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Recommender systems leverage user demographic information, such as age, gender, etc., to personalize recommendations and better place their targeted ads. Oftentimes, users do not volunteer this information due to privacy concerns, or due to a lack of initiative in filling out their online profiles. We illustrate a new threat in which a recommender learns private attributes of users who do not voluntarily disclose them. We design both passive and active attacks that solicit ratings for strategically selected items, and could thus be used by a recommender system to pursue this hidden agenda. Our methods are based on a novel usage of Bayesian matrix factorization in an active learning setting. Evaluations on multiple datasets illustrate that such attacks are indeed feasible and use significantly fewer rated items than static inference methods. Importantly, they succeed without sacrificing the quality of recommendations to users.", "targets": "Recommending with an Agenda: Active Learning of Private Attributes using Matrix Factorization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-85e87f63ae774411b5f3a3dea7ea0797", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The majority of online display ads are served through real-time bidding (RTB) \u2014 each ad display impression is auctioned off in real-time when it is just being generated from a user visit. To place an ad automatically and optimally, it is critical for advertisers to devise a learning algorithm to cleverly bid an ad impression in real-time. Most previous works consider the bid decision as a static optimization problem of either treating the value of each impression independently or setting a bid price to each segment of ad volume. However, the bidding for a given ad campaign would repeatedly happen during its life span before the budget runs out. As such, each bid is strategically correlated by the constrained budget and the overall effectiveness of the campaign (e.g., the rewards from generated clicks), which is only observed after the campaign has completed. Thus, it is of great interest to devise an optimal bidding strategy sequentially so that the campaign budget can be dynamically allocated across all the available impressions on the basis of both the immediate and future rewards. In this paper, we formulate the bid decision process as a reinforcement learning problem, where the state space is represented by the auction information and the campaign\u2019s real-time parameters, while an action is the bid price to set. By modeling the state transition via auction competition, we build a Markov Decision Process framework for learning the optimal bidding policy to optimize the advertising performance in the dynamic real-time bidding environment. Furthermore, the scalability problem from the large real-world auction volume and campaign budget is well handled by state value approximation using neural networks. 
The empirical study on two large-scale real-world datasets and the live A/B testing on a commercial platform have demonstrated the superior performance and high efficiency compared to state-of-the-art methods.", "targets": "Real-Time Bidding by Reinforcement Learning in Display Advertising"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-298120deaaef46ae814ba65f64a3815d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "An important way to make large training sets is to gather noisy labels from crowds of non-experts. We propose a method to aggregate noisy labels collected from a crowd of workers or annotators. Eliciting labels is important in tasks such as judging web search quality and rating products. Our method assumes that labels are generated by a probability distribution over items and labels. We formulate the method by drawing parallels between Gaussian Mixture Models (GMMs) and Restricted Boltzmann Machines (RBMs) and show that the problem of vote aggregation can be viewed as one of clustering. We use K-RBMs to perform clustering. We finally show some empirical evaluations over real datasets.", "targets": "Vote Aggregation as a Clustering Problem"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7db6441f6f47476bae213ac7f110f4a8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents a powerful genetic algorithm (GA) to solve the traveling salesman problem (TSP). To construct a powerful GA, I use edge swapping (ES) with a local search procedure to determine good combinations of building blocks of parent solutions for generating even better offspring solutions. Experimental results on well-studied TSP benchmarks demonstrate that the proposed GA is competitive in finding very high-quality solutions on instances with up to 16,862 cities.", "targets": "A Powerful Genetic Algorithm for Traveling Salesman Problem"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-89ec796a24d045ff91f15bc45e4e2a1c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Machine-learning techniques are widely used in security-related applications, like spam and malware detection. However, in such settings, they have been shown to be vulnerable to adversarial attacks, including the deliberate manipulation of data at test time to evade detection. In this work, we focus on the vulnerability of linear classifiers to evasion attacks. This can be considered a relevant problem, as linear classifiers have been increasingly used in embedded systems and mobile devices for their low processing time and memory requirements. We exploit recent findings in robust optimization to investigate the link between regularization and security of linear classifiers, depending on the type of attack. We also analyze the relationship between the sparsity of feature weights, which is desirable for reducing processing cost, and the security of linear classifiers. We further propose a novel octagonal regularizer that allows us to achieve a proper trade-off between them. 
Finally, we empirically show how this regularizer can improve classifier security and sparsity in real-world application examples including spam and malware detection.", "targets": "On Security and Sparsity of Linear Classifiers for Adversarial Settings"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-083bbe36b573457d9efdf76b55051bfe", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "One of the core components of modern spoken dialogue systems is the belief tracker, which estimates the user\u2019s goal at every step of the dialogue. However, most current approaches have difficulty scaling to larger, more complex dialogue domains. This is due to their dependency on either: a) Spoken Language Understanding models that require large amounts of annotated training data; or b) hand-crafted lexicons for capturing some of the linguistic variation in users\u2019 language. We propose a novel Neural Belief Tracking (NBT) framework which overcomes these problems by building on recent advances in representation learning. NBT models reason over pre-trained word vectors, learning to compose them into distributed representations of user utterances and dialogue context. Our evaluation on two datasets shows that this approach surpasses past limitations, matching the performance of state-of-the-art models which rely on hand-crafted semantic lexicons and outperforming them when such lexicons are not provided.", "targets": "Neural Belief Tracker: Data-Driven Dialogue State Tracking"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-40a55737fd8c4d35bdb3c4c5cc051ed1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Bayesian Belief Networks (BBNs) are a powerful formalism for reasoning under uncertainty but bear some severe limitations: they require a large amount of information before any reasoning process can start, they have limited contradiction handling capabilities, and their ability to provide explanations for their conclusion is still controversial. There exists a class of reasoning systems, called Truth Maintenance Systems (TMSs), which are able to deal with partially specified knowledge, to provide well-founded explanation for their conclusions, and to detect and handle contradictions. TMSs incorporating a measure of uncertainty are called Belief Maintenance Systems (BMSs). This paper describes how a BMS based on probabilistic logic can be applied to BBNs, thus introducing a new class of BBNs, called Ignorant Belief Networks, able to incrementally deal with partially specified conditional dependencies, to provide explanations, and to detect and handle contradictions.", "targets": "Belief Maintenance in Bayesian Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d00b7edcd17042e88f7ece4ed752e07b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We analyze a class of estimators based on convex relaxation for solving high-dimensional matrix decomposition problems. 
The observations are noisy realizations of a linear transformation X of the sum of an (approximately) low rank matrix \u0398\u22c6 with a second matrix \u0393\u22c6 endowed with a complementary form of low-dimensional structure; this set-up includes many statistical models of interest, including forms of factor analysis, multi-task regression with shared structure, and robust covariance estimation. We derive a general theorem that gives upper bounds on the Frobenius norm error for an estimate of the pair (\u0398\u22c6,\u0393\u22c6) obtained by solving a convex optimization problem that combines the nuclear norm with a general decomposable regularizer. Our results are based on imposing a \u201cspikiness\u201d condition that is related to but milder than singular vector incoherence. We specialize our general result to two cases that have been studied in past work: low rank plus an entrywise sparse matrix, and low rank plus a columnwise sparse matrix. For both models, our theory yields non-asymptotic Frobenius error bounds for both deterministic and stochastic noise matrices, and applies to matrices \u0398\u22c6 that can be exactly or approximately low rank, and matrices \u0393\u22c6 that can be exactly or approximately sparse. Moreover, for the case of stochastic noise matrices and the identity observation operator, we establish matching lower bounds on the minimax error, showing that our results cannot be improved beyond constant factors. The sharpness of our theoretical predictions is confirmed by numerical simulations.", "targets": "Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c2798123e94f4c04bcea41594582930e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We undertook a study of the use of a memristor network for music generation, making use of the memristor\u2019s memory to go beyond the Markov hypothesis. Seed transition matrices are created and populated using memristor equations, which are shown to generate musical melodies and change in style over time as a result of feedback into the transition matrix. The spiking properties of simple memristor networks are demonstrated and discussed with reference to applications of music making. The limitations of simulating composing memristor networks in von Neumann hardware are discussed, and a hardware solution based on physical memristor properties is presented.", "targets": "Beyond Markov Chains, Towards Adaptive Memristor Network-based Music Generation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8f3b66965f444729b911584b94f87e2b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a new learning method for heterogeneous domain adaptation (HDA), in which the data from the source domain and the target domain are represented by heterogeneous features with different dimensions. Using two different projection matrices, we first transform the data from the two domains into a common subspace in order to measure the similarity between the data from the two domains. We then propose two new feature mapping functions to augment the transformed data with their original features and zeros. 
The existing learning methods (e.g., SVM and SVR) can be readily incorporated with our newly proposed augmented feature representations to effectively utilize the data from both domains for HDA. Using the hinge loss function in SVM as an example, we introduce the detailed objective function in our method called Heterogeneous Feature Augmentation (HFA) for a linear case and also describe its kernelization in order to efficiently cope with data with very high dimensions. Moreover, we also develop an alternating optimization algorithm to effectively solve the nontrivial optimization problem in our HFA method. Comprehensive experiments on two benchmark datasets clearly demonstrate that HFA outperforms the existing HDA methods.", "targets": "Learning with Augmented Features for Heterogeneous Domain Adaptation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-808a36f1baa549c0aff8b013a0540646", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The paper presents a novel technique called \u201cStructural Crossing-Over\u201d to synthesize qualified data for training machine learning-based handwriting recognition. The proposed technique can provide a greater variety of patterns of training data than the existing approaches such as elastic distortion and tangent-based affine transformation. A couple of training characters are chosen, then they are analyzed by their similar and different structures, and finally are crossed over to generate the new characters. The experiments are set to compare the performances of tangent-based affine transformation and the proposed approach in terms of the variety of generated characters and the percentage of recognition errors. The standard MNIST corpus including 60,000 training characters and 10,000 test characters is employed in the experiments. The proposed technique uses 1,000 characters to synthesize 60,000 characters, and then uses these data to train and test the benchmark handwriting recognition system that exploits Histogram of Gradient (HOG) as features and Support Vector Machine (SVM) as the recognizer. The experimental result yields an error rate of 8.06%. It significantly outperforms the tangent-based affine transformation and the original MNIST training data, which are 11.74% and 16.55%, respectively.", "targets": "AUTOMATIC TRAINING DATA SYNTHESIS FOR HANDWRITING RECOGNITION USING THE STRUCTURAL CROSSING-OVER TECHNIQUE"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0be266d0e7c74c33b17537a370f00a2c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. 
This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.", "targets": "A large annotated corpus for learning natural language inference"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4c2d273743d64a67a483c5af12b53f05", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Building on recent advances in image caption generation and optical character recognition (OCR), we present a general-purpose, deep learning-based system to decompile an image into presentational markup. While this task is a well-studied problem in OCR, our method takes an inherently different, data-driven approach. Our model does not require any knowledge of the underlying markup language, and is simply trained end-to-end on real-world example data. The model employs a convolutional network for text and layout recognition in tandem with an attention-based neural machine translation system. To train and evaluate the model, we introduce a new dataset of real-world rendered mathematical expressions paired with LaTeX markup, as well as a synthetic dataset of web pages paired with HTML snippets. Experimental results show that the system is surprisingly effective at generating accurate markup for both datasets. While a standard domain-specific LaTeX OCR system achieves around 25% accuracy, our model reproduces the exact rendered image on 75% of examples.", "targets": "What You Get Is What You See: A Visual Markup Decompiler"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-10f848302e0a4ddc95e88f877be2c7fc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we extend the neural Turing machine (NTM) into a dynamic neural Turing machine (D-NTM) by introducing a trainable memory addressing scheme. This scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies including both linear and nonlinear ones. We implement the D-NTM with both soft, differentiable and hard, non-differentiable read/write mechanisms. We investigate the mechanisms and effects for learning to read and write to a memory through experiments on Facebook bAbI tasks using both a feedforward and a GRU controller. The D-NTM is evaluated on a set of the Facebook bAbI tasks and shown to outperform NTM and LSTM baselines.", "targets": "Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1a112749a48144f590a87bb1a9019034", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Distilling from a knowledge base only the part that is relevant to a subset of the alphabet, which is recognized as forgetting, has attracted extensive interest in the AI community. In standard propositional logic, a general algorithm of forgetting and its computation-oriented investigation in various fragments whose satisfiability is tractable are still lacking. The paper aims at filling the gap. 
After exploring some basic properties of forgetting in propositional logic, we present a resolution-based algorithm of forgetting for the CNF fragment, and some complexity results about forgetting in the Horn, renamable Horn, q-Horn, Krom, DNF and CNF fragments of propositional logic.", "targets": "On Forgetting in Tractable Propositional Fragments"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5f6c898f7d0f4ab3a15970979539b881", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Traditional algorithms for stochastic optimization require projecting the solution at each iteration into a given domain to ensure its feasibility. When facing complex domains, such as positive semi-definite cones, the projection operation can be expensive, leading to a high computational cost per iteration. In this paper, we present a novel algorithm that aims to reduce the number of projections for stochastic optimization. The proposed algorithm combines the strength of several recent developments in stochastic optimization, including mini-batch, extra-gradient, and epoch gradient descent, in order to effectively explore the smoothness and strong convexity. We show, both in expectation and with a high probability, that when the objective function is both smooth and strongly convex, the proposed algorithm achieves the optimal O(1/T) rate of convergence with only O(log T) projections. Our empirical study verifies the theoretical result.", "targets": "O(log T) Projections for Stochastic Optimization of Smooth and Strongly Convex Functions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-68040f07b48d426e9c20daf74f4a416c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Minimizing the rank of a matrix subject to affine constraints is a fundamental problem with many important applications in machine learning and statistics. In this paper we propose a simple and fast algorithm SVP (Singular Value Projection) for rank minimization with affine constraints (ARMP) and show that SVP recovers the minimum rank solution for affine constraints that satisfy the restricted isometry property. We show robustness of our method to noise with a strong geometric convergence rate even for noisy measurements. Our results improve upon a recent breakthrough by Recht, Fazel and Parrilo [RFP07] in three significant ways: 1) our method (SVP) is significantly simpler to analyse and easier to implement, 2) we give geometric convergence guarantees for SVP and, as demonstrated empirically, SVP is significantly faster on real-world and synthetic problems, 3) we give optimality and geometric convergence guarantees even for the noisy version of ARMP. In addition, we address the practically important problem of low-rank matrix completion, which can be seen as a special case of ARMP. However, the affine constraints defining the matrix-completion problem do not obey the restricted isometry property in general. We empirically demonstrate that our algorithm recovers low-rank incoherent matrices from an almost optimal number of uniformly sampled entries. We make partial progress towards proving exact recovery and provide some intuition for the performance of SVP applied to matrix completion by showing a more restricted isometry property. 
Our algorithm outperforms existing methods, such as those of [RFP07, CR08, CT09, CCS08, KOM09], for ARMP and the matrix-completion problem by an order of magnitude and is also significantly more robust to noise.", "targets": "Guaranteed Rank Minimization via Singular Value Projection"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fcdfbc66516948d4ba1648464e609387", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Automatically generated political event data is an important part of the social science data ecosystem. The approaches for generating this data, though, have remained largely the same for two decades. During this time, the field of computational linguistics has progressed tremendously. This paper presents an overview of political event data, including methods and ontologies, and a set of experiments to determine the applicability of deep neural networks to the extraction of political events from news text.", "targets": "Generating Politically-Relevant Event Data"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5759bfe072f64cd8aa9211f24aae0a11", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In recent years, Deep Learning has become the go-to solution for a broad range of applications, often outperforming the state of the art. However, it is important, for both theoreticians and practitioners, to gain a deeper understanding of the difficulties and limitations associated with common approaches and algorithms. We describe four families of problems for which some of the commonly used existing algorithms fail or suffer significant difficulty. We illustrate the failures through practical experiments, and provide theoretical insights explaining their source, and how they might be remedied.", "targets": "Failures of Deep Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9a4224e758ac41c0bcb8258dd34361ec", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In order to tell stories in different voices for different audiences, interactive story systems require: (1) a semantic representation of story structure, and (2) the ability to automatically generate story and dialogue from this semantic representation using some form of Natural Language Generation (nlg). However, there has been limited research on methods for linking story structures to narrative descriptions of scenes and story events. In this paper we present an automatic method for converting from Scheherazade\u2019s story intention graph, a semantic representation, to the input required by the personage nlg engine. Using 36 Aesop Fables distributed in DramaBank, a collection of story encodings, we train translation rules on one story and then test these rules by generating text for the remaining 35. The results are measured in terms of the string similarity metrics Levenshtein Distance and BLEU score. The results show that we can generate the 35 stories with correct content: the test set stories on average are close to the output of the Scheherazade realizer, which was customized to this semantic representation. We provide some examples of story variations generated by personage. 
In future work, we will experiment with measuring the quality of the same stories generated in different voices, and with techniques for making storytelling interactive.", "targets": "Generating Different Story Tellings from Semantic Representations of Narrative"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bfc7ecd07c7d465e9c95ef6d5d989f48", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper provides a theoretical explanation of the clustering aspect of nonnegative matrix factorization (NMF). We prove that even without imposing orthogonality or sparsity constraints on the basis and/or coefficient matrix, NMF can still give clustering results, thus providing theoretical support for many works, e.g., Xu et al. [1] and Kim et al. [2], that show the superiority of the standard NMF as a clustering method. Keywords\u2014bound-constrained optimization, clustering method, non-convex optimization, nonnegative matrix factorization", "targets": "On the clustering aspect of nonnegative matrix factorization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5379b4557bcf4513aaf3fc7dc7e7ab3e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The assignment of unique IDs to n variables is an instance of the well-known renaming problem, for which multiple algorithms have been proposed in the literature on distributed algorithms. However, to our knowledge, all these algorithms focus on robustness to failures, and ignore the issue of privacy. On the contrary, in this paper we do not consider agent failures, and we rather need an algorithm that protects agent and topology privacy. To this purpose, we propose Algorithm 1, which is a modification of the pseudo-tree generation algorithm in Online Appendix 2, and is an improved version of the algorithm proposed by L\u00e9aut\u00e9 and Faltings (2009). Each variable x is assigned a unique number id_x that corresponds to the order in which it is first visited during the distributed traversal of the constraint graph (or, more precisely, an upper bound thereon). This is done by appending to each CHILD message the number id of variables visited so far (lines 8, 29 and 31). Each variable adds a random number to id so as not to leak any useful upper bound on its number of neighbors (lines 5 and 15). At the end of this algorithm, the root variable discovers an upper bound n^+ on the total number of variables, and reveals it to everyone (lines 35 and 22 to 24).", "targets": "Protecting Privacy through Distributed Computation in Multi-agent Decision Making Online Appendix 3: Unique ID Generation Algorithm"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8c4d655e3c154582881f0d137b853398", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Artificial object perception usually relies on a priori defined models and feature extraction algorithms. We study how the concept of object can be grounded in the sensorimotor experience of a naive agent. 
Without any knowledge about itself or the world it is immersed in, the agent explores its sensorimotor space and identifies objects as consistent networks of sensorimotor transitions, independent from their context. A fundamental drive for prediction is assumed to explain the emergence of such networks from a developmental standpoint. An algorithm is proposed and tested to illustrate the approach.", "targets": "Grounding object perception in a naive agent\u2019s sensorimotor experience"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dccc6c6551cd4e05bff3377489a4f914", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper studies the problem of learning weighted automata from a finite labeled training sample. We consider several general families of weighted automata defined in terms of three different measures: the norm of an automaton\u2019s weights, the norm of the function computed by an automaton, or the norm of the corresponding Hankel matrix. We present new data-dependent generalization guarantees for learning weighted automata expressed in terms of the Rademacher complexity of these families. We further present upper bounds on these Rademacher complexities, which reveal key new data-dependent terms related to the complexity of learning weighted automata.", "targets": "Generalization Bounds for Weighted Automata"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-05c78e4b397843e989829b1091d4784c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a new method to enforce priors on the solution of the nonnegative matrix factorization (NMF). The proposed algorithm can be used for denoising or single-channel source separation (SCSS) applications. The NMF solution is guided to follow the Minimum Mean Square Error (MMSE) estimates under Gaussian mixture prior models (GMM) for the source signal. In SCSS applications, the spectra of the observed mixed signal are decomposed as a weighted linear combination of trained basis vectors for each source using NMF. In this work, the NMF decomposition weight matrices are treated as a distorted image by a distortion operator, which is learned directly from the observed signals. The MMSE estimate of the weights matrix under a GMM prior and a log-normal distribution for the distortion is then found to improve the NMF decomposition results. The MMSE estimate is embedded within the optimization objective to form a novel regularized NMF cost function. The corresponding update rules for the new objectives are derived in this paper. Experimental results show that the proposed regularized NMF algorithm improves the source separation performance compared with using NMF without a prior or with other prior models.", "targets": "Source Separation using Regularized NMF with MMSE Estimates under GMM Priors with Online Learning for The Uncertainties"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e603dd46e8364470ae7a95aae29a3cae", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "We describe a strategy for the acquisition of training data necessary to build a social-media-driven early detection system for individuals at risk for (preventable) type 2 diabetes mellitus (T2DM). The strategy uses a game-like quiz with data and questions acquired semi-automatically from Twitter. The questions are designed to inspire participant engagement and collect relevant data to train a public-health model applied to individuals. Prior systems designed to use social media such as Twitter to predict obesity (a risk factor for T2DM) operate on entire communities such as states, counties, or cities, based on statistics gathered by government agencies. Because there is considerable variation among individuals within these groups, training data on the individual level would be more effective, but this data is difficult to acquire. The approach proposed here aims to address this issue. Our strategy has two steps. First, we trained a random forest classifier on data gathered from (public) Twitter statuses and state-level statistics with state-of-the-art accuracy. We then converted this classifier into a 20-questions-style quiz and made it available online. In doing so, we achieved high engagement with individuals who took the quiz, while also building a training set of voluntarily supplied individual-level data for future classification.", "targets": "Towards Using Social Media to Identify Individuals at Risk for Preventable Chronic Illness"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ffb99b3cb8514a01b5127f3e9ce47a5f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The RoboCup 2D Simulation League incorporates several challenging features, setting a benchmark for Artificial Intelligence (AI). In this paper we describe some of the ideas and tools around the development of our team, Gliders2012. In our description, we focus on the evaluation function as one of our central mechanisms for action selection. We also point to a new framework for watching log files in a web browser that we release for use and further development by the RoboCup community. Finally, we also summarize results of the group and final matches we played during RoboCup 2012, with Gliders2012 finishing 4th out of 19 teams.", "targets": "Gliders2012: Development and Competition Results"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-351b5217f745437dbafb540364998681", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce the first general-purpose slice sampling inference engine for probabilistic programs. This engine is released as part of StocPy, a new Turing-complete probabilistic programming language, available as a Python library. We present a transdimensional generalisation of slice sampling which is necessary for the inference engine to work on traces with different numbers of random variables. We show that StocPy compares favourably to other PPLs in terms of flexibility and usability, and that slice sampling can outperform previously introduced inference methods. 
Our experiments include a logistic regression, HMM, and Bayesian Neural Net.", "targets": "Slice Sampling for Probabilistic Programming"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d65188c9af0c49be8666b1d65d4c8ba3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper tackles temporal resolution of documents, such as determining the time period a document is about or when it was written, based only on its text. We apply techniques from information retrieval that predict dates via language models over a discretized timeline. Unlike most previous works, we rely solely on temporal cues implicit in the text. We consider both document-likelihood and divergence-based techniques and several smoothing methods for both of them. Our best model predicts the mid-point of individuals\u2019 lives with a median error of 22 years and a mean error of 36 years for Wikipedia biographies from 3800 B.C. to the present day. We also show that this approach works well when training on such biographies and predicting dates both for nonbiographical Wikipedia pages about specific years (500 B.C. to 2010 A.D.) and for publication dates of short stories (1798 to 2008). Together, our work shows that, even in the absence of temporal extraction resources, it is possible to achieve remarkable temporal locality across a diverse set of texts.", "targets": "Dating Texts without Explicit Temporal Cues"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8cf8d216b2ef4eb6b71dcb2fe3ab2d49", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In reinforcement learning, we often define goals by specifying rewards within desirable states. One problem with this approach is that we typically need to redefine the rewards each time the goal changes, which often requires some understanding of the solution in the agent\u2019s environment. When humans are learning to complete tasks, we regularly utilize alternative sources that guide our understanding of the problem. Such task representations allow one to specify goals on their own terms, thus providing specifications that can be appropriately interpreted across various environments. This motivates our own work, in which we represent goals in environments that are different from the agent\u2019s. We introduce Cross-Domain Perceptual Reward (CDPR) functions, learned rewards that represent the visual similarity between an agent\u2019s state and a cross-domain goal image. We report results for learning the CDPRs with a deep neural network and using them to solve two tasks with deep reinforcement learning.", "targets": "Cross-Domain Perceptual Reward Functions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f75f549290da4086a2c69580f058346c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce a new spatial data structure for high dimensional data called the approximate principal direction tree (APD tree) that adapts to the intrinsic dimension of the data. Our algorithm ensures vector-quantization accuracy similar to that of computationally-expensive PCA trees with similar time-complexity to that of lower-accuracy RP trees. 
APD trees use a small number of power-method iterations to find splitting planes for recursively partitioning the data. As such, they provide a natural trade-off between the running-time and accuracy achieved by RP and PCA trees. Our theoretical results establish a) strong performance guarantees regardless of the convergence rate of the power method and b) that O(log d) iterations suffice to establish the guarantee of PCA trees when the intrinsic dimension is d. We demonstrate this trade-off and the efficacy of our data structure on both the CPU and GPU.", "targets": "Approximate Principal Direction Trees"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8ab4083d11c449ccbf63b922fd06ad41", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Progress in language and image understanding by machines has sparked the interest of the research community in more open-ended, holistic tasks, and refueled an old AI dream of building intelligent machines. We discuss a few prominent challenges that characterize such holistic tasks and argue for \u201cquestion answering about images\u201d as a particularly appealing instance of such a holistic task. In particular, we point out that it is a version of a Turing Test that is likely to be more robust to over-interpretations and contrast it with tasks like grounding and generation of descriptions. Finally, we discuss tools to measure progress in this field.", "targets": "Hard to Cheat: A Turing Test based on Answering Questions about Images"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d66b8f1028d646ffbb764d269a164a40", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The collection and analysis of user data drives improvements in the app and web ecosystems, but comes with risks to privacy. This paper examines discrete distribution estimation under local privacy, a setting wherein service providers can learn the distribution of a categorical statistic of interest without collecting the underlying data. We present new mechanisms, including hashed k-ary Randomized Response (k-RR), that empirically meet or exceed the utility of existing mechanisms at all privacy levels. New theoretical results demonstrate the order-optimality of k-RR and the existing RAPPOR mechanism at different privacy regimes.", "targets": "Discrete Distribution Estimation under Local Privacy"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ca8fabf15f5748c89f5a84d59fddb04e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "One important challenge for a set of agents to achieve more efficient collaboration is for these agents to maintain proper models of each other. An important aspect of these models of other agents is that they are often partial and incomplete. Thus far, there are two common representations of agent models: MDP-based and action-based, which are both based on action modeling. In many applications, agent models may not have been given, and hence must be learnt. 
While it may seem convenient to use either MDP-based or action-based models for learning, in this paper, we introduce a new representation based on capability models, which has several unique advantages. First, we show that learning capability models can be performed efficiently online via Bayesian learning, and the learning process is robust to high degrees of incompleteness in plan execution traces (e.g., with only start and end states). While high degrees of incompleteness in plan execution traces present learning challenges for MDP-based and action-based models, capability models can still learn to abstract useful information out of these traces. As a result, capability models are useful in applications in which such incompleteness is common, e.g., a robot learning a human model from observations and interactions. Furthermore, when used in multi-agent planning (with each agent modeled separately), capability models provide flexible abstraction of actions. The limitation, however, is that the synthesized plan is incomplete and abstract.", "targets": "Learning of Agent Capability Models with Applications in Multi-agent Planning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b6bf07bf4f0a40539221c052c06eea87", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present an alternative methodology for the analysis of algorithms, based on the concept of expected discounted reward. This methodology naturally handles algorithms that do not always terminate, so it can (theoretically) be used with partial algorithms for undecidable problems, such as those found in artificial general intelligence (AGI) and automated theorem proving. We mention an approach to self-improving AGI enabled by this methodology.", "targets": "Analysis of Algorithms and Partial Algorithms"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d4da71c18d5e40df83bcb8c122a5254e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we propose a method which uses semi-supervised convolutional neural networks (CNNs) to select in-domain training data for statistical machine translation. This approach is particularly effective when only tiny amounts of in-domain data are available. The in-domain data and randomly sampled general-domain data are used to train a data selection model with a semi-supervised CNN, then this model computes domain relevance scores for all the sentences in the general-domain data set. The sentence pairs with top scores are selected to train the system. We carry out experiments on 4 language directions with three test domains. Compared with strong baseline systems trained with large amounts of data, this method can improve the performance by up to 3.1 BLEU. Its performance is significantly better than three state-of-the-art language-model-based data selection methods. 
We also show that the in-domain data used to train the selection model could be as few as 100 sentences, which makes fine-grained topic-dependent translation adaptation possible.", "targets": "Semi-supervised Convolutional Networks for Translation Adaptation with Tiny Amount of In-domain Data"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e2d034bd59af4ff39456fede2530c9aa", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The pre-image problem has to be solved during inference by most structured output predictors. For string kernels, this problem corresponds to finding the string associated to a given input. An algorithm capable of solving or finding good approximations to this problem would have many applications in computational biology and other fields. This work uses a recent result on combinatorial optimization of linear predictors based on string kernels to develop, for the pre-image, a low-complexity upper bound valid for many string kernels. This upper bound is used with success in a branch-and-bound search algorithm. Applications and results in the discovery of druggable peptides are presented and discussed.", "targets": "On the String Kernel Pre-Image Problem with Applications in Drug Discovery"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-77e0f04773ac43dd806cf3ba8e515ea9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider the problem of online active learning to collect data for regression modeling. Specifically, we consider a decision maker with a limited experimentation budget who must efficiently learn an underlying linear population model. Our main contribution is a novel threshold-based algorithm for selection of the most informative observations; we characterize its performance and fundamental lower bounds. We extend the algorithm and its guarantees to sparse linear regression in high-dimensional settings. Simulations suggest the algorithm is remarkably robust: it provides significant benefits over passive random sampling in real-world datasets that exhibit high nonlinearity and high dimensionality \u2014 significantly reducing both the mean and variance of the squared error.", "targets": "Online Active Linear Regression via Thresholding"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-47869e7845e444679288d7cf6e2df77c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Americans spend about a third of their time online, with many participating in online conversations on social and political issues. We hypothesize that social media arguments on such issues may be more engaging and persuasive than traditional media summaries, and that particular types of people may be more or less convinced by particular styles of argument, e.g. emotional arguments may resonate with some personalities while factual arguments resonate with others. We report a set of experiments testing at large scale how audience variables interact with argument style to affect the persuasiveness of an argument, an under-researched topic within natural language processing. 
We show that belief change is affected by personality factors, with conscientious, open and agreeable people being more convinced by emotional arguments.", "targets": "Argument Strength is in the Eye of the Beholder: Audience Effects in Persuasion"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9a90e599089e4d23b6786d01a6d5435d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper introduces a self-organizing traffic signal system for an urban road network. The key elements of this system are agents that control traffic signals at intersections. Each agent uses an interval microscopic traffic model to predict the effects of its possible control actions in a short time horizon. The executed control action is selected on the basis of predicted delay intervals. Since the prediction results are represented by intervals, the agents can recognize and suspend those control actions whose positive effect on the performance of traffic control is uncertain. Evaluation of the proposed traffic control system was performed in a simulation environment. The simulation experiments have shown that the proposed approach results in improved performance, particularly for non-uniform traffic streams.", "targets": "A self-organizing system for urban traffic control based on predictive interval microscopic model"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-275ae3f186594ae18ea81b012822ab77", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Imitation learning has traditionally been applied to learn a single task from demonstrations thereof. The requirement of structured and isolated demonstrations limits the scalability of imitation learning approaches as they are difficult to apply to real-world scenarios, where robots have to be able to execute a multitude of tasks. In this paper, we propose a multi-modal imitation learning framework that is able to segment and imitate skills from unlabelled and unstructured demonstrations by learning skill segmentation and imitation learning jointly. The extensive simulation results indicate that our method can efficiently separate the demonstrations into individual skills and learn to imitate them using a single multi-modal policy. The video of our experiments is available at http://sites.google.com/view/nips17intentiongan.", "targets": "Multi-Modal Imitation Learning from Unstructured Demonstrations using Generative Adversarial Nets"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7e98c5e6bc2245b09783f5c38ee5adeb", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The Conditional Simple Temporal Network (CSTN) is a constraint-based graph formalism for conditional temporal planning. It offers a more flexible formalism than the equivalent CSTP model of Tsamardinos, Vidal and Pollack, from which it was derived mainly as a sound formalization. Three notions of consistency arise for CSTNs and CSTPs: weak, strong, and dynamic. Dynamic consistency is the most interesting notion, but it is also the most challenging and it was conjectured to be hard to assess. 
Tsamardinos, Vidal and Pollack gave a doubly-exponential time algorithm for deciding whether a CSTN is dynamically consistent and for producing, in the positive case, a dynamic execution strategy of exponential size. In the present work we offer a proof that deciding whether a CSTN is dynamically consistent is coNP-hard and provide the first singly-exponential time algorithm for this problem, also producing a dynamic execution strategy whenever the input CSTN is dynamically consistent. The algorithm is based on a novel connection with Mean Payoff Games, a family of two-player infinite games played on finite graphs, well known for having applications in model-checking and formal verification. The presentation of this connection is mediated by the Hyper Temporal Network model, a tractable generalization of Simple Temporal Networks whose consistency checking is equivalent to determining Mean Payoff Games. In order to analyze the algorithm we introduce a refined notion of dynamic-consistency, named \u03b5-dynamic-consistency, and present a sharp lower bounding analysis on the critical value of the reaction time \u03b5\u0302 where the CSTN transits from being, to not being, dynamically consistent. The proof technique introduced in this analysis of \u03b5\u0302 is applicable more generally when dealing with linear difference constraints which include strict inequalities.", "targets": "Dynamic Consistency of Conditional Simple Temporal Networks via Mean Payoff Games: a Singly-Exponential Time DC-Checking"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-25956d8644224911ac3cfcc0aae34761", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Machine-learning techniques have been recently used with spectacular results to generate artefacts such as music or text. However, these techniques are still unable to capture and generate artefacts that are convincingly structured. In this paper we present an approach to generate structured musical sequences. We introduce a mechanism for efficiently sampling variations of musical sequences. Given an input sequence and a statistical model, this mechanism samples a set of sequences whose distance to the input sequence is approximately within specified bounds. This mechanism is implemented as an extension of belief propagation, and uses local fields to bias the generation. We show experimentally that sampled sequences are indeed closely correlated to the standard musical similarity measure defined by Mongeau and Sankoff. We then show how this mechanism can be used to implement composition strategies that enforce arbitrary structure on a musical lead sheet generation problem.", "targets": "Sampling Variations of Lead Sheets"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6912cb201d3b4a9c9139102a08631e82", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Learning representations of data, and in particular learning features for a subsequent prediction task, has been a fruitful area of research delivering impressive empirical results in recent years. However, relatively little is understood about what makes a representation \u2018good\u2019. 
We propose the idea of a risk gap induced by representation learning for a given prediction context, which measures the difference in the risk of some learner using the learned features as compared to the original inputs. We describe a set of sufficient conditions for unsupervised representation learning to provide a benefit, as measured by this risk gap. These conditions decompose the problem of when representation learning works into its constituent parts, which can be separately evaluated using an unlabeled sample, suitable domain-specific assumptions about the joint distribution, and analysis of the feature learner and subsequent supervised learner. We provide two examples of such conditions in the context of specific properties of the unlabeled distribution, namely when the data lies close to a low-dimensional manifold and when it forms clusters. We compare our approach to a recently proposed analysis of semi-supervised learning.", "targets": "A Modular Theory of Feature Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d8128bbce0aa4c08a08f6b0e8defd00b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In many real applications that use and analyze networked data, the links in the network graph may be erroneous, or derived from probabilistic techniques. In such cases, the node classification problem can be challenging, since the unreliability of the links may affect the final results of the classification process. If the information about link reliability is not used explicitly, the classification accuracy in the underlying network may be affected adversely. In this paper, we focus on situations that require the analysis of the uncertainty that is present in the graph structure. We study the novel problem of node classification in uncertain graphs, by treating uncertainty as a first-class citizen. We propose two techniques based on a Bayes model and automatic parameter selection, and show that the incorporation of uncertainty in the classification process as a first-class citizen is beneficial. We experimentally evaluate the proposed approach using different real data sets, and study the behavior of the algorithms under different conditions. The results demonstrate the effectiveness and efficiency of our approach.", "targets": "Node Classification in Uncertain Graphs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2657b04aa0534bd7be115782f58a6514", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The paper continues the investigation of Poincar\u00e9 and Russell\u2019s Vicious Circle Principle (VCP) in the context of the design of logic programming languages with sets. We expand the previously introduced language Alog with aggregates by allowing infinite sets and several additional set related constructs useful for knowledge representation and teaching. In addition, we propose an alternative formalization of the original VCP and incorporate it into the semantics of the new language, Slog, which allows more liberal construction of sets and their use in programming rules. We show that, for programs without disjunction and infinite sets, the formal semantics of aggregates in Slog coincides with that of several other known languages.
Their intuitive and formal semantics, however, are based on quite different ideas and seem to be more involved than that of Slog.", "targets": "Vicious Circle Principle and Formation of Sets in ASP Based Languages"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-089834d542624c858b6f7f85220cba33", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Information hierarchies are organizational structures that are often used to organize and present large and complex information as well as provide a mechanism for effective human navigation. Fortunately, many statistical and computational models exist that automatically generate hierarchies; however, the existing approaches do not consider linkages in information networks that are increasingly common in real-world scenarios. Current approaches also tend to present topics as an abstract probability distribution over words, etc., rather than as tangible nodes from the original network. Furthermore, the statistical techniques present in many previous works are not yet capable of processing data at Web-scale. In this paper we present the Hierarchical Document Topic Model (HDTM), which uses a distributed vertex-programming process to calculate a nonparametric Bayesian generative model. Experiments on three medium-size data sets and the entire Wikipedia dataset show that HDTM can infer accurate hierarchies even over large information networks.", "targets": "Scalable Models for Computing Hierarchies in Information Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ef99eeb7948048c0b49ec30e036e2ab7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A landmark-based heuristic is investigated for reducing query phase run-time of the probabilistic roadmap (PRM) motion planning method. The heuristic is generated by storing minimum spanning trees from a small number of vertices within the PRM graph and using these trees to approximate the cost of a shortest path between any two vertices of the graph. The intermediate step of preprocessing the graph increases the time and memory requirements of the classical motion planning technique in exchange for speeding up individual queries, making the method advantageous in multi-query applications. This paper investigates these trade-offs on PRM graphs constructed in randomized environments as well as a practical manipulator simulation. We conclude that the method is preferable to Dijkstra\u2019s algorithm or the A\u2217 algorithm with conventional heuristics in multi-query applications.", "targets": "Landmark Guided Probabilistic Roadmap Queries"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bc8a2797a9c24ef3a255201cd64d044b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose Diverse Embedding Neural Network (DENN), a novel architecture for language models (LMs). A DENNLM projects the input word history vector onto multiple diverse low-dimensional sub-spaces instead of a single higher-dimensional sub-space as in conventional feed-forward neural network LMs. We encourage these sub-spaces to be diverse during network training through an augmented loss function.
Our language modeling experiments on the Penn Treebank data set show the performance benefit of using a DENNLM.", "targets": "DIVERSE EMBEDDING NEURAL NETWORK LANGUAGE MODELS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-02c8131e4f5645598305e77d3a0af9fd", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Non-negative matrix factorization (NMF) is a natural model of admixture and is widely used in science and engineering. A plethora of algorithms have been developed to tackle NMF, but due to the non-convex nature of the problem, there is little guarantee on how well these methods work. Recently a surge of research has focused on a very restricted class of NMFs, called separable NMF, where provably correct algorithms have been developed. In this paper, we propose the notion of subset-separable NMF, which substantially generalizes the property of separability. We show that subset-separability is a natural necessary condition for the factorization to be unique or to have minimum volume. We develop the Face-Intersect algorithm, which provably and efficiently solves subset-separable NMF under natural conditions, and we prove that our algorithm is robust to small noise. We explore the performance of Face-Intersect on simulations and discuss settings where it empirically outperforms the state-of-the-art methods. Our work is a step towards finding provably correct algorithms that solve large classes of NMF problems.", "targets": "Intersecting Faces: Non-negative Matrix Factorization With New Guarantees"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-154f1c834e0e409cb68832b40f98ec25", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a novel approach to constraint-based causal discovery that takes the form of straightforward logical inference, applied to a list of simple, logical statements about causal relations that are derived directly from observed (in)dependencies. It is both sound and complete, in the sense that all invariant features of the corresponding partial ancestral graph (PAG) are identified, even in the presence of latent variables and selection bias. The approach shows that every identifiable causal relation corresponds to one of just two fundamental forms. More importantly, as the basic building blocks of the method do not rely on the detailed (graphical) structure of the corresponding PAG, it opens up a range of new opportunities, including more robust inference, detailed accountability, and application to large models.", "targets": "A Logical Characterization of Constraint-Based Causal Discovery"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a241fbc04f1c48c2b4e6048ec27b432a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "*Rajdeep Borgohain, Department of Computer Science and Engineering, Dibrugarh University Institute of Engineering and Technology, Dibrugarh, Assam, Email: rajdeepgohain@gmail.com; Sugata Sanyal, School of Technology and Computer Science, Tata Institute of Fundamental Research, Mumbai, India, Email: sanyals@gmail.com; *Corresponding Author. ABSTRACT: The use of Artificial Intelligence is finding prominence not only in core computer areas, but also in cross-disciplinary areas including medical diagnosis. In this paper, we present a rule-based Expert System used in diagnosis of Cerebral Palsy. The expert system takes user input and, depending on the symptoms of the patient, diagnoses if the patient is suffering from Cerebral Palsy. The Expert System also classifies the Cerebral Palsy as mild, moderate or severe based on the presented symptoms.", "targets": "Rule Based Expert System for Cerebral Palsy Diagnosis"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-325c395351a74790a9f74ea2b9b0cc4b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "While end-to-end neural machine translation (NMT) has made remarkable progress recently, NMT systems only rely on parallel corpora for parameter estimation. Since parallel corpora are usually limited in quantity, quality, and coverage, especially for low-resource languages, it is appealing to exploit monolingual corpora to improve NMT. We propose a semi-supervised approach for training NMT models on the concatenation of labeled (parallel corpora) and unlabeled (monolingual corpora) data. The central idea is to reconstruct the monolingual corpora using an autoencoder, in which the source-to-target and target-to-source translation models serve as the encoder and decoder, respectively. Our approach can not only exploit the monolingual corpora of the target language, but also of the source language. Experiments on the Chinese-English dataset show that our approach achieves significant improvements over state-of-the-art SMT and NMT systems.", "targets": "Semi-Supervised Learning for Neural Machine Translation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f7147019362b4a96bdff7fbba0d116cb", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A Bayesian net (BN) is more than a succinct way to encode a probabilistic distribution; it also corresponds to a function used to answer queries. A BN can therefore be evaluated by the accuracy of the answers it returns. Many algorithms for learning BNs, however, attempt to optimize another criterion (usually likelihood, possibly augmented with a regularizing term), which is independent of the distribution of queries that are posed. This paper takes the "performance criteria" seriously, and considers the challenge of computing the BN whose performance (read "accuracy over the distribution of queries") is optimal.
We show that many aspects of this learning task are more difficult than the corresponding subtasks in the standard model.", "targets": "Learning Bayesian Nets that Perform Well"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-64bb0daf2876413f836c883571394c75", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recently there has been significant activity in developing algorithms with provable guarantees for topic modeling. In standard topic models, a topic (such as sports, business, or politics) is viewed as a probability distribution a_i over words, and a document is generated by first selecting a mixture w over topics, and then generating words i.i.d. from the associated mixture Aw. Given a large collection of such documents, the goal is to recover the topic vectors and then to correctly classify new documents according to their topic mixture. In this work we consider a broad generalization of this framework in which words are no longer assumed to be drawn i.i.d. and instead a topic is a complex distribution over sequences of paragraphs. Since one could not hope to even represent such a distribution in general (even if paragraphs are given using some natural feature representation), we aim instead to directly learn a document classifier. That is, we aim to learn a predictor that, given a new document, accurately predicts its topic mixture, without learning the distributions explicitly. We present several natural conditions under which one can do this efficiently and discuss issues such as noise tolerance and sample complexity in this model. More generally, our model can be viewed as a generalization of the multi-view or co-training setting in machine learning.", "targets": "Generalized Topic Modeling"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bbf335b061684ed5870c7207ecb474f6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We address the problem of extracting structured representations of economic events from a large corpus of news articles, using a combination of natural language processing and machine learning techniques. The developed techniques allow for semi-automatic population of a financial knowledge base, which, in turn, may be used to support a range of data mining and exploration tasks. The key challenge we face in this domain is that the same event is often reported multiple times, with varying correctness of details. We address this challenge by first collecting all information pertinent to a given event from the entire corpus, then considering all possible representations of the event, and finally using a supervised learning method to rank these representations by the associated confidence scores. A main innovative element of our approach is that it jointly extracts and stores all attributes of the event as a single representation (quintuple).
Using a purpose-built test set we demonstrate that our supervised learning approach can achieve 25% improvement in F1-score over baseline methods that consider the earliest, the latest or the most frequent reporting of the event.", "targets": "Towards Building a Knowledge Base of Monetary Transactions from a News Collection"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c5d252f16b004578beddc91829478a55", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Many natural language understanding (NLU) tasks, such as shallow parsing (i.e., text chunking) and semantic slot filling, require the assignment of representative labels to the meaningful chunks in a sentence. Most of the current deep neural network (DNN) based methods consider these tasks as a sequence labeling problem, in which a word, rather than a chunk, is treated as the basic unit for labeling. These chunks are then inferred by the standard IOB (Inside-Outside-Beginning) labels. In this paper, we propose an alternative approach by investigating the use of DNN for sequence chunking, and propose three neural models so that each chunk can be treated as a complete unit for labeling. Experimental results show that the proposed neural sequence chunking models can achieve state-of-the-art performance on both the text chunking and slot filling tasks.", "targets": "Neural Models for Sequence Chunking"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7ce09eeb240540d49dcec8613976bd13", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Most work in the area of statistical relational learning (SRL) is focussed on discrete data, even though a few approaches for hybrid SRL models have been proposed that combine numerical and discrete variables. In this paper we distinguish numerical random variables for which a probability distribution is defined by the model from numerical input variables that are only used for conditioning the distribution of discrete response variables. We show how numerical input relations can very easily be used in the Relational Bayesian Network framework, and that existing inference and learning methods need only minor adjustments to be applied in this generalized setting. The resulting framework provides natural relational extensions of classical probabilistic models for categorical data. We demonstrate the usefulness of RBN models with numeric input relations by several examples. In particular, we use the augmented RBN framework to define probabilistic models for multi-relational (social) networks in which the probability of a link between two nodes depends on numeric latent feature vectors associated with the nodes. A generic learning procedure can be used to obtain a maximum-likelihood fit of model parameters and latent feature values for a variety of models that can be expressed in the high-level RBN representation. Specifically, we propose a model that allows us to interpret learned latent feature values as community centrality degrees by which we can identify nodes that are central for one community, that are hubs between communities, or that are isolated nodes.
In a multi-relational setting, the model also provides a characterization of how different relations are associated with each community.", "targets": "Numeric Input Relations for Relational Learning with Applications to Community Structure Analysis"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9dc28fb05da84e088df9d1c550c9eb2e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In recent years significant progress has been made in successfully training recurrent neural networks (RNNs) on sequence learning problems involving long range temporal dependencies. The progress has been made on three fronts: (a) algorithmic improvements involving sophisticated optimization techniques, (b) network design involving complex hidden layer nodes and specialized recurrent layer connections and (c) weight initialization methods. In this paper, we focus on the recently proposed weight initialization with the identity matrix for the recurrent weights in an RNN. This initialization is specifically proposed for hidden nodes with Rectified Linear Unit (ReLU) nonlinearity. We offer a simple dynamical systems perspective on the weight initialization process, which allows us to propose a modified weight initialization strategy. We show that this initialization technique leads to successfully training RNNs composed of ReLUs. We demonstrate that our proposal produces comparable or better solutions for three toy problems involving long range temporal structure: the addition problem, the multiplication problem and the MNIST classification problem using a sequence of pixels. In addition, we present results for a benchmark action recognition problem.", "targets": "IMPROVING PERFORMANCE OF RECURRENT NEURAL NETWORK WITH RELU NONLINEARITY"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-39eb385d24bf43289c81b8ff9c65973e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper investigates neural character-based morphological tagging for languages with complex morphology and large tag sets. We systematically explore a variety of neural architectures (DNN, CNN, CNN-Highway, LSTM, BLSTM) to obtain character-based word vectors combined with bidirectional LSTMs to model across-word context in an end-to-end setting. We explore supplementary use of word-based vectors trained on large amounts of unlabeled data. Our experiments for morphological tagging suggest that for \u201csimple\u201d model configurations, the choice of the network architecture (CNN vs. CNN-Highway vs. LSTM vs. BLSTM) or the augmentation with pre-trained word embeddings can be important and clearly impact the accuracy. Increasing the model capacity by adding depth, for example, and carefully optimizing the neural networks can lead to substantial improvements, and the differences in accuracy (but not training time) become much smaller or even negligible. Overall, our best morphological taggers for German and Czech outperform the best results reported in the literature by a large margin.", "targets": "Neural Morphological Tagging from Characters for Morphologically Rich Languages"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-10bc44438a52415aa488dbe575806097", "definition": "In this task, you are given a part of an article.
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Reordering poses a major challenge in machine translation (MT) between two languages with significant differences in word order. In this paper, we present a novel reordering approach utilizing sparse features based on dependency word pairs. Each instance of these features captures whether two words, which are related by a dependency link in the source sentence dependency parse tree, follow the same order or are swapped in the translation output. Experiments on Chinese-to-English translation show a statistically significant improvement of 1.21 BLEU point using our approach, compared to a state-of-the-art statistical MT system that incorporates prior reordering approaches.", "targets": "To Swap or Not to Swap? Exploiting Dependency Word Pairs for Reordering in Statistical Machine Translation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-234c10329fb74effb7418370218eaeb2", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study the distributed computing setting in which there are multiple servers, each holding a set of points, who wish to compute functions on the union of their point sets. A key task in this setting is Principal Component Analysis (PCA), in which the servers would like to compute a low dimensional subspace capturing as much of the variance of the union of their point sets as possible. Given a procedure for approximate PCA, one can use it to approximately solve problems such as k-means clustering and low rank approximation. The essential properties of an approximate distributed PCA algorithm are its communication cost and computational efficiency for a given desired accuracy in downstream applications. We give new algorithms and analyses for distributed PCA which lead to improved communication and computational costs for k-means clustering and related problems. Our empirical study on real world data shows a speedup of orders of magnitude, preserving communication with only a negligible degradation in solution quality. Some of these techniques we develop, such as a general transformation from a constant success probability subspace embedding to a high success probability subspace embedding with a dimension and sparsity independent of the success probability, may be of independent interest.", "targets": "Improved Distributed Principal Component Analysis"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-84bd89b854b1491fb6c5b0b82e6fd729", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Deep neural networks (DNN) are the state of the art on many engineering problems such as computer vision and audition. A key factor in the success of the DNN is scalability \u2013 bigger networks work better. However, the reason for this scalability is not yet well understood. Here, we interpret the DNN as a discrete system, of linear filters followed by nonlinear activations, that is subject to the laws of sampling theory. In this context, we demonstrate that over-sampled networks are more selective, learn faster and learn more robustly. 
Our findings may ultimately generalize to the human brain.", "targets": "Over-Sampling in a Deep Neural Network"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d2092c1e985f42a6840c8d4cf5536274", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider the problem of learning for planning, where knowledge acquired while planning is reused to plan faster in new problem instances. For robotic tasks, among others, plan execution can be captured as a sequence of visual images. For such domains, we propose to use deep neural networks in learning for planning, based on learning a reactive policy that imitates execution traces produced by a planner. We investigate architectural properties of deep networks that are suitable for learning long-horizon planning behavior, and explore how to learn, in addition to the policy, a heuristic function that can be used with classical planners or search algorithms such as A\u2217. Our results on the challenging Sokoban domain show that, with a suitable network design, complex decision making policies and powerful heuristic functions can be learned through imitation. Videos available at https://sites.google.com/site/learn2plannips/.", "targets": "Learning Generalized Reactive Policies using Deep Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-02d29b69f1134ff1b18a84a4eb83ae3e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We show that collaborative filtering can be viewed as a sequence prediction problem, and that given this interpretation, recurrent neural networks offer a very competitive approach. In particular we study how the long short-term memory (LSTM) can be applied to collaborative filtering, and how it compares to standard nearest neighbors and matrix factorization methods on movie recommendation. We show that the LSTM is competitive in all aspects, and largely outperforms other methods in terms of item coverage and short term predictions.", "targets": "Collaborative Filtering with Recurrent Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-64bf51140b3a4942b400ad479df10f63", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recently, many variance reduced stochastic alternating direction method of multipliers (ADMM) methods (e.g. SAG-ADMM, SDCA-ADMM and SVRG-ADMM) have made exciting progress such as linear convergence rates for strongly convex problems. However, the best known convergence rate for general convex problems is O(1/T) as opposed to O(1/T^2) of accelerated batch algorithms, where T is the number of iterations. Thus, there still remains a gap in convergence rates between existing stochastic ADMM and batch algorithms. To bridge this gap, we introduce the momentum acceleration trick for batch optimization into the stochastic variance reduced gradient based ADMM (SVRG-ADMM), which leads to an accelerated (ASVRG-ADMM) method. Then we design two different momentum term update rules for strongly convex and general convex cases. We prove that ASVRG-ADMM converges linearly for strongly convex problems.
Besides having the same low per-iteration complexity as existing stochastic ADMM methods, ASVRG-ADMM improves the convergence rate on general convex problems from O(1/T) to O(1/T^2). Our experimental results show the effectiveness of ASVRG-ADMM. Introduction In this paper, we consider a class of composite convex optimization problems min_{x \u2208 R^{d_1}} f(x) + h(Ax), (1) where A \u2208 R^{d_2 \u00d7 d_1} is a given matrix, f(x) := (1/n) \u2211_{i=1}^{n} f_i(x), each f_i(x) is a convex function, and h(Ax) is convex but possibly non-smooth. With regard to h(\u00b7), we are interested in a sparsity-inducing regularizer, e.g. the \u2113_1-norm, group Lasso and nuclear norm. When A is an identity matrix, i.e. A = I_{d_1}, the above formulation (1) arises in many places in machine learning, statistics, and operations research (Bubeck 2015), such as logistic regression, Lasso and support vector machine (SVM). We mainly focus on the large sample regime. In this regime, even first-order batch methods, e.g. FISTA (Beck and Teboulle 2009), become computationally burdensome due to their per-iteration complexity of O(n d_1). As a result, stochastic gradient descent (SGD) with per-iteration complexity of O(d_1) has witnessed tremendous progress in recent years. In particular, a number of stochastic variance reduced gradient methods such as SAG (Roux, Schmidt, and Bach 2012), SDCA (Shalev-Shwartz and Zhang 2013) and SVRG (Johnson and Zhang 2013) have been proposed to successfully address the problem of high variance of the gradient estimate in ordinary SGD, resulting in a linear convergence rate (for strongly convex problems) as opposed to the sub-linear rates of SGD. More recently, Nesterov\u2019s acceleration technique (Nesterov 2004) was introduced in (Allen-Zhu 2016; Hien et al. 2016) to further speed up the stochastic variance-reduced algorithms, which results in the best known convergence rates for both strongly convex and general convex problems. This motivates us to integrate the momentum acceleration trick into the stochastic alternating direction method of multipliers (ADMM) below. When A is a more general matrix, i.e. A \u2260 I_{d_1}, the formulation (1) covers many more complicated problems arising from machine learning, e.g. graph-guided fused Lasso (Kim, Sohn, and Xing 2009) and generalized Lasso (Tibshirani and Taylor 2011). To solve this class of composite optimization problems with an auxiliary variable y = Ax, which is a special case of the general ADMM form, min_{x \u2208 R^{d_1}, y \u2208 R^{d_2}} f(x) + h(y), s.t. Ax + By = c, (2) the ADMM is an effective optimization tool (Boyd et al. 2011), and has shown attractive performance in a wide range of real-world problems, such as big data classification (Nie et al. 2014). To tackle the issue of the high per-iteration complexity of batch (deterministic) ADMM (as a popular first-order optimization method), Wang and Banerjee (2012), Suzuki (2013) and Ouyang et al. (2013) proposed some online or stochastic ADMM algorithms. However, all these variants only achieve the convergence rate of O(1/\u221aT) for general convex problems and O(log T / T) for strongly convex problems, respectively, as compared with the O(1/T^2) and linear convergence rates of accelerated batch algorithms (Nesterov 1983), e.g. FISTA, where T is the number of iterations.
By now several accelerated and faster converging versions of stochastic ADMM, which are all based on variance reduction techniques, have been proposed, e.g. SAG-ADMM (Zhong and Kwok 2014b), SDCA-ADMM (Suzuki 2014) and SVRG-ADMM (Zheng and Kwok 2016). Table 1: Comparison of convergence rates and memory requirements of some stochastic ADMM algorithms (general convex / strongly convex / space requirement): SAG-ADMM: O(1/T) / unknown / O(d_1 d_2 + n d_1); SDCA-ADMM: unknown / linear rate / O(d_1 d_2 + n); SCAS-ADMM: O(1/T) / O(1/T) / O(d_1 d_2); SVRG-ADMM: O(1/T) / linear rate / O(d_1 d_2); ASVRG-ADMM: O(1/T^2) / linear rate / O(d_1 d_2). With regard to strongly convex problems, Suzuki (2014) and Zheng and Kwok (2016) proved that linear convergence can be obtained for the special ADMM form (i.e. B = \u2212I_{d_2} and c = 0) and the general ADMM form, respectively. In SAG-ADMM and SVRG-ADMM, an O(1/T) convergence rate can be guaranteed for general convex problems, which implies that there still remains a gap in convergence rates between the stochastic ADMM and accelerated batch algorithms. To bridge this gap, we integrate the momentum acceleration trick in (Tseng 2010) for deterministic optimization into the stochastic variance reduced gradient (SVRG) based stochastic ADMM (SVRG-ADMM). Naturally, the proposed method has the same low per-iteration time complexity as existing stochastic ADMM algorithms, and does not require the storage of all gradients (or dual variables) as in SCAS-ADMM (Zhao, Li, and Zhou 2015) and SVRG-ADMM (Zheng and Kwok 2016), as shown in Table 1. We summarize our main contributions below. \u2022 We propose an accelerated variance reduced stochastic ADMM (ASVRG-ADMM) method, which integrates both the momentum acceleration trick in (Tseng 2010) for batch optimization and the variance reduction technique of SVRG (Johnson and Zhang 2013). \u2022 We prove that ASVRG-ADMM achieves a linear convergence rate for strongly convex problems, which is consistent with the best known result in SDCA-ADMM (Suzuki 2014) and SVRG-ADMM (Zheng and Kwok 2016). \u2022 We also prove that ASVRG-ADMM has a convergence rate of O(1/T^2) for non-strongly convex problems, which is a factor of T faster than SAG-ADMM and SVRG-ADMM, whose convergence rates are O(1/T). \u2022 Our experimental results further verify that our ASVRG-ADMM method has much better performance than the state-of-the-art stochastic ADMM methods. Related Work Introducing y = Ax \u2208 R^{d_2}, problem (1) becomes min_{x \u2208 R^{d_1}, y \u2208 R^{d_2}} f(x) + h(y), s.t. Ax \u2212 y = 0. (3) Although (3) is only a special case of the general ADMM form (2), when B = \u2212I_{d_2} and c = 0, the stochastic (or online) ADMM algorithms and theoretical results in (Wang and Banerjee 2012; Ouyang et al. 2013; Zhong and Kwok 2014b; Zheng and Kwok 2016) and this paper are all for the more general problem (2). To minimize (2), together with the dual variable \u03bb, the update steps of batch ADMM are y_k = argmin_y h(y) + (\u03b2/2)\u2016Ax_{k\u22121} + By \u2212 c + \u03bb_{k\u22121}\u2016^2, (4) x_k = argmin_x f(x) + (\u03b2/2)\u2016Ax + By_k \u2212 c + \u03bb_{k\u22121}\u2016^2, (5) \u03bb_k = \u03bb_{k\u22121} + Ax_k + By_k \u2212 c, (6) where \u03b2 > 0 is a penalty parameter. To extend the batch ADMM to the online and stochastic settings, the update steps for y_k and \u03bb_k remain unchanged. In (Wang and Banerjee 2012; Ouyang et al.
2013), the update step of x_k is approximated as follows: x_k = argmin_x x^T \u2207f_{i_k}(x_{k\u22121}) + (1/(2\u03b7_k))\u2016x \u2212 x_{k\u22121}\u2016_G^2 + (\u03b2/2)\u2016Ax + By_k \u2212 c + \u03bb_{k\u22121}\u2016^2, (7) where we draw i_k uniformly at random from [n] := {1, . . . , n}, \u03b7_k \u221d 1/\u221ak is the step-size, and \u2016z\u2016_G^2 = z^T G z with a given positive semi-definite matrix G, e.g. G = I_{d_1} in (Ouyang et al. 2013). Analogous to SGD, the stochastic ADMM variants use an unbiased estimate of the gradient at each iteration. However, all those algorithms have much slower convergence rates than their batch counterpart, as mentioned above. This barrier is mainly due to the variance introduced by the stochasticity of the gradients. Besides, to guarantee convergence, they employ a decaying sequence of step sizes \u03b7_k, which in turn impacts the rates. More recently, a number of variance reduced stochastic ADMM methods (e.g. SAG-ADMM, SDCA-ADMM and SVRG-ADMM) have been proposed and made exciting progress such as linear convergence rates. SVRG-ADMM in (Zheng and Kwok 2016) is particularly attractive here because of its low storage requirement compared with the algorithms in (Zhong and Kwok 2014b; Suzuki 2014). Within each epoch of SVRG-ADMM, the full gradient p\u0303 = \u2207f(x\u0303) is first computed, where x\u0303 is the average point of the previous epoch. Then \u2207f_{i_k}(x_{k\u22121}) and \u03b7_k in (7) are replaced by \u2207\u0303f_{I_k}(x_{k\u22121}) = (1/|I_k|) \u2211_{i_k \u2208 I_k} (\u2207f_{i_k}(x_{k\u22121}) \u2212 \u2207f_{i_k}(x\u0303)) + p\u0303 (8) and a constant step-size \u03b7, respectively, where I_k \u2282 [n] is a mini-batch of size b (which is a useful technique to reduce the variance). In fact, \u2207\u0303f_{I_k}(x_{k\u22121}) is an unbiased estimator of the gradient \u2207f(x_{k\u22121}), i.e. E[\u2207\u0303f_{I_k}(x_{k\u22121})] = \u2207f(x_{k\u22121}). Accelerated Variance Reduced Stochastic ADMM In this section, we design an accelerated variance reduced stochastic ADMM method for both strongly convex and general convex problems. We first make the following assumptions: each convex f_i(\u00b7) is L_i-smooth, i.e. there exists a constant L_i > 0 such that \u2016\u2207f_i(x) \u2212 \u2207f_i(y)\u2016 \u2264 L_i\u2016x \u2212 y\u2016, \u2200x, y \u2208 R^{d_1}, and L := max_i L_i; f(\u00b7) is \u03bc-strongly convex, i.e. there is \u03bc > 0 such that f(x) \u2265 f(y) + \u2207f(y)^T(x \u2212 y) + (\u03bc/2)\u2016x \u2212 y\u2016^2 for all x, y \u2208 R^{d_1}; the matrix A has full row rank. Algorithm 1 (ASVRG-ADMM for the strongly convex case). Input: m, \u03b7, \u03b2 > 0, 1 \u2264 b \u2264 n. Initialize: x\u0303^0 = z\u0303^0, \u1ef9^0, \u03b8, \u03bb\u0303^0 = \u2212(1/\u03b2)(A^T)\u2020\u2207f(x\u0303^0). 1: for s = 1, 2, . . . , T do 2: x_0^s = z_0^s = x\u0303^{s\u22121}, y_0^s = \u1ef9^{s\u22121}, \u03bb_0^s = \u03bb\u0303^{s\u22121}; 3: p\u0303 = \u2207f(x\u0303^{s\u22121}); 4: for k = 1, 2, . . . , m do 5: choose I_k \u2286 [n] of size b, uniformly at random; 6: y_k^s = argmin_y h(y) + (\u03b2/2)\u2016Az_{k\u22121}^s + By \u2212 c + \u03bb_{k\u22121}^s\u2016^2; 7: z_k^s = z_{k\u22121}^s \u2212 \u03b7(\u2207\u0303f_{I_k}(x_{k\u22121}^s) + \u03b2A^T(Az_{k\u22121}^s + By_k^s \u2212 c + \u03bb_{k\u22121}^s))/(\u03b3\u03b8); 8: x_k^s = (1 \u2212 \u03b8)x\u0303^{s\u22121} + \u03b8z_k^s; 9: \u03bb_k^s = \u03bb_{k\u22121}^s + Az_k^s + By_k^s \u2212 c; 10: end for 11: x\u0303^s = (1/m)\u2211_{k=1}^m x_k^s, \u1ef9^s = (1 \u2212 \u03b8)\u1ef9^{s\u22121} + (\u03b8/m)\u2211_{k=1}^m y_k^s; 12: \u03bb\u0303^s = \u2212(1/\u03b2)(A^T)\u2020\u2207f(x\u0303^s); 13: end for. Output: x\u0303^T, \u1ef9^T. The first two assumptions are common in the analysis of first-order optimization methods, while the last one has
been used in the convergence analysis of batch ADMM (Nishihara et al. 2015; Deng and Yin 2016) and stochastic ADMM (Zheng and Kwok 2016). The Strongly Convex Case In this part, we consider the case of (2) when each f_i(\u00b7) is convex, L-smooth, and f(\u00b7) is \u03bc-strongly convex. Recall that this class of problems includes graph-guided logistic regression and SVM as notable examples. To efficiently solve this class of problems, we incorporate both the momentum acceleration and variance reduction techniques into stochastic ADMM. Our algorithm is divided into T epochs, and each epoch consists of m stochastic updates, where m is usually chosen to be O(n) as in (Johnson and Zhang 2013). Let z be an important auxiliary variable; its update rule is given as follows. Similar to (Zhong and Kwok 2014b; Zheng and Kwok 2016), we also use the inexact Uzawa method (Zhang, Burger, and Osher 2011) to approximate the sub-problem (7), which can avoid computing the inverse of the matrix ((1/\u03b7)I_{d_1} + \u03b2A^T A). Moreover, the momentum weight 0 \u2264 \u03b8_s \u2264 1 (the update rule for \u03b8_s is provided below) is introduced into the proximal term (1/(2\u03b7))\u2016x \u2212 x_{k\u22121}\u2016_G^2 similar to that of (7), and then the sub-problem with respect to z is formulated as follows: min_z (z \u2212 z_{k\u22121}^s)^T \u2207\u0303f_{I_k}(x_{k\u22121}^s) + (\u03b8_{s\u22121}/(2\u03b7))\u2016z \u2212 z_{k\u22121}^s\u2016_G^2 + (\u03b2/2)\u2016Az + By_k^s \u2212 c + \u03bb_{k\u22121}^s\u2016^2, (9) where \u2207\u0303f_{I_k}(x_{k\u22121}^s) is defined in (8), \u03b7 < 1/(2L), and G = \u03b3I_{d_1} \u2212 (\u03b7\u03b2/\u03b8_{s\u22121})A^T A with \u03b3 \u2265 \u03b3_min \u2261 \u03b7\u03b2\u2016A^T A\u2016_2/\u03b8_{s\u22121} + 1 to ensure that G \u2ab0 I, similar to (Zheng and Kwok 2016), where \u2016\u00b7\u2016_2 is the spectral norm, i.e. the largest singular value of the matrix. Furthermore, the update rule for x is given by x_k^s = x\u0303^{s\u22121} + \u03b8_{s\u22121}(z_k^s \u2212 x\u0303^{s\u22121}) = (1 \u2212 \u03b8_{s\u22121})x\u0303^{s\u22121} + \u03b8_{s\u22121}z_k^s, (10) where \u03b8_{s\u22121}(z_k^s \u2212 x\u0303^{s\u22121}) is the key momentum term (similar to those in accelerated batch methods (Nesterov 2004)), which helps accelerate our algorithm by using the iterate of the previous epoch, i.e. x\u0303^{s\u22121}. Similar to x_k^s, \u1ef9^s = (1 \u2212 \u03b8_{s\u22121})\u1ef9^{s\u22121} + (\u03b8_{s\u22121}/m)\u2211_{k=1}^m y_k^s. Moreover, \u03b8_s can be set to a constant \u03b8 in all epochs of our algorithm, which must satisfy 0 \u2264 \u03b8 \u2264 1 \u2212 \u03b4(b)/(\u03b1 \u2212 1), where \u03b1 = 1/(L\u03b7) > 1 + \u03b4(b), and \u03b4(b) is defined below. The optimal value of \u03b8 is provided in Proposition 1 below. The detailed procedure is shown in Algorithm 1, where we adopt the same initialization technique for \u03bb\u0303 as in (Zheng and Kwok 2016), and (\u00b7)\u2020 is the pseudo-inverse. Note that, when \u03b8 = 1, ASVRG-ADMM degenerates to SVRG-ADMM in (Zheng and Kwok 2016). The Non-Strongly Convex Case In this part, we consider general convex problems of the form (2) when each f_i(\u00b7) is convex, L-smooth, and h(\u00b7) is not necessarily strongly convex (but possibly non-smooth). Different from the strongly convex case, the momentum weight \u03b8_s is required to satisfy the following inequalities: (1 \u2212 \u03b8_s)/\u03b8_s^2 \u2264 1/\u03b8_{s\u22121}^2 and 0 \u2264 \u03b8_s \u2264 1 \u2212 \u03b4(b)/(\u03b1 \u2212 1), (11) where \u03b4(b) := (n \u2212 b)/(b(n \u2212 1)) is a decreasing function with respect to the mini-batch size b.
The condition (11) allows the momentum weight to decrease, but not too fast, similar to the requirement on the step-size \u03b7_k in classical SGD and stochastic ADMM. Unlike batch acceleration methods, the weight must satisfy both inequalities in (11). Motivated by the momentum acceleration techniques in (Tseng 2010; Nesterov 2004) for batch optimization, we give the update rule of the weight \u03b8_s for the mini-batch case: \u03b8_s = (\u221a(\u03b8_{s\u22121}^4 + 4\u03b8_{s\u22121}^2) \u2212 \u03b8_{s\u22121}^2)/2 and \u03b8_0 = 1 \u2212 \u03b4(b)/(\u03b1 \u2212 1). (12) For the special case of b = 1, we have \u03b4(1) = 1 and \u03b8_0 = 1 \u2212 1/(\u03b1 \u2212 1), while for b = n (i.e. the batch version), \u03b4(n) = 0 and \u03b8_0 = 1. Since {\u03b8_s} is decreasing, \u03b8_s \u2264 1 \u2212 \u03b4(b)/(\u03b1 \u2212 1) is satisfied. The detailed procedure is shown in Algorithm 2, which differs slightly from Algorithm 1 in the initialization and output of each epoch. In addition, the key difference between them is the update rule for the momentum weight \u03b8_s. That is, \u03b8_s in Algorithm 1 can be set to a constant, while that in Algorithm 2 is adaptively adjusted as in (12). Convergence Analysis This section provides the convergence analysis of our ASVRG-ADMM algorithms (i.e. Algorithms 1 and 2) for strongly convex and general convex problems, respectively. Following (Zheng and Kwok 2016), we first introduce the function P(x, y) := f(x) \u2212 f(x\u2217) \u2212 \u2207f(x\u2217)^T(x \u2212 x\u2217) + h(y) \u2212 h(y\u2217) \u2212 h\u2032(y\u2217)^T(y \u2212 y\u2217) as a convergence criterion, where h\u2032(y) denotes the (sub)gradient of h(\u00b7) at y. Indeed, P(x, y) \u2265 0 for all x, y. In the following, we give the intermediate key results for our analysis. Algorithm 2 (ASVRG-ADMM for the general convex case). Input: m, \u03b7, \u03b2 > 0, 1 \u2264 b \u2264 n. Initialize: x\u0303^0 = z\u0303^0, \u1ef9^0, \u03bb\u0303^0, \u03b8_0 = 1 \u2212 L\u03b7\u03b4(b)/(1 \u2212 L\u03b7). 1: for s = 1, 2, . . . , T do 2: x_0^s = (1 \u2212 \u03b8_{s\u22121})x\u0303^{s\u22121} + \u03b8_{s\u22121}z\u0303^{s\u22121}, y_0^s = \u1ef9^{s\u22121}, \u03bb_0^s = \u03bb\u0303^{s\u22121}; 3: p\u0303 = \u2207f(x\u0303^{s\u22121}), z_0^s = z\u0303^{s\u22121}; 4: for k = 1, 2, . . . , m do 5: choose I_k \u2286 [n] of size b, uniformly at random; 6: y_k^s = argmin_y h(y) + (\u03b2/2)\u2016Az_{k\u22121}^s + By \u2212 c + \u03bb_{k\u22121}^s\u2016^2; 7: z_k^s = z_{k\u22121}^s \u2212 \u03b7(\u2207\u0303f_{I_k}(x_{k\u22121}^s) + \u03b2A^T(Az_{k\u22121}^s + By_k^s \u2212 c + \u03bb_{k\u22121}^s))/(\u03b3\u03b8_{s\u22121}); 8: x_k^s = (1 \u2212 \u03b8_{s\u22121})x\u0303^{s\u22121} + \u03b8_{s\u22121}z_k^s; 9: \u03bb_k^s = \u03bb_{k\u22121}^s + Az_k^s + By_k^s \u2212 c; 10: end for 11: x\u0303^s = (1/m)\u2211_{k=1}^m x_k^s, \u1ef9^s = (1 \u2212 \u03b8_{s\u22121})\u1ef9^{s\u22121} + (\u03b8_{s\u22121}/m)\u2211_{k=1}^m y_k^s; 12: \u03bb\u0303^s = \u03bb_m^s, z\u0303^s = z_m^s, \u03b8_s = (\u221a(\u03b8_{s\u22121}^4 + 4\u03b8_{s\u22121}^2) \u2212 \u03b8_{s\u22121}^2)/2; 13: end for. Output: x\u0303^T, \u1ef9^T. Lemma 1. E[\u2016\u2207\u0303f_{I_k}(x_{k\u22121}) \u2212 \u2207f(x_{k\u22121})\u2016^2] \u2264 2L\u03b4(b)[f(x\u0303) \u2212 f(x_{k\u22121}) + (x_{k\u22121} \u2212 x\u0303)^T \u2207f(x_{k\u22121})], where \u03b4(b) = (n \u2212 b)/(b(n \u2212 1)) \u2264 1 and 1 \u2264 b \u2264 n. Lemma 2. Using the same notation as in Lemma 1, let (x\u2217, y\u2217, \u03bb\u2217) denote an optimal solution of problem (2), and let {(z_k^s, x_k^s, y_k^s, \u03bb_k^s, x\u0303^s, \u1ef9^s)} be the sequence generated by Algorithm 1 or 2 with \u03b8_s \u2264 1 \u2212 \u03b4(b)/(\u03b1 \u2212 1), where \u03b1 = 1/(L\u03b7).
Then the following holds for all k: E[P(x\u0303^s, \u1ef9^s) \u2212 (\u03b8_{s\u22121}/m)\u2211_{k=1}^m ((x\u2217 \u2212 z_k^s)^T A^T \u03c6_k + (y\u2217 \u2212 y_k^s)^T B^T \u03c6_k)] \u2264 E[(1 \u2212 \u03b8_{s\u22121})P(x\u0303^{s\u22121}, \u1ef9^{s\u22121}) + \u03b8_{s\u22121}(\u2016x\u2217 \u2212 z_0^s\u2016_G^2 \u2212 \u2016x\u2217 \u2212 z_m^s\u2016_G^2)", "targets": "Accelerated Variance Reduced Stochastic ADMM"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dde06b69e89243c2bc37293de2b4d706", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "For high level path planning, environments are usually modeled as distance graphs, and path planning problems are reduced to computing the shortest path in distance graphs. One major drawback of this modeling is the inability to model uncertainties, which are often encountered in practice. In this paper, a new tool, called U-graph, is proposed for environment modeling. A U-graph is an extension of distance graphs with the ability to handle a kind of uncertainty. By modeling an uncertain environment as a U-graph, and a navigation problem as a Markovian decision process, we can precisely define a new optimality criterion for navigation plans, and more importantly, we can come up with a general algorithm for computing optimal plans for navigation tasks.", "targets": "High Level Path Planning with Uncertainty"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e1c5ef6d1c7645d89f2ffb0c46dd4841", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In many applications data is naturally presented in terms of orderings of some basic elements or symbols. Reasoning about such data requires a notion of similarity capable of handling sequences of different lengths. In this paper we describe a family of Mercer kernel functions for such sequentially structured data. The family is characterized by a decomposable structure in terms of symbol-level and structure-level similarities, representing a specific combination of kernels which allows for efficient computation. We provide an experimental evaluation on sequential classification tasks comparing kernels from our family of kernels to a state-of-the-art sequence kernel called the Global Alignment kernel which has been shown to outperform Dynamic Time Warping.", "targets": "On a Family of Decomposable Kernels on Sequences"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a852bb43fc8242e2bc85dc8aaeabed9a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we introduce reactive multi-context systems (rMCSs), a framework for reactive reasoning in the presence of heterogeneous knowledge sources. In particular, we show how to integrate data streams into multi-context systems (MCSs) and how to model the dynamics of the systems, based on two types of bridge rules. We illustrate how several typical problems arising in the context of stream reasoning can be handled using our framework. Reasoning based on multiple knowledge sources that need to be integrated faces the problem of potential inconsistencies.
We discuss various methods for handling inconsistencies, with a special focus on non-existence of equilibria. In particular, we show how methods developed for managed MCSs can be generalized to rMCSs. We also study the issue of nondeterminism in rMCSs. One way of avoiding nondeterminism is by applying an alternative, skeptical semantics. We show how such a semantics, called well-founded semantics, can be defined for rMCSs, and what the effect of using this semantics instead of the original one is. We investigate the complexity of various reasoning problems related to rMCSs. Finally, we discuss related work, with a special focus on two of the most relevant approaches w.r.t. stream reasoning, namely LARS and STARQL.", "targets": "Reactive Multi-Context Systems: Heterogeneous Reasoning in Dynamic Environments"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e3b7f72fb1cf4bdfac43f3ae5b2970d1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Two-timescale Stochastic Approximation (SA) algorithms are widely used in Reinforcement Learning (RL). Their iterates have two parts that are updated with distinct stepsizes. In this work we provide a recipe for analyzing two-timescale SA. Using it, we develop the first convergence rate result for them. From this result we extract key insights on stepsize selection. As an application, we obtain convergence rates for two-timescale RL algorithms such as GTD(0), GTD2, and TDC.", "targets": "Two-Timescale Stochastic Approximation Convergence Rates with Applications to Reinforcement Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7b3b063381684a2791cba9b4d216d6c4", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "One of the most promising approaches for complex technical systems analysis employs ensemble methods of classification. Ensemble methods make it possible to build reliable decision rules for feature-space classification in the presence of many possible states of the system. In this paper, novel techniques based on decision trees are used for evaluation of the reliability of the regime of electric power systems. We propose a hybrid approach based on random forest and boosting models. Such techniques can be applied to predict the interaction of increasing renewable power, storage devices and switching of smart loads from intelligent domestic appliances, storage heaters and air-conditioning units and electric vehicles with the grid for enhanced decision making. The ensemble classification methods were tested on the modified 118-bus IEEE power system, showing that the proposed technique can be employed to examine whether the power system is secure under steady-state operating conditions.", "targets": "Ensemble Methods of Classification for Power Systems Security Assessment"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0afa4e6d5e1f47238e5cee97bd9c13a9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "To achieve state-of-the-art results on challenges in vision, Convolutional Neural Networks learn stationary filters that take advantage of the underlying image structure.
Our purpose is to propose an efficient layer formulation that extends this property to any domain described by a graph. Namely, we use the support of its adjacency matrix to design learnable weight-sharing filters able to exploit the underlying structure of signals. The proposed formulation makes it possible to learn the weights of the filter as well as a scheme that controls how they are shared across the graph. We perform validation experiments with image datasets and show that these filters offer performances comparable with convolutional ones.", "targets": "Learning Local Receptive Fields and their Weight Sharing Scheme on Graphs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-60bf9601191748198ac9755f5b138466", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The nearest neighbor rule is one of the most widely used models for classification, and selecting a compact set of prototype instances is a primary challenge for its applications. Many existing approaches for prototype selection exploit instance-based analyses and locally-defined criteria on the class distribution, which are intractable for numerical optimization techniques. In this paper, we explore a parametric framework with an adjusted nearest neighbor rule, in which the selection of the neighboring prototypes is modified by their respective parameters. The framework allows us to formulate a minimization problem of the violation of the adjusted nearest neighbor rule over the training set with regard to numerical parameters. We show that the problem reduces to large-margin principled learning and demonstrate its advantage by empirical comparisons with recent state-of-the-art methods using public benchmark data.", "targets": "Discriminative Learning of the Prototype Set for Nearest Neighbor Classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a46db45073f64427b8384d67bfb280bc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Semantic parsing methods are used for capturing and representing the semantic meaning of text. A meaning representation capturing all the concepts in the text may not always be available or may not be sufficiently complete. Ontologies provide a structured and reasoning-capable way to model the content of a collection of texts. In this work, we present a novel approach to joint learning of ontology and semantic parser from text. The method is based on semi-automatic induction of a context-free grammar from semantically annotated text. The grammar parses the text into semantic trees. Both the grammar and the semantic trees are used to learn the ontology on several levels \u2013 classes, instances, taxonomic and non-taxonomic relations. The approach was evaluated on the first sentences of Wikipedia pages describing people.", "targets": "Joint learning of ontology and semantic parser from text"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-88e66f326db344aaa24007c66faf4aca", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Following a biologically inspired system, human action recognition in the mammalian brain is held to proceed along two distinct pathways, which in the model are specialized for the analysis of motion (optic flow) and form information. Principally, we define novel and robust form features by applying the active basis model as a form extractor in the form pathway of the biologically inspired model. An unbalanced synergetic neural network classifies shapes and structures of human objects while tuning its attention parameter by quantum particle swarm optimization (QPSO) via initiation of Centroidal Voronoi Tessellations. These tools are utilized and justified as strong tools for following the biological system model in the form pathway. The final decision is made by combining the ultimate outcomes of both pathways via fuzzy inference, which increases the novelty of the proposed model. The combination of these two brain pathways is done by considering each feature set in Gaussian membership functions with the fuzzy product inference method. Two configurations are proposed for the form pathway: applying multi-prototype human action templates using a two-time synergetic neural network to obtain a uniform template for each action, and a second scenario that abstracts human actions into four key-frames. Experimental results showed promising accuracy on different datasets (KTH and Weizmann).", "targets": "Bio-Inspired Human Action Recognition using Hybrid Max-Product Neuro-Fuzzy Classifier and Quantum-Behaved PSO"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b574b1697fbb4af7aa033260b4a9ae94", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce a model that learns active learning algorithms via metalearning. For a distribution of related tasks, our model jointly learns: a data representation, an item selection heuristic, and a method for constructing prediction functions from labeled training sets. Our model uses the item selection heuristic to gather labeled training sets from which to construct prediction functions. Using the Omniglot and MovieLens datasets, we test our model in synthetic and practical settings.", "targets": "Learning Algorithms for Active Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3f8dd5bcbf1a412f9e12a6ded7a05c86", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Randomized matrix compression techniques, such as the Johnson-Lindenstrauss transform, have emerged as an effective and practical way for solving large-scale problems efficiently. The focus on computational efficiency, however, comes at the cost of solution quality and accuracy. In this paper, we investigate compressed least-squares problems and propose new models and algorithms that address the issue of error and noise introduced by compression. While maintaining computational efficiency, our models provide robust solutions that are more accurate\u2014relative to solutions of uncompressed least-squares\u2014than those of classical compressed variants. We introduce tools from robust optimization together with a form of partial compression to improve the error-time trade-offs of compressed least-squares solvers.
We develop an efficient solution algorithm for our Robust Partially-Compressed (RPC) model based on a reduction to a one-dimensional search. We also derive the first approximation error bounds for Partially-Compressed least-squares solutions. Empirical results comparing numerous alternatives suggest that robust and partially compressed solutions are effectively insulated against aggressive randomized transforms.", "targets": "Robust Partially-Compressed Least-Squares"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2d31a182f52e471b8b7d00f61f14b473", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose to prune a random forest (RF) for resource-constrained prediction. We first construct an RF and then prune it to optimize expected feature cost & accuracy. We pose pruning RFs as a novel 0-1 integer program with linear constraints that encourages feature re-use. We establish total unimodularity of the constraint set to prove that the corresponding LP relaxation solves the original integer program. We then exploit connections to combinatorial optimization and develop an efficient primal-dual algorithm, scalable to large datasets. In contrast to our bottom-up approach, which benefits from a good RF initialization, conventional methods are top-down, acquiring features based on their utility value, and are generally intractable, requiring heuristics. Empirically, our pruning algorithm outperforms existing state-of-the-art resource-constrained algorithms.", "targets": "Pruning Random Forests for Prediction on a Budget"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-16e779655b254850ab921e9fb9550aa0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The classification of opinion texts into positive and negative is becoming a subject of great interest in sentiment analysis. The existence of many labeled opinions motivates the use of statistical and machine-learning methods. First-order statistics have proven to be very limited in this field. The Opinum approach is based on the order of the words without using any syntactic and semantic information. It consists of building one probabilistic model for the positive and another one for the negative opinions. Then the test opinions are compared to both models and a decision and confidence measure are calculated. In order to reduce the complexity of the training corpus we first lemmatize the texts and we replace most named entities with wildcards. Opinum presents an accuracy above 81% for Spanish opinions in the financial products domain. In this work we discuss the most important factors that have an impact on the classification performance.", "targets": "Statistical sentiment analysis performance in Opinum"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c690ab333133453fa074fbb1fdc1c076", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A prediction market is a useful means of aggregating information about a future event. To function, the market needs a trusted entity who will verify the true outcome in the end.
Motivated by the recent introduction of decentralized prediction markets, we introduce a mechanism that allows for the outcome to be determined by the votes of a group of arbiters who may themselves hold stakes in the market. Despite the potential conflict of interest, we derive conditions under which we can incentivize arbiters to vote truthfully by using funds raised from market fees to implement a peer prediction mechanism. Finally, we investigate what parameter values could be used in a real-world implementation of our mechanism.", "targets": "Crowdsourced Outcome Determination in Prediction Markets"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-ee33400473714cefb9bf706b381dab9d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This article is about how the SP theory of intelligence and its realisation in the SP machine may, with advantage, be applied to the management and analysis of big data. The SP system\u2014introduced in the article and fully described elsewhere\u2014may help to overcome the problem of variety in big data: it has potential as a universal framework for the representation and processing of diverse kinds of knowledge (UFK), helping to reduce the diversity of formalisms and formats for knowledge and the different ways in which they are processed. It has strengths in the unsupervised learning or discovery of structure in data, in pattern recognition, in the parsing and production of natural language, in several kinds of reasoning, and more. It lends itself to the analysis of streaming data, helping to overcome the problem of velocity in big data. Central in the workings of the system is lossless compression of information: making big data smaller and reducing problems of storage and management. There is potential for substantial economies in the transmission of data, for big cuts in the use of energy in computing, for faster processing, and for smaller and lighter computers. The system provides a handle on the problem of veracity in big data, with potential to assist in the management of errors and uncertainties in data. It lends itself to the visualisation of knowledge structures and inferential processes. A high-parallel, open-source version of the SP machine would provide a means for researchers everywhere to explore what can be done with the system and to create new versions of it.", "targets": "Big Data and the SP Theory of Intelligence"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-359c0bb99f294a42b2a4c01c7739d43d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A rhetorical structure tree (RS tree) is a representation of discourse relations among elementary discourse units (EDUs). An RS tree is very useful to many text processing tasks employing relationships among EDUs such as text understanding, summarization, and question-answering. The Thai language, with its unique linguistic characteristics, requires a unique RS tree construction technique. This paper proposes an approach for Thai RS tree construction which consists of three major steps: EDU segmentation, Thai RS tree construction, and discourse relation (DR) identification.
Two hidden Markov models derived from grammatical rules are used to segment EDUs, a clustering technique with its similarity measure derived from Thai semantic rules is used to construct a Thai RS tree, and a decision tree whose features are extracted from the rules is used to determine the DR between EDUs. The proposed technique is evaluated using three Thai corpora. The results show Thai RS tree construction and DR identification effectiveness of 94.90% and 82.81%, respectively. Keywords: Thai Language, Elementary Discourse Unit, Rhetorical Structure Tree, Discourse Relation.", "targets": "THAI RHETORICAL STRUCTURE ANALYSIS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-49bbea4673a74c889842b1bd5578c269", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The article presents an introduction to Artificial Intelligence (AI) in a popular and informal, yet precise, form. It mainly addresses the computer-science aspects of the discipline, presenting various techniques used in AI systems and dividing them into symbolic and subsymbolic ones. The article concludes by presenting the ongoing debate on AI, in particular on the advantages and dangers that have been identified, ending with the author's opinion on the matter.", "targets": "Introduzione all\u2019Intelligenza Artificiale\u2217"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a7456a5143544a6a8cf0ca5341c936ce", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Techniques for plan recognition under uncertainty require a stochastic model of the plan-generation process. We introduce probabilistic state-dependent grammars (PSDGs) to represent an agent's plan-generation process. The PSDG language model extends probabilistic context-free grammars (PCFGs) by allowing production probabilities to depend on an explicit model of the planning agent's internal and external state. Given a PSDG description of the plan-generation process, we can then use inference algorithms that exploit the particular independence properties of the PSDG language to efficiently answer plan-recognition queries. The combination of the PSDG language model and inference algorithms extends the range of plan-recognition domains for which practical probabilistic inference is possible, as illustrated by applications in traffic monitoring and air combat.", "targets": "Probabilistic State-Dependent Grammars for Plan Recognition"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-25b1f16f5d094745a8f88edcdee36ecf", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a novel neural network model that learns POS tagging and graph-based dependency parsing jointly. Our model uses bidirectional LSTMs to learn feature representations shared for both POS tagging and dependency parsing tasks, thus handling the feature-engineering problem.
Our extensive experiments, on 19 languages from the Universal Dependencies project, show that our model outperforms the state-of-the-art neural network-based Stack-propagation model for joint POS tagging and transition-based dependency parsing, resulting in a new state of the art. Our code is open-source and available at: https://github.com/datquocnguyen/jPTDP.", "targets": "A Novel Neural Network Model for Joint POS Tagging and Graph-based Dependency Parsing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6e1a596513514995ac635edac126192f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Classes in natural images tend to follow long tail distributions. This is problematic when there are insufficient training examples for rare classes. This effect is emphasized in compound classes, involving the conjunction of several concepts, such as those appearing in action-recognition datasets. In this paper, we propose to address this issue by learning how to utilize common visual concepts which are readily available. We detect the presence of prominent concepts in images and use them to infer the target labels instead of using visual features directly, combining tools from vision and natural-language processing. We validate our method on the recently introduced HICO dataset, reaching a mAP of 31.54%, and on the Stanford40 Actions dataset, where the proposed method outperforms the current state-of-the-art and, combined with direct visual features, obtains an accuracy of 83.12%. Moreover, the method provides for each class a semantically meaningful list of keywords and relevant image regions relating it to its constituent concepts.", "targets": "Action Classification via Concepts and Attributes"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-504a0b67d33f45bcb867c5908d406a3b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Many classification problems involve data instances that are interlinked with each other, such as webpages connected by hyperlinks. Techniques for collective classification (CC) often increase accuracy for such data graphs, but usually require a fully-labeled training graph. In contrast, we examine how to improve the semi-supervised learning of CC models when given only a sparsely-labeled graph, a common situation. We first describe how to use novel combinations of classifiers to exploit the different characteristics of the relational features vs. the non-relational features. We also extend the ideas of label regularization to such hybrid classifiers, enabling them to leverage the unlabeled data to bias the learning process. We find that these techniques, which are efficient and easy to implement, significantly increase accuracy on three real datasets. In addition, our results explain conflicting findings from prior related studies.", "targets": "Semi-Supervised Collective Classification via Hybrid Label Regularization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7f0f693656904aedacb7602ad53ae4fd", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "We present a converged algorithm for Tikhonov regularized nonnegative matrix factorization (NMF). We specifically choose this regularization because Tikhonov regularized least squares (LS) is known to be preferable to conventional LS for solving linear inverse problems. Because an NMF problem can be decomposed into LS subproblems, Tikhonov regularized NMF can be expected to be a more appropriate approach to solving NMF problems. The algorithm is derived using additive update rules, which have been shown to have a convergence guarantee. We equip the algorithm with a mechanism to automatically determine the regularization parameters based on the L-curve, a well-known concept in the inverse problems community that is rather unknown in NMF research. The introduction of this algorithm thus solves two inherent problems in Tikhonov regularized NMF algorithm research, i.e., the convergence guarantee and the determination of regularization parameters.", "targets": "A Converged Algorithm for Tikhonov Regularized Nonnegative Matrix Factorization with Automatic Regularization Parameters Determination"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8ba78b37fe164e74b35715c2a8bee5ab", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce T-LESS, a new public dataset for estimating the 6D pose, i.e. translation and rotation, of texture-less rigid objects. The dataset features thirty industry-relevant objects with no significant texture and no discriminative color or reflectance properties. The objects exhibit symmetries and mutual similarities in shape and/or size. Compared to other datasets, a unique property is that some of the objects are parts of others. The dataset includes training and test images that were captured with three synchronized sensors, specifically a structured-light and a time-of-flight RGB-D sensor and a high-resolution RGB camera. There are approximately 39K training and 10K test images from each sensor. Additionally, two types of 3D models are provided for each object, i.e. a manually created CAD model and a semi-automatically reconstructed one. Training images depict individual objects against a black background. Test images originate from twenty test scenes having varying complexity, which increases from simple scenes with several isolated objects to very challenging ones with multiple instances of several objects and with a high amount of clutter and occlusion. The images were captured from a systematically sampled view sphere around the object/scene, and are annotated with accurate ground truth 6D poses of all modeled objects. Initial evaluation results indicate that the state of the art in 6D object pose estimation has ample room for improvement, especially in difficult cases with significant occlusion. The T-LESS dataset is available online at cmp.felk.cvut.cz/t-less.", "targets": "T-LESS: An RGB-D Dataset for 6D Pose Estimation of Texture-less Objects"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-95959e057e1146038dadcb312731d928", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Being able to automatically and quickly understand the user context during a session is a main issue for recommender systems. As a first step toward achieving that goal, we propose a model that observes in real time the diversity brought by each item relatively to a short sequence of consultations, corresponding to the recent user history. Our model has a complexity in constant time, and is generic since it can apply to any type of items within an online service (e.g. profiles, products, music tracks) and any application domain (e-commerce, social network, music streaming), as long as we have partial item descriptions. The observation of the diversity level over time allows us to detect implicit changes. In the long term, we plan to characterize the context, i.e. to find common features among a contiguous sub-sequence of items between two changes of context determined by our model. This will allow us to make context-aware and privacy-preserving recommendations, and to explain them to users. As this is an on-going research, the first step consists here in studying the robustness of our model while detecting changes of context. In order to do so, we use a music corpus of 100 users and more than 210,000 consultations (number of songs played in the global history). We validate the relevancy of our detections by finding connections between changes of context and events, such as ends of session. Of course, these events are a subset of the possible changes of context, since there might be several contexts within a session. We altered the quality of our corpus in several manners, so as to test the performances of our model when confronted with sparsity and different types of items. The results show that our model is robust and constitutes a promising approach. Keywords: User Modeling; Diversity; Context; Real-Time Analysis of Navigation Path; Recommender Systems", "targets": "Toward a Robust Diversity-Based Model to Detect Changes of Context"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-bb561696d28f4db59098b170301773cb", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents a novel communication-efficient parallel belief propagation (CE-PBP) algorithm for training latent Dirichlet allocation (LDA). Based on the synchronous belief propagation (BP) algorithm, we first develop a parallel belief propagation (PBP) algorithm on the parallel architecture. Because the extensive communication delay often causes a low efficiency of parallel topic modeling, we further use Zipf\u2019s law to reduce the total communication cost in PBP. Extensive experiments on different data sets demonstrate that CE-PBP achieves a higher topic modeling accuracy and reduces the communication cost by more than 80% compared with the state-of-the-art parallel Gibbs sampling (PGS) algorithm.", "targets": "Communication-Efficient Parallel Belief Propagation for Latent Dirichlet Allocation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d9d4e20829734dde80d650caf8237987", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "Multivariate time series forecasting is an important machine learning problem across many domains, including predictions of solar plant energy output, electricity consumption, and traffic jam situations. Temporal data arising in these real-world applications often involve a mixture of long-term and short-term patterns, for which traditional approaches such as Autoregressive models and Gaussian Process may fail. In this paper, we propose a novel deep learning framework, namely the Long- and Short-term Time-series network (LSTNet), to address this open challenge. LSTNet uses the Convolutional Neural Network (CNN) to extract short-term local dependency patterns among variables, and the Recurrent Neural Network (RNN) to discover long-term patterns and trends. In our evaluation on real-world data with complex mixtures of repetitive patterns, LSTNet achieved significant performance improvements over those of several state-of-the-art baseline methods. The dataset and experiment code are both uploaded to GitHub.", "targets": "Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a9538d9604a148018d327deb2c23bb17", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this work we introduce a conditional accelerated lazy stochastic gradient descent algorithm with an optimal number of calls to a stochastic first-order oracle and convergence rate O(1/\u03b5^2), improving over the projection-free, Online Frank-Wolfe based stochastic gradient descent of Hazan and Kale [2012] with convergence rate O(1/\u03b5^4).", "targets": "Conditional Accelerated Lazy Stochastic Gradient Descent"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-62cb6500eed246a49e05f3f2dd8a99a6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In the problem of edge sign prediction, we are given a directed graph (representing a social network), and our task is to predict the binary labels of the edges (i.e., the positive or negative nature of the social relationships). Many successful heuristics for this problem are based on the troll-trust features, estimating at each node the fraction of outgoing and incoming positive/negative edges. We show that these heuristics can be understood, and rigorously analyzed, as approximators to the Bayes optimal classifier for a simple probabilistic model of the edge labels. We then show that the maximum likelihood estimator for this model approximately corresponds to the predictions of a Label Propagation algorithm run on a transformed version of the original social graph. Extensive experiments on a number of real-world datasets show that this algorithm is competitive against state-of-the-art classifiers in terms of both accuracy and scalability. Finally, we show that troll-trust features can also be used to derive online learning algorithms which have theoretical guarantees even when edges are adversarially labeled.", "targets": "On the Troll-Trust Model for Edge Sign Prediction in Social Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-edd2af67a0bd46c0bca09ffb1422b8a8", "definition": "In this task, you are given a part of an article.
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study response selection for multi-turn conversation in retrieval-based chatbots. Existing work either ignores relationships among utterances, or loses important contextual information by eventually matching a response with a highly abstract context vector. We propose a new session-based matching model to address both problems. The model first matches a response with each utterance on multiple granularities, and distills important matching information from each pair as a vector with convolution and pooling operations. The vectors are then accumulated in a chronological order through a recurrent neural network (RNN) which models the relationships among the utterances. The final matching score is calculated with the hidden states of the RNN. Empirical study on two public data sets shows that our model can significantly outperform the state-of-the-art methods for response selection in multi-turn conversation.", "targets": "Sequential Match Network: A New Architecture for Multi-turn Response Selection in Retrieval-based Chatbots"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9361f6b4a092423eb16b0d8be5f43345", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Patient time series classification faces challenges in high degrees of dimensionality and missingness. In light of patient similarity theory, this study explores effective temporal feature engineering and reduction, missing value imputation, and change point detection methods that can afford similarity-based classification models with desirable accuracy enhancement. We select a piecewise aggregation approximation method to extract fine-grain temporal features and propose a minimalist method to impute missing values in temporal features. For dimensionality reduction, we adopt a gradient descent search method for feature weight assignment. We propose new patient status and directional change definitions based on medical knowledge or clinical guidelines about the value ranges for different patient status levels, and develop a method to detect change points indicating positive or negative patient status changes. We evaluate the effectiveness of the proposed methods in the context of early Intensive Care Unit mortality prediction. The evaluation results show that the k-Nearest Neighbor algorithm that incorporates methods we select and propose significantly outperforms the relevant benchmarks for early ICU mortality prediction. This study makes contributions to time series classification and early ICU mortality prediction via identifying and enhancing temporal feature engineering and reduction methods for similarity-based time series classification.", "targets": "Leveraging Time Series Data in Similarity Based Healthcare Predictive Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fff6584e160e4c848a6a5894b07abac7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Mean-field variational inference is a method for approximate Bayesian posterior inference. It approximates a full posterior distribution with a factorized set of distributions by maximizing a lower bound on the marginal likelihood.
This requires the ability to integrate a sum of terms in the log joint likelihood using this factorized distribution. Often not all integrals are in closed form, which is typically handled by using a lower bound. We present an alternative algorithm based on stochastic optimization that allows for direct optimization of the variational lower bound. This method uses control variates to reduce the variance of the stochastic search gradient, in which existing lower bounds can play an important role. We demonstrate the approach on two non-conjugate models: logistic regression and an approximation to the HDP.", "targets": "Variational Bayesian Inference with Stochastic Search"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-af8d33c36b09432a89a728ec20412026", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM\u2019s BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a strong phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which beats the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM\u2019s performance markedly, because doing so introduced many short-term dependencies between the source and the target sentence which made the optimization problem easier.", "targets": "Sequence to Sequence Learning with Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5cf927a8fcf649eba162baf82b1e1f9e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In Recommender Systems research, algorithms are often characterized as either Collaborative Filtering (CF) or Content Based (CB). CF algorithms are trained using a dataset of user explicit or implicit preferences while CB algorithms are typically based on item profiles. These approaches harness very different data sources hence the resulting recommended items are generally also very different. This paper presents a novel model that serves as a bridge from items' content into their CF representations.
We introduce a multiple-input deep regression model to predict the CF latent embedding vectors of items based on their textual description and metadata. We showcase the effectiveness of the proposed model by predicting the CF vectors of movies and apps based on their textual descriptions. Finally, we show that the model can be further improved by incorporating metadata such as the movie release year and tags which contribute to a higher accuracy.", "targets": "Microsoft Word - cb2cf_arxiv.docx"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9100e902fbfd41e8819b8dfd6ab52618", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Extraction of Electrocardiography (ECG or EKG) signals of mother and baby is a challenging task, because a single device is used and it receives a mixture of multiple heart beats. In this paper, we would like to design a filter to separate the signals from each other.", "targets": "Electrocardiography Separation of Mother and Baby"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3b65fa931c2a4db388541d7f11999e4d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper describes a new kind of knowledge representation and mining system which we are calling the Semantic Knowledge Graph. At its heart, the Semantic Knowledge Graph leverages an inverted index, along with a complementary uninverted index, to represent nodes (terms) and edges (the documents within intersecting postings lists for multiple terms/nodes). This provides a layer of indirection between each pair of nodes and their corresponding edge, enabling edges to materialize dynamically from underlying corpus statistics. As a result, any combination of nodes can have edges to any other nodes materialized and scored to reveal latent relationships between the nodes. This provides numerous benefits: the knowledge graph can be built automatically from a real-world corpus of data, new nodes along with their combined edges can be instantly materialized from any arbitrary combination of preexisting nodes (using set operations), and a full model of the semantic relationships between all entities within a domain can be represented and dynamically traversed using a highly compact representation of the graph. Such a system has widespread applications in areas as diverse as knowledge modeling and reasoning, natural language processing, anomaly detection, data cleansing, semantic search, analytics, data classification, root cause analysis, and recommendation systems. The main contribution of this paper is the introduction of a novel system, the Semantic Knowledge Graph, which is able to dynamically discover and score interesting relationships between any arbitrary combination of entities (words, phrases, or extracted concepts) through dynamically materializing nodes and edges from a compact graphical representation built automatically from a corpus of data representative of a knowledge domain.
The source code for our Semantic Knowledge Graph implementation is being published along with this paper to facilitate further research and extensions of this work.", "targets": "The Semantic Knowledge Graph: A compact, auto-generated model for real-time traversal and ranking of any relationship within a domain"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-02260778e634461fab5ee207358e9e05", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper investigates a new method for improving the learning algorithm of the Mixture of Experts (ME) model using a hybrid of Modified Cuckoo Search (MCS) and Conjugate Gradient (CG) as a second-order optimization technique. The CG technique is combined with the Back-Propagation (BP) algorithm to yield a much more efficient learning algorithm for the ME structure. In addition, the experts and gating networks in the enhanced model are replaced by CG-based Multi-Layer Perceptrons (MLPs) to provide faster and more accurate learning. CG depends considerably on the initial connection weights of the Artificial Neural Network (ANN), so a metaheuristic algorithm, the Modified Cuckoo Search, is applied in order to select the optimal weights. The performance of the proposed method is compared with Gradient Descent Based ME (GDME) and Conjugate Gradient Based ME (CGME) on classification and regression problems. The experimental results show that the hybrid MCS and CG based ME (MCS-CGME) has faster convergence and better performance on the utilized benchmark data sets.", "targets": "Extended Mixture of MLP Experts by Hybrid of Conjugate Gradient Method and Modified Cuckoo Search"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-fc218dcd27a54ffc943c766669f58115", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In many robotic applications, some aspects of the system dynamics can be modeled accurately while others are difficult to obtain or model. We present a novel reinforcement learning (RL) method for continuous state and action spaces that learns with partial knowledge of the system and without active exploration. It solves linearly-solvable Markov decision processes (L-MDPs), which are well suited for continuous state and action spaces, based on an actor-critic architecture. Compared to previous RL methods for L-MDPs and path integral methods which are model based, the actor-critic learning does not need a model of the uncontrolled dynamics and, importantly, transition noise levels; however, it requires knowing the control dynamics for the problem. We evaluate our method on two synthetic test problems, and one real-world problem in simulation and using real traffic data. Our experiments demonstrate improved learning and policy performance.", "targets": "Actor-Critic for Linearly-Solvable Continuous MDP with Partially Known Dynamics"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6843132fe1bc4d8ba815d0b7a07e380d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
Preferred headlines are under fifteen words.", "inputs": "This paper proposes to use probabilistic model checking to synthesize optimal robot policies in multi-tasking autonomous systems that are subject to human-robot interaction. Given the convincing empirical evidence that human behavior can be related to reinforcement models, we take as input a well-studied Q-table model of the human behavior for flexible scenarios. We first describe an automated procedure to distill a Markov decision process (MDP) for the human in an arbitrary but fixed scenario. The distinctive issue is that \u2013 in contrast to existing models \u2013 under-specification of the human behavior is included. Probabilistic model checking is used to predict the human\u2019s behavior. Finally, the MDP model is extended with a robot model. Optimal robot policies are synthesized by analyzing the resulting two-player stochastic game. Experimental results with a prototypical implementation using PRISM show promising results.", "targets": "Probabilistic Model Checking for Complex Cognitive Tasks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7d100c3627184b499663dd218b77f389", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Multilabel classification is a relatively recent subfield of machine learning. Unlike the classical approach, where instances are labeled with only one category, in multilabel classification an arbitrary number of categories is chosen to label an instance. Due to the problem complexity (the solution is one among an exponential number of alternatives), a common solution (the binary method) is frequently used, learning a binary classifier for every category and combining them all afterwards. The assumption taken in this solution is not realistic, and in this work we give examples where the decisions for all the labels are not taken independently, and thus a supervised approach should learn the existing relationships among categories to make a better classification. Therefore, we show here a generic methodology that can improve the results obtained by a set of independent probabilistic binary classifiers, by using a combination procedure with a classifier trained on the co-occurrences of the labels. We show an exhaustive experimentation in three different standard corpora of labeled documents (Reuters-21578, Ohsumed-23 and RCV1), which presents noticeable improvements in all of them when our methodology is used with three probabilistic base classifiers.", "targets": "A probabilistic methodology for multilabel classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b50d8b468ef8482681fe921192f64df6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Nearest neighbor methods are a popular class of nonparametric estimators with several desirable properties, such as adaptivity to different distance scales in different regions of space. Prior work on convergence rates for nearest neighbor classification has not fully reflected these subtle properties. We analyze the behavior of these estimators in metric spaces and provide finite-sample, distribution-dependent rates of convergence under minimal assumptions.
As a by-product, we are able to establish the universal consistency of nearest neighbor in a broader range of data spaces than was previously known. We illustrate our upper and lower bounds by introducing smoothness classes that are customized for nearest neighbor classification.", "targets": "Rates of Convergence for Nearest Neighbor Classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d694c8f0a7ef4aa28ed3c72b52b3e076", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We analyze the structure of the state space of chess by means of transition path sampling Monte Carlo simulation. Based on the typical number of moves required to transpose a given configuration of chess pieces into another, we conclude that the state space consists of several pockets between which transitions are rare. Skilled players explore an even smaller subset of positions that populate some of these pockets only very sparsely. These results suggest that the usual measures to estimate both the size of the state space and the size of the tree of legal moves are not unique indicators of the complexity of the game, but that topological considerations are equally important. Chess is a two-player board game with a small set of rules according to which pieces can be moved. It belongs to the class of games with perfect information that have not been solved yet, due to the sheer size of its state space. The computerized analysis of chess started with a seminal paper by Claude Shannon in 1950 [1], and since about the year 2000 computer programs can regularly beat top-level human players [2]. They do so by employing well-tailored heuristic evaluation functions for the game\u2019s states, which allow one to short-cut the exploration of the vast game tree of possible moves. In this context, chess is often compared to Go, where computers only very recently started to match the performance of human champions [3]. The difference is usually attributed to the different sizes of the games\u2019 state spaces: the game-tree complexity of Go exceeds that of chess by some 200 orders of magnitude. However, while size is an important factor in determining the complexity of a game, the topology of the state space may be equally important. Intuitively, the different kinds of moves performed by different chess pieces impose a highly nontrivial (and directed) topology. It is not at all straightforward to establish whether a given point in the state space is reachable from another one by a sequence of legal moves. We thus face an interesting sampling problem: given two chess configurations, can one establish whether they are connected, i.e., whether there exists a sequence of legal moves that transforms the first configuration into the second? Furthermore, what is the typical distance (in plies, or half moves) between such configurations? Clearly, direct enumeration or standard Monte Carlo sampling are out of reach: after each ply, the game tree is estimated to branch into 30 to 35 subtrees [1]. Here we demonstrate that it is possible to analyze the topological structure of the state space of chess by stochastic-process rare-event sampling (SPRES) [4]. SPRES is a transition-path Monte Carlo sampling scheme that works in full non-equilibrium conditions, where the dynamics is neither stationary nor reversible.
(Our analysis of chess also serves to demonstrate the versatility and power of SPRES as a technique that applies to abstract nonphysical dynamics.) Combining SPRES with an optimized chess-move generator [5], we estimate the distribution of path lengths between both randomly generated configurations and those encountered in games played by humans. Analyzing these distributions in terms of random-graph theory, we conjecture that the state space of chess consists of multiple distinct pockets, interconnected by relatively few paths. These pockets are only very sparsely populated by the states that are relevant for skilled play. Previous statistical-physics analyses of chess have focused mostly on the distribution of moves in human gameplay, or on games played by computer chess engines. For example, the popularity of opening sequences follows a power-law distribution according to Zipf\u2019s law [6] (in this context, Go is rather similar [7]), highly biased by the skill of the players involved [8, 9]. Optimal play (in the sense that moves are evaluated favorably by modern computer chess engines) has also been analyzed in the language of free-energy landscapes [10]. Our approach is entirely different: we consider the set of all legal moves, irrespective of their engine evaluation, in order to establish the connectivity of the state space of chess. Within this space, we then also study the relative size and structure of the subset of positions encountered in games played by chess masters. The state of a chess game at any point in time is entirely described by the board configuration (the positions of all chess pieces), a small set of additional variables that track the possibility of special moves (castling or en-passant capture) and the information regarding which player\u2019s turn it is. The set of possible states is given by all states that involve up to 16 chess pieces per color (there may be fewer due to captures, and the number of pieces and pawns may change due to pawn promotions). Only a subset of all possible states is legal, as for example, the two kings cannot be in check at the same time. Of interest in the following are states that are legal and also accessible from the given initial configuration. As an example of an inaccessible but legal state, consider the case where the position of a bishop differs from its initial position, while the positions of the pawns do not. This state is inaccessible, because pawns are initially placed in front of the other pieces of their colour, their moves are always irreversible and the other pieces (apart from the knights) cannot jump over the pawns. Thus, although the state is legal, it cannot be reached by legal moves. To sample the structure of the state space, we generate sequences of accessible states by randomly drawing moves evenly from all legal moves (Monte Carlo, MC). Most of these states entail dramatic disadvantages for at least one side. Therefore, the set of states encountered in optimal-strategy play is vastly smaller than the set we sample. As a proxy for these unknown optimal states, we use database (DB) states extracted from a database of about two million human-played games [11].
In both cases (MC and DB), we then pick pairs of states randomly and establish their connectivity with respect to the game tree by all legal (MC) moves, i.e., irrespective of whether the connecting pathway contains unfavorable positions in terms of gameplay. In the vicinity of the starting configuration, many randomly drawn pairs of positions are necessarily disconnected, since pawns only move forward and many of the pieces still have to gain freedom to move. At the other end of the game, mating positions act as absorbing states. And in addition, the MC dynamics has a set of absorbing states where only the kings are left on the board. In order to sample states that reflect the intrinsic topology of the state space, we thus restrict the discussion to pairs of states drawn from a depth between 5 and 50 plies into the game. This corresponds loosely to chess players\u2019 notion of the middle game. Inside this window, we did not find an obvious correlation between the ply-depth from which a pair of states was drawn and the separation between them. We sample the pathways between states by means of SPRES [4]. In this method, interfaces in state space are defined by constant values of a scalar reaction coordinate, which quantifies the progress made from one state to the other. Then adaptive sampling of dynamic pathways is carried out such that a constant number of forward transitions between these interfaces is obtained. Once the sampling is completed, observables can be averaged over the ensemble of sampled pathways. In the case of chess, we are in particular interested in the length (number of plies) of the shortest path between two configurations. While the choice of an optimal reaction coordinate is a topic in its own right [10], we make use of the fact that SPRES will sample paths faithfully even for non-optimal choices [4]. As the reaction coordinate, we chose a Euclidean geometric measure of distance from the target configuration. For each piece, the geometric distance is calculated using a metric that is adapted to the type of moves performed by that piece: Chebyshev metric for queens, kings, and bishops, the ceil of half the Chebyshev distance for knights, the Manhattan distance for rooks, and the rank separation for pawns. (For details, see Ref. [5]). Pairs are discarded as disconnected if they are farther apart than 120 plies; this approximation is adapted to the typical length of real chess games. Trivially disconnected pairs are discarded by an initial test based on the reaction coordinate, the pawn structure and the piece count. For the estimation of path lengths, 4000 (3000) pairs generated from MC (DB) that have passed this test have been sampled. Figure 1 shows the histogram of path lengths between those randomly chosen pairs that are connected according to SPRES (corresponding to 79% of all randomly drawn MC pairs and 85% of all pairs drawn from the DB). For pairs generated via MC, the path-length distribution has two distinct contributions, one with a peak at \u2113_1 \u2248 20 plies, and a smaller one at \u2113_2 \u2248 45 plies. The path-length distribution between pairs sampled from the database is biased to smaller path lengths and has only one prominent peak at a path length slightly below \u2113_1 (\u2248 18 plies). A tail towards large distances is still seen as a remnant of the second peak found in the MC distribution. Note that the paths found by SPRES for the DB pairs almost certainly pass through non-DB states (i.e. states that are usually not found in games played by humans).
A typical engine evaluation function (Stockfish [13]) displays huge fluctuations along the SPRES paths, indicating that these paths will probably never be chosen by skilled human players. The results shown in Fig. 1 reveal that real chess games take place in a subspace that is much more tightly connected than the space of accessible states. The double-peaked histogram suggests a \u201cblob\u201d structure (see sketch", "targets": "epl draft Sequencing Chess"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-41bbeb6cd54b48d98dcf73d795c3befc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a structured prediction architecture for images centered around deep recurrent neural networks. The proposed network, called ReSeg, is based on the recently introduced ReNet model for object classification. We modify and extend it to perform object segmentation, noting that the avoidance of pooling can greatly simplify pixel-wise tasks for images. The ReSeg layer is composed of four recurrent neural networks that sweep the image horizontally and vertically in both directions, along with a final layer that expands the prediction back to the original image size. ReSeg combines multiple ReSeg layers with several possible input layers as well as a final layer which expands the prediction back to the original image size, making it suitable for a variety of structured prediction tasks. We evaluate ReSeg on the specific task of object segmentation with three widely-used image segmentation datasets, namely Weizmann Horse, Fashionista and Oxford Flower. The results suggest that ReSeg can challenge the state of the art in object segmentation, and may have further applications in structured prediction at large.", "targets": "RESEG: A RECURRENT NEURAL NETWORK FOR OBJECT SEGMENTATION"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-6543fa794fe84368bdc1a309765b153b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Artificial neural networks are powerful pattern classifiers; however, they have been surpassed in accuracy by methods such as support vector machines and random forests that are also easier to use and faster to train. Backpropagation, which is used to train artificial neural networks, suffers from the herd effect problem, which leads to long training times and limits classification accuracy. We use the disjunctive normal form and approximate the boolean conjunction operations with products to construct a novel network architecture. The proposed model can be trained by minimizing an error function and it allows an effective and intuitive initialization which solves the herd-effect problem associated with backpropagation. This leads to state-of-the-art classification accuracy and fast training times. In addition, our model can be jointly optimized with convolutional features in a unified structure, leading to state-of-the-art results on computer vision problems with fast convergence rates. A GPU implementation of LDNN with optional convolutional features is also available.", "targets": "Disjunctive Normal Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0e937c59770c492689628ef6d4db4117", "definition": "In this task, you are given a part of an article.
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We propose a new training method for a feedforward neural network whose activation functions have the geometric contraction property. The method consists of constructing a new functional that is less nonlinear in comparison with the classical functional by removing the nonlinearity of the activation functions from the output layer. We validate this new method by a series of experiments that show an improved learning speed and also a better classification error. MSC: 92B20, 68T05", "targets": "A New Training Method for Feedforward Neural Networks Based on Geometric Contraction Property of Activation Functions"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-39209dad3a0b426992c84e1f0969555c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "First-order factoid question answering assumes that the question can be answered by a single fact in a knowledge base (KB). While this does not seem like a challenging task, many recent attempts that apply either complex linguistic reasoning or deep neural networks achieve 35%\u201365% accuracy on benchmark sets. Our approach formulates the task as two machine learning problems: detecting the entities in the question, and classifying the question as one of the relation types in the KB. Based on this assumption of the structure, our simple yet effective approach trains two recurrent neural networks to outperform the state of the art by significant margins \u2014 relative improvement reaches 16% for WebQuestions, and surpasses 38% for SimpleQuestions.", "targets": "Simple and Effective Question Answering with Recurrent Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-199a2932f7854ecd9e66fae49a782817", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Natural language generation plays a critical role in any spoken dialogue system. We present a new approach to natural language generation using recurrent neural networks in an encoder-decoder framework. In contrast with previous work, our model uses both lexicalized and delexicalized versions of slot-value pairs for each dialogue act. This allows our model to learn from all available data, rather than being restricted to learning only from delexicalized slot-value pairs. We show that this helps our model generate more natural sentences with better grammar. We further improve our model\u2019s performance by initializing its weights from a pretrained language model. Human evaluation of our best-performing model indicates that it generates sentences which users find more natural and appealing.", "targets": "Natural Language Generation in Dialogue using Lexicalized and Delexicalized Data"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-609bec9b85a3467c8659c095d91f3ce6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "We propose a multi-agent algorithm able to automatically discover relevant regularities in a given dataset, determining at the same time the set of configurations of the adopted parametric dissimilarity measure yielding compact and separated clusters. Each agent operates independently by performing a Markovian random walk on a suitable weighted graph representation of the input dataset. Such a weighted graph representation is induced by the specific parameter configuration of the dissimilarity measure adopted by the agent, which searches and takes decisions autonomously for one cluster at a time. Results show that the algorithm is able to discover parameter configurations that yield a consistent and interpretable collection of clusters. Moreover, we demonstrate that our algorithm shows comparable performance with other similar state-of-the-art algorithms when facing specific clustering problems.", "targets": "An Agent-Based Algorithm exploiting Multiple Local Dissimilarities for Clusters Mining and Knowledge Discovery"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8825b3e340a642828e6c49ec147392c0", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The Microsoft Kinect camera and its skeletal tracking capabilities have been embraced by many researchers and commercial developers in various applications of real-time human movement analysis. In this paper, we evaluate the accuracy of the human kinematic motion data in the first and second generation of the Kinect system, and compare the results with an optical motion capture system. We collected motion data in 12 exercises for 10 different subjects and from three different viewpoints. We report on the accuracy of the joint localization and bone length estimation of Kinect skeletons in comparison to the motion capture. We also analyze the distribution of the joint localization offsets by fitting a mixture of Gaussian and uniform distribution models to determine the outliers in the Kinect motion data. Our analysis shows that overall Kinect 2 has more robust and more accurate tracking of human pose as compared to Kinect 1.", "targets": "Evaluation of Pose Tracking Accuracy in the First and Second Generations of Microsoft Kinect"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5f70a4964b8d4d0288e0a501ce92f9e6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Multimedia or spoken content presents more attractive information than plain text content, but the former is more difficult to display on a screen and be selected by a user. As a result, accessing large collections of the former is much more difficult and time-consuming than the latter for humans. 
It\u2019s therefore highly attractive to develop machines which can automatically understand spoken content and summarize the key information for humans to browse over. In this endeavor, a new task of machine comprehension of spoken content was proposed recently. The initial goal was defined as the listening comprehension test of TOEFL, a challenging academic English examination for English learners whose native languages are not English. An Attention-based Multi-hop Recurrent Neural Network (AMRNN) architecture was also proposed for this task, which considered only the sequential relationship within the speech utterances. In this paper, we propose a new Hierarchical Attention Model (HAM), which constructs a multi-hopped attention mechanism over tree-structured rather than sequential representations for the utterances. Improved comprehension performance, robust with respect to ASR errors, was obtained.", "targets": "HIERARCHICAL ATTENTION MODEL FOR IMPROVED MACHINE COMPREHENSION OF SPOKEN CONTENT"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7d0eab86934c42e8be08bd5722e031e6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "One popular method for dealing with large-scale data sets is sampling. For example, by using the empirical statistical leverage scores as an importance sampling distribution, the method of algorithmic leveraging samples and rescales rows/columns of data matrices to reduce the data size before performing computations on the subproblem. This method has been successful in improving computational efficiency of algorithms for matrix problems such as least-squares approximation, least absolute deviations approximation, and low-rank matrix approximation. Existing work has focused on algorithmic issues such as worst-case running times and numerical issues associated with providing high-quality implementations, but none of it addresses statistical aspects of this method. In this paper, we provide a simple yet effective framework to evaluate the statistical properties of algorithmic leveraging in the context of estimating parameters in a linear regression model with a fixed number of predictors. In particular, for several versions of leverage-based sampling, we derive results for the bias and variance, both conditional and unconditional on the observed data. We show that from the statistical perspective of bias and variance, neither leverage-based sampling nor uniform sampling dominates the other. This result is particularly striking, given the well-known result that, from the algorithmic perspective of worst-case analysis, leverage-based sampling provides uniformly superior worst-case algorithmic results, when compared with uniform sampling. Based on these theoretical results, we propose and analyze two new leveraging algorithms: one constructs a smaller least-squares problem with \u201cshrinked\u201d leverage scores (SLEV), and the other solves a smaller and unweighted (or biased) least-squares problem (LEVUNW). A detailed empirical evaluation of existing leverage-based methods as well as these two new methods is carried out on both synthetic and real data sets. The empirical results indicate that our theory is a good predictor of practical performance of existing and new leverage-based algorithms and that the new algorithms achieve improved performance. 
For example, with the same computation reduction as in the original algorithmic leveraging approach, our proposed SLEV typically leads to improved biases and variances both unconditionally and conditionally (on the observed data), and our proposed LEVUNW typically yields improved unconditional biases and variances.", "targets": "A Statistical Perspective on Algorithmic Leveraging"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-10c63994fab04fa8818776c084743a9c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Recent developments show that Multiply Sectioned Bayesian Networks (MSBNs) can be used for diagnosis of natural systems as well as for model-based diagnosis of artificial systems. They can be applied to single-agent oriented reasoning systems as well as multi-agent distributed reasoning systems. Belief propagation between a pair of subnets plays a central role in maintenance of global consistency in an MSBN. This paper studies the operation UpdateBelief, presented originally with MSBNs, for inter-subnet propagation. We analyze how the operation achieves its intended functionality, which provides hints for improving its efficiency. New versions of UpdateBelief are then defined that reduce the computation time for inter-subnet propagation. One of them is optimal in the sense that the minimum amount of computation for coordinating multi-linkage belief propagation is required. The optimization problem is solved through the solution of a graph-theoretic problem: the minimum weight open tour in a tree.", "targets": "Optimization of Inter-Subnet Belief Updating in Multiply Sectioned Bayesian Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0dcbe3e429cf48f4a2e613ee33ddb604", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Data quality is fundamentally important to ensure the reliability of data for stakeholders to make decisions. In real-world applications, such as scientific exploration of extreme environments, it is unrealistic to require raw data collected to be perfect. As data miners, when it is infeasible to physically know the why and the how in order to clean up the data, we propose to seek the intrinsic structure of the signal to identify the common factors of multivariate data. Using our new data-driven learning method\u2014the common-factor data cleaning approach, we address an interdisciplinary challenge on multivariate data cleaning when complex external impacts appear to interfere with multiple data measurements. Existing data analyses typically process one signal measurement at a time without considering the associations among all signals. We analyze all signal measurements simultaneously to find the hidden common factors that drive all measurements to vary together, but not as a result of the true data measurements. 
We use common factors to reduce the variations in the data without changing the base mean level of the data to avoid altering the physical meaning. We have reanalyzed the NASA Mars Phoenix mission data used in the leading effort by Kounaves\u2019s team (lead scientist for the wet chemistry experiment on the Phoenix) [1, 2] with our proposed method to show the resulting differences. We demonstrate that this new common-factor method successfully helps reduce systematic noise without a definitive understanding of the source and without degrading the physical meaning of the signal.", "targets": "A Common-Factor Approach for Multivariate Data Cleaning with an Application to Mars Phoenix Mission Data"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d161509d5dcb41a6a73f55e2c995593f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a method for learning treewidth-bounded Bayesian networks from data sets containing thousands of variables. Bounding the treewidth of a Bayesian network greatly reduces the complexity of inferences. Yet, being a global property of the graph, it considerably increases the difficulty of the learning process. We propose a novel algorithm for this task, able to scale to large domains and large treewidths. Our novel approach consistently outperforms the state of the art on data sets with up to ten thousand variables.", "targets": "Learning Bounded Treewidth Bayesian Networks with Thousands of Variables"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0be4b9fe39bd424d9f1401aa371df0a6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper describes a novel approach to learning term-weighting schemes (TWSs) in the context of text classification. In text mining, a TWS determines the way in which documents will be represented in a vector space model, before applying a classifier. Whereas acceptable performance has been obtained with standard TWSs (e.g., Boolean and term-frequency schemes), the definition of TWSs has traditionally been an art. Further, it is still a difficult task to determine what is the best TWS for a particular problem, and it is not yet clear whether better schemes than those currently available can be generated by combining known TWSs. We propose in this article a genetic program that aims at learning effective TWSs that can improve the performance of current schemes in text classification. The genetic program learns how to combine a set of basic units to give rise to discriminative TWSs. We report an extensive experimental study comprising data sets from thematic and non-thematic text classification as well as from image classification. Our study shows the validity of the proposed method; in fact, we show that TWSs learned with the genetic program outperform traditional schemes and other TWSs proposed in recent works. 
Further, we show that TWSs learned from a specific domain can be effectively used for other tasks.", "targets": "Term-Weighting Learning via Genetic Programming for Text Classification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c995f0f775c44f7a969880335fbdd19b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The paper describes the refinement algorithm for the Calculus of (Co)Inductive Constructions (CIC) implemented in the interactive theorem prover Matita. The refinement algorithm is in charge of giving a meaning to the terms, types and proof terms directly written by the user or generated by using tactics, decision procedures or general automation. The terms are written in an \u201cexternal syntax\u201d meant to be user-friendly that allows omission of information, untyped binders and a certain liberal use of user-defined sub-typing. The refiner modifies the terms to obtain related well typed terms in the internal syntax understood by the kernel of the ITP. In particular, it acts as a type inference algorithm when all the binders are untyped. The proposed algorithm is bi-directional: given a term in external syntax and a type expected for the term, it propagates as much typing information as possible towards the leaves of the term. Traditional mono-directional algorithms, instead, proceed in a bottom-up way by inferring the type of a sub-term and comparing (unifying) it with the type expected by its context only at the end. We propose some novel bi-directional rules for CIC that are particularly effective. Among the benefits of bi-directionality we have better error message reporting and better inference of dependent types. Moreover, thanks to bi-directionality, the coercion system for sub-typing is more effective and type inference generates simpler unification problems that are more likely to be solved by the inherently incomplete higher-order unification algorithms implemented. Finally we introduce in the external syntax the notion of vector of placeholders that enables omitting an arbitrary number of arguments at once. Vectors of placeholders allow a trivial implementation of implicit arguments and greatly simplify the implementation of primitive and simple tactics. 1998 ACM Subject Classification: D.3.1, F.3.0.", "targets": "A BI-DIRECTIONAL REFINEMENT ALGORITHM FOR THE CALCULUS OF (CO)INDUCTIVE CONSTRUCTIONS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-74be15d16a1f407cb3581d9c882e24cf", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Global optimization of the energy consumption of dual-power-source vehicles such as hybrid electric vehicles, plug-in hybrid electric vehicles, and plug-in fuel cell electric vehicles requires knowledge of the complete route characteristics at the beginning of the trip. One of the main characteristics is the vehicle speed profile across the route. The profile will translate directly into energy requirements for a given vehicle. However, the vehicle speed that a given driver chooses will vary from driver to driver and from time to time, and may be slower, equal to, or faster than the average traffic flow. If the specific driver speed profile can be predicted, the energy usage can be optimized across the route chosen. 
The purpose of this paper is to research the application of Deep Learning techniques to this problem to identify, at the beginning of a drive cycle, the driver-specific vehicle speed profile for an individual driver\u2019s repeated drive cycle, which can be used in an optimization algorithm to minimize the amount of fossil fuel energy used during the trip. Keywords\u2014Deep Learning, Stacked Auto Encoders, Neural Networks, Traffic Prediction", "targets": "Vehicle Speed Prediction using Deep Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-37dafd8efcc74dcca7330b6a316e25ff", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In earlier work, we introduced flexible inference and decision-theoretic metareasoning to address the intractability of normative inference. Here, rather than pursuing the task of computing beliefs and actions with decision models composed of distinctions about uncertain events, we examine methods for inferring beliefs about mathematical truth before an automated theorem prover completes a proof. We employ a Bayesian analysis to update belief in truth, given theorem-proving progress, and show how decision-theoretic methods can be used to determine the value of continuing to deliberate versus taking immediate action in time-critical situations.", "targets": "Studies of Theorem Proving under Limited Resources"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-988ac8f85a374986a79266741e52daff", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Characterizing relationships between people is fundamental for the understanding of narratives. In this work, we address the problem of inferring the polarity of relationships between people in narrative summaries. We formulate the problem as a joint structured prediction for each narrative, and present a model that combines evidence from linguistic and semantic features, as well as features based on the structure of the social community in the text. We also provide a clustering-based approach that can exploit regularities in narrative types, e.g., learn an affinity for love-triangles in romantic stories. On a dataset of movie summaries from Wikipedia, our structured models provide more than a 30% error reduction over a competitive baseline that considers pairs of characters in isolation.", "targets": "Inferring Interpersonal Relations in Narrative Summaries"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e8e446ef2ee444b8a5b58cb9aa912b07", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Real estate appraisal, which is the process of estimating the price for real estate properties, is crucial for both buyers and sellers as the basis for negotiation and transaction. Traditionally, the repeat sales model has been widely adopted to estimate real estate price. However, it depends on the design and calculation of a complex economics-related index, which is challenging to estimate accurately. Today, real estate brokers provide easy access to detailed online information on real estate properties to their clients. 
We are interested in estimating the real estate price from these large amounts of easily accessed data. In particular, we analyze the prediction power of online house pictures, which is one of the key factors for online users to make a potential visiting decision. The development of robust computer vision algorithms makes the analysis of visual content possible. In this work, we employ a Recurrent Neural Network (RNN) to predict real estate price using the state-of-the-art visual features. The experimental results indicate that our model outperforms several other state-of-the-art baseline algorithms in terms of both mean absolute error (MAE) and mean absolute percentage error (MAPE).", "targets": "Image Based Appraisal of Real Estate Properties"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-dc3ab99fe8e946dd990a208f35a59519", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We measured entropy and symbolic diversity for English and Spanish texts including literature Nobel laureates and other famous authors. Entropy, symbol diversity and symbol frequency profiles were compared for these four groups. We also built a scale sensitive to the quality of writing and evaluated its relationship with Flesch\u2019s readability index for English and Szigriszt\u2019s perspicuity index for Spanish. Results suggest a correlation of entropy and word diversity with quality of writing. Text genre also influences the resulting entropy and diversity of the text. Results suggest the plausibility of automated quality assessment of texts.", "targets": "Quantifying literature quality using complexity criteria"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2e5991a9f4f9434388635d5590599ce7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Training recurrent neural networks to model long-term dependencies is difficult. Hence, we propose to use external linguistic knowledge as an explicit signal to inform the model which memories it should utilize. Specifically, external knowledge is used to augment a sequence with typed edges between arbitrarily distant elements, and the resulting graph is decomposed into directed acyclic subgraphs. We introduce a model that encodes such graphs as explicit memory in recurrent neural networks, and use it to model coreference relations in text. We apply our model to several text comprehension tasks and achieve new state-of-the-art results on all considered benchmarks, including CNN, bAbi, and LAMBADA. On the bAbi QA tasks, our model solves 15 out of the 20 tasks with only 1000 training examples per task. Analysis of the learned representations further demonstrates the ability of our model to encode fine-grained entity information across a document.", "targets": "Linguistic Knowledge as Memory for Recurrent Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d38c7ec9f215480083d287e57ebd88d5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Variational autoencoders (VAE) are directed generative models that learn factorial latent variables. As noted by Burda et al. 
(2015), these models exhibit the problem of factor overpruning, where a significant number of stochastic factors fail to learn anything and become inactive. This can limit their modeling power and their ability to learn diverse and meaningful latent representations. In this paper, we evaluate several methods to address this problem and propose a more effective model-based approach called the epitomic variational autoencoder (eVAE). The so-called epitomes of this model are groups of mutually exclusive latent factors that compete to explain the data. This approach helps prevent inactive units since each group is pressured to explain the data. We compare the approaches with qualitative and quantitative results on MNIST and TFD datasets. Our results show that eVAE makes efficient use of model capacity and generalizes better than VAE.", "targets": "Tackling Over-pruning in Variational Autoencoders"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f3243e0a5d6d4f1c9b1272e8d003e78d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Expert systems prove to be a suitable replacement for human experts when human experts are unavailable for different reasons. Various expert systems have been developed for a wide range of applications. Although some expert systems in the field of fishery and aquaculture have been developed, a system that aids users in the process of selecting a new addition to their aquarium tank has never been designed. This paper proposes an expert system that suggests a new addition to an aquarium tank based on the current environmental condition of the aquarium and the fishes currently existing in it. The system suggests the best fit for the aquarium condition and the most compatible with other", "targets": "An expert system for recommending suitable ornamental fish addition to an aquarium based on aquarium condition"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-33edde1ddfdf44698d7893261e15777f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Real-life data often includes information from different channels. For example, in computer vision, we can describe an image using different image features, such as pixel intensity, color, HOG, GIST features, SIFT features, etc. These different aspects of the same objects are often called multi-view (or multi-modal) data. Low-rank regression models have been proved to be an effective learning mechanism by exploring the low-rank structure of real-life data. But previous low-rank regression models only work on single-view data. In this paper, we propose a multi-view low-rank regression model by imposing low-rank constraints on the multi-view regression model. Most importantly, we provide a closed-form solution to the multi-view low-rank regression model. Extensive experiments on 4 multi-view datasets show that the multi-view low-rank regression model outperforms single-view regression models and reveal that multi-view low-rank structure is very helpful. Introduction In many tasks, a single object can be described using information from different channels (or views). 
For example, a 3-D object can be described using pictures from different angles; a website can be described using the words it contains, and the hyperlinks it contains; an image can be described using different features, such as SIFT features and HOG features; in daily life, a person can be characterized using age, height, weight and so on. These data all come from different aspects and channels. Multi-view problems aim to improve existing single-view models by learning a model utilizing data collected from multiple channels (R\u00fcping and Scheffer 2005) (de Sa 2005) (Zhou and Burges 2007). Low-rank regression models have been proved to be an effective learning mechanism by exploring the low-rank structure of real-life data (Xiang et al. 2012) (Evgeniou and Pontil 2007) (Cai et al. 2013). Existing regression models only work on single-view data. To be specific, linear regression finds a linear model with respect to the single-view feature data to fit target class data (Seber and Lee 2012). Let matrix B \u2208 \u2026 We study the best arm identification (Best-1-Arm) problem, which is defined as follows. We are given n stochastic bandit arms. The ith arm has a reward distribution Di with an unknown mean \u03bci. Upon each play of the ith arm, we can get a reward, sampled i.i.d. from Di. We would like to identify the arm with the largest mean with probability at least 1 \u2212 \u03b4, using as few samples as possible. We provide a nontrivial algorithm for Best-1-Arm, which improves upon several prior upper bounds on the same problem. We also study an important special case where there are only two arms, which we call the Sign-\u03be problem. We provide a new lower bound for Sign-\u03be, simplifying and significantly extending a classical result by Farrell in 1964, with a completely new proof. Using the new lower bound for Sign-\u03be, we obtain the first lower bound for Best-1-Arm that goes beyond the classic Mannor-Tsitsiklis lower bound, by an interesting reduction from Sign-\u03be to Best-1-Arm. We propose an interesting conjecture concerning the optimal sample complexity of Best-1-Arm from the perspective of instance-wise optimality.", "targets": "On the Optimal Sample Complexity for Best Arm Identification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e7104f7ec7bf40c0b06877264e34f5cf", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Automatic text summarization is widely regarded as a highly difficult problem, partially because of the lack of large text summarization datasets. Due to the great challenge of constructing large-scale summaries for full text, in this paper, we introduce a large corpus of Chinese short text summarization dataset constructed from the Chinese microblogging website Sina Weibo, which is released to the public. This corpus consists of over 2 million real Chinese short texts with short summaries given by the author of each text. We also manually tagged the relevance of 10,666 short summaries with their corresponding short texts. 
Based on the corpus, we introduce a recurrent neural network for summary generation and achieve promising results, which not only shows the usefulness of the proposed corpus for short text summarization research, but also provides a baseline for further research on this topic.", "targets": "LCSTS: A Large Scale Chinese Short Text Summarization Dataset"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-3101da6051f74ce9870dca95c537f78a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The recently introduced Deep Q-Networks (DQN) algorithm has gained attention as one of the first successful combinations of deep neural networks and reinforcement learning. Its promise was demonstrated in the Arcade Learning Environment (ALE), a challenging framework composed of dozens of Atari 2600 games used to evaluate general competency in AI. It achieved dramatically better results than earlier approaches, showing that its ability to learn good representations is quite robust and general. This paper attempts to understand the principles that underlie DQN\u2019s impressive performance and to better contextualize its success. We systematically evaluate the importance of key representational biases encoded by DQN\u2019s network by proposing simple linear representations that make use of these concepts. Incorporating these characteristics, we obtain a computationally practical feature set that achieves competitive performance to DQN in the ALE. Besides offering insight into the strengths and weaknesses of DQN, we provide a generic representation for the ALE, significantly reducing the burden of learning a representation for each game. Moreover, we also provide a simple, reproducible benchmark for the sake of comparison to future work in the ALE.", "targets": "State of the Art Control of Atari Games Using Shallow Reinforcement Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5e95e4c53d09454b8f8e5f085aa94c99", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Online learning aims to perform nearly as well as the best hypothesis in hindsight. For some hypothesis classes, though, even finding the best hypothesis offline is challenging. In such offline cases, local search techniques are often employed and only local optimality is guaranteed. For online decision-making with such hypothesis classes, we introduce local regret, a generalization of regret that aims to perform nearly as well as only nearby hypotheses. We then present a general algorithm to minimize local regret with arbitrary locality graphs. We also show how the graph structure can be exploited to drastically speed learning. These algorithms are then demonstrated on a diverse set of online problems: online disjunct learning, online Max-SAT, and online decision tree learning.", "targets": "On Local Regret"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7d4b01a0b13b40a78d2c29496a45b14f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The Schatten-p quasi-norm (0 < p < 1) \u2026 We first define two tractable Schatten quasi-norms, i.e., the Frobenius/nuclear hybrid and bi-nuclear quasi-norms. 
We then prove that they are in essence the Schatten-2/3 and 1/2 quasi-norms, respectively, for whose minimization we only need to perform SVDs on two much smaller factor matrices, in contrast to the larger ones used in existing algorithms, e.g., IRNN. Therefore, our method is particularly useful for many \u201cbig data\u201d applications that need to deal with large, high-dimensional data with missing values. To the best of our knowledge, this is the first paper to scale Schatten quasi-norm solvers to the Netflix dataset. Moreover, we provide the global convergence and recovery performance guarantees for our algorithms. In other words, this is the best guaranteed convergence for algorithms that solve such challenging problems. Notations and Background. The Schatten-p norm (0 < p < \u221e) of a matrix X \u2208 Rm\u00d7n (m \u2265 n) is defined as \u2016X\u2016Sp = (\u2211i=1..n \u03c3i^p(X))^{1/p}", "targets": "Scalable Algorithms for Tractable Schatten Quasi-Norm Minimization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-49e5844853614e959d6c4b5856544f8a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a sequential model for temporal relation classification between intrasentence events. The key observation is that the overall syntactic structure and compositional meanings of the multi-word context between events are important for distinguishing among fine-grained temporal relations. Specifically, our approach first extracts a sequence of context words that indicates the temporal relation between two events, which aligns well with the dependency path between the two event mentions. The context word sequence, together with a parts-of-speech tag sequence and a dependency relation sequence that are generated corresponding to the word sequence, is then provided as input to bidirectional recurrent neural network (LSTM) models. The neural nets learn compositional syntactic and semantic representations of contexts surrounding the two events and predict the temporal relation between them. Evaluation of the proposed approach on the TimeBank corpus shows that sequential modeling is capable of accurately recognizing temporal relations between events, which outperforms a neural net model using various discrete features as input that imitates previous feature-based models.", "targets": "A Sequential Model for Classifying Temporal Relations between Intra-Sentence Events"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9c88545d245e4baea5ef81fe07d9d7d1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Distant speech recognition is a challenge, particularly due to the corruption of speech signals by reverberation caused by large distances between the speaker and microphone. In order to cope with a wide range of reverberations in real-world situations, we present novel approaches for acoustic modeling including an ensemble of deep neural networks (DNNs) and an ensemble of jointly trained DNNs. First, multiple DNNs are established, each of which corresponds to a different reverberation time (RT60) in a setup step. 
Also, each model in the ensemble of DNN acoustic models is further jointly trained, including both feature mapping and acoustic modeling, where the feature mapping is designed for the dereverberation as a front-end. In a testing phase, the two most likely DNNs are chosen from the DNN ensemble using maximum a posteriori (MAP) probabilities, computed in an online fashion by using maximum likelihood (ML)-based blind RT60 estimation, and then the posterior probability outputs from the two DNNs are combined using the ML-based weights as a simple average. Extensive experiments demonstrate that the proposed approach leads to substantial improvements in speech recognition accuracy over the conventional DNN baseline systems under diverse reverberant conditions.", "targets": "Ensemble of Jointly Trained Deep Neural Network-Based Acoustic Models for Reverberant Speech Recognition"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2ec91f882ae048d390bb376e1e493c62", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Discovering the set of closed frequent patterns is one of the fundamental problems in Data Mining. Recent Constraint Programming (CP) approaches for declarative itemset mining have proven their usefulness and flexibility. But the wide use of reified constraints in current CP approaches raises many difficulties in coping with high-dimensional datasets. This paper proposes the CLOSEDPATTERN global constraint, which requires neither reified constraints nor extra variables to efficiently encode the Closed Frequent Pattern Mining (CFPM) constraint. CLOSEDPATTERN captures the particular semantics of the CFPM problem in order to ensure a polynomial pruning algorithm ensuring domain consistency. The computational properties of our constraint are analyzed and their practical effectiveness is experimentally evaluated.", "targets": "A global constraint for closed itemset mining"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-4debc447ce1b4f40ae1fd045329d5c53", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we propose a context-aware keyword spotting model employing a character-level recurrent neural network (RNN) for spoken term detection in continuous speech. The RNN is end-to-end trained with connectionist temporal classification (CTC) to generate the probabilities of character and word-boundary labels. There is no need for the phonetic transcription, senone modeling, or system dictionary in training and testing. Also, keywords can easily be added and modified by editing the text-based keyword list without retraining the RNN. Moreover, the unidirectional RNN processes infinitely long input audio streams without pre-segmentation, and keywords are detected with low latency before the utterance is finished. Experimental results show that the proposed keyword spotter significantly outperforms the deep neural network (DNN) and hidden Markov model (HMM) based keyword-filler model even with fewer computations.", "targets": "Online Keyword Spotting with a Character-Level Recurrent Neural Network"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2964ed3d9d034fcbbd5636caa26b55ff", "definition": "In this task, you are given a part of an article. 
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In many applications spanning from sensor to social networks, transportation systems, gene regulatory networks or big data, the signals of interest are defined over the vertices of a graph. The aim of this paper is to propose a least mean square (LMS) strategy for adaptive estimation of signals defined over graphs. Assuming the graph signal to be band-limited over a known bandwidth, the method enables reconstruction, with guaranteed performance in terms of mean-square error, and tracking from a limited number of observations over a subset of vertices. A detailed mean-square analysis provides the performance of the proposed method, and leads to several insights for designing useful sampling strategies for graph signals. Numerical results validate our theoretical findings, and illustrate the performance of the proposed method. Furthermore, to cope with the case where the bandwidth is not known beforehand, we propose a method that performs a sparse online estimation of the signal support in the (graph) frequency domain, which enables online adaptation of the graph sampling strategy. Finally, we apply the proposed method to build the power spatial density cartography of a given operational region in a cognitive network environment.", "targets": "Least Mean Squares Estimation of Graph Signals"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e31e3134dedb4809a6f822646b7e0285", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider the problem of approximate Bayesian inference in log-supermodular models. These models encompass regular pairwise MRFs with binary variables, but allow capturing high-order interactions, which are intractable for existing approximate inference techniques such as belief propagation, mean field, and variants. We show that a recently proposed variational approach to inference in log-supermodular models \u2013 L-FIELD \u2013 reduces to the widely-studied minimum norm problem for submodular minimization. This insight allows us to leverage powerful existing tools, and hence to solve the variational problem orders of magnitude more efficiently than previously possible. We then provide another natural interpretation of L-FIELD, demonstrating that it exactly minimizes a specific type of R\u00e9nyi divergence measure. This insight sheds light on the nature of the variational approximations produced by L-FIELD. Furthermore, we show how to perform parallel inference as message passing in a suitable factor graph at a linear convergence rate, without having to sum up over all the configurations of the factor. Finally, we apply our approach to a challenging image segmentation task. Our experiments confirm scalability of our approach, high quality of the marginals, and the benefit of incorporating higher-order potentials.", "targets": "Scalable Variational Inference in Log-supermodular Models"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-33d54828d4c24fe796c251f125681ae7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. 
Preferred headlines are under fifteen words.", "inputs": "We develop novel first- and second-order features for dependency parsing based on the Google Syntactic Ngrams corpus, a collection of subtree counts of parsed sentences from scanned books. We also extend previous work on surface n-gram features from Web1T to the Google Books corpus and from first-order to second-order, comparing and analysing performance over newswire and web treebanks. Surface and syntactic n-grams both produce substantial and complementary gains in parsing accuracy across domains. Our best system combines the two feature sets, achieving up to 0.8% absolute UAS improvements on newswire and 1.4% on web text.", "targets": "Web-scale Surface and Syntactic n-gram Features for Dependency Parsing"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-51a86805219744779c8ca85079545aaf", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Incidents of organized cybercrime are rising because criminals are reaping high financial rewards while incurring low costs to commit crime. As the digital landscape broadens to accommodate more internet-enabled devices and technologies like social media, more cybercriminals who are not native English speakers are invading cyberspace to cash in on quick exploits. In this paper we evaluate the performance of three machine learning classifiers in detecting 419 scams in a bilingual Nigerian cybercriminal community. We use three popular classifiers in text processing, namely Na\u00efve Bayes, k-nearest neighbors (IBK) and Support Vector Machines (SVM). The preliminary results on a real-world dataset reveal that SVM significantly outperforms Na\u00efve Bayes and IBK at the 95% confidence level. Keywords-Machine Learning; Bilingual Cybercriminals; 419 Scams;", "targets": "Evaluating Classifiers in Detecting 419 Scams in Bilingual Cybercriminal Communities"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d4cc4ae381704a218207b0dcbfa37776", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper introduces an automated skill acquisition framework in reinforcement learning which involves identifying a hierarchical description of the given task in terms of abstract states and extended actions between abstract states. Identifying such structures present in the task provides ways to simplify and speed up reinforcement learning algorithms. These structures also help to generalize such algorithms over multiple tasks without relearning policies from scratch. We use ideas from dynamical systems to find metastable regions in the state space and associate them with abstract states. The spectral clustering algorithm PCCA+ is used to identify suitable abstractions aligned to the underlying structure. Skills are defined in terms of the transitions between such abstract states. The connectivity information from PCCA+ is used to generate these skills or options. The skills are independent of the learning task and can be efficiently reused across a variety of tasks defined over a common state space. Another major advantage of the approach is that it does not need a prior model of the MDP and can work well even when the MDPs are constructed from sampled trajectories. 
Finally, we present our attempts to extend the automated skills acquisition framework to complex tasks such as learning to play video games, where we use deep learning techniques for representation learning to aid our spatio-temporal abstraction framework. 1. Motivation and Introduction The core idea of hierarchical reinforcement learning is to break down the reinforcement learning problem into subtasks through a hierarchy of abstractions. Typically, in the full reinforcement learning problem, the agent is assumed to be in one state of the Markov Decision Process at every time step. The agent then performs one of several possible primitive actions. Based on the agent\u2019s state at time t, and the action it takes from that state, the agent\u2019s state at time t + 1 is determined. For large problems, however, this can lead to too much granularity: when the agent has to decide on each and every primitive action at every granular state, it can often lose sight of the bigger picture. However, if a series of actions can be abstracted out as an abstract action, the agent can just remember the series of actions that was useful in getting it to a temporally distant useful state from the initial state. This is typically referred to as an option or a skill in the reinforcement learning literature. A good analogy is a human planning his movement for a traversal from current location A to a destination B. We identify intermediate destinations Ci to lead us from A to B when planning from A, instead of worrying about the exact mechanisms of immediate movement at A, which are abstracted over. Options are a convenient way of formalising this abstraction. In keeping with the general philosophy of reinforcement learning, we want to build agents that can automatically discover options with no prior knowledge, purely by exploring the environment. Thus, our approach falls into the broad category of automated discovery of skills. In order to exploit task structure, hierarchical decomposition introduces models defined by stand-alone policies (also known as temporally-extended actions, options, or skills) that can take multiple time steps to execute. Skills can exploit repeating structure by representing subroutines that are executed multiple times during execution of a task. Such skills which are learnt in one task can be reused in a different task as long as it requires execution of the same subroutine. Options also make exploration more efficient by providing the decision maker with a high-level behaviour to look ahead to the completion of the corresponding subroutine. Automated discovery of skills or options has been an active area of research and several approaches have been proposed for the same. The current methods could be broadly classified into sample-trajectory-based and partition-based methods. Some of them are: \u2022 Identifying bottlenecks in the state space, where the state space is partitioned into sets and the transitions between two sets of states that are rare can be seen as introducing bottleneck sets at the respective points of such rare transitions. Policies to reach such states are cached as options (McGovern & Barto, 2001). 
\u2022 Using the structure present in a factored state representation to identify sequences of actions that cause what are otherwise infrequent changes in the state variables: these sequences are cached away as options (Hengst, 2004). \u2022 Obtaining a graphical representation of an agent\u2019s interaction with its environment and using betweenness centrality measures to identify subtasks (Simsek & Barto, 2008). \u2022 Using clustering methods (spectral or otherwise) to separate out different strongly connected components of the Markov Decision Process (MDP) and identifying access-states that connect different clusters (Menache et al., 2002). While these methods have had varying amounts of success, they have certain deficiencies. Bottleneck-based approaches don\u2019t have a natural way of identifying the part of the state space where options are applicable without external knowledge about the problem domain. Spectral methods need some form of regularization in order to prevent unequal splits that might lead to arbitrary splitting of the state space. We present a framework that detects well-connected or metastable regions of the state space from an MDP model estimated from trajectories. We use PCCA+, a spectral clustering algorithm from conformal dynamics (Weber et al., 2004) that not only partitions the MDP but also returns the connectivity information between the regions. We then propose a very effective way of composing options using the same framework to take us from one metastable region to another, giving us the policy for free. Once we have these options, we can use standard reinforcement learning algorithms to learn a policy over subtasks to solve the given task. Specifically, we show results using SMDP Q-learning on the 2-room domain. For our attempt at extending it to higher-dimensional state-space tasks such as Atari 2600 video games, we append the learnt options to the set of primitive actions using Intra-Option Value learning to learn a policy solving the given task. One major advantage of the approach is that we get the policy for the options for free while doing the partitioning by exploiting the membership functions returned by PCCA+. Our approach is able to learn reasonably good skills even with limited sampling, which makes it useful in situations where exploration is limited by the environment costs. It also provides a way to refine the abstractions in an online fashion without explicitly reconstructing the entire MDP. More importantly, we extend it to the case where the state space is so large that exact modeling is not possible. In this case, we take inspiration from the recent work on forward prediction to learn the model (Oh et al., 2015) to use Deep Convolutional Neural Networks (CNN) and Long Short Term Memory (LSTM) to learn spatio-temporal representations of the state space. Using the learnt representation, we perform state-aggregation using clustering techniques and estimate our transitional model for the abstract space on these aggregated states. We list the advantages of this approach below: \u2022 Skills are acquired online, from sampled trajectories instead of requiring a prior model of the MDP. \u2022 Instead of looking for bottleneck states, we look for well-connected regions and hence the discovered options are better aligned to the structure of the state space. 
\u2022 The approach returns connectivity information between the metastable regions, which can be used to construct an abstract graph of the state space, combining spatial and temporal information meaningfully. \u2022 The clustering algorithm provides a fuzzy membership for every state belonging to a particular metastable region, which provides a powerful way to compose options naturally. We organize the rest of the paper as follows: we first explain the Option Generation Framework that we propose, delving into the important aspects of the spectral clustering algorithm PCCA+ and how PCCA+ can be used to generate options. We show results on the 2-room domain for this framework. The next part of the paper focuses on our attempt to extend this framework to more complex tasks such as playing video games like Seaquest on the Atari 2600 domain with options. We explain the motivation for and usage of deep networks and the clustering algorithms used for state aggregation, followed by the model used and initial results. We then conclude with a discussion of the challenges of this approach and a comparison with other recent attempts at hierarchical reinforcement learning for higher-dimensional tasks. 2. Option Generation Framework We divide this section into two parts: we first explain the spectral clustering algorithm PCCA+ and motivate its usage for spatial abstraction. The second part discusses option generation using PCCA+. 2.1. Spatial Abstraction using PCCA+ Given an algebraic representation of the graph representing an MDP, we want to find suitable abstractions aligned to the underlying structure. We use a spectral clustering algorithm to do this. Central to the idea of spectral clustering is the graph Laplacian, which is obtained from the similarity graph. There are many tight connections between the topological properties of graphs and the graph Laplacian matrices, which spectral clustering methods exploit to partition the data into clusters. However, although the spectrum of the Laplacian preserves the structural properties of the graph, clustering data in the eigenspace of the Laplacian does not guarantee this. For example, k-means clustering (Ng et al., 2001) in the eigenspace of the Laplacian will only work if the clusters lie in disjoint convex sets of the underlying eigenspace. Partitioning the data into clusters by projecting onto the largest k eigenvectors (Meila & Shi, 2001) does not preserve the topological properties of the data in the eigenspace of the Laplacian. For the task of spatial abstraction, the proposed framework requires a clustering approach that exploits the structural properties in the configurational space of objects as well as the spectral subspace, quite unlike earlier methods. We therefore take inspiration from the conformation dynamics literature, where Weber et al. (2004) perform a similar analysis to detect metastable conformations of a dynamical system. They propose a spectral clustering algorithm, PCCA+, which is based on the principles of Perron Cluster Analysis of the transition structure of the system. We extend their analysis to detect spatial abstractions in autonomously controlled dynamical systems. In this approach, the spectrum of the Laplacian L (derived from the adjacency matrix S) is computed, and the best transformation of the spectrum is found such that the transformed basis aligns itself with the clusters of data points in the eigenspace.
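Before continuing with PCCA+, here is the SMDP Q-learning update mentioned in the framework description above. This is a minimal sketch of the standard SMDP backup, not the authors' code; the learning rate `alpha`, discount `gamma`, and tabular layout are assumptions:

```python
import numpy as np

def smdp_q_update(Q, s, o, reward_sum, tau, s_next, alpha=0.1, gamma=0.99):
    """One SMDP Q-learning backup after option o ran from state s.

    reward_sum: discounted reward accumulated while o executed, i.e.
                r_1 + gamma*r_2 + ... + gamma**(tau-1) * r_tau.
    tau:        number of primitive steps the option took.
    """
    target = reward_sum + (gamma ** tau) * np.max(Q[s_next])
    Q[s, o] += alpha * (target - Q[s, o])
    return Q
```

The `gamma ** tau` factor is what distinguishes this backup from the ordinary one-step Q-learning update: the option is credited as a single temporally extended action.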
A projection method described in Weber et al. (2004) is used to find the membership of each of the states to a set of special points lying on the transformed basis, which are identified as vertices of a simplex in the R^k subspace (the spectral gap method is used to estimate the number of clusters k). For the first-order perturbation, the simplex is just a linear transformation around the origin, and to find the simplex vertices one needs to find the k points which form a convex hull such that the deviation of all the points from this hull is minimized. This is achieved by finding the data point located farthest from the origin and then iteratively identifying the data points located farthest from the hyperplane fit to the current set of vertices. [Figure 1: the simplex under first-order and higher-order perturbation.] Algorithm 1 PCCA+ 1: Construct the Laplacian L. 2: Compute the n (number of vertices) eigenvalues of L in descending order. 3: Choose the first k eigenvalues for which (e_k - e_{k+1}) / (1 - e_{k+1}) > t_c (the spectral gap threshold). 4: Compute the eigenvectors for the corresponding eigenvalues (e_1, e_2, ..., e_k) and stack them as column vectors in the eigenvector matrix Y. 5: Denote the rows of Y as Y(1), Y(2), ..., Y(N) \u2208 R^k. 6: Define \u03c0(1) as the index for which ||Y(\u03c0(1))||_2 is maximal; define \u03b3_1 = span{Y(\u03c0(1))}. 7: For i = 2, ..., k: define \u03c0(i) as the index for which the distance to the hyperplane \u03b3_{i-1}, i.e. ||Y(\u03c0(i)) - \u03b3_{i-1}^T (\u03b3_{i-1} \u03b3_{i-1}^T)^{-1} \u03b3_{i-1} Y(\u03c0(i))||_2, is maximal; define \u03b3_i = span{Y(\u03c0(1)), ..., Y(\u03c0(i))}. The PCCA+ algorithm returns a membership function, \u03c7, defining the degree of membership of each state s in an abstract state S_j. The connectivity information between two abstract states (S_i, S_j) is given by the (i, j) entry of \u03c7^T L \u03c7, while the diagonal entries provide relative connectivity information within a cluster. The connectivity information is utilized to learn decision policies across abstract states, as described in the next section. The algorithm also provides an intrinsic measure of the goodness of the clustering: sharp peaks in the eigenvalue distribution indicate a good clustering. 2.2. Option Generation from PCCA+
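A minimal sketch of Algorithm 1 above (first-order PCCA+), assuming a symmetric Laplacian-like matrix whose leading eigenvalues lie below 1 (as for a transition matrix); the threshold `t_c` and the fallback value of k are assumptions, and the first-order memberships can be slightly negative outside the simplex:

```python
import numpy as np

def pcca_plus(L, t_c=0.9):
    """Sketch of first-order PCCA+: spectral gap, simplex vertex
    search, and linear membership computation."""
    eigvals, eigvecs = np.linalg.eigh(L)          # ascending order
    e = eigvals[::-1]                             # descending eigenvalues
    Yf = eigvecs[:, ::-1]

    # Step 3: spectral-gap criterion (e_k - e_{k+1}) / (1 - e_{k+1}) > t_c.
    k = 2                                         # assumed fallback
    for i in range(len(e) - 1):
        if (e[i] - e[i + 1]) / (1.0 - e[i + 1]) > t_c:
            k = i + 1
            break
    Y = Yf[:, :k]                                 # rows Y(1..N) in R^k

    # Steps 6-7: farthest row from the origin, then farthest from the
    # span of the vertices picked so far (orthogonal projection).
    pivots = [int(np.argmax(np.linalg.norm(Y, axis=1)))]
    for _ in range(1, k):
        G = Y[pivots]                             # rows spanning gamma
        P = G.T @ np.linalg.inv(G @ G.T) @ G      # projector onto span(G)
        pivots.append(int(np.argmax(np.linalg.norm(Y - Y @ P, axis=1))))

    # First-order membership chi: the linear map sending the k vertex
    # rows to the unit vectors, so chi[pivots] is the identity matrix.
    chi = Y @ np.linalg.inv(Y[pivots])
    return chi, pivots
```

Under this sketch, `chi.argmax(axis=1)` yields a hard assignment of states to abstract states, while `chi` itself retains the fuzzy memberships used to compose options.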
We use a real-world game of capture the flag to illustrate our approach in a well-defined domain that involves many distinct cooperative and competitive joint activities. We model the domain using Markov logic, a statistical-relational language, and learn a theory that jointly denoises the data and infers occurrences of high-level activities, such as a player capturing an enemy. Our unified model combines constraints imposed by the geometry of the game area, the motion model of the players, and the rules and dynamics of the game in a probabilistically and logically sound fashion. We show that while it may be impossible to directly detect a multi-agent activity due to sensor noise or malfunction, the occurrence of the activity can still be inferred by considering both its impact on the future behaviors of the people involved as well as the events that could have preceded it. Further, we show that given a model of successfully performed multi-agent activities, along with a set of examples of failed attempts at the same activities, our system automatically learns an augmented model that is capable of recognizing success and failure, as well as the goals of people\u2019s actions, with high accuracy. We compare our approach with other alternatives and show that our unified model, which takes into account not only relationships among individual players but also relationships among activities over the entire length of a game, although more computationally costly, is significantly more accurate. Finally, we demonstrate that explicitly modeling unsuccessful attempts boosts performance on other important recognition tasks.", "targets": "Location-Based Reasoning about Complex Multi-Agent Behavior"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-625e3832faa7419a949e515647ad8ea3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We describe a technique to minimize weighted tree automata (WTA), a powerful formalism that subsumes probabilistic context-free grammars (PCFGs) and latent-variable PCFGs. Our method relies on a singular value decomposition of the underlying Hankel matrix defined by the WTA. Our main theoretical result is an efficient algorithm for computing the SVD of an infinite Hankel matrix implicitly represented as a WTA. We provide an analysis of the approximation error induced by the minimization, and we evaluate our method on real-world data originating in a newswire treebank. We show that the model achieves lower perplexity than previous methods for PCFG minimization, and is also much more stable due to the absence of local optima.", "targets": "Weighted Tree Automata Approximation by Singular Value Truncation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-9e325969154642bba2d085798b56b071", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We study the best-arm identification problem in linear bandits, where the rewards of the arms depend linearly on an unknown parameter \u03b8 and the objective is to return the arm with the largest reward. We characterize the complexity of the problem and introduce sample allocation strategies that pull arms to identify the best arm with a fixed confidence, while minimizing the sample budget.
In particular, we show the importance of exploiting the global linear structure to improve the estimate of the reward of near-optimal arms. We analyze the proposed strategies and compare their empirical performance. Finally, as a by-product of our analysis, we point out the connection to the G-optimality criterion used in optimal experimental design.", "targets": "Best-Arm Identification in Linear Bandits"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-42b05617d9d244a39444452af6e46a0a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Most provably-efficient reinforcement learning algorithms introduce optimism about poorly-understood states and actions to encourage exploration. We study an alternative approach for efficient exploration: posterior sampling for reinforcement learning (PSRL). This algorithm proceeds in repeated episodes of known duration. At the start of each episode, PSRL updates a prior distribution over Markov decision processes and takes one sample from this posterior. PSRL then follows the policy that is optimal for this sample during the episode. The algorithm is conceptually simple, computationally efficient and allows an agent to encode prior knowledge in a natural way. We establish an \u00d5(\u03c4S\u221a(AT)) bound on expected regret, where T is time, \u03c4 is the episode length and S and A are the cardinalities of the state and action spaces. This bound is one of the first for an algorithm not based on optimism, and close to the state of the art for any reinforcement learning algorithm. We show through simulation that PSRL significantly outperforms existing algorithms with similar regret bounds.", "targets": "(More) Efficient Reinforcement Learning via Posterior Sampling"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7d46e723f16d44beb5c79f548c33a994", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents a model based on a hybrid system to numerically simulate the climbing phase of an aircraft. This model is then used within a trajectory prediction tool. Finally, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimization algorithm is used to tune five selected parameters, and thus improve the accuracy of the model. Incorporated within a trajectory prediction tool, this model can be used to derive the order of magnitude of the prediction error over time, and thus the domain of validity of the trajectory prediction.
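The PSRL record above fully specifies the episode-level loop, so it is easy to sketch. This is a hedged illustration, not the paper's code: the Dirichlet transition prior and Gaussian reward posterior below are assumed choices (the paper leaves the prior to the designer), and the MDP is assumed tabular:

```python
import numpy as np

def psrl_episode_policy(trans_counts, rew_sum, rew_cnt, tau):
    """One PSRL step: sample an MDP from the posterior, then solve it.
    trans_counts: (S, A, S) visit counts; rew_sum / rew_cnt: (S, A)
    accumulators for observed rewards."""
    S, A, _ = trans_counts.shape
    P = np.zeros((S, A, S))
    R = np.zeros((S, A))
    for s in range(S):
        for a in range(A):
            # Dirichlet posterior over next states (uniform prior).
            P[s, a] = np.random.dirichlet(trans_counts[s, a] + 1.0)
            # Gaussian posterior over the mean reward.
            n = rew_cnt[s, a]
            R[s, a] = np.random.normal(rew_sum[s, a] / max(n, 1.0),
                                       1.0 / np.sqrt(n + 1.0))
    # Finite-horizon value iteration on the sampled MDP.
    V = np.zeros(S)
    policy = np.zeros((tau, S), dtype=int)
    for t in reversed(range(tau)):
        Q = R + P @ V        # Q[s,a] = R[s,a] + sum_s' P[s,a,s'] V[s']
        policy[t] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy            # followed for the entire episode
```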
A first validation experiment of the proposed model is based on the errors over time for a one-time trajectory prediction at take-off, with respect to the default values of the theoretical BADA model. This experiment, assuming complete information, also shows the limits of the model. A second experiment presents an on-line trajectory prediction, in which the prediction is continuously updated based on the current aircraft position. This approach raises several issues, for which improvements of the basic model are proposed, and the resulting trajectory prediction tool shows statistically significantly more accurate results than those of the default model.", "targets": "Online Learning for Ground Trajectory Prediction"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-98e4951443494f24861f0b2a1e66888a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Given samples from a distribution, how many new elements should we expect to find if we continue sampling this distribution? This is an important and actively studied problem, with many applications ranging from unseen species estimation to genomics. We generalize this extrapolation and related unseen estimation problems to the multiple population setting, where population j has an unknown distribution D_j from which we observe n_j samples. We derive an optimal estimator for the total number of elements we expect to find among new samples across the populations. Surprisingly, we prove that our estimator\u2019s accuracy is independent of the number of populations. We also develop an efficient optimization algorithm to solve the more general problem of estimating multi-population frequency distributions. We validate our methods and theory through extensive experiments. Finally, on a real dataset of human genomes across multiple ancestries, we demonstrate how our approach for unseen estimation can enable cohort designs that can discover interesting mutations with greater efficiency.", "targets": "Estimating the unseen from multiple populations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c428ed26c2bd4132ac4661814f64524b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Knowing where people live is a fundamental component of many decision-making processes such as urban development, infectious disease containment, evacuation planning, risk management, conservation planning, and more. While bottom-up, survey-driven censuses can provide a comprehensive view into the population landscape of a country, they are expensive to realize, are infrequently performed, and only provide population counts over broad areas. Population disaggregation techniques and population projection methods individually address these shortcomings, but also have shortcomings of their own. To jointly answer the questions of \u201cwhere do people live\u201d and \u201chow many people live there,\u201d we propose a deep learning model for creating high-resolution population estimations from satellite imagery. Specifically, we train convolutional neural networks to predict population in the USA at a 0.01\u00b0 \u00d7 0.01\u00b0 resolution grid from 1-year composite Landsat imagery.
We validate these models in two ways: quantitatively, by comparing our model\u2019s grid-cell estimates aggregated at the county level to several US Census county-level population projections, and qualitatively, by directly interpreting the model\u2019s predictions in terms of the satellite image inputs. We find that aggregating our model\u2019s estimates gives comparable results to the Census county-level population projections and that the predictions made by our model can be directly interpreted, which gives it advantages over traditional population disaggregation methods. In general, our model is an example of how machine learning techniques can be an effective tool for extracting information from inherently unstructured, remotely sensed data to provide effective solutions to social problems.", "targets": "A Deep Learning Approach for Population Estimation from Satellite Imagery"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5fd9d5d771f34e429b3f43de34fd4a3b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider stochastic bandit problems with a continuous set of arms where the expected reward is a continuous and unimodal function of the arm. No further assumption is made regarding the smoothness and the structure of the expected reward function. For these problems, we propose the Stochastic Pentachotomy (SP) algorithm, and derive finite-time upper bounds on its regret and optimization error. In particular, we show that, for any expected reward function \u03bc that behaves as \u03bc(x) = \u03bc(x*) \u2212 C|x \u2212 x*|^\u03be locally around its maximizer x* for some \u03be, C > 0, the SP algorithm is order-optimal. Namely, its regret and optimization error scale as O(\u221a(T log(T))) and O(\u221a(log(T)/T)), respectively, when the time horizon T grows large. These scalings are achieved without the knowledge of \u03be and C. Our algorithm is based on asymptotically optimal sequential statistical tests used to successively trim an interval that contains the best arm with high probability. To our knowledge, the SP algorithm constitutes the first sequential arm selection rule that achieves a regret and optimization error scaling as O(\u221aT) and O(1/\u221aT), respectively, up to a logarithmic factor, for non-smooth expected reward functions, as well as for smooth functions with unknown smoothness.", "targets": "Unimodal Bandits without Smoothness"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5fec2d676c1d477a828e445affe886c3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we analyze spectrum occupancy using different machine learning techniques. Both supervised techniques (naive Bayesian classifier (NBC), decision trees (DT), support vector machine (SVM), linear regression (LR)) and an unsupervised algorithm (hidden Markov model (HMM)) are studied to find the best technique with the highest classification accuracy (CA). A detailed comparison of the supervised and unsupervised algorithms in terms of computational time and classification accuracy is performed.
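The SP algorithm's internals are not given in the unimodal-bandits record above; the following toy sketch only illustrates the general idea of trimming an interval around the best arm using confidence bounds. The grid of five probe arms, the Hoeffding-style radius, and the trimming rule are all assumptions, not the paper's sequential tests:

```python
import numpy as np

def trim_interval(pull, lo=0.0, hi=1.0, rounds=20, n=200, delta=0.01):
    """Toy interval-trimming for a unimodal bandit on [lo, hi].
    pull(x) returns a noisy reward in [0, 1] for arm x."""
    for _ in range(rounds):
        xs = np.linspace(lo, hi, 5)               # five probe arms
        means = np.array([np.mean([pull(x) for _ in range(n)])
                          for x in xs])
        rad = np.sqrt(np.log(2.0 / delta) / (2.0 * n))   # Hoeffding radius
        best = int(np.argmax(means))
        # Trim an end segment only once its probe is confidently dominated;
        # under unimodality the maximizer then stays inside the interval.
        if best >= 1 and means[best] - rad > means[0] + rad:
            lo = xs[best - 1]
        if best <= 3 and means[best] - rad > means[4] + rad:
            hi = xs[best + 1]
    return (lo + hi) / 2.0
```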
The classified occupancy status is further utilized to evaluate the probability of secondary user outage for the future time slots, which can be used by system designers to define spectrum allocation and spectrum sharing policies. Numerical results show that SVM is the best algorithm among all the supervised and unsupervised classifiers. Based on this, we propose a new SVM algorithm that combines it with the firefly algorithm (FFA), which is shown to outperform all other algorithms. Index Terms: firefly algorithm, hidden Markov model, spectrum occupancy, support vector machine.", "targets": "Analysis of Spectrum Occupancy Using Machine Learning Algorithms"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-22623baf7f1340de886d4eeac76ef48e", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We describe TweeTIME, a temporal tagger for recognizing and normalizing time expressions in Twitter. Most previous work in social media analysis has to rely on temporal resolvers that are designed for well-edited text, and therefore suffer from reduced performance due to domain mismatch. We present a minimally supervised method that learns from large quantities of unlabeled data and requires no hand-engineered rules or hand-annotated training corpora. TweeTIME achieves 0.68 F1 score on the end-to-end task of resolving date expressions, outperforming a broad range of state-of-the-art systems.", "targets": "A Minimally Supervised Method for Recognizing and Normalizing Time Expressions in Twitter"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d2211bc024ed4575a32c808b018e4614", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a new theoretical framework for analyzing and learning artificial neural networks. Our approach simultaneously and adaptively learns both the structure of the network as well as its weights. The methodology is based upon and accompanied by strong data-dependent theoretical learning guarantees, so that the final network architecture provably adapts to the complexity of any given problem.", "targets": "AdaNet: Adaptive Structural Learning of Artificial Neural Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-14dbe0950f814d48b7972219a8b6c7ee", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we present our exploratory findings related to extracting knowledge and experiences from a community of senior tourists. By using tools of qualitative analysis as well as a review of the literature, we managed to verify a set of hypotheses related to the content created by senior tourists when participating in on-line communities. We also produced a codebook, representing various themes one may encounter in such communities. This codebook, derived from our own qualitative research as well as a literature review, will serve as a basis for further development of automated tools of knowledge extraction.
We also found that older adults, more often than other posters in tourist forums, mention their age in discussions and more often share their experiences and motivations for travel; however, they do not differ with respect to describing barriers encountered while traveling.", "targets": "Golden Years, Golden Shores: A Study of Elders in Online Travel Communities"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-69370b10a3a04170ba18474cab4888c3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper focuses on style transfer on the basis of non-parallel text. This is an instance of a broader family of problems including machine translation, decipherment, and sentiment modification. The key technical challenge is to separate the content from desired text characteristics such as sentiment. We leverage refined alignment of latent representations across mono-lingual text corpora with different characteristics. We deliberately modify encoded examples according to their characteristics, requiring the reproduced instances to match available examples with the altered characteristics as a population. We demonstrate the effectiveness of this cross-alignment method on three tasks: sentiment modification, decipherment of word substitution ciphers, and recovery of word order.", "targets": "Style Transfer from Non-Parallel Text by Cross-Alignment"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8b6f727a83b742788e8d299ada45d657", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The Minimum Vertex Cover (MinVC) problem is a well-known NP-hard problem. Recently there has been great interest in solving this problem on real-world massive graphs. For such graphs, local search is a promising approach to finding optimal or near-optimal solutions. In this paper we propose a local search algorithm that exploits reduction rules and data structures to solve the MinVC problem in such graphs. Experimental results on a wide range of real-world massive graphs show that our algorithm finds better covers than state-of-the-art local search algorithms for MinVC. We also present interesting results about the complexities of some well-known heuristics.", "targets": "Exploiting Reduction Rules and Data Structures: Local Search for Minimum Vertex Cover in Massive Graphs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d94af19ba75f4ae7b8ef6765d6c84fc3", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A major challenge in paraphrase research is the lack of parallel corpora. In this paper, we present a new method to collect large-scale sentential paraphrases from Twitter by linking tweets through shared URLs. The main advantage of our method is its simplicity, as it gets rid of the classifier or human in the loop needed to select data before annotation and subsequent application of paraphrase identification algorithms in previous work. We present the largest human-labeled paraphrase corpus to date of 51,524 sentence pairs and the first cross-domain benchmarking for automatic paraphrase identification.
In addition, we show that more than 30,000 new sentential paraphrases can be easily and continuously captured every month at \u223c70% precision, and demonstrate their utility for downstream NLP tasks through phrasal paraphrase extraction. We make our code and data freely available.", "targets": "A Continuously Growing Dataset of Sentential Paraphrases"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-619ae3e2960b4afd8cbc356eb9747ee6", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Implicit discourse relation recognition is a crucial component for automatic discourse-level analysis and natural language understanding. Previous studies exploit discriminative models that are built on either powerful manual features or deep discourse representations. In this paper, instead, we explore generative models and propose a variational neural discourse relation recognizer. We refer to this model as VIRILE. VIRILE establishes a directed probabilistic model with a latent continuous variable that generates both a discourse and the relation between the two arguments of the discourse. In order to perform efficient inference and learning, we introduce a neural discourse relation model to approximate the posterior of the latent variable, and employ this approximated posterior to optimize a reparameterized variational lower bound. This allows VIRILE to be trained with standard stochastic gradient methods. Experiments on the benchmark data set show that VIRILE can achieve competitive results against state-of-the-art baselines.", "targets": "Variational Neural Discourse Relation Recognizer"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-897edcc5b256414291231e96b3979eba", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In action domains where agents may have erroneous beliefs, reasoning about the effects of actions involves reasoning about belief change. In this paper, we use a transition system approach to reason about the evolution of an agent\u2019s beliefs as actions are executed. Some actions cause an agent to perform belief revision while others cause an agent to perform belief update, but the interaction between revision and update can be nonelementary. We present a set of rationality properties describing the interaction between revision and update, and we introduce a new class of belief change operators for reasoning about alternating sequences of revisions and updates. Our belief change operators can be characterized in terms of a natural shifting operation on total pre-orderings over interpretations. We compare our approach with related work on iterated belief change due to action, and we conclude with some directions for future research.", "targets": "Iterated Belief Change Due to Actions and Observations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d4886a94edf346fa8424a616e44765e7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we present a model for email authorship identification (EAI) by employing a Cluster-based Classification (CCM) technique.
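The VIRILE record above trains with a reparameterized variational lower bound; that objective is standard and easy to make concrete. A generic sketch with a Gaussian latent (forward pass only; in practice an autodiff framework computes the gradients, and the Gaussian choice is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar):
    # z = mu + sigma * eps makes the sample a deterministic, differentiable
    # function of (mu, logvar), enabling standard stochastic gradients.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def lower_bound(log_lik, mu, logvar):
    """ELBO = E_q[log p(discourse, relation | z)] - KL(q(z|x) || N(0, I)).
    log_lik is the model's reconstruction log-likelihood at the sampled z."""
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return log_lik - kl
```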
Traditionally, stylometric features have been successfully employed in various authorship analysis tasks; we extend the traditional feature set to include some more interesting and effective features for email authorship identification (e.g. the last punctuation mark used in an email, the tendency of an author to use capitalization at the start of an email, or the punctuation after a greeting or farewell). We also included content features selected using Information Gain. It is observed that the use of such features in the authorship identification process has a positive impact on the accuracy of the authorship identification task. We performed experiments to justify our arguments and compared the results with other baseline models. Experimental results reveal that the proposed CCM-based email authorship identification model, along with the proposed feature set, outperforms the state-of-the-art support vector machine (SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The proposed model attains an accuracy rate of 94% for 10 authors, 89% for 25 authors, and 81% for 50 authors, respectively, on the Enron dataset, while 89.5% accuracy has been achieved on a real email dataset constructed by the authors. The results on the Enron dataset have been achieved with considerably more authors than in the models proposed by Iqbal et al. [1, 2].", "targets": "CEAI: CCM based Email Authorship Identification Model"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-690561439c35445eae904b2a313d3a78", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we present applications of different machine learning algorithms in aquaculture. Machine learning algorithms learn models from historical data. In aquaculture, historical data are obtained from farm practices, yields, and environmental data sources. Associations between these different variables can be obtained by applying machine learning algorithms to historical data.", "targets": "Application of Machine Learning Techniques in Aquaculture"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c703b90efb484578b54b3913192ffea9", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present decentralized rollout sampling policy iteration (DecRSPI), a new algorithm for multi-agent decision problems formalized as DEC-POMDPs. DecRSPI is designed to improve scalability and tackle problems that lack an explicit model. The algorithm uses Monte Carlo methods to generate a sample of reachable belief states. Then it computes a joint policy for each belief state based on the rollout estimations. A new policy representation allows us to represent solutions compactly. The key benefits of the algorithm are its linear time complexity over the number of agents, its bounded memory usage and good solution quality. It can solve larger problems that are intractable for existing planning algorithms.
Experimental results confirm the effectiveness and scalability of the approach.", "targets": "Rollout Sampling Policy Iteration for Decentralized POMDPs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f3f3d48492f741f4be3a301d2f7f9dca", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Deep neural networks can be obscenely wasteful. When processing video, a convolutional network expends a fixed amount of computation for each frame with no regard to the similarity between neighbouring frames. As a result, it ends up repeatedly doing very similar computations. To put an end to such waste, we introduce Sigma-Delta networks. With each new input, each layer in this network sends a discretized form of its change in activation to the next layer. Thus the amount of computation that the network does scales with the amount of change in the input and layer activations, rather than the size of the network. We introduce an optimization method for converting any pre-trained deep network into an optimally efficient Sigma-Delta network, and show that our algorithm, if run on the appropriate hardware, could cut at least an order of magnitude from the computational cost of processing video data.", "targets": "SIGMA-DELTA QUANTIZED NETWORKS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-efd2fcbc1ce148249d346d7a93aebb0a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Despite the fair amount of work on sentiment analysis (SA) and opinion mining (OM) systems in the last decade, the performance of these systems is still not at the desired level, especially for morphologically rich languages (MRLs) such as Arabic, due to the complexities and challenges that exist in the nature of the language itself. One of these challenges is the detection of idiom or proverb phrases within the writer's text or comment. An idiom or proverb is a form of speech or an expression that is peculiar to itself; grammatically, it cannot be understood from the individual meanings of its elements, and it can yield a different sentiment when treated as separate words. Consequently, in order to facilitate the task of detection and classification of lexical phrases for automated SA systems, this paper presents AIPSeLEX, a novel idiom/proverb sentiment lexicon for modern standard Arabic (MSA) and colloquial Arabic. AIPSeLEX is manually collected and annotated at the sentence level with semantic orientation (positive or negative). The effort of manually building and annotating the lexicon is reported. Moreover, we build a classifier that extracts idiom and proverb phrases from text using n-gram and similarity measure methods. Finally, several experiments were carried out on various data, including Arabic tweets and Arabic microblogs (hotel reservation, product reviews, and TV program comments) from publicly available Arabic online review websites (social media, blogs, forums, e-commerce web sites) to evaluate the coverage and accuracy of AIPSeLEX.
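The Sigma-Delta record above describes each layer transmitting a discretized change in activation. A minimal sketch of that communication scheme with error feedback (the step size and the class interface are assumptions; the paper's optimization method for choosing discretization levels is not shown):

```python
import numpy as np

class SigmaDeltaWire:
    """Transmits discretized activation changes between two layers.
    The sender quantizes (x_t - previously sent total); the receiver
    integrates the quantized deltas, so work scales with change."""
    def __init__(self, shape, step=0.05):
        self.step = step
        self.sent = np.zeros(shape)   # running total already transmitted
        self.recv = np.zeros(shape)   # receiver-side reconstruction

    def send(self, x):
        delta = np.round((x - self.sent) / self.step)   # integer message
        self.sent += delta * self.step                  # error feedback
        self.recv += delta * self.step                  # receiver integrates
        return self.recv                                # approximation of x
```

Because the residual error is fed back into the next quantization, the reconstruction stays within half a step of the true activation while only nonzero deltas cost communication and computation.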
General Terms: sentiment analysis, modern standard Arabic, colloquial, natural language processing.", "targets": "Idioms-Proverbs Lexicon for Modern Standard Arabic and Colloquial Sentiment Analysis"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d30c28ad40784ea1814929e517196461", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "One way to approach end-to-end autonomous driving is to learn a policy function that maps from a sensory input, such as an image frame from a front-facing camera, to a driving action, by imitating an expert driver, or a reference policy. This can be done by supervised learning, where a policy function is tuned to minimize the difference between the predicted and ground-truth actions. A policy function trained in this way, however, is known to suffer from unexpected behaviours due to the mismatch between the states reachable by the reference policy and the trained policy functions. More advanced algorithms for imitation learning, such as DAgger, address this issue by iteratively collecting training examples from both reference and trained policies. These algorithms often require a large number of queries to a reference policy, which is undesirable as the reference policy is often expensive. In this paper, we propose an extension of DAgger, called SafeDAgger, that is query-efficient and more suitable for end-to-end autonomous driving. We evaluate the proposed SafeDAgger in a car racing simulator and show that it indeed requires fewer queries to a reference policy. We observe a significant speed-up in convergence, which we conjecture to be due to the effect of automated curriculum learning.", "targets": "Query-Efficient Imitation Learning for End-to-End Autonomous Driving"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e776a681c20f43a29fe49f623371c77c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "NLP tasks differ in the semantic information they require, and at this time no single semantic representation fulfills all requirements. Logic-based representations characterize sentence structure, but do not capture the graded aspect of meaning. Distributional models give graded similarity ratings for words and phrases, but do not adequately capture overall sentence structure. So it has been argued that the two are complementary. In this paper, we adopt a hybrid approach that combines logic-based and distributional semantics through probabilistic logic inference in Markov Logic Networks (MLNs). We focus on textual entailment (RTE), a task that can utilize the strengths of both representations. Our system has three components: 1) parsing and task representation, where input RTE problems are represented in probabilistic logic. This is quite different from representing them in standard first-order logic. 2) knowledge base construction in the form of weighted inference rules from different sources like WordNet, paraphrase collections, and lexical and phrasal distributional rules generated on the fly. We use a variant of Robinson resolution to determine the necessary inference rules. More sources can easily be added by mapping them to logical rules; our system learns resource-specific weights that counteract scaling differences between resources.
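The SafeDAgger record above is specific about the control flow that makes it query-efficient: the reference policy is consulted only where a learned safety classifier predicts the primary policy would deviate. A sketch of that loop (the `env`/`policy`/`safety`/`reference` interfaces are assumed, not the paper's API):

```python
def safe_dagger_rollout(env, policy, safety, reference, dataset, max_steps):
    """One data-collection rollout in the spirit of SafeDAgger (sketch)."""
    obs = env.reset()
    for _ in range(max_steps):
        if safety.is_safe(obs):              # predicted small deviation
            action = policy.act(obs)         # no reference query needed
        else:
            action = reference.act(obs)      # query the expensive expert
            dataset.append((obs, action))    # grow the training set
        obs, done = env.step(action)
        if done:
            break
    return dataset
```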
3) inference, where we show how to solve the inference problems efficiently. In this paper we focus on the SICK dataset, and we achieve a state-of-the-art result. Our system handles overall sentence structure and phenomena like negation in the logic, then uses our Robinson resolution variant to query distributional systems about words and short phrases. Therefore, we use our system to evaluate distributional lexical entailment approaches. We also publish the set of rules queried from the SICK dataset, which can be a good resource for evaluating them.", "targets": "Representing Meaning with a Combination of Logical Form and Vectors"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1b95f71cecbf4f4d95b8aaedb87c496d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "There has been a long history of using fuzzy language equivalence to compare the behavior of fuzzy systems, but the comparison at this level is too coarse. Recently, a finer behavioral measure, bisimulation, has been introduced to fuzzy finite automata. However, the results obtained are applicable only to finite-state systems. In this paper, we consider bisimulation for general fuzzy systems, which may be infinite-state or infinite-event, by modeling them as fuzzy transition systems. To help understand and check bisimulation, we characterize it in three ways: by enumerating whole transitions, comparing individual transitions, and using a monotonic function. In addition, we address composition operations, subsystems, quotients, and homomorphisms of fuzzy transition systems and discuss their properties connected with bisimulation. The results presented here are useful for comparing the behavior of general fuzzy systems. In particular, this makes it possible to relate an infinite fuzzy system to a finite one, which is easier to analyze, with the same behavior.", "targets": "Bisimulations for Fuzzy Transition Systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b606834326eb4824a119df365f048631", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we present the Difference-Based Causality Learner (DBCL), an algorithm for learning a class of discrete-time dynamic models that represents all causation across time by means of difference equations driving change in a system. We motivate this representation with real-world mechanical systems and prove DBCL\u2019s correctness for learning structure from time series data, an endeavour that is complicated by the existence of latent derivatives that have to be detected. We also prove that, under common assumptions for causal discovery, DBCL will identify the presence or absence of feedback loops, making the model more useful for predicting the effects of manipulating variables when the system is in equilibrium. We argue analytically and show empirically the advantages of DBCL over vector autoregression (VAR) and Granger causality models as well as modified forms of Bayesian and constraint-based structure discovery algorithms.
Finally, we show that our algorithm can discover causal directions of alpha rhythms in human brains from EEG data.", "targets": "Learning Why Things Change: The Difference-Based Causality Learner"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-c75e1ea32ffc421a881e646e0e4aeee7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper aims to introduce a new algorithm for automatic speech-to-text summarization based on statistical divergences of probabilities and graphs. The input is a text from noisy speech conversations, and the output is a compact text summary. Our results on the pilot-task CCCS MultiLing 2015 French corpus are very encouraging.", "targets": "LIA-RAG: a system based on graphs and divergence of probabilities applied to Speech-To-Text Summarization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-cc06c796a2f44c6a93f26c236b6eb012", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Generating texts from structured data (e.g., a table) is important for various natural language processing tasks such as question answering and dialog systems. In recent studies, researchers use neural language models and encoder-decoder frameworks for table-to-text generation. However, these neural network-based approaches do not model the order of contents during text generation. When a human writes a summary based on a given table, he or she would probably consider the content order before wording. For example, the nationality of a person is typically mentioned before their occupation in a biography. In this paper, we propose an order-planning text generation model to capture the relationship between different fields and use this relationship to make the generated text more fluent and smooth. We conducted experiments on the WIKIBIO dataset and achieved significantly higher performance than previous methods in terms of BLEU, ROUGE, and NIST scores.", "targets": "Order-Planning Neural Text Generation From Structured Data"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-f62c798362144a35b83552c351b1294c", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The first step of processing a question in Question Answering (QA) systems is to carry out a detailed analysis of the question for the purpose of determining what it is asking for and how best to approach answering it. Our question analysis uses several techniques to analyze any question given in natural language: a Stanford POS tagger and parser for the Arabic language, a named entity recognizer, a tokenizer, stop-word removal, question expansion, question classification and question focus extraction components.
We employ numerous detection rules and a trained classifier using features from this analysis to detect important elements of the question, including: 1) the portion of the question that is referring to the answer (the focus); 2) different terms in the question that identify what type of entity is being asked for (the lexical answer types); 3) question expansion; and 4) a process of classifying the question into one or more of several different types. We describe how these elements are identified and evaluate the effect of accurate detection on our question-answering system using the Mean Reciprocal Rank (MRR) accuracy measure.", "targets": "QUESTION ANALYSIS FOR ARABIC QUESTION ANSWERING SYSTEMS"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-efedc6daba27493b85e792a1996bf62a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper, we theoretically justify an approach popular among participants of the Higgs Boson Machine Learning Challenge to optimize approximate median significance (AMS). The approach is based on the following two-stage procedure. First, a real-valued function f is learned by minimizing a surrogate loss for binary classification, such as logistic loss, on the training sample. Then, given f, a threshold \u03b8\u0302 is tuned on a separate validation sample, by direct optimization of AMS. We show that the regret of the resulting classifier (obtained from thresholding f on \u03b8\u0302), measured with respect to the squared AMS, is upper-bounded by the regret of f measured with respect to the logistic loss. Hence, we prove that minimizing the logistic surrogate is a consistent method of optimizing AMS.", "targets": "Consistent optimization of AMS by logistic loss minimization"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-eb7579d813e14f10a19070317703a9bf", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The current trend in object detection and localization is to learn predictions with high-capacity deep neural networks trained on a very large amount of annotated data and using a high amount of processing power. In this work, we propose a new neural model which directly predicts bounding box coordinates. The particularity of our contribution lies in the local computations of predictions with a new form of local parameter sharing which keeps the overall amount of trainable parameters low. Key components of the model are spatial 2D-LSTM recurrent layers which convey contextual information between the regions of the image. We show that this model is more powerful than the state of the art in applications where training data is not as abundant as in the classical configuration of natural images and Imagenet/Pascal VOC tasks. We particularly target the detection of text in document images, but our method is not limited to this setting. The proposed model also facilitates the detection of many objects in a single image and can deal with inputs of variable sizes without resizing.", "targets": "Learning to detect and localize many objects from few examples"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-93913c1765ab40ffaaf732f9c44ed6b4", "definition": "In this task, you are given a part of an article.
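The two-stage procedure in the AMS record above is concrete enough to sketch. This assumes the standard AMS definition from the Higgs challenge (with regularization term b_r = 10) and that `weights` are the challenge's event weights with `labels == 1` marking signal; the exhaustive threshold scan is for clarity, not speed:

```python
import numpy as np

def ams(s, b, b_reg=10.0):
    """Approximate median significance, as defined in the Higgs challenge."""
    return np.sqrt(2.0 * ((s + b + b_reg) * np.log(1.0 + s / (b + b_reg)) - s))

def tune_threshold(scores, labels, weights):
    """Second stage: choose the cutoff on f's validation scores that
    directly maximizes AMS (a sorted sweep would be faster)."""
    best_theta, best_ams = None, -np.inf
    for theta in np.unique(scores):
        sel = scores >= theta
        s = weights[sel & (labels == 1)].sum()   # selected true signal
        b = weights[sel & (labels == 0)].sum()   # selected background
        if s + b == 0.0:
            continue
        val = ams(s, b)
        if val > best_ams:
            best_theta, best_ams = theta, val
    return best_theta, best_ams
```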
Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The attention model has become a standard component in neural machine translation (NMT), and it guides the translation process by selectively focusing on parts of the source sentence when predicting each target word. However, we find that the generation of a target word does not only depend on the source sentence, but also relies heavily on the previously generated target words, especially distant words, which are difficult to model using recurrent neural networks. To solve this problem, we propose in this paper a novel look-ahead attention mechanism for generation in NMT, which aims at directly capturing the dependency relationship between target words. We further design three patterns to integrate our look-ahead attention into the conventional attention model. Experiments on NIST Chinese-to-English and WMT English-to-German translation tasks show that our proposed look-ahead attention mechanism achieves substantial improvements over state-of-the-art baselines.", "targets": "Look-ahead Attention for Generation in Neural Machine Translation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1082937af4e649d4805e913b9c9911fd", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We evaluate the character-level translation method for neural semantic parsing on a large corpus of sentences annotated with Abstract Meaning Representations (AMRs). Using a seq2seq model, and some trivial preprocessing and postprocessing of AMRs, we obtain a baseline accuracy of 53.1 (F-score on AMR-triples). We examine four different approaches to improve this baseline result: (i) reordering AMR branches to match the word order of the input sentence increases performance to 58.3; (ii) adding part-of-speech tags (automatically produced) to the input shows improvement as well (57.2); (iii) so does the introduction of super characters (conflating frequent sequences of characters into a single character), reaching 57.4; (iv) adding silver-standard training data obtained by an off-the-shelf parser yields the biggest improvement, resulting in an F-score of 64.0. Combining all four techniques leads to an F-score of 69.0, which is state-of-the-art in AMR parsing. This is remarkable because of the relative simplicity of the approach: the only explicit linguistic knowledge that we use is part-of-speech tags.", "targets": "Neural Semantic Parsing by Character-based Translation: Experiments with Abstract Meaning Representations"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0fbb72b334cd44aabae34fcbd0f75565", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Theoretical analyses of the Dendritic Cell Algorithm (DCA) have yielded several criticisms about its underlying structure and operation. As a result, several alterations and fixes have been suggested in the literature to correct for these findings. A contribution of this work is to investigate the effects of replacing the classification stage of the DCA (which is known to be flawed) with a traditional machine learning technique. This work goes on to question the merits of those unique properties of the DCA that are yet to be thoroughly analysed.
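The look-ahead attention record above describes attending over the decoder's own previously generated target words; the paper's three integration patterns are not detailed there, so the sketch below shows only one plausible instantiation, a bilinear scoring over past decoder states (`W` and the shapes are assumptions):

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def look_ahead_attention(h_t, prev_target_states, W):
    """Attend over past target-side decoder states so the next word can
    depend directly on distant generated words."""
    if len(prev_target_states) == 0:
        return np.zeros_like(h_t)
    H = np.stack(prev_target_states)   # (t, d): one row per past state
    scores = H @ (W @ h_t)             # bilinear scores, shape (t,)
    alpha = softmax(scores)            # attention weights over history
    return alpha @ H                   # target-side context vector, (d,)
```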
If none of these properties can be found to have a benefit over traditional approaches, then \u201cfixing\u201d the DCA is arguably less efficient than simply creating a new algorithm. This work examines the dynamic filtering property of the DCA and questions the utility of this unique feature for the anomaly detection problem. It is found that this feature, while advantageous for noisy, time-ordered classification, is not as useful as a traditional static filter for processing a synthetic dataset. It is concluded that there are still unique features of the DCA left to investigate. Areas that may be of benefit to the Artificial Immune Systems community are suggested.", "targets": "Quiet in Class : Classification, Noise and the Dendritic Cell Algorithm"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-da64b9af2ba049b8b402d47d96af4950", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We present a suite of algorithms for Dimension Independent Similarity Computation (DISCO) to compute all pairwise similarities between very high-dimensional sparse vectors. All of our results are provably independent of dimension, meaning that apart from the initial cost of trivially reading in the data, all subsequent operations are independent of the dimension; thus the dimension can be very large. We study Cosine, Dice, Overlap, Conditional, and the Jaccard similarity measures. For Jaccard similarity we include an improved version of MinHash. Our results are geared toward the MapReduce framework. We empirically validate our theorems at large scale using data from the social networking site Twitter.", "targets": "Dimension Independent Similarity Computation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-11e977ff07b84639b28f5ebe4b10d670", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Auto-encoders are perhaps the best-known non-probabilistic methods for representation learning. They are conceptually simple and easy to train. Recent theoretical work has shed light on their ability to capture manifold structure, and drawn connections to density modeling. This has motivated researchers to seek ways of auto-encoder scoring, which has furthered their use in classification. Gated auto-encoders (GAEs) are an interesting and flexible extension of auto-encoders which can learn transformations among different images or pixel covariances within images. However, they have been much less studied, theoretically or empirically. In this work, we apply a dynamical systems view to GAEs, deriving a scoring function, and drawing connections to Restricted Boltzmann Machines. On a set of deep learning benchmarks, we also demonstrate their effectiveness for single and multi-label classification.", "targets": "Scoring and Classifying with Gated Auto-encoders"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-67c6be8069cf4e54b461cfce36858d32", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text.
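The DISCO record above mentions an improved MinHash for Jaccard similarity; the improvement itself is not described, so here is only the classic MinHash estimator it builds on (the hash family and parameters are assumptions):

```python
import numpy as np

def minhash_signature(items, num_hashes=64, prime=2_147_483_647, seed=0):
    """Classic MinHash: under a random hash, the probability that two
    sets share the same minimum equals their Jaccard similarity."""
    rng = np.random.default_rng(seed)
    a = rng.integers(1, prime, size=num_hashes)
    b = rng.integers(0, prime, size=num_hashes)
    sig = np.full(num_hashes, np.iinfo(np.int64).max)
    for x in items:                       # items: small integer feature ids
        sig = np.minimum(sig, (a * x + b) % prime)
    return sig

def estimate_jaccard(sig_u, sig_v):
    # Fraction of agreeing signature slots estimates J(U, V).
    return float(np.mean(sig_u == sig_v))
```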
Preferred headlines are under fifteen words.", "inputs": "We propose a method for embedding two-dimensional locations in a continuous vector space using a neural network-based model incorporating mixtures of Gaussian distributions, presenting two model variants for text-based geolocation and lexical dialectology. Evaluated over Twitter data, the proposed model outperforms conventional regression-based geolocation and provides a better estimate of uncertainty. We also show the effectiveness of the representation for predicting words from location in lexical dialectology, and evaluate it using the DARE dataset.", "targets": "Continuous Representation of Location for Geolocation and Lexical Dialectology using Mixture Density Networks"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7496c276b2bb4fcd9186d04b77e251ee", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We explore beyond existing work on learning from demonstration by asking the question: \u201cCan robots learn to teach?\u201d, that is, can a robot autonomously learn an instructional policy from expert demonstration and use it to instruct or collaborate with humans in executing complex tasks in uncertain environments? In this paper we pursue a solution to this problem by leveraging the idea that humans often implicitly decompose a higher level task into several subgoals whose execution brings the task closer to completion. We propose a Dirichlet process based non-parametric Inverse Reinforcement Learning (DPMIRL) approach for reward-based unsupervised clustering of task space into subgoals. This approach is shown to capture the latent subgoals that a human teacher would have utilized to train a novice. The notion of \u201caction primitive\u201d is introduced as the means to communicate instruction policy to humans in the least complicated manner, and as a computationally efficient tool to segment demonstration data. We evaluate our approach through experiments on a hydraulically actuated scaled model of an excavator and compare the different teaching strategies utilized by the robot.", "targets": "Can Co-robots Learn to Teach?"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e1bab8fbcfa94fbbb3acd66565648f0f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "With the advancement and growth of web technology, there is a huge volume of data present on the web for internet users, and a lot of data is generated too. The internet has become a platform for online learning, exchanging ideas and sharing opinions. Social networking sites like Twitter, Facebook, and Google+ are rapidly gaining popularity as they allow people to share and express their views about topics, have discussions with different communities, or post messages across the world. There has been a lot of work in the field of sentiment analysis of Twitter data. This survey focuses mainly on sentiment analysis of Twitter data, which is helpful for analyzing the information in tweets, where opinions are highly unstructured, heterogeneous, and either positive, negative, or neutral in some cases. In this paper, we provide a survey and a comparative analysis of existing techniques for opinion mining, such as machine learning and lexicon-based approaches, together with evaluation metrics.
Using various machine learning algorithms like Naive Bayes, Max Entropy, and Support Vector Machine, we present research on Twitter data streams. We also discuss general challenges and applications of sentiment analysis on Twitter.", "targets": "Sentiment Analysis of Twitter Data: A Survey of Techniques"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-008fd29a66954f549b0e306d0ce0894a", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Path planning is typically considered in Artificial Intelligence as a graph searching problem, and R* is a state-of-the-art algorithm tailored to solve it. The algorithm decomposes a given path-finding task into a series of subtasks, each of which can be easily (in the computational sense) solved by well-known methods (such as A*). Parameterized random choice is used to perform the decomposition, and as a result R* performance largely depends on the choice of its input parameters. In our work we formulate a range of assumptions concerning possible upper and lower bounds of R* parameters, their interdependency, and their influence on R* performance. We then evaluate these assumptions by running a large number of experiments. As a result, we formulate a set of heuristic rules which can be used to initialize the values of R* parameters in a way that leads to the algorithm\u2019s best performance.", "targets": "Finetuning Randomized Heuristic Search For 2D Path Planning: Finding The Best Input Parameters For R* Algorithm Through Series Of Experiments"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-752c20d5b2874f63a7fb97f03ce2f2e8", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents an approach to formalizing and enforcing a class of use privacy properties in data-driven systems. In contrast to prior work, we focus on use restrictions on proxies (i.e., strong predictors) of protected information types. Our definition relates proxy use to intermediate computations that occur in a program, and identifies two essential properties that characterize this behavior: 1) its result is strongly associated with the protected information type in question, and 2) it is likely to causally affect the final output of the program. For a specific instantiation of this definition, we present a program analysis technique that detects instances of proxy use in a model, and provides a witness that identifies which parts of the corresponding program exhibit the behavior. Recognizing that not all instances of proxy use of a protected information type are inappropriate, we make use of a normative judgment oracle that makes this inappropriateness determination for a given witness. Our repair algorithm uses the witness of an inappropriate proxy use to transform the model into one that provably does not exhibit proxy use, while avoiding changes that unduly affect classification accuracy.
Using a corpus of social datasets, our evaluation shows that these algorithms are able to detect proxy use instances that would be difficult to find using existing techniques, and subsequently remove them while maintaining acceptable classification performance.", "targets": "Use Privacy in Data-Driven Systems"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0c92a1de0fe34584acc1249252c38d71", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A language-independent stemmer has long been sought. The single N-gram tokenization technique works well; however, it often generates stems that start with intermediate characters rather than initial ones. We present a novel technique that takes the concept of N-grams one step further and compare our method with an established algorithm in the field, Porter\u2019s Stemmer. Porter\u2019s Stemmer is language-dependent, and the performance of our proposed method is not inferior to it.", "targets": "Generation, Implementation and Appraisal of a Language Independent Stemming Algorithm"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-42463462c5044da59a6074ff95d2e284", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The early classifications of the computational complexity of planning under various restrictions in STRIPS (Bylander) and SAS (B\u00e4ckstr\u00f6m and Nebel) have influenced subsequent research in planning in many ways. We go back and reanalyse their subclasses, but this time using the more modern tool of parameterized complexity analysis. This provides new results that, together with the old results, give a more detailed picture of the complexity landscape. We demonstrate separation results not possible with standard complexity theory, which contributes to explaining why certain cases of planning have seemed simpler in practice than theory has predicted. In particular, we show that certain restrictions of practical interest are tractable in the parameterized sense of the term, and that a simple heuristic is sufficient to make a well-known partial-order planner exploit this fact.", "targets": "The Complexity of Planning Revisited \u2013 A Parameterized Analysis"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-de650fd0ec3a42cba8841fc1ade92500", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Opinion mining from customer reviews has become pervasive in recent years. Sentences in reviews, however, are usually classified independently, even though they form part of a review\u2019s argumentative structure. Intuitively, sentences in a review build and elaborate upon each other; knowledge of the review structure and sentential context should thus inform the classification of each sentence. We demonstrate this hypothesis for the task of aspect-based sentiment analysis by modeling the interdependencies of sentences in a review with a hierarchical bidirectional LSTM.
We show that the hierarchical model outperforms two non-hierarchical baselines, obtains results competitive with the state-of-the-art, and outperforms the state-of-the-art on five multilingual, multi-domain datasets without any hand-engineered features or external resources.", "targets": "A Hierarchical Model of Reviews for Aspect-based Sentiment Analysis"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-8b6ca43179bb482d94ccf6e03c19c849", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We consider the task of identifying attitudes towards a given set of entities from text. Conventionally, this task is decomposed into two separate subtasks: target detection, which identifies whether each entity is mentioned in the text, either explicitly or implicitly, and polarity classification, which classifies the exact sentiment towards an identified entity (the target) into positive, negative, or neutral. Instead, we show that attitude identification can be solved with an end-to-end machine learning architecture, in which the two subtasks are interleaved by a deep memory network. In this way, signals produced in target detection provide clues for polarity classification, and, conversely, the predicted polarity provides feedback to the identification of targets. Moreover, the treatments for the set of targets also influence each other \u2013 the learned representations may share the same semantics for some targets but vary for others. The proposed deep memory network, the AttNet, outperforms methods that do not consider the interactions between the subtasks or those among the targets, including conventional machine learning methods and the state-of-the-art deep learning models.", "targets": "Deep Memory Networks for Attitude Identification"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-7656a75d7f8040c98367575c42c84eb7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "The shortest path between two concepts in a taxonomic ontology is commonly used to represent the semantic distance between concepts in edge-based semantic similarity measures. In the past, edge counting was considered the default method for path computation, as it is simple, intuitive, and has low computational complexity. However, a large lexical taxonomy such as WordNet has irregular densities of links between concepts due to its broad domain. Edge-counting-based path computation is powerless against this non-uniformity problem. In this paper, we advocate that path computation can be separated from the edge-based similarity measures and form various general computing models. Therefore, in order to solve the problem of non-uniformity of concept density in a large taxonomic ontology, we propose a new path computing model based on the compensation of local area density of concepts, which is equal to the number of direct hyponyms of the subsumers of concepts in their shortest path. This path model considers the local area density of concepts as an extension of the edge-based path and converts the local area density divided by their depth into a compensation for the edge-based path with an adjustable parameter, an idea that has been proven to be consistent with information theory.
This model is a general path computing model and can be applied in various edge-based similarity algorithms. The experimental results show that the proposed path model improves the average correlation of edge-based measures with human judgments on the Miller and Charles benchmark from less than 0.8 to more than 0.85, and has a significant efficiency advantage over information content (IC) computation in a dynamic ontology, thereby successfully solving the non-uniformity problem of taxonomic ontologies.", "targets": "A density compensation-based path computing model for measuring semantic similarity"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-eacfb44c73e145f09f8027c2e96d8a18", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Learning in probabilistic models is often severely hampered by the general intractability of the normalization factor and its derivatives. Here we propose a new learning technique that obviates the need to compute an intractable normalization factor or sample from the equilibrium distribution of the model. This is achieved by establishing dynamics that would transform the observed data distribution into the model distribution, and then setting as the objective the minimization of the initial flow of probability away from the data distribution. Score matching, minimum velocity learning, and certain forms of contrastive divergence are shown to be special cases of this learning technique. We demonstrate the application of minimum probability flow learning to parameter estimation in Ising models, deep belief networks, multivariate Gaussian distributions, and a continuous model with a highly general energy function defined as a power series. In the Ising model case, minimum probability flow learning outperforms current state-of-the-art techniques by approximately two orders of magnitude in learning time, with comparable error in the recovered parameters. It is our hope that this technique will alleviate existing restrictions on the classes of probabilistic models that are practical for use.", "targets": "Minimum Probability Flow Learning"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-68ed2d7534b3466793a8662826beb3bd", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We examine the meaning and the complexity of probabilistic logic programs that consist of a set of rules and a set of independent probabilistic facts (that is, programs based on Sato\u2019s distribution semantics). We focus on two semantics, respectively based on stable and on well-founded models. We show that the semantics based on stable models (referred to as the \u201ccredal semantics\u201d) produces sets of probability models that dominate infinitely monotone Choquet capacities; we describe several useful consequences of this result. We then examine the complexity of inference with probabilistic logic programs. We distinguish between the complexity of inference when a probabilistic program and a query are given (the inferential complexity), and the complexity of inference when the probabilistic program is fixed and the query is given (the query complexity, akin to data complexity as used in database theory).
We obtain results on the inferential and query complexity for acyclic, stratified, and cyclic propositional and relational programs; complexity reaches various levels of the counting hierarchy and even exponential levels.", "targets": "On the Semantics and Complexity of Probabilistic Logic Programs"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-b59287e247d84eb8b185f797c5c3579f", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Multi-instance multi-label (MIML) learning is a challenging problem in many aspects. Such learning approaches might be useful for many medical diagnosis applications, including breast cancer detection and classification. In this study, a subset of the digiPATH dataset (whole-slide digital breast cancer histopathology images) is used for training and evaluation of six state-of-the-art MIML methods. At the end, a performance comparison of these approaches is given by means of effective evaluation metrics. It is shown that MIML-kNN achieves the best performance, 65.3% average precision, while most of the other methods attain acceptable results as well.", "targets": "Evaluation of Joint Multi-Instance Multi-Label Learning For Breast Cancer Diagnosis"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-0c011bfbb8b64048919d3b2497938cbc", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We introduce a new approach for disfluency detection using a Bidirectional Long-Short Term Memory neural network (BLSTM). In addition to the word sequence, the model takes as input pattern match features that were developed to reduce sensitivity to vocabulary size in training, which leads to improved performance over the word sequence alone. The BLSTM takes advantage of explicit repair states in addition to the standard reparandum states. The final output leverages integer linear programming to incorporate constraints of disfluency structure. In experiments on the Switchboard corpus, the model achieves state-of-the-art performance for both the standard disfluency detection task and the correction detection task. Analysis shows that the model has better detection of non-repetition disfluencies, which tend to be much harder to detect.", "targets": "Disfluency Detection using a Bidirectional LSTM"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-74652770153a4a1e9745c5563d9e2bb7", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents an investigation of the entropy of the Telugu script. Since this script is syllabic, and not alphabetic, the computation of entropy is somewhat complicated.", "targets": "Entropy of Telugu"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-e0b3dafd0136493fbeaf8d50d208363d", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "In this paper we introduce RankPL, a modeling language that can be thought of as a qualitative variant of a probabilistic programming language with a semantics based on Spohn\u2019s ranking theory.
Broadly speaking, RankPL can be used to represent and reason about processes that exhibit uncertainty expressible by distinguishing \u201cnormal\u201d from \u201csurprising\u201d events. RankPL allows (iterated) revision of rankings over alternative program states and supports various types of reasoning, including abduction and causal inference. We present the language, its denotational semantics, and a number of practical examples. We also discuss an implementation of RankPL that is available for download.", "targets": "RankPL: A Qualitative Probabilistic Programming Language"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-2a78bcf9e0254440a60afb1d99ceeed1", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "We create a transition-based dependency parser using a general-purpose learning-to-search system. The result is a fast and accurate parser for many languages. Compared to other transition-based dependency parsing approaches, our parser provides statistical and computational performance similar to the best-known approaches while avoiding various downsides, including randomization, extra feature requirements, and custom learning algorithms. We show that it is possible to implement a dependency parser with an open-source learning-to-search library in about 300 lines of C++ code, while existing systems often require several thousand lines.", "targets": "Learning to Search for Dependencies"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5f87dca6bead4cd98f5201eee8f7fb70", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Knowledge base (KB) completion adds new facts to a KB by making inferences from existing facts, for example by inferring with high likelihood nationality(X,Y) from bornIn(X,Y). Most previous methods infer simple one-hop relational synonyms like this, or use as evidence a multi-hop relational path treated as an atomic feature, like bornIn(X,Z) \u2192 containedIn(Z,Y). This paper presents an approach that reasons about conjunctions of multi-hop relations non-atomically, composing the implications of a path using a recurrent neural network (RNN) that takes as inputs vector embeddings of the binary relations in the path. Not only does this allow us to generalize to paths unseen at training time, but also, with a single high-capacity RNN, to predict new relation types not seen when the compositional model was trained (zero-shot learning). We assemble a new dataset of over 52M relational triples, and show that our method improves over a traditional classifier by 11%, and over a method leveraging pre-trained embeddings by 7%.", "targets": "Compositional Vector Space Models for Knowledge Base Completion"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-5681dcf9cbae47b2b4fe6fc40bdd4a70", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Sequence-to-sequence neural translation models learn semantic and syntactic relations between sentence pairs by optimizing the likelihood of the target given the source, i.e., p(y|x), an objective that ignores other potentially useful sources of information.
We introduce an alternative objective function for neural MT that maximizes the mutual information between the source and target sentences, modeling the bi-directional dependency of sources and targets. We implement the model with a simple re-ranking method, and also introduce a decoding algorithm that increases diversity in the N-best list produced by the first pass. Applied to the WMT German/English and French/English tasks, the proposed model offers a consistent performance boost on both standard LSTM and attention-based neural MT architectures.", "targets": "Mutual Information and Diverse Decoding Improve Neural Machine Translation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-a9efca9a29224a669624ac4b6fd8357b", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper presents the Centre for Development of Advanced Computing Mumbai\u2019s (CDACM) submission to the NLP Tools Contest on Part-Of-Speech (POS) Tagging For Code-mixed Indian Social Media Text (POSCMISMT) 2015 (collocated with ICON 2015). We submitted results for Hindi (hi), Bengali (bn), and Telugu (te) languages mixed with English (en). In this paper, we describe the POS tagging techniques we exploited for this task. Machine learning has been used to POS tag the mixed-language text. For POS tagging, distributed representations of words in vector space (word2vec) for feature extraction and log-linear models have been tried. We report our work on all three languages, hi, bn, and te, mixed with en.", "targets": "Experiments with POS Tagging Code-mixed Indian Social Media Text"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-d1445ebe52cf4840a6bbc66dc6181981", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "A key issue in statistics and machine learning is to automatically select the \u201cright\u201d model complexity, e.g., the number of neighbors to be averaged over in k nearest neighbor (kNN) regression or the polynomial degree in regression with polynomials. We suggest a novel principle, the Loss Rank Principle (LoRP), for model selection in regression and classification. It is based on the loss rank, which counts how many other (fictitious) data would be fitted better. LoRP selects the model that has minimal loss rank. Unlike most penalized maximum likelihood variants (AIC, BIC, MDL), LoRP depends only on the regression functions and the loss function. It works without a stochastic noise model, and is directly applicable to any non-parametric regressor, like kNN.", "targets": "Model Selection with the Loss Rank Principle"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-88a0408a37324570a51a70998dc89a59", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "Direct quantile regression involves estimating a given quantile of a response variable as a function of input variables. We present a new framework for direct quantile regression where a Gaussian process model is learned, minimising the expected tilted loss function.
The integration required in learning is not analytically tractable, so to speed up the learning we employ the Expectation Propagation algorithm. We describe how this work relates to other quantile regression methods and apply the method to both synthetic and real data sets. The method is shown to be competitive with state-of-the-art methods whilst allowing for the leverage of the full Gaussian process probabilistic framework.", "targets": "Direct Gaussian Process Quantile Regression using Expectation Propagation"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-89f815b4c7ae44f4a68deda205128000", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper is an empirical study of distributed deep learning for a question answering subtask: answer selection. Comparison studies of the SGD, MSGD, DOWNPOUR, and EASGD/EAMSGD algorithms are presented. Experimental results show that the message passing interface based distributed framework can accelerate the convergence speed at a sublinear scale. This paper demonstrates the importance of distributed training: with 120 workers, an 83x speedup is achievable and running time is decreased from 107.9 hours to 1.3 hours, which will significantly benefit productivity.", "targets": "Distributed Deep Learning for Answer Selection"} {"task_name": "task1540_parsed_pdfs_summarization", "id": "task1540-1cf961df4be04ff6b9e76698656b2bb5", "definition": "In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words.", "inputs": "This paper introduces conceptual relations that synthesize utilitarian and logical concepts, extending the logics of preference of Rescher. We define first, in the context of a possible-worlds model, constraint-dependent measures that quantify the relative quality of alternative solutions of reasoning problems or the relative desirability of various policies in control, decision, and planning problems. We show that these measures may be interpreted as truth values in a multivalued logic and propose mechanisms for the representation of complex constraints as combinations of simpler restrictions. These extended logical operations also permit the combination and aggregation of goal-specific quality measures into global measures of utility. We also identify relations that represent differential preferences between alternative solutions and relate them to the previously defined desirability measures